<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" media="screen" href="/~d/styles/rss2full.xsl"?><?xml-stylesheet type="text/css" media="screen" href="http://feeds.feedburner.com/~d/styles/itemcontent.css"?><rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:feedburner="http://rssnamespace.org/feedburner/ext/1.0" version="2.0">

<channel>
	<title>Kenneth Truyers</title>
	
	<link>https://www.kenneth-truyers.net</link>
	<description>My life as a software developer</description>
	<lastBuildDate>Fri, 27 Jan 2017 13:12:13 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.4.8</generator>
	<atom10:link xmlns:atom10="http://www.w3.org/2005/Atom" rel="self" type="application/rss+xml" href="http://feeds.feedburner.com/KennethTruyers" /><feedburner:info uri="kennethtruyers" /><atom10:link xmlns:atom10="http://www.w3.org/2005/Atom" rel="hub" href="http://pubsubhubbub.appspot.com/" /><item>
		<title>Refactoring taken too far</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/4TVzsmtnxuM/</link>
		<comments>https://www.kenneth-truyers.net/2017/01/27/refactoring-taken-too-far/#respond</comments>
		<pubDate>Fri, 27 Jan 2017 01:42:36 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1591</guid>
		<description><![CDATA[<p>I came across a tweet today about refactoring badly written code. I’m always interested in that, and I saw that a few fellow devs had taken some badly written code and refactored it, following good software design principles. It all started with this article on CodeProject from April last year: https://www.codeproject.com/articles/1083348/csharp-bad-practices-learn-how-to-make-a-good-code The author [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2017/01/27/refactoring-taken-too-far/">Refactoring taken too far</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>I came across a tweet today about refactoring badly written code. I’m always interested in that, and I saw that a few fellow devs had taken some badly written code and refactored it, following good software design principles.</p>
<p>It all started with this article on CodeProject from April last year: <a title="https://www.codeproject.com/articles/1083348/csharp-bad-practices-learn-how-to-make-a-good-code" href="https://www.codeproject.com/articles/1083348/csharp-bad-practices-learn-how-to-make-a-good-code">https://www.codeproject.com/articles/1083348/csharp-bad-practices-learn-how-to-make-a-good-code</a></p>
<p>The author shows a piece of badly written code and then goes through a slew of refactorings following Liskov, strategy patterns, dependency injection and many other well-known principles.</p>
<p>While I’m a big fan of good software practices, the number one principle I like to adhere to is KISS (keep it simple, stupid). If you’ve read my blog before, you’ll certainly have come across this theme. I’m not the only one though; there were a few follow-up posts as well:</p>
<ul>
<li><a href="http://ralfw.de/2016/03/dont-let-cleaning-up-go-overboard/">Don’t let cleaning go overboard</a> (by Ralf Westphal)</li>
<li><a href="http://functionalsoftware.net/fsharp-rewrite-of-a-fully-refactored-csharp-clean-code-example-612/">An F# rewrite</a> (by Roman Bassart)</li>
<li><a href="http://www.davidarno.org/2017/01/26/using-c-7-and-succinct-to-give-f-a-run-for-its-money/">Using C# 7 and Succinc&lt;T&gt; to give F# a run for its money</a> (by David Arno)</li>
</ul>
<p>I like many of the ideas expressed in the above posts, but I couldn’t help thinking that this code could be made much simpler. Looking at the posts, I still see things that complicate matters much more than necessary.</p>
<h2>Code before refactoring</h2>
<p>For reference, this is the initial code from the first post:</p>
<pre class="brush: bash; auto-links: false;">public class Class1
{
  public decimal Calculate(decimal amount, int type, int years)
  {
    decimal result = 0;
    decimal disc = (years &gt; 5) ? (decimal)5/100 : (decimal)years/100; 
    if (type == 1)
    {
      result = amount;
    }
    else if (type == 2)
    {
      result = (amount - (0.1m * amount)) - disc * (amount - (0.1m * amount));
    }
    else if (type == 3)
    {
      result = (0.7m * amount) - disc * (0.7m * amount);
    }
    else if (type == 4)
    {
      result = (amount - (0.5m * amount)) - disc * (amount - (0.5m * amount));
    }
    return result;
  }
}
</pre>
<p>This is indeed not very nice code. What I do like about it though, is that it’s compact. When I read this, I can probably figure out relatively quickly what this code does. It has many problems though (as discussed in the original post).</p>
<p>The “issue” I have with the refactorings in the other posts is that they try to cater for use cases that simply aren’t specified. From what I can tell, these are the requirements:</p>
<ul>
<li>Give a discount based on what type of customer it is</li>
<li>Give a loyalty discount which equals the number of years the customer has been active, with a maximum of 5</li>
<li>Both discounts can be combined</li>
</ul>
<h2>The simplest possible solution</h2>
<blockquote><p>UPDATE: after comments on twitter / reddit, I noticed that the tests were incorrect. I had taken them from one of the refactorings and assumed they were correct. I have modified the data and updated the tests to reflect what the original code does.</p>
<p>UPDATE 2: I went against my own adage: going too far. Smuggling the discount info into the enum was too much. I have refactored it to a dictionary, which is easier to maintain.</p></blockquote>
<pre class="brush: bash; auto-links: false;">static readonly Dictionary&lt;Status, int&gt; Discounts = new Dictionary&lt;Status, int&gt;
{
    {Status.NotRegistered, 0 },
    {Status.SimpleCustomer, 10 },
    {Status.ValuableCustomer, 30 },
    {Status.MostValuableCustomer, 50 }
};
decimal applyDiscount(decimal price, Status accountStatus, int timeOfHavingAccountInYears)
{
    
    price = price - Discounts[accountStatus] * price/100;
    return price - Math.Min(timeOfHavingAccountInYears, 5)* price/100;
}
</pre>
<p>It’s a simple calculation, so why not express it as a simple calculation? While I see the power of functional programming, sometimes discriminated unions, partial application and the like are just overkill for simple problems.</p>
<p>For a coding kata like this, I understand that people want to provide the most elegant solution for future needs, but unfortunately I see these patterns arise far too often, complicating already complex problems even more.</p>
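<p>As a sanity check, the same arithmetic can be reproduced outside C#. This is a hypothetical shell port (the function name and the use of awk are my own, not part of the original code); it takes the price, the customer-type discount percentage and the account age:</p>

```shell
# Hypothetical shell port of applyDiscount, for checking the numbers only.
# Arguments: price, customer-type discount percentage, years of account
apply_discount() {
  awk -v p="$1" -v d="$2" -v y="$3" 'BEGIN {
    p = p - d * p / 100                 # customer-type discount
    if (y > 5) y = 5                    # loyalty discount is capped at 5%
    printf "%.4f\n", p - y * p / 100
  }'
}

apply_discount 100 50 1    # MostValuableCustomer, 1 year  -> 49.5000
apply_discount 100 30 6    # ValuableCustomer, capped at 5 -> 66.5000
```

<p>The same four cases as the tests below come out identically, which is the point: the whole specification fits in a handful of arithmetic lines.</p>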
<h3>Bonus 1: Tests</h3>
<pre class="brush: bash; auto-links: false;">[Fact]
public void Tests()
{
    applyDiscount(100m, Status.MostValuableCustomer, 1).Should().Be(49.5000m);
    applyDiscount(100m, Status.ValuableCustomer, 6).Should().Be(66.5000m);
    applyDiscount(100m, Status.SimpleCustomer, 1).Should().Be(89.1000m);
    applyDiscount(100m, Status.NotRegistered, 0).Should().Be(100.0m);
}
</pre>
<h3>Bonus 2: A UI</h3>
<p>Want a UI with that? <a href="https://www.kenneth-truyers.net/wp-content/uploads/2017/01/DiscountCalculator.xlsx" rel="">Here you go!</a> <img src="https://www.kenneth-truyers.net/wp-includes/images/smilies/simple-smile.png" alt=":-)" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2017/01/27/refactoring-taken-too-far/">Refactoring taken too far</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.kenneth-truyers.net/2017/01/27/refactoring-taken-too-far/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		<feedburner:origLink>https://www.kenneth-truyers.net/2017/01/27/refactoring-taken-too-far/</feedburner:origLink></item>
		<item>
		<title>Git as a NoSql database</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/o7mkkH52A1Y/</link>
		<pubDate>Thu, 13 Oct 2016 10:24:25 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1540</guid>
		<description><![CDATA[<p>Git’s man-pages state that it’s a stupid content tracker. It’s probably the most used version control system in the world, which is very strange, since it doesn’t even describe itself as a source control system. And in fact, you can use git to track any type of content. You can create a Git NoSQL database [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/10/13/git-nosql-database/">Git as a NoSql database</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Git’s man-pages state that it’s a <em>stupid content tracker</em>. It’s probably the most used version control system in the world, which is very strange, since it doesn’t even describe itself as a source control system. And in fact, you can use git to track any type of content; you can create a Git NoSQL database, for example.</p>
<p>The reason why it says <em>stupid</em> in the man-pages is that it makes no assumptions about what content you store in it. The underlying git model is rather basic. In this post I want to explore the possibilities of using git as a NoSQL database (a key-value store). You could use the file system as a data store and then use <span style="font-family: 'Courier New';">git add</span> and <span style="font-family: 'Courier New';">git commit</span> to save your files:</p>
<pre class="brush: bash; auto-links: false;"># saving a document
echo '{"id": 1, "name": "kenneth"}' &gt; 1.json
git add 1.json
git commit -m "added a file"

# reading a document
git show master:1.json
=&gt; {"id": 1, "name": "kenneth"}
</pre>
<p>That works, but you’re now using the file system as a database: paths are the keys, values are whatever you store in them. There are a few disadvantages:</p>
<ul>
<li>We need to write all our data to disk before we can save it into git</li>
<li>We’re saving data multiple times</li>
<li>File storage is not deduplicated, so we lose git’s automatic data deduplication</li>
<li>If we want to work on multiple branches at the same time, we need multiple checked out directories</li>
</ul>
<p>What we want rather is a <em>bare</em> repository, one where none of the files exist in the file system, but only in the git database. Let’s have a look at git’s data model and the plumbing commands to make this work.</p>
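<p>Such a bare repository can be created with <span style="font-family: 'Courier New';">git init --bare</span>, and every plumbing command can target it directly. A minimal sketch (the repository name is just an example):</p>

```shell
# A bare repository has no working tree, only the git database itself
repo=$(mktemp -d)/mydb.git
git init -q --bare "$repo"

# Plumbing commands work against it via --git-dir, no checkout needed
hash=$(echo '{"id": 1, "name": "kenneth"}' | git --git-dir="$repo" hash-object -w --stdin)
git --git-dir="$repo" cat-file -p "$hash"
```

<p>Nothing is ever written to a working directory; the document exists only as an object in the database.</p>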
<h2>Git as a NoSQL database</h2>
<p>Git is a <em>content-addressable file system</em>. This means that it’s a simple key-value store. Whenever you insert content into it, it will give you back a key to retrieve that content later.<br />
Let’s create some content:</p>
<pre class="brush: bash; auto-links: false;">#Initialize a repository
mkdir MyRepo
cd MyRepo
git init

# Save some content
echo '{"id": 1, "name": "kenneth"}' | git hash-object -w --stdin
da95f8264a0ffe3df10e94eed6371ea83aee9a4d
</pre>
<p><span style="font-family: 'Courier New';">Hash-object</span> is a <em>git plumbing</em> command which takes content, stores it in the database and returns the key.</p>
<blockquote><p>The <span style="font-family: 'Courier New';">-w</span> switch tells it to store the content; otherwise it would just calculate the hash. The <span style="font-family: 'Courier New';">--stdin</span> switch tells git to read the content from standard input instead of from a file.</p></blockquote>
<p>The key it returns is a sha-1 based on the content. If you run the above commands on your machine, you’ll see it returns the exact same sha-1. Now that we have some content in the database, we can read it back:</p>
<pre class="brush: bash; auto-links: false;">git cat-file -p da95f8264a0ffe3df10e94eed6371ea83aee9a4d
{"id": 1, "name": "kenneth"}
</pre>
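<p>The key really is just SHA-1 over the content prefixed with a small header, the string "blob &lt;size&gt;" followed by a NUL byte. You can verify that by hand (a sketch assuming <span style="font-family: 'Courier New';">sha1sum</span> is available; on macOS, <span style="font-family: 'Courier New';">shasum</span> is the equivalent):</p>

```shell
cd "$(mktemp -d)" && git init -q

content='{"id": 1, "name": "kenneth"}'

# What git hashes internally: "blob <size>\0" followed by the bytes
size=$(printf '%s\n' "$content" | wc -c)
manual=$({ printf 'blob %d\0' "$size"; printf '%s\n' "$content"; } | sha1sum | cut -d' ' -f1)

# What git reports for the same content
from_git=$(printf '%s\n' "$content" | git hash-object --stdin)

echo "$manual"
echo "$from_git"   # both print the same sha-1
```

<p>Because the key is derived purely from the content, the same bytes always map to the same key, on any machine.</p>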
<h3>Git Blobs</h3>
<p>We now have a key-value store with one object, a blob:</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image.png"><img class="alignnone" style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-width: 0px;" title="image" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image_thumb.png" alt="Git Object Database: blob" width="240" height="81" border="0" /></a></figure>
<p>There’s only one problem: we can’t update this, because if we update the content, the key will change. That would mean that for every version of our file, we’d have to remember a different key. What we want instead, is to specify our own key which we can use to track the versions.</p>
<h3>Git Trees</h3>
<p>Trees solve two problems:</p>
<ul>
<li>the need to remember the hash of each object and each of its versions</li>
<li>the ability to store groups of files</li>
</ul>
<p>The best way to think about a tree is like a folder in the file system.  To create a tree you have to follow two steps:</p>
<pre class="brush: bash; auto-links: false;"># Create and populate a staging area
git update-index --add --cacheinfo 100644 da95f8264a0ffe3df10e94eed6371ea83aee9a4d 1.json

# write the tree
git write-tree
d6916d3e27baa9ef2742c2ba09696f22e41011a1</pre>
<p>This also gives you back a sha. Now we can read back that tree:</p>
<pre class="brush: bash; auto-links: false;">git cat-file -p d6916d3e27baa9ef2742c2ba09696f22e41011a1
100644 blob da95f8264a0ffe3df10e94eed6371ea83aee9a4d    1.json
</pre>
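<p>The whole sequence, blob plus tree, can be replayed end to end in a scratch repository (a sketch; the file name matches the post):</p>

```shell
cd "$(mktemp -d)" && git init -q

# Store the blob and stage it in the index under a name and mode
blob=$(echo '{"id": 1, "name": "kenneth"}' | git hash-object -w --stdin)
git update-index --add --cacheinfo 100644 "$blob" 1.json

# Snapshot the index as an immutable tree object
tree=$(git write-tree)
git cat-file -p "$tree"
```

<p>The index acts as the staging area: <span style="font-family: 'Courier New';">update-index</span> fills it, <span style="font-family: 'Courier New';">write-tree</span> freezes it into an object.</p>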
<p>At this point our object database looks as follows:</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image-6.png"><img class="alignnone" style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-width: 0px;" title="image" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image_thumb-6.png" alt="Git Object Database: tree and blob" width="355" height="77" border="0" /></a></figure>
<p>To modify the file, we follow the same steps:</p>
<pre class="brush: bash; auto-links: false;"># Add a blob
echo '{"id": 1, "name": "kenneth truyers"}' | git hash-object -w --stdin
42d0d209ecf70a96666f5a4c8ed97f3fd2b75dda

# Create and populate a staging area
git update-index --add --cacheinfo 100644 42d0d209ecf70a96666f5a4c8ed97f3fd2b75dda 1.json

# Write the tree
git write-tree
2c59068b29c38db26eda42def74b7142de392212
</pre>
<p>That leaves us with the following situation:</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image-15.png"><img class="alignnone" style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="image" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image_thumb-15.png" alt="Git Object Database: tree and blob" width="419" height="215" border="0" /></a></figure>
<p>We now have two trees that represent the different states of our files. That doesn’t help much, since we still need to remember the sha-1 values of the trees to get to our content.</p>
<h3>Git Commits</h3>
<p>One level up, we get to commits. A commit holds 5 pieces of key information:</p>
<ol>
<li>Author of the commit</li>
<li>Date it was created</li>
<li>Why it was created (message)</li>
<li>A single tree object it points to</li>
<li>One or more previous commits (for now we’ll only consider commits with only a single parent, commits with multiple parents are <em>merge commits</em>).</li>
</ol>
<p>Let’s commit the above trees:</p>
<pre class="brush: bash; auto-links: false;"># Commit the first tree (without a parent)
echo "Commit 1st version" | git commit-tree d6916d3
05c1cec5685bbb84e806886dba0de5e2f120ab2a

# Commit the second tree with the first commit as a parent
echo "Commit 2nd version" | git commit-tree 2c59068 -p 05c1cec5
9918e46dfc4241f0782265285970a7c16bf499e4
</pre>
<p>This leaves us with the following state:</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image-16.png"><img class="alignnone" style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="image" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image_thumb-16.png" alt="Git Object Database: commit, tree and blob" width="704" height="264" border="0" /></a></figure>
<p>Now we have built up a complete history of our file. You could open the repository with any git client and you’ll see how <span style="font-family: 'Courier New';">1.json</span> is being tracked correctly. To demonstrate that, this is the output of running <span style="font-family: 'Courier New';">git log</span>:</p>
<pre class="brush: bash; auto-links: false;">git log --stat 9918e46
9918e46dfc4241f0782265285970a7c16bf499e4 "Commit 2nd version"
 1.json     | 1 +
 1 file changed, 1 insertion(+)
05c1cec5685bbb84e806886dba0de5e2f120ab2a "Commit 1st version"
 1.json | 1 +
 1 file changed, 1 insertion(+)</pre>
<p>And to get the content of the file at the last commit:</p>
<pre class="brush: bash; auto-links: false;">git show 9918e46:1.json
{"id": 1, "name": "kenneth truyers"}
</pre>
<p>We’re still not there though, because we have to remember the hash of the last commit. Up until now, all the objects we have created are part of git’s <em>object database</em>. One characteristic of that database is that it stores only <strong>immutable</strong> objects. Once you write a blob, a tree or a commit, you can never modify it without changing its key. Nor can you delete objects (at least not directly; the git gc command <strong>does</strong> delete objects that are <em>dangling</em>).</p>
<h3>Git References</h3>
<p>Yet another level up are Git references. References are not part of the object database; they are part of the reference database and are <strong>mutable</strong>. There are different types of references, such as branches, tags and remotes. They are similar in nature, with a few minor differences. For the moment, let’s just consider branches. A branch is a pointer to a commit. To create a branch, we can write the hash of a commit to the file system:</p>
<pre class="brush: bash; auto-links: false;">echo 05c1cec5685bbb84e806886dba0de5e2f120ab2a &gt; .git/refs/heads/master
</pre>
<p>We now have a branch <span style="font-family: 'Courier New';">master</span>, pointing at our first commit. To move the branch, we issue the following command:</p>
<pre class="brush: bash; auto-links: false;">git update-ref refs/heads/master 9918e46
</pre>
<p>This leaves us with the following graph:</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image-17.png"><img class="alignnone" style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="image" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image_thumb-17.png" alt="Git NoSQL Database: branch, commit, tree and blob" width="781" height="238" border="0" /></a></figure>
<p>And finally, we’re now able to read the current state of our file:</p>
<pre class="brush: bash; auto-links: false;">git show master:1.json
{"id": 1, "name": "kenneth truyers"}
</pre>
<p>The above command will keep working, even if we add newer versions of our file and subsequent trees and commits as long as we move the branch pointer to the latest commit.</p>
<p>All of the above seems rather complex for a simple key-value store. We can however abstract these things so that client applications only have to specify the branch and a key. I’ll come back to that in a different post though. For now, I want to discuss the potential advantages and drawbacks of using git as a NoSQL database.</p>
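<p>As a rough sketch of what such an abstraction could look like (hypothetical helper names, no locking or error handling, one key per commit):</p>

```shell
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name you

# db_set <branch> <key> <value>: write a value and advance the branch
db_set() {
  blob=$(printf '%s' "$3" | git hash-object -w --stdin)
  git update-index --add --cacheinfo 100644 "$blob" "$2"
  tree=$(git write-tree)
  if parent=$(git rev-parse -q --verify "refs/heads/$1"); then
    commit=$(echo "set $2" | git commit-tree "$tree" -p "$parent")
  else
    commit=$(echo "set $2" | git commit-tree "$tree")   # first commit on the branch
  fi
  git update-ref "refs/heads/$1" "$commit"
}

# db_get <branch> <key>: read the value at the tip of the branch
db_get() {
  git show "$1:$2"
}

db_set master users/1.json '{"id": 1, "name": "kenneth"}'
db_get master users/1.json
```

<p>Every <span style="font-family: 'Courier New';">db_set</span> runs the full blob / tree / commit / ref dance from the sections above, but callers only ever see a branch and a key.</p>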
<h2>Data efficiency</h2>
<p>Git is very efficient when it comes to storing data. As mentioned before, blobs with the same content are stored only once because of how the hash is calculated. You can try this out by adding a whole bunch of files with the same content into an empty git repository and then checking the size of the <span style="font-family: 'Courier New';">.git</span> folder versus the size on disk. You’ll notice that the <span style="font-family: 'Courier New';">.git</span> folder is quite a bit smaller.</p>
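<p>That deduplication is easy to observe directly: writing the same content twice yields the same key and only one stored object (a quick sketch):</p>

```shell
cd "$(mktemp -d)" && git init -q

h1=$(echo 'same content' | git hash-object -w --stdin)
h2=$(echo 'same content' | git hash-object -w --stdin)
echo "$h1"
echo "$h2"          # identical keys: the content is stored once

git count-objects   # reports a single loose object
```
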
<p>But it doesn’t stop there, git does the same for trees. If you change a file in a sub tree, git will only create a new sub tree and just reference the other trees that weren’t affected. The following example shows a commit pointing at a hierarchy with two sub folders:</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image-18.png"><img class="alignnone" style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="image" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image_thumb-18.png" alt="Git NoSQL Database: nested trees" width="790" height="300" border="0" /></a></figure>
<p>Now if I want to replace the blob <span style="font-family: 'Courier New';">4658ea84</span>, git will only replace the items that changed and keep references to those that haven’t. After replacing the blob with a different file and committing the changes, the graph looks as follows (new objects are marked in red):</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image-19.png"><img class="alignnone" style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="image" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image_thumb-19.png" alt="Git NoSQL Database: storage efficiency" width="792" height="495" border="0" /></a></figure>
<p>As you can see, git only replaced the necessary items and referenced the already existing items.</p>
<p>Although git is very efficient in how it references existing data, if every small modification resulted in a complete copy, we would still end up with a huge repository after a while. To mitigate this, there’s an automatic garbage collection process. When <em>git gc</em> runs, it looks at your blobs. Where it can, it removes loose blobs and instead stores a single copy of the base data together with a delta for each version of the blob. This way, git can still retrieve each unique version of the blob, but doesn’t need to store the full data multiple times.</p>
<h2>Versioning</h2>
<p>You get a fully versioned system for free. With that versioning also comes the advantage of not deleting data, ever. I’ve seen examples like this in SQL databases:</p>
<pre class="brush: bash; auto-links: false;">id    | name    | deleted
1     | kenneth | 1
</pre>
<p>That’s OK for a simple record like this, but that’s usually not the whole story. Data might have dependencies on other data (whether they’re foreign keys or not is an implementation detail) and when you want to restore it, chances are you can’t do it in isolation. With git, it’s simply a matter of pointing your branch to a different commit to get back to the correct state on a database level, not a record level.</p>
<p>Another practice I have seen is this:</p>
<pre class="brush: bash; auto-links: false;">id | street  | lastUpdate
1  | town rd | 20161012
</pre>
<p>This practice is even less useful: you know it was updated, but there’s no information on what was actually updated or what the previous value was. Whenever you update data, you’re actually deleting data and inserting new data. The old data is lost forever. With git, you can run <em>git log</em> on any file and see what changed, who changed it, when and why.</p>
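<p>For example, with two updates to a record (a sketch with made-up data):</p>

```shell
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name you

echo '{"id": 1, "street": "town rd"}' > 1.json
git add 1.json && git commit -qm 'create customer 1'

echo '{"id": 1, "street": "main st"}' > 1.json
git add 1.json && git commit -qm 'customer 1 moved'

git log --oneline -- 1.json   # who changed it, when and why
git log -p -1 -- 1.json       # the exact change, old and new value
```

<p>No <span style="font-family: 'Courier New';">lastUpdate</span> column needed: the history carries both the old value and the reason for the change.</p>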
<h2>Git tooling</h2>
<p>Git has a rich toolset which you can use to explore and manipulate your data. Most tools focus on code, but that doesn’t mean you can’t use them with other data. The following is a non-exhaustive list of tools off the top of my head.</p>
<p>Within the basic git commands, you can:</p>
<ul>
<li>Use <em>git diff</em> to find the exact changes between two commits / branches / tags / …</li>
<li>Use <em>git bisect</em> to find out when something stopped working because of a change in the data</li>
<li>Use git <em>hooks</em> to get automatic change notifications and build full-text indices, update caches, publish data, …</li>
<li>Revert, branch, merge, …</li>
</ul>
<p>And then there are external tools:</p>
<ul>
<li>You can use Git clients to visualize your data and explore it</li>
<li>You can use pull requests, such as the ones on GitHub, to inspect data changes before they are merged</li>
<li>Gitinspector: statistical analysis on git repositories</li>
</ul>
<p>Any tool that works with git, works with your database.</p>
<h2>NoSQL</h2>
<p>Because it’s a key-value store, you get the usual advantages of a NoSQL store such as a schema-less database. You can store any content you want, it doesn’t even have to be JSON.</p>
<h2>Connectivity</h2>
<p>Git can work in a partitioned network. You can put everything on a USB stick, save data when you’re not connected to a network and then push and merge it when you get back online. It’s the same advantage we regularly use when developing code, but it could be a life saver for certain use cases.</p>
<h2>Transactions</h2>
<p>In the above examples, we committed every change to a file. You don’t necessarily have to do that, you can also commit various changes as a single commit. That would make it easy to roll back the changes atomically later.</p>
<p>Long lived transactions are also possible: you can create a branch, commit several changes to it and then merge it (or discard it).</p>
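<p>Such a long-lived transaction could look like this (a sketch; the branch and file names are made up):</p>

```shell
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name you
main=$(git symbolic-ref --short HEAD)   # the default branch name varies

echo '{"balance": 100}' > account.json
git add account.json && git commit -qm 'initial state'

# Open the transaction: commit several changes on their own branch
git checkout -qb txn-1
echo '{"balance": 80}' > account.json && git commit -qam 'debit 20'
echo '{"balance": 95}' > account.json && git commit -qam 'credit 15'

# Apply it atomically (git branch -D txn-1 would discard it instead)
git checkout -q "$main"
git merge -q --no-ff -m 'apply txn-1' txn-1
cat account.json
```

<p>Until the merge, readers of the main branch never see the intermediate states; discarding the branch rolls the whole transaction back.</p>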
<h2>Backups and replication</h2>
<p>With traditional databases, there’s usually a bit of hassle to create a schedule for full backups and incremental backups. Since git already stores the entire history, there will never be a need to do full backups. Furthermore, a backup is simply executing <em>git push</em>. And those pushes can go anywhere, GitHub, BitBucket or a self-hosted git-server.</p>
<p>Replication is equally simple. By using git hooks, you can set up a trigger to run git push after every commit. Example:</p>
<pre class="brush: bash; auto-links: false;">git remote add replica git@replica.server.com:app.git
cat .git/hooks/post-commit
#!/bin/sh
git push replica</pre>
<p>&nbsp;</p>
<p>This is fantastic! We should all use Git as a database from now on!</p>
<p>Hold on! There are a few disadvantages as well:</p>
<h2>Querying</h2>
<p>You can query by key … and that’s about it. The only piece of good news here is that you can structure your data in folders in such a way that you can easily get content by prefix. Any other query is off limits, unless you want to do a full recursive search. The only option here is to build indices specifically for querying. You can do this on a scheduled basis if staleness is of no concern, or you can use <span style="font-family: 'Courier New';">git hooks</span> to update indices as soon as a commit happens.</p>
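<p>For instance, with keys grouped into folder-like prefixes (a sketch):</p>

```shell
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name you

mkdir users orders
echo '{"id": 1}' > users/1.json
echo '{"id": 2}' > users/2.json
echo '{"total": 9}' > orders/1.json
git add . && git commit -qm 'seed data'

# "Query" by prefix: list every key under users/
git ls-tree --name-only HEAD users/
```

<p>That one tree lookup is the whole query language; anything richer needs an external index.</p>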
<h2>Concurrency</h2>
<p>As long as we’re writing blobs there’s no issue with concurrency. The problem occurs when we start writing commits and updating branches. The following graph illustrates the problem when two processes concurrently try to create a commit:</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image55.png"><img class="alignnone" style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="image" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image55_thumb.png" alt="Git NoSQL Database: concurrency" width="784" height="252" border="0" /></a></figure>
<p>In the above case you can see that when the second process modifies its copy of the tree, it’s actually working on an outdated tree. When it commits that tree, it loses the changes the first process made.</p>
<p>The same story applies to moving branch heads. Between the time you commit and update the branch head, another commit might get in. You could potentially update the branch head to the wrong commit.</p>
<p>The only way to counter this is by locking any writes between reading a copy of the current tree and updating the head of the branch.</p>
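<p>One portable way to serialize writers is to hold a repository-wide lock from reading the tree until the ref is updated. A sketch using <span style="font-family: 'Courier New';">mkdir</span>, which is atomic, as the lock primitive (flock(1) would work just as well; the function name is my own):</p>

```shell
cd "$(mktemp -d)" && git init -q

# Run a command while holding an exclusive write lock on the repository
with_write_lock() {
  lock=.git/db-write.lock
  until mkdir "$lock" 2>/dev/null; do sleep 1; done   # atomic acquire
  "$@"
  rc=$?
  rmdir "$lock"                                       # release
  return $rc
}

with_write_lock echo 'read tree, commit and update-ref would go here'
```

<p>The whole read-modify-update sequence runs inside the lock, so no writer can sneak a commit in between.</p>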
<h2>Speed</h2>
<p>We all know git to be fast. But that’s in the context of creating branches. When it comes to commits per second it’s actually not that fast, because you’re writing to disk all the time. We don’t notice it, because we usually don’t do many commits per second when writing code (at least I don’t). After running some tests on my local machines, I ran into a limit of about 110 commits/second.</p>
<blockquote><p>Brandon Keepers showed some results in a <a href="https://vimeo.com/44458223#t=21m32s">video</a> a few years ago; he got to about 90 commits/second, which seems in line with what hardware advances since then would suggest.</p></blockquote>
<p>110 commits/second is enough for a lot of applications, but not for all of them. It’s also a theoretical maximum on my local development machines, with lots of resources. There are various factors that can affect the speed:</p>
<h3>Tree sizes</h3>
<p>In general, prefer lots of subdirectories over putting all documents in the same directory; this keeps the write speed as close to the maximum as possible. The reason is that every time you create a new commit, you have to copy the tree, modify the copy and save it. You might think that hurts storage size as well, but it doesn’t: running <em>git gc</em> will store the modified tree as a delta instead of as a second full tree. Let’s look at an example:</p>
<p>In the first case, we have 10,000 blobs stored in the root directory. When we add a file, we copy the tree that contains 10,000 entries, add one and save it. Because of the size of the tree, this can be a lengthy operation.</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image-20.png"><img class="alignnone" style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="image" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image_thumb-20.png" alt="Git NoSQL Database: large trees" width="777" height="220" border="0" /></a></figure>
<p>In the second case we have 4 levels of trees, each with 10 subtrees, and 10 blobs at the last level (10 * 10 * 10 * 10 = 10,000 files):</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image-21.png"><img class="alignnone" style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="image" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image_thumb-21.png" alt="Git NoSQL Database: nested trees" width="770" height="795" border="0" /></a></figure>
<p>In this case, if we want to add a blob, we don’t need to copy the entire hierarchy; we only need to copy the trees on the path that leads to the blob. The following image shows the trees that had to be copied and amended:</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image-22.png"><img class="alignnone" style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="image" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/image_thumb-22.png" alt="Git NoSQL Database: nested tree modifications" width="762" height="787" border="0" /></a></figure>
<p>So, by using subfolders, instead of copying one tree with 10,000 entries, we now copy five trees with 10 entries each, which is quite a bit faster. The more your data grows, the more you’ll want to use subfolders.</p>
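<p>As a sketch of what such a layout can look like, the helper below derives a nested path from a document key, the same trick git uses for its own object store (the first two characters of a hash become a folder under <em>.git/objects</em>). The collection name, key format and fan-out factors here are arbitrary choices for illustration, not a requirement:</p>
<pre class="brush: csharp; auto-links: false;">using System;

public static class KeySharding
{
    // Two levels of fan-out keep every tree small:
    // "users/ab/cd/abcdef123" instead of 10,000 entries in one folder.
    public static string PathFor(string collection, string key)
    {
        return string.Format("{0}/{1}/{2}/{3}",
            collection, key.Substring(0, 2), key.Substring(2, 2), key);
    }
}
</pre>
<p>For example, <em>KeySharding.PathFor("users", "abcdef123")</em> yields <em>users/ab/cd/abcdef123</em>, so each tree along the path holds at most a few dozen entries.</p>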
<h3>Combining values into transactions</h3>
<p>If you need to do more than 100 commits/second, chances are you don’t need to roll changes back on an individual basis. In that case, instead of committing every change separately, you can group several changes into one commit. Blobs can be written concurrently, so you could write thousands of files to disk in parallel and then create a single commit that records them all in the repository. This loses per-change history and rollback granularity, but if you want raw speed, this is the way to go.</p>
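<p>A minimal sketch of that batching idea: writes are queued concurrently and a periodic flush turns everything queued into one commit. The <em>commitBatch</em> delegate is a hypothetical stand-in for whatever actually writes the blobs, the tree and the commit:</p>
<pre class="brush: csharp; auto-links: false;">using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public class BatchingWriter
{
    private readonly ConcurrentQueue&lt;KeyValuePair&lt;string, byte[]&gt;&gt; _pending =
        new ConcurrentQueue&lt;KeyValuePair&lt;string, byte[]&gt;&gt;();
    private readonly Action&lt;IList&lt;KeyValuePair&lt;string, byte[]&gt;&gt;&gt; _commitBatch;

    public BatchingWriter(Action&lt;IList&lt;KeyValuePair&lt;string, byte[]&gt;&gt;&gt; commitBatch)
    {
        _commitBatch = commitBatch;
    }

    // Safe to call from many threads at once: blobs are independent.
    public void Write(string path, byte[] content)
    {
        _pending.Enqueue(new KeyValuePair&lt;string, byte[]&gt;(path, content));
    }

    // Call periodically (e.g. from a timer): one commit for the whole batch.
    public void Flush()
    {
        var batch = new List&lt;KeyValuePair&lt;string, byte[]&gt;&gt;();
        KeyValuePair&lt;string, byte[]&gt; item;
        while (_pending.TryDequeue(out item)) batch.Add(item);
        if (batch.Count &gt; 0) _commitBatch(batch);
    }
}
</pre>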
<p>Another way to speed things up is to give git a different backend: one that doesn’t immediately flush its contents to disk, but writes to an in-memory database first and flushes to disk asynchronously. Implementing this is not that easy, though. When I tested this using <em>libgit2sharp</em> to connect to a repository, I tried a Voron backend (which is available as open source, as well as a variant that uses ElasticSearch). That improved speed quite a bit, but you lose the ability to inspect your data with standard git tools.</p>
<h2>Merging</h2>
<p>Another potential pain point is merging data from different branches. As long as there are no merge conflicts, it’s actually a rather pleasant experience, as it enables a lot of nice scenarios:</p>
<ul>
<li>Modify data that needs approval before it can go “live”</li>
<li>Run tests on live data that you need to revert</li>
<li>Work in isolation before merging data</li>
</ul>
<p>Essentially, you get all the fun of branches that you get in development, but on a different level. The problem is when there IS a merge conflict. Merging data can be rather difficult, because you won’t always be able to work out automatically how these conflicts should be resolved.</p>
<p>One potential strategy is to store the merge conflict as-is at write time and, at read time, present the user with the diff so they can choose which version is correct. Nonetheless, managing this correctly can be a difficult task.</p>
<h2>Conclusion</h2>
<p>Git can work as a NoSQL database very well in some circumstances. It has its place and time, but I think it’s particularly useful in the following cases:</p>
<ul>
<li>You have hierarchic data (because of its inherent hierarchical nature)</li>
<li>You need to be able to work in disconnected environments</li>
<li>You need an approval mechanism for your data (i.e. you need branching and merging)</li>
</ul>
<p>In other cases, it’s not a good fit:</p>
<ul>
<li>You need extremely fast write performance</li>
<li>You need complex querying (although you can solve that by indexing through commit hooks)</li>
<li>You have an enormous set of data (write speed would slow down even further)</li>
</ul>
<p>So, there you go, that’s how you can use git as a NoSQL database. Let me know your thoughts!</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/10/13/git-nosql-database/">Git as a NoSql database</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2016/10/13/git-nosql-database/</feedburner:origLink></item>
		<item>
		<title>Open source software on company time</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/3du_0eDpdAU/</link>
		<pubDate>Wed, 05 Oct 2016 00:52:12 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1512</guid>
		<description><![CDATA[<p>Most developers love open source software, and often we come across a piece of software that we’re writing and think “it would be great if that already existed as an open source package”, but then, it doesn’t. Since we’re writing software for a company, the natural tendency is then to implement it in-house. The thing [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/10/05/open-source-software-company-time/">Open source software on company time</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/osi_keyhole_300X300_90ppi_0.png"><img class="alignnone" style="background-image: none; float: left; padding-top: 0px; padding-left: 0px; margin: 0px 15px 0px 0px; display: inline; padding-right: 0px; border: 0px;" title="osi_keyhole_300X300_90ppi_0" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/10/osi_keyhole_300X300_90ppi_0_thumb.png" alt="open source software" width="136" height="136" align="left" border="0" /></a></figure>
<p>Most developers love open source software, and often we come across a piece of software that we’re writing and think “it would be great if that already existed as an open source package”, but then, it doesn’t. Since we’re writing software for a company, the natural tendency is then to implement it in-house. The thing is, if you had that thought, chances are someone else has had the same thought. So, why shouldn’t we open source it? From a business perspective, telling your boss you want to write open source software on company time might not make much sense to him. After all, he’s paying you to write software for the company, not for everyone. If your argument is going to be “open source is cool”, you probably won’t get very far.</p>
<p>Apart from being <em>cool</em>, developing open source software as a company actually has a lot of benefits.</p>
<blockquote><p>Before I go over what I think the advantages are, I want to make one thing clear: when we write open source software on company time, we don’t want to open source the company’s unique selling point. It’d be foolish to think any company would allow that. When I talk about open sourcing company software, I’m talking about general purpose software: software you need to develop to support your domain. This could be an interface to a database, some utility functions you use often, or any other piece that could be useful outside of the context of your company.</p></blockquote>
<h2>Open source software advantages</h2>
<h3>Quality</h3>
<p>Open source software is, by its nature, public. That means any hack you implement and any security mistake you make will be visible to everybody. Therefore, when you’re working on something public, it’s natural to be a lot more careful about how you develop. But that’s only one way open source improves quality.</p>
<p>Another reason quality improves is that it forces you to decouple the software from your domain. With in-house software it’s often tempting to let domain logic leak into shared libraries, because it’s just easier and faster. In the long run you end up with less separation of concerns, which hurts maintainability. Writing general purpose open source software forces you to decouple it.</p>
<h3>Testing</h3>
<p>If your project is popular, a community will form around it. Once you have a community, they will start using it and possibly discover bugs before you run into them. If you have an active community they might even create a pull request to solve the bug for you. That’s free testing and bug fixing for the company.</p>
<h3>New features / scenarios</h3>
<p>Similar to the scenario with bug detection and public testing, it’s possible someone runs into a use case that the current software doesn’t handle. If they really need it, they might decide to implement it and create a pull request.</p>
<h3>Exposure</h3>
<p>If your software is of high quality, it will start becoming popular. Being open source doesn’t mean it needs to be white-labeled, so every time someone comes into contact with the software, your company’s logo will be there. This exposes your company’s name to a potentially large audience of developers.</p>
<p>There are plenty of examples of tech companies that are open sourcing software, to name a few:</p>
<ul>
<li>Uber: <a title="https://uber.github.io/" href="https://uber.github.io/">https://uber.github.io/</a></li>
<li>Spotify: <a title="https://github.com/spotify" href="https://github.com/spotify">https://github.com/spotify</a></li>
<li>Google: Android</li>
<li>AirBnb: <a title="http://nerds.airbnb.com/open-source/" href="http://nerds.airbnb.com/open-source/">http://nerds.airbnb.com/open-source/</a></li>
<li>Facebook: React</li>
</ul>
<p>If you go through the list, you’ll see they’re not open sourcing their main selling point, but parts that are orthogonal to their business model (i.e. you won’t see Google open sourcing their search algorithm, or Facebook its timeline rules). The above is also a list of companies that are <em>hot </em>among developers. A lot of developers want to work at these companies, precisely because of their openness. Exposing your company to developers could attract new talent. That’s not to say this is the ultimate hiring strategy, but it’s another channel that could yield some interesting results.</p>
<h3>Developer satisfaction</h3>
<p>Good developers are, in my opinion, passionate about their work. They feel happier when they can work on something that solves more than the daily problems they run into. A happier development team aids productivity, talent retention and the general atmosphere in the company. Working on something open source, and not “just” for the company, might also prompt them to spend some hobby time on it, which again is free work for the company.</p>
<h3>Developer involvement</h3>
<p>Sometimes developers move on from the company, and one of the first things you do is remove their access to the code repository. Even if they built something they find useful and want to keep using it, they can only do so on a private copy. Technically they’re not allowed to, but that’s not always the reality.</p>
<p>On the other hand, if the code is open source, and they move on, they can still contribute to it. Likely they will use it in the next company they’re at. This will minimize the loss of knowledge in the team and again get more development time on the software for free. This is something that I experienced first hand when moving on from a company. The software we had written was by no means popular, but I found it to be useful, so I introduced it at my new company. Now we’re happily contributing to it when we need it.</p>
<p>It’s a win-win-win. It’s good for my new company, because they get software that was already developed, it’s good for my old company, because I (and my colleagues) are adding new features and bug fixes for their software and it’s good for me, because I didn’t have to do it all over again.</p>
<h2>Conclusion</h2>
<p>A lot of the above advantages are only real if your software becomes popular. But what do you have to lose? If it doesn’t become popular it’s just the same as developing it in-house which was the starting point.</p>
<p>Another argument, although not very tangible, is that it’s just the right thing to do. We live in a world where we can only achieve what we’re achieving by standing on the shoulders of others. I’m sure 99.99% of closed source software uses open source software somewhere, so why not be a good citizen and contribute back to the community?</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/10/05/open-source-software-company-time/">Open source software on company time</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2016/10/05/open-source-software-company-time/</feedburner:origLink></item>
		<item>
		<title>Avoiding code ownership</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/CfA_ZEmy5Wg/</link>
		<pubDate>Tue, 27 Sep 2016 00:36:07 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1487</guid>
		<description><![CDATA[<p>Creating development silos is a practice I have seen in many different teams. What I’m talking about is having developers specialize in parts of the domain, i.e. one developer handles all the code related to invoicing, another one does everything around order management, etc. It’s a natural tendency to select the same programmer for the [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/09/27/avoiding-code-ownership/">Avoiding code ownership</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Creating development silos is a practice I have seen in many different teams. What I’m talking about is having developers specialize in parts of the domain, i.e. one developer handles all the code related to invoicing, another one does everything around order management, etc. It’s a natural tendency to select the same programmer for the same part of the application all the time. The reason is that it creates immediate and tangible results: the developer with the most knowledge of the code at hand is the one who will complete the task as quickly and as well as possible.</p>
<p>However, in the long term, I believe there’s more benefit in doing the exact opposite. Although counter intuitive, I think spreading the knowledge of the domain throughout the development team has a lot of advantages.</p>
<h2>Knowledge sharing</h2>
<p>If you assign a task only to a developer that has worked on a feature before, you concentrate the knowledge of that part of the domain in that developer. While having a lot of knowledge is good for a single developer, it’s not good for the team. Having the knowledge spread out over the team unlocks a bunch of advantages:</p>
<h3>Consistency</h3>
<p>Initially, a developer with more knowledge of the code will complete a task faster; that much is a given. The opposite is also true: give the task to a developer with no knowledge of the code at all, and it will take them considerably more time and effort to complete it. In an ideal world, a team is always the same size and no one ever leaves, takes holidays or is off sick. In reality, teams change, have other responsibilities and are generally not constant.</p>
<p>By having knowledge concentrated on a developer/feature basis, you will see a lot of highs and lows in the productivity of the team. The highs occur on moments that the team is in a stable period with no one leaving the company, no sick days and/or holidays. The lows happen when someone leaves. All of a sudden a lot of knowledge has left the company and someone needs to pick that up. The same happens when someone goes on holiday. In a way, a holiday is even worse, because the rest of the team will just sit and wait to start “that feature that touches invoicing” until John, who knows all about invoicing, is back from holiday.</p>
<p>If you spread the knowledge however, you will eventually get to a point where development speed is consistent. Holidays are spread over the year and the team just picks up any work that’s available. The same happens when someone leaves the company. What’s more is that a new developer can be trained by anyone, not just by that one other guy who also has the knowledge required (provided there IS actually another one around).</p>
<h3>Responsibility</h3>
<p>If you know that no one will look at the code you’re writing (at least not until you’re gone), there’s a natural tendency to get sloppy. Conversely, knowing your colleague will see this code tomorrow tends to keep you on your toes. This is not about sloppy or bad developers: to some extent, every developer is prone to this, regardless of how good or professional they are. It’s simple human nature.</p>
<h3>Communication</h3>
<p>Having shared knowledge moves code ownership from the individual to the team. With everyone on the same wavelength, communication within the team improves, and mistakes get picked up by the team, not by individuals. Individuals have good days and bad days; a team usually zeroes that out.</p>
<h3>Code Reviews</h3>
<p>If you’re doing code reviews (and you should: <a href="https://www.kenneth-truyers.net/2016/04/08/code-reviews-why-and-how/">https://www.kenneth-truyers.net/2016/04/08/code-reviews-why-and-how/</a>), rotating the team will also improve their quality. If you review code of which you have never seen the context, it’s a lot harder to assess the quality. Either it takes a lot of time to review, because you have to read and analyze all the surrounding bits, or, more likely, developers tend to think, “looks decent enough, I suppose that’s handled somewhere else”.</p>
<p>If you know the context, it’s much easier to spot bugs, suggest alternate patterns and provide valuable feedback. Without knowing context, code reviews are often reduced to formatting checks, something that’s better left to automation.</p>
<h3>Code Quality</h3>
<p>By having a broad knowledge of the entire domain, chosen solutions tend to fit better into the whole. It’s easy to provide a narrow solution for the problem at hand, but it’s very difficult to provide a generic solution that will be scalable in light of where the business is heading. By sharing the knowledge, you prevent tunnel vision.</p>
<h2>Developer Satisfaction</h2>
<p>Development is a creative activity. Nothing kills creativity more than repetition and boredom. By working on the same part over and over again, developers get bored, leave for other, more exciting places or simply go into standby mode, doing what they need to do and nothing more. By having people work on different parts, they will feel more as part of an organization, an idea and can see the goals. That creates a motivating environment and will make them want to do the best possible job.</p>
<h2>Planning</h2>
<p>Related to the point on consistency, planning workload also becomes a lot easier. You no longer have to check each developer’s schedule and you don’t have to cut out requested holidays (which also helps for developer satisfaction). Because there are multiple developers who can do a job, you can just assign the task to any developer that’s available.</p>
<h2>Terms and conditions</h2>
<p>Rotating the team around features has a lot of advantages, as explained above. Obviously, don’t take this advice to the extreme. Don’t let your DBA design your front page, don’t let your UX specialist optimize DB queries and, for the love of god, don’t let your PR spokesman implement your login page.<br />
Developers all have their specialties, that’s OK, but they should be technical specialties. What you want to avoid is that developers become specialists in a thin slice of the domain.</p>
<p>Another thing to keep in mind is the team size. I’ve found the above guidelines to be useful in small teams. Once your team grows beyond 7-8 developers, I prefer to take the other extreme: separate the teams. The boundaries between the knowledge should be more clearly defined and any team should consider code by a different team as if it were third-party code. This allows teams to be very focused. It also means that external interfaces should be very clear, well documented and above all, stable.</p>
<h2>Conclusion</h2>
<p>Sometimes doing the non-intuitive thing is more beneficial than doing what seems natural. Rotating a team around features increases consistency, code quality, responsibility and developer satisfaction. While going for short-term wins might be tempting, the long-term benefits are clear.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/09/27/avoiding-code-ownership/">Avoiding code ownership</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2016/09/27/avoiding-code-ownership/</feedburner:origLink></item>
		<item>
		<title>Database migrations made simple</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/IXfT0nCHh9c/</link>
		<comments>https://www.kenneth-truyers.net/2016/06/02/database-migrations-made-simple/#comments</comments>
		<pubDate>Thu, 02 Jun 2016 10:46:15 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1468</guid>
		<description><![CDATA[<p>I make no secret of the fact that I don’t like ORM’s. One part of why I don’t like them is the way they handle database migrations. To successfully create and execute database migrations, you often need to know quite a bit about the framework. I don’t like having to know things which can be [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/06/02/database-migrations-made-simple/">Database migrations made simple</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p> <img title="database migrations" border="0" style="border: 0; box-shadow: none; float: right;" alt="database migrations" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/06/db_migrations_thumb.jpg" width="240" height="171">I make no secret of the fact that <a href="https://www.kenneth-truyers.net/2014/11/15/how-to-ditch-your-orm/">I don’t like ORMs</a>. One part of why I don’t like them is the way they handle database migrations. To successfully create and execute database migrations, you often need to know quite a bit about the framework, and I don’t like having to know things which can be solved in a much simpler way. <br /> Apart from ORM migrations, there are tools out there such as Redgate’s database tools. While these are actually very good and useful tools, they’re often overkill for small to medium-sized applications. They’re also quite expensive, so maybe not the best choice for your start-up.</p>
<h2>KISS</h2>
<p>Database migrations, contrary to popular belief, are not rocket science. Essentially, you want to execute some scripts whenever you release a new version of your software and possibly execute some scripts to undo those in case you want to rollback a faulty deployment. Building such a thing is not very difficult. It won’t have all the bells and whistles that a tool such as Redgate has, but the knowledge required is far less and, more importantly, instantly understandable for any new hires on a team. If all you need is upgrade and downgrade, you can use the code from this article with your modifications and tweaks. It’s based on some simple conventions without any possibilities for configuration or customization, but again, YAGNI (you aren’t gonna need it). If you end up needing it, then you can just modify the script and get on with more interesting stuff.</p>
<p>The code is very simple in its setup and it works like this:</p>
<ul>
<li>A table MigrationScripts is created in the database which contains all scripts that have been executed and at what date
<li>On startup of the application (or whichever moment you choose), it scans a folder for .sql scripts with the naming convention YYYYMMDD-HHMM-&lt;some_name&gt;.sql
<li>The code then does a diff to see which scripts have already been executed in the database
<li>It then runs the scripts that haven’t been executed yet, in the order of the date parsed from the naming convention</li>
</ul>
<h2>Database migrations: upgrading</h2>
<p>The example that I’ll be showing here is something I used previously on an ASP.NET MVC app, which was the only client accessing the database. In case you have multiple applications accessing the same database, you probably want to extract this code into a separate application so you can deploy that application whenever a database update is needed. I’ll show the simple version though.</p>
<blockquote><p>NOTE: I’m using Dapper in this example, as we were using it in that project, but this could easily be done with any other micro-ORM or with ADO.NET directly.</p>
</blockquote>
<pre class="brush: csharp; auto-links: false;">public static void Run()
{
    string conString = ConfigurationManager.ConnectionStrings["sql_migrations"]
                                           .ConnectionString;
    using (var con = new SqlConnection(conString))
    {
        // check if the migrations table exists, otherwise execute the first script (which creates that table)
        if (con.ExecuteScalar&lt;int&gt;(@"SELECT count(1) FROM sys.tables
                                            WHERE name = 'migrationscripts'") == 0)
        {
            con.Execute(GetSql("20151204-1030-Init"));
            con.Execute(@"INSERT INTO MigrationScripts (Name, ExecutionDate) 
                                                VALUES (@Name, GETDATE())", 
                                                new { Name = "20151204-1030-Init" });
        }

        // Get all scripts that have been executed from the database
        var executedScripts = con.Query&lt;string&gt;("SELECT Name FROM MigrationScripts");

        // Get all scripts from the filesystem
        Directory.GetFiles(HostingEnvironment.MapPath("/App_Data/Scripts/"))
                 // strip out the extensions
                 .Select(Path.GetFileNameWithoutExtension)
                 // filter the ones that have already been executed
                 .Where(fileName =&gt; !executedScripts.Contains(fileName))
                 // Order by the date in the filename
                 .OrderBy(fileName =&gt; 
                    DateTime.ParseExact(fileName.Substring(0, 13), "yyyyMMdd-HHmm", null))
                 .ToList()
                 .ForEach(script =&gt;
                 {
                     // Execute each of the scripts
                     con.Execute(GetSql(script));
                     // record that it was executed in the MigrationScripts table
                     con.Execute(@"INSERT INTO MigrationScripts (Name, ExecutionDate) 
                                                         VALUES (@Name, GETDATE())", 
                                                         new { Name = script });
                 });
    }
}

static string GetSql(string fileName) =&gt;
    File.ReadAllText(HostingEnvironment.MapPath($"/App_Data/Scripts/{fileName}.sql"));
</pre>
<p>That’s it, about 20 lines of code (comments and line breaks don’t count ;-)) for a fully working database migration infrastructure. </p>
<h2>Database migrations: downgrading</h2>
<p>In cases where you want to be able to roll back the database, you could add another convention: all rollback scripts have the same name but with _rollback appended to the filename. Then you can add a separate function which takes as an argument the name of the script you want to roll back. From there on, it’s a case of loading the correct rollback scripts, sorting them, executing them and removing the records from the MigrationScripts table. All in all, another 20 lines.</p>
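<p>As a rough sketch of those steps (untested, following the same conventions and helpers as the upgrade code above, so adjust it to your own setup):</p>
<pre class="brush: csharp; auto-links: false;">public static void Rollback(string fromScript)
{
    string conString = ConfigurationManager.ConnectionStrings["sql_migrations"]
                                           .ConnectionString;
    using (var con = new SqlConnection(conString))
    {
        // All executed scripts from the given one onwards, newest first
        // (ordinal comparison works because the names start with the date)
        var toRollback = con.Query&lt;string&gt;("SELECT Name FROM MigrationScripts")
            .Where(name =&gt; string.Compare(name, fromScript, StringComparison.Ordinal) &gt;= 0)
            .OrderByDescending(name =&gt;
                DateTime.ParseExact(name.Substring(0, 13), "yyyyMMdd-HHmm", null));

        foreach (var script in toRollback)
        {
            // By convention, the rollback script has the same name with _rollback appended
            con.Execute(GetSql(script + "_rollback"));
            // remove the record, so the script can run again on the next upgrade
            con.Execute("DELETE FROM MigrationScripts WHERE Name = @Name",
                        new { Name = script });
        }
    }
}
</pre>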
<h2>Conclusion</h2>
<p>The above code allows you to:</p>
<ul>
<li>Store all your database changes in Git
<li>Do code reviews on SQL scripts (by anyone, including DB Admins)
<li>Pull the repo with a fresh database, run the application and get started (handy for new hires)
<li>Test the migrations locally and in every test environment
<li>Do data migrations (or any SQL you want to write for your migrations)
<li>Be flexible: you own the code, so anything is possible (convention change, extract it, deploy it separately, etc. )</li>
</ul>
<p>It does have a little cost of ownership, as you may need to modify it sometimes. I’d argue however that the cost is smaller than having to know about your ORM’s migration intricacies or learn how to use a database management tool. </p>
<p><img title="works-on-my-machine" style="float: left" border="0" alt="works-on-my-machine" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/06/works-on-my-machine_thumb.png" width="100">Also, this code is fully certified “Works on my machine”-ware. You can use it, tweak it, ask me a question about it, but don’t ask me to create a NuGet package of it, as it would go right back to the place I wanted to avoid with this code snippet.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/06/02/database-migrations-made-simple/">Database migrations made simple</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.kenneth-truyers.net/2016/06/02/database-migrations-made-simple/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		<feedburner:origLink>https://www.kenneth-truyers.net/2016/06/02/database-migrations-made-simple/</feedburner:origLink></item>
		<item>
		<title>Writing custom EsLint rules</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/-l44d_xpPo8/</link>
		<pubDate>Fri, 27 May 2016 10:49:27 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1446</guid>
		<description><![CDATA[<p>In statically compiled languages, we usually lean on the compiler to catch out common errors (or plain stupidities). In dynamic languages we don’t have this luxury. While you could argue over whether this is a good or a bad thing, it’s certainly true that a good static analysis tool can help you quite a bit [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/05/27/writing-custom-eslint-rules/">Writing custom EsLint rules</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In statically compiled languages, we usually lean on the compiler to catch common errors (or plain stupidities). In dynamic languages we don’t have this luxury. While you could argue over whether this is a good or a bad thing, it’s certainly true that a good static analysis tool can help you quite a bit in detecting mistakes. For Javascript, a few tools are available: there’s the good old JsLint, which is very strict; JsHint, created because not all of us are Douglas Crockford; and then there’s EsLint. In this post I’ll show you how to create custom EsLint rules.</p>
<p>EsLint is quite a nice alternative to JsHint and is very flexible. While JsHint certainly has its benefits and comes out of the box with a lot of configurable options, EsLint allows you to configure your own custom rules.</p>
<p>These custom EsLint rules can be released to the community as optional extra checks, they can be company-specific to enforce a certain coding style, or they can be project-specific.</p>
<p>In this post I want to talk about creating project-specific custom EsLint rules. The approach is easily transferable to a more widely shared plugin (by publishing it to NPM). I found that creating a project-specific plugin has a few extra hurdles, so that’s what I’ll explain.</p>
<h2>Analyzing code</h2>
<h3>Abstract Syntax Trees</h3>
<p>Before we start writing custom EsLint rules, I first want to show how code is analyzed and how we can plug into that. In order to analyze code, we must first build an abstract syntax tree; in this case we want an ES6 <a href="https://github.com/estree/estree" target="_blank">abstract syntax tree (AST)</a>. An AST is essentially a data structure which describes the code. The next example shows some sample code and the corresponding syntax tree:</p>
<pre class="brush: js; auto-links: false;">var a = 1 + 1;
</pre>
<figure><img alt="custom EsLint rules - Abstract Syntax Tree" src="/wp-content/uploads/2016/05/ast_thumb.jpg" width="640" height="474"></figure>
<p>The above visualization can also be presented as a pure data structure, here in the form of JSON:</p>
<pre class="brush: js; auto-links: false;">{
    type: "VariableDeclaration",
    declarations: [{
        type: "VariableDeclarator",
        id: {
            type: "Identifier",
            name: "a"
        },
        init: {
            type: "BinaryExpression",
            left: {
                type: "Literal",
                value: 1,
                raw: "1"
            },
            operator: "+",
            right: {
                type: "Literal",
                value: 1,
                raw: "1"
            }
        }
    }],
    kind: "var"
}

</pre>
<p>When we have this syntax tree, you could walk the structure and then write something like this for each node:</p>
<pre class="brush: js; auto-links: false;">if(node.type === "VariableDeclarator" &amp;&amp; node.id.name.length &lt; 2){
    console.log("Variable names should be more than 1 character");
}</pre>
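<p>To make this concrete, here is a minimal, self-contained sketch of such a node-walker (the <font face="Courier New">walk</font> helper and <font face="Courier New">messages</font> array are illustrative only; they are not part of EsLint):</p>

```javascript
// The AST for `var a = 1 + 1;` from above
const ast = {
  type: "VariableDeclaration",
  declarations: [{
    type: "VariableDeclarator",
    id: { type: "Identifier", name: "a" },
    init: {
      type: "BinaryExpression",
      left: { type: "Literal", value: 1, raw: "1" },
      operator: "+",
      right: { type: "Literal", value: 1, raw: "1" }
    }
  }],
  kind: "var"
};

const messages = [];

// Recursively visit every node and run the check on each one
function walk(node) {
  if (node.type === "VariableDeclarator" && node.id.name.length < 2) {
    messages.push("Variable names should be more than 1 character");
  }
  for (const key of Object.keys(node)) {
    const child = node[key];
    if (Array.isArray(child)) {
      child.forEach(walk);
    } else if (child && typeof child === "object" && typeof child.type === "string") {
      walk(child);
    }
  }
}

walk(ast);
console.log(messages); // one message: "a" is only one character long
```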
<h2>Writing custom EsLint rules</h2>
<p>Since all of this AST-generation and node-walking is not specific to any rule, it can be externalized, and that’s exactly what EsLint gives us. EsLint builds the syntax tree and walks all the nodes for us. We can then define interception points for the nodes we want to intercept. Apart from that, EsLint also gives us the infrastructure to report on problems that are found. Here’s the above example rewritten as an EsLint rule:</p>
<pre class="brush: js; auto-links: false;">module.exports.rules = {
    "var-length": context =&gt; ({
        VariableDeclarator: (node) =&gt; {
            if(node.id.name.length &lt; 2){
                context.report(node, 'Variable names should be longer than 1 character');
            }
        }
    })
};
</pre>
<p>This can then be plugged in to EsLint and it will report the errors for any Javascript code you throw at it.</p>
<h2>EsLint plugins</h2>
<p>In order to write a custom EsLint rule, you need to create an EsLint plugin. An EsLint plugin must follow a set of conventions before it can be loaded by EsLint:</p>
<ul>
<li>It must be a node package (distributed through NPM, although there’s a way around it, read on …)
<li>Its name must start with eslint-plugin</li>
</ul>
<blockquote>
<p>The documentation mentions a way to write custom rules in a local directory and running them through a command-line option. This still works, but is deprecated and will soon break in newer versions of EsLint. It’s recommended to go the plugin-route as described in this post.</p>
</blockquote>
<h3>Creating the plugin</h3>
<p>With the above requirements, we can go two routes:</p>
<ul>
<li>Use <a href="http://yeoman.io/" target="_blank">Yeoman</a> and the corresponding <a href="https://www.npmjs.com/package/generator-eslint" target="_blank">EsLint generator</a>
<li>Create your own package</li>
</ul>
<p>The generator sets you up with a nice folder structure, including tests, a proper description and some documentation. However, if you just want to write some quick rules, I find it easier to just create a folder and the structure myself. Essentially, you need two files:</p>
<ul>
<li>package.json (remember, it has to be an NPM package)
<li>index.js, where your rules will live</li>
</ul>
<p>Here’s a basic version of the package.json:</p>
<pre class="brush: js; auto-links: false;">{
  "name": "eslint-plugin-my-eslint-plugin", // remember, the name has to start with eslint-plugin
  "version": "0.0.1",
  "main": "index.js",
  "devDependencies": {
    "eslint": "~2.6.0"
  },
  "engines": {
    "node": "&gt;=0.10.0"
  }
}
</pre>
<p>And this is what index.js looks like, with our custom EsLint rule:</p>
<pre class="brush: js; auto-links: true;">module.exports.rules = {
    "var-length": context =&gt; ({
        VariableDeclarator: (node) =&gt; {
            if(node.id.name.length &lt; 2){
                context.report(node, 'Variable names should be longer than 1 character');
            }
        }
        // , more interception points (see https://github.com/estree/estree)
    })
    // more rules
};

</pre>
<h3>Installing the plugin</h3>
<p>As I mentioned before, if you want to share your plugin, you can distribute it via NPM. This doesn’t always make sense though as you might have project specific rules. In those cases, you can just create the folder with your plugin locally and commit it to your code repository. For it to work, you still need to install it as a node package though. You can do that with the following NPM command:</p>
<pre class="brush: js; auto-links: false;">npm install -S ./my-eslint-plugin
</pre>
<p>This will install the package from the local folder my-eslint-plugin. That way, you can keep the rules locally to your project and still use them while running EsLint.</p>
<h3>Configuring the plugin</h3>
<p>For EsLint to recognize and use the plugin we have to notify it through the configuration. We need to do two things:</p>
<ul>
<li>Tell it to use the plugin
<li>Switch on the rules</li>
</ul>
<p>To tell it to use the plugin, we can add a plugins node to the configuration, specifying the name of the plugin (without the “eslint-plugin”-prefix):</p>
<pre class="brush: js; auto-links: false;">"plugins": [
    "my-eslint-plugin"
]
</pre>
<p>Next we need to define the rules:</p>
<pre class="brush: js; auto-links: false;">"rules": {
    "my-eslint-plugin/var-length": "warn"
}
</pre>
<p>With the plugin installed, you can now run EsLint and it will report on one letter variable names.</p>
<h2>Example</h2>
<p>While this is all nice, the above rule is probably not very useful, since there’s already a built-in rule for that (<a title="http://eslint.org/docs/rules/id-length" href="http://eslint.org/docs/rules/id-length">http://eslint.org/docs/rules/id-length</a>).</p>
<p>As for general styling rules, EsLint probably has most of them covered already, and the ones it hasn’t are probably quite obscure. Custom EsLint rules come in handy on a project-level basis.</p>
<p>As an example, I’m currently working on an Angular 1 project. The intention is to port this over to a different framework soon. Because of that, we want to make sure we’re as independent of Angular as possible. There are certain things we can do just as easily in plain JS instead of using angular’s utility methods. For others, we can use different libraries that we can port over as well when we port the application. </p>
<p>Now, we don’t want to go off and change all these occurrences at once, because that would be a lot of upfront work. Ideally, we want the following:</p>
<ul>
<li>Get notified when there’s a call to an angular-method which could be done easily in plain JS in the module we’re working on
<li>Get notified on the CI-server (with a warning) if an angular-method is used
<li>Once we get rid of the warnings for that angular-method, fail the build on the CI-server if that call is detected again</li>
</ul>
<p>So, as an example, here are a few rules we defined in our project:</p>
<pre class="brush: js; auto-links: false;">module.exports.rules = {
    "no-angular-copy": context =&gt; ({
        MemberExpression: function(node) {
            if (node.object.name === "angular" &amp;&amp; node.property.name === "copy") {
                context.report(node, "Don't use angular.copy, use cloneDeep from lodash instead.");
            }
        }
    }),
    "no-angular-isDefined": context =&gt; ({
        MemberExpression: function(node) {
            if (node.object.name === "angular") {
                if(node.property.name === "isDefined") {
                    context.report(node, "Don't use angular.isDefined. Use vanilla JS.");
                } else if (node.property.name === "isUndefined") {
                    context.report(node, "Don't use angular.isUndefined. Use vanilla JS");
                }
            }
        }
    })
};
</pre>
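<p>A quick way to sanity-check a rule like this without running EsLint at all is to invoke it with a stubbed context (the stub below is purely illustrative and not EsLint’s real API; for proper testing, EsLint’s developer guide provides a RuleTester):</p>

```javascript
// The no-angular-copy rule from above (normally exported from index.js)
const rules = {
  "no-angular-copy": context => ({
    MemberExpression: function (node) {
      if (node.object.name === "angular" && node.property.name === "copy") {
        context.report(node, "Don't use angular.copy, use cloneDeep from lodash instead.");
      }
    }
  })
};

// A fake context that just collects reported messages
const reported = [];
const stubContext = { report: (node, message) => reported.push(message) };

// A hand-built MemberExpression node, as it would appear for `angular.copy(obj)`
const node = {
  type: "MemberExpression",
  object: { type: "Identifier", name: "angular" },
  property: { type: "Identifier", name: "copy" }
};

// Get the rule's visitor and feed it the node, like EsLint's walker would
rules["no-angular-copy"](stubContext).MemberExpression(node);
console.log(reported); // the rule reports the angular.copy usage
```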
<p>We then enabled the rules with a warning in our configuration. As soon as we notice no more warnings for one of these rules, we will switch them to errors. The CI-build is configured to fail when EsLint finds an error. On top of that we have the EsLint plugin for VS Code, which looks like this in the editor:</p>
<figure><img alt="Custom EsLint rules - VS Code EsLint" src="/wp-content/uploads/2016/05/vscode_eslint_thumb.jpg" width="644" height="119"></figure>
<p>This combination ensures that we clean up angular calls while we continue development and that no new calls will be introduced.</p>
<blockquote>
<p>Sidenote: the rules here are not foolproof since someone could assign angular to a temp variable and then call the methods on the temp variable.&nbsp; Be that as it may, we want to catch the general use case with a simple rule. We could probably write a more thorough analyzer, but it would take a lot of time. Since all of this code still needs to go through <a href="https://www.kenneth-truyers.net/2016/04/08/code-reviews-why-and-how/">code reviews</a>, we don’t worry too much about it.</p>
</blockquote>
<h2>Other possibilities</h2>
<p>The above example is something that was very convenient for our use case, but the possibilities are endless. Here are a few things you could achieve with this:</p>
<ul>
<li>Ensure a user message is shown when HTTP calls are initiated, and that it’s properly removed once the call ends.
<li>Ensure jQuery isn’t used when we’re using a SPA-framework (or only in certain modules)
<li>When using jQuery, ensure you’re always calling event-handlers using the .on method instead of the shorthand .click and similar</li>
</ul>
<p>There are plenty of possibilities for custom EsLint rules and most of it depends on your project. What other ideas do you have?</p>
<h2>Existing plugins</h2>
<p>Of course, there are plenty of existing <a href="https://www.npmjs.com/search?q=eslint+plugin">EsLint plugins</a> for existing frameworks on NPM already. If you’re using one of these frameworks, it’s worth checking out the rules to see if you could benefit from enabling some of them.</p>
<p>More information on writing custom EsLint rules can be found in the <a href="http://eslint.org/docs/developer-guide/working-with-rules">official documentation</a>.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/05/27/writing-custom-eslint-rules/">Writing custom EsLint rules</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2016/05/27/writing-custom-eslint-rules/</feedburner:origLink></item>
		<item>
		<title>Iterators and Generators in Javascript</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/UDdkfv7zCyg/</link>
		<comments>https://www.kenneth-truyers.net/2016/05/20/iterators-and-generators-in-javascript/#comments</comments>
		<pubDate>Fri, 20 May 2016 11:01:29 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1439</guid>
		<description><![CDATA[<p>Last week I wrote about the yield return statement in c# and how it allows for deferred execution. In that post I explained how it powers LINQ and explained some non-obvious behaviors. In this week’s post I want to do the same thing but for Javascript. ES6 (ES2015) is becoming more and more mainstream, but [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/05/20/iterators-and-generators-in-javascript/">Iterators and Generators in Javascript</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Last week I wrote about the yield return statement in C# and how it allows for deferred execution. In that post I explained how it powers LINQ and explained some non-obvious behaviors.</p>
<p>In this week’s post I want to do the same thing but for Javascript. ES6 (ES2015) is becoming more and more mainstream, but in terms of usage I mostly see the more common arrow-functions or block-scoping (with let and const).</p>
<p>However, iterators and generators are also a part of Javascript and I want to go through how we can use them to create deferred execution in Javascript.</p>
<h2>Iterators</h2>
<p>An iterator is an object that can access one item at a time from a collection while keeping track of its current position. Javascript is a bit ‘simpler’ than C# in this respect: to be a valid iterator, an object just needs a method called <font face="Courier New">next</font> that returns the next item in the sequence.</p>
<p>The following is an example of a function that creates an iterator from an array:</p>
<pre class="brush: js; auto-links: false;">let makeIterator = function(arr){
    let currentIndex = 0;
    return {
        next(){
            return currentIndex &lt; arr.length ? 
             {
                value: arr[currentIndex++],
                done : false
             } :
             { done: true};
        }
    };
}
</pre>
<p>We could now use this function to create an iterator and iterate over it:</p>
<pre class="brush: js; auto-links: false;">let iterator = makeIterator([1,2,3,4,5]);
while(true){
    let {value, done} = iterator.next();
    if(done) break;
    console.log(value);
}

</pre>
<h2>Iterables</h2>
<p>An iterable is an object that defines its iteration behavior. The <font face="Courier New">for..of</font> loop can loop over any iterable. Built-in Javascript objects such as <font face="Courier New">Array</font> and <font face="Courier New">Map</font> are iterables and can thus be looped over by the <font face="Courier New">for..of</font> construct. But we can also create our own iterables. To do that we must define a method on the object called <font face="Courier New">@@iterator</font> or, more conveniently, use the <font face="Courier New">Symbol.iterator</font> as the method name:</p>
<pre class="brush: js; auto-links: false;">let iterableUser = {
    name: 'kenneth',
    lastName: 'truyers',
    [Symbol.iterator]: function*(){
        yield this.name;
        yield this.lastName;
    }
}

// logs 'kenneth' and 'truyers'
for(let item of iterableUser){
    console.log(item);
}
</pre>
<h2>Generators</h2>
<p>Custom iterators and iterables are useful, but are complicated to build, since you need to take care of the internal state yourself. A generator is a special function that allows you to write an algorithm that maintains its own state: generators are factories for iterators. A generator function is a function marked with an <font face="Courier New">*</font> that has at least one <font face="Courier New">yield</font>-statement in it. </p>
<p>The following generator loops endlessly and spits out numbers:</p>
<pre class="brush: js; auto-links: false;">function* generateNumbers(){
  let index = 0;
  while(true)
    yield index++;
}
</pre>
<p>A normal function would run endlessly (or until the memory is full), but similar to what I discussed in the post on yield return in C#, the <font face="Courier New">yield</font>-statement gives control back to the caller, so we can break out of the sequence earlier.</p>
<p>Here’s how we could use the above function:</p>
<pre class="brush: js; auto-links: false;">let sequence = generateNumbers(); //no execution here, just getting a generator

for(let i=0;i&lt;5;i++){
    console.log(sequence.next());
}
</pre>
<h2>Deferred Execution</h2>
<p>Since we have the same possibilities for yielding return values in Javascript as in C#, the only thing missing to recreate LINQ in Javascript is extension methods. Javascript doesn’t have extension methods, but we can do something similar.</p>
<p>What we’d like to do is to be able to write something like this:</p>
<pre class="brush: js; auto-links: false;">generateNumbers().skip(3)
                 .take(5)
                 .select(n =&gt; n * 3);
</pre>
<p>It turns out, we can do this, although we need to take a few hurdles.</p>
<p>To attach methods to existing objects (similar to what extension methods do in C#), we can use the prototype in Javascript. Each generator function, however, has a different prototype, so we can’t easily attach new methods to all generators. Therefore, what we need to do is make sure that they all share the same prototype. To do that, we can create a shared prototype and a helper function that assigns the shared prototype to the function:</p>
<pre class="brush: js; auto-links: false;">function* Chainable() {}
function createChainable(f){
  f.prototype = Chainable.prototype;
  return f;
}
</pre>
<p>Now that we have a shared prototype, we can add methods to this prototype. I’m also going to create a helper method for this:</p>
<pre class="brush: js; auto-links: false;">function createFunction(f) {
  createChainable(f);
  Chainable.prototype[f.name] = function(...args) {
    return f.call(this, ...args);
  };
  return f;
}
</pre>
<p>In the above method:</p>
<ul>
<li>It makes sure the function itself is also chainable, by calling createChainable
<li>Then it attaches the method to the shared prototype (using the name of the function). The method receives the arguments, which get passed on to the wrapped function with the correct this-context.</li>
</ul>
<p>With this in place we can now create our “extension methods” in Javascript:</p>
<pre class="brush: js; auto-links: false;">// the base generator
let test = createChainable(function*(){
      yield 1;
      yield 2;
      yield 3;
      yield 4;
      yield 5;
});

// an 'extension' method
createFunction(function* take(count){
  for(let i=0;i&lt;count;i++){
      yield this.next().value;
  }
});

// an 'extension' method
createFunction(function* select(selector){
  for(let item of this){
      yield selector(item);
  }
});

// now we can iterate over this (it will log 2, 4 and 6)
for(let item of test().take(3).select(n =&gt; n*2)){
    console.log(item);
}</pre>
<p>Note that in the above method, it doesn’t matter whether we first <font face="Courier New">take</font> and then <font face="Courier New">select</font> or the other way around. Because of the deferred execution, it will only fetch 3 values and do only 3 selects.</p>
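<p>You can see that laziness at work with plain, free-standing generator versions of <font face="Courier New">take</font> and <font face="Courier New">select</font> (written as stand-alone helpers here rather than the prototype-based chain above, just to keep the demo self-contained):</p>

```javascript
function* take(iterable, count) {
  if (count <= 0) return;
  let taken = 0;
  for (const item of iterable) {
    yield item;
    if (++taken >= count) return; // stop pulling from the source
  }
}

function* select(iterable, selector) {
  for (const item of iterable) yield selector(item);
}

// An endless source that records every value it actually produces
const produced = [];
function* numbers() {
  for (let i = 1; ; i++) {
    produced.push(i);
    yield i;
  }
}

// take first, then select
const a = [...select(take(numbers(), 3), n => n * 2)];
const producedFirstOrder = produced.slice();

// select first, then take
produced.length = 0;
const b = [...take(select(numbers(), n => n * 2), 3)];

console.log(a, b);                         // [ 2, 4, 6 ] [ 2, 4, 6 ]
console.log(producedFirstOrder, produced); // [ 1, 2, 3 ] [ 1, 2, 3 ]
```

<p>Either way around, the endless source only ever generates three values.</p>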
<h3>Caveat</h3>
<p>One problem with the above is that it doesn’t work on standard iterables such as Arrays, Sets and Maps because they don’t share the prototype. The workaround is to write a wrapper-method that wraps the iterable with a method that does use the shared prototype:</p>
<pre class="brush: js; auto-links: false;">let wrap = createChainable(function*(iterable){
    for(let item of iterable){
        yield item;
    }
});
</pre>
<p>With the wrap function, we can now wrap any array, set or map and chain our previous function to it:</p>
<pre class="brush: js; auto-links: false;">let myMap = new Map();
myMap.set("1", "test");
myMap.set("2", "test2");
myMap.set("3", "test3");

for(let item of wrap(myMap).select(([key,value]) =&gt; key + "--" + value)
                           .take(3)){
    console.log(item);
}
</pre>
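<p>The same wrapper works for a plain array. A self-contained sketch (helpers repeated from above so it runs on its own):</p>

```javascript
function* Chainable() {}
function createChainable(f) {
  f.prototype = Chainable.prototype;
  return f;
}
function createFunction(f) {
  createChainable(f);
  Chainable.prototype[f.name] = function (...args) {
    return f.call(this, ...args);
  };
  return f;
}
createFunction(function* select(selector) {
  for (const item of this) yield selector(item);
});

// wrap turns any iterable (Array, Set, Map, ...) into a chainable generator
const wrap = createChainable(function* (iterable) {
  for (const item of iterable) yield item;
});

for (const item of wrap([1, 2, 3]).select(n => n * 10)) {
  console.log(item); // 10, 20, 30
}
```

<p>Note that iterating a Map yields [key, value] pairs (as in the example above), while an Array or Set simply yields its values, so the selector you pass to <font face="Courier New">select</font> changes accordingly.</p>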
<p>One more thing I want to add is the ability to materialize a chain into an array (for C# devs: the ToList-method). This method can be added to the prototype:</p>
<pre class="brush: js; auto-links: false;">Chainable.prototype.toArray = function(){
  let arr = [];
  for(let item of this){
      arr.push(item);
  }
  return arr;
}
</pre>
<h2>Conclusion</h2>
<p>If we implement the above, it allows us to write LINQ-style Javascript:</p>
<pre class="brush: js; auto-links: false;">let myMap = new Map();
myMap.set("1", "test");
myMap.set("2", "test2");
myMap.set("3", "test3");

wrap(myMap).select(([key,value]) =&gt; key + "--" + value)
           .take(3)
           .toArray()
           .forEach(item =&gt; console.log(item));
</pre>
<p>Obviously, this only works in ES2015 and later, and it’s probably not a good idea to actually write LINQ in Javascript using this method (besides, there are already other implementations such as LinqJS), but it does demonstrate the power of Iterators and Generators in Javascript.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/05/20/iterators-and-generators-in-javascript/">Iterators and Generators in Javascript</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.kenneth-truyers.net/2016/05/20/iterators-and-generators-in-javascript/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		<feedburner:origLink>https://www.kenneth-truyers.net/2016/05/20/iterators-and-generators-in-javascript/</feedburner:origLink></item>
		<item>
		<title>Yield return in C#</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/lUWwAExTZgE/</link>
		<comments>https://www.kenneth-truyers.net/2016/05/12/yield-return-in-c/#comments</comments>
		<pubDate>Thu, 12 May 2016 12:00:26 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1415</guid>
<description><![CDATA[<p>The yield return statement is probably one of the least-known features of C#. In this post I want to explain what it does and what its applications are. Even though most developers have heard of yield return, it’s often misunderstood. Let’s start with an easy example: IEnumerable&#60;int&#62; GetNumbers() { yield return 1; yield return [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/05/12/yield-return-in-c/">Yield return in C#</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>The yield return statement is probably one of the least-known features of C#. In this post I want to explain what it does and what its applications are.</p>
<p>Even though most developers have heard of yield return, it’s often misunderstood. Let’s start with an easy example:</p>
<pre class="brush: csharp; auto-links: false;">IEnumerable&lt;int&gt; GetNumbers()
{
    yield return 1;
    yield return 2;
    yield return 3;
}
</pre>
<p>While the above has no value for anything serious, it’s a good example to debug to see how the yield return statement works. Let’s call this method:</p>
<pre class="brush: csharp; auto-links: false;">foreach(var number in GetNumbers())
    Console.WriteLine(number);
</pre>
<p>When you debug this (using F11, Step Into), you will see how the current line of execution jumps between the foreach-loop and the yield return statements. What happens here is that each iteration of the <strong>foreach</strong> loop calls the iterator method until it reaches the <strong>yield return</strong> statement. Here the value is returned to the caller and the location in the iterator function is saved. Execution is restarted from that location the next time that the iterator function is called. This continues until there are no more yield returns.</p>
<p>A first use case of the yield statement is that we don’t have to create an intermediate list to hold our values, as in the example above. There are a few more implications though.</p>
<h2>Yield return versus traditional loops</h2>
<p>Let’s have a look at a different example. We’ll start with a traditional loop which returns a list:</p>
<pre class="brush: csharp; auto-links: false;">IEnumerable&lt;int&gt; GenerateWithoutYield()
{
    var i = 0;
    var list = new List&lt;int&gt;();
    while (i&lt;5)
        list.Add(++i);
    return list;
}

foreach(var number in GenerateWithoutYield()) 
    Console.WriteLine(number);
</pre>
<p>These are the steps that are executed:</p>
<ol>
<li><font face="Courier New">GenerateWithoutYield</font> is called.
<li>The entire method gets executed and the list is constructed.
<li>The foreach-construct loops over all the values in the list.
<li>The net result is that we get numbers 1 to 5 printed in the console.</li>
</ol>
<p>Now, let’s look at an example with the <strong>yield return</strong> statement:</p>
<pre class="brush: csharp; auto-links: false;">IEnumerable&lt;int&gt; GenerateWithYield()
{
    var i = 0;
    while (i&lt;5)
        yield return ++i;
}

foreach(var number in GenerateWithYield())
    Console.WriteLine(number);
</pre>
<p>At first sight, we might think that this is a function which returns a list of 5 numbers. However, because of the yield-statement, this is actually something completely different. This method doesn’t in fact return a list at all. What it does is create a state-machine with a promise to return 5 numbers. That’s a whole different thing than a list of 5 numbers. While the result is often the same, there are certain subtleties you need to be aware of.</p>
<p>This is what happens when we execute this code:</p>
<ol>
<li><font face="Courier New">GenerateWithYield</font> is called.
<li>This returns an <font face="Courier New">IEnumerable&lt;int&gt;</font>. Remember that it’s not returning a list, but a promise to return a sequence of numbers when asked for it (more concretely, it exposes an iterator to allow us to act on that promise).
<li>Each iteration of the <strong>foreach</strong> loop calls the iterator method. When the <strong>yield return</strong> statement is reached the value is returned, and the current location in code is retained. Execution is restarted from that location the next time that the iterator function is called.
<li>The end result is that you get the numbers 1 to 5 printed in the console.</li>
</ol>
<h2>Example: infinite loops</h2>
<p>Now you might think that since both examples behave exactly the same, there’s no difference in which one we use. Let’s modify the example a bit to show where the differences lie. I’m going to make two small changes:</p>
<ul>
<li>Instead of looping in the generator until we reach 5, I’m going to loop endlessly:
<pre class="brush: csharp; auto-links: false;">IEnumerable&lt;int&gt; GenerateWithYield()
{
    var i = 0;
    while (true)
        yield return ++i;
}

IEnumerable&lt;int&gt; GenerateWithoutYield()
{
    var i = 0;
    var list = new List&lt;int&gt;();
    while (true)
        list.Add(++i);
    return list;
}
</pre>
<li>Instead of iterating directly over the result, I&#8217;m going to take 5 items from the sequence:
<pre class="brush: csharp; auto-links: false;">foreach(var number in GenerateWithoutYield().Take(5))
    Console.WriteLine(number);

foreach(var number in GenerateWithYield().Take(5))
    Console.WriteLine(number);</pre>
</li>
</ul>
<p>When we do this, the difference is clear. Following the previously described steps, in the case of the method without yield, the loop will never finish as it will keep looping forever inside the <font face="Courier New">GenerateWithoutYield</font>-method when it’s called in the first step (until it throws an OutOfMemoryException). In the case of the <font face="Courier New">GenerateWithYield</font>-method however, we get a different behavior. Because the <font face="Courier New">Take</font>-method is actually implemented with a yield return operator as well, this will succeed. The method only gets called until the <font face="Courier New">Take</font>-method is satisfied.</p>
<h2>Example: multiple iterations</h2>
<p>Another side effect of the yield return statement is that multiple invocations will result in multiple iterations. Let’s have a look at an example:</p>
<pre class="brush: csharp; auto-links: false;">IEnumerable&lt;Invoice&gt; GetInvoices()
{
    for(var i = 1;i&lt;11;i++)
        yield return new Invoice {Amount = i * 10};
}

void DoubleAmounts(IEnumerable&lt;Invoice&gt; invoices)
{
    foreach(var invoice in invoices)
        invoice.Amount = invoice.Amount * 2;
}

var invoices = GetInvoices();
DoubleAmounts(invoices);

Console.WriteLine(invoices.First().Amount);
</pre>
<p>Read through the above code sample and try to predict what will be written to the console. </p>
<p>What do you think the output is here? 20? In fact, the result is 10. Let’s see why:</p>
<ul>
<li>When the line <font face="Courier New">var invoices = GetInvoices();</font> is executed we’re not getting a list of invoices, we’re getting a state-machine that can create invoices.
<li>That state machine is then passed to the <font face="Courier New">DoubleAmounts</font>-method.
<li>Inside the <font face="Courier New">DoubleAmounts</font>-method we use the state-machine to generate the invoices and we double the amount of each of those invoices.
<li>All the invoices that were created are discarded though, as there are no references to them.
<li>When we return to the main method, we still have a reference to the state-machine. By calling the <font face="Courier New">First</font>-method we again ask it to generate invoices (only one in this case). The state-machine again creates an invoice. This is a new invoice and as a result, the amount will be 10.</li>
</ul>
<blockquote>
<p>Because this is non-obvious behavior, tools such as Resharper will warn you about multiple iterations.</p>
</blockquote>
<h2>Real life usage</h2>
<p>It’s pretty neat that we can write seemingly infinite loops and get away with it, but what can we use it for in real life? In broad terms, I’ve found two main use cases (all other use cases I’ve come across are special cases of these two).</p>
<h3>Custom iteration</h3>
<p>Let’s say we have a list of numbers. We now want to display all the numbers larger than a specific number. In a traditional implementation that might look like this:</p>
<pre class="brush: csharp; auto-links: false;">IEnumerable&lt;int&gt; GetNumbersGreaterThan3(List&lt;int&gt; numbers)
{
    var theNumbers = new List&lt;int&gt;();
    foreach(var nr in numbers)
    {
        if(nr &gt; 3)
            theNumbers.Add(nr);
    }
    return theNumbers;
}
foreach(var nr in GetNumbersGreaterThan3(new List&lt;int&gt; {1,2,3,4,5}))
    Console.WriteLine(nr);</pre>
<p>While this will work, it has a disadvantage: we had to create an intermediate list to hold the items. The flow can be visualized as follows:</p>
<figure><img title="standard_loop" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="standard_loop" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/05/standard_loop-2.jpg" width="458" height="271"></figure>
<p>You can see in the above image, how the first list is created, then iterated and filtered into a new list. This new list is then iterated again.</p>
<p>We can avoid this intermediate list by using yield return:</p>
<pre class="brush: csharp; auto-links: false;">IEnumerable&lt;int&gt; GetNumbersGreaterThan3(List&lt;int&gt; numbers)
{
    foreach(var nr in numbers)
    {
        if(nr &gt; 3)
            yield return nr;
    }
}
foreach(var nr in GetNumbersGreaterThan3(new List&lt;int&gt; {1,2,3,4,5}))
    Console.WriteLine(nr);
</pre>
<p>Now, the execution looks very different:</p>
<figure><img title="yield_loop" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="yield_loop" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/05/yield_loop-2.jpg" width="457" height="274"></figure>
<p>In this diagram it’s clear that we only iterate the list once. When we get to the items that are needed, control is ceded to the caller (the foreach-loop in this case).</p>
<h3>Stateful iteration</h3>
<p>Since the method containing the yield return statement will be paused and resumed where the yield-statement takes place, it still maintains its state. Let’s take a look at the following example:</p>
<pre class="brush: csharp; auto-links: false;">IEnumerable&lt;int&gt; Totals(List&lt;int&gt; numbers)
{
    var total = 0;
    foreach(var number in numbers)
    {
        total += number;
        yield return total;
    }
}

foreach(var total in Totals(new List&lt;int&gt; {1,2,3,4,5}))
    Console.WriteLine(total);
</pre>
<p>The above code will output the values 1, 3, 6, 10, 15. Because of the pause/resume behavior, the variable total holds its value between iterations. This can be handy for stateful calculations.</p>
<h2>Deferred execution</h2>
<p>All of the above samples have one thing in common: they only get executed as and when necessary. It’s the mechanism of pause/resume in the methods that makes this possible. By using deferred execution we can make some methods simpler, some faster and some even possible where they were impossible before (remember the infinite number generator).</p>
<p>The entire LINQ part of C# is built around deferred execution. Let’s look at a sample of how deferred execution can make things more efficient:</p>
<pre class="brush: csharp; auto-links: false;">var dollarPrices = FetchProducts().Take(10)
                                  .Select(p =&gt; p.CalculatePrice())
                                  .OrderBy(price =&gt; price)
                                  .Take(5)
                                  .Select(price =&gt; ConvertToDollars(price));
</pre>
<p>Suppose we have 1000 products. If the above chain did not use deferred execution, we would:</p>
<ul>
<li>Fetch all 1000 products into memory
<li>Take the first 10 of them
<li>Calculate the price of those 10 products
<li>Order the 10 prices
<li>Take the top 5 and convert them to dollars </li>
</ul>
<p>Because of deferred execution however, this can be reduced to:</p>
<ul>
<li>Fetch 10 products
<li>Calculate the price of 10 products
<li>Order 10 prices
<li>Convert 5 of these prices to dollars </li>
</ul>
<p>While perhaps a contrived example, it clearly shows how deferred execution can greatly increase efficiency.</p>
<blockquote>
<p>Side note: I want to make clear that deferred execution in itself does not make your code faster. Inherently, it has no effect on the speed or efficiency of your code. The value of deferred execution is that it allows you to optimize your code in a clean, readable and maintainable way. This is an important distinction.</p>
</blockquote>
<h2>Conclusion</h2>
<p>The yield-keyword is often misunderstood. Its behavior can seem a bit strange at first sight. However, it’s often the key to creating efficient code that is maintainable at the same time. Its main use cases are custom and stateful iteration, which allow you to create simple yet powerful code. The yield-keyword is what powers the deferred execution used in LINQ and allows us to use it in our own code. I hope this article helped explain the semantics of the yield-keyword and the effects and implications it has on calling code. Feel free to ask any questions in the comments!</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/05/12/yield-return-in-c/">Yield return in C#</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.kenneth-truyers.net/2016/05/12/yield-return-in-c/feed/</wfw:commentRss>
		<slash:comments>11</slash:comments>
		<feedburner:origLink>https://www.kenneth-truyers.net/2016/05/12/yield-return-in-c/</feedburner:origLink></item>
		<item>
		<title>Impressions as a rookie Microsoft MVP</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/v2GKyl76iRw/</link>
		<pubDate>Mon, 02 May 2016 21:58:24 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1408</guid>
		<description><![CDATA[<p>Last week I attended my first Open MVP day since I got the Microsoft MVP award. It was a great experience and I wanted to share what I learned and shout out to the great professionals I met there. For me, it’s an honor to be part of this community. Not only does it feel [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/05/02/impressions-as-a-rookie-microsoft-mvp/">Impressions as a rookie Microsoft MVP</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Last week I attended my first Open MVP day since I got the Microsoft MVP award. It was a great experience and I wanted to share what I learned and shout out to the great professionals I met there.</p>
<p>For me, it’s an honor to be part of this community. Not only does it feel good to be recognized for the work I’ve been doing but it’s an even greater opportunity to learn from people and build a professional network. The Open MVP day is exactly about that.</p>
<p>It was a busy week, as I just started a new contract, flew out to London and then had to rush to Rome to attend the Open MVP day. Lots of travel, lots of attention required and all in a short time span. While I was happy to be able to do all of that, I can’t say I wasn’t relieved to be back at home as well.</p>
<h2>People I’ve met</h2>
<p>Apart from the technical information I got, the most important part for me was to get to know as many people as possible and build a network of peers. It’d be great to continue some of the conversations I’ve had with people there and build mutually beneficial relationships. On the night I arrived I quickly caught up with the Spanish delegation (mostly with <a href="http://soydachi.com/" target="_blank">Dachi Gogotchuri</a>, <a href="https://about.me/sergio_parra_guerra" target="_blank">Sergio Guerra</a> and <a href="http://pildorasdotnet.blogspot.com.es/" target="_blank">Asier Villanueva</a>) and we had a good chat over a beer about what we like (and don’t like) about the technologies we work with. It was particularly interesting to see which grievances are shared and which ones are probably just my own problem :-). It was really great to see so many people with shared interests and a commitment to keep learning and exploring technology. If you’re like me (a geek), you probably recognize the feeling where you have a lot of ideas and thoughts and no one to share them with (at least not in person). </p>
<p>Apart from fellow MVP’s, we also had the chance to meet some of the technical evangelists from Microsoft. They’re experts in building and maintaining communities. It was very interesting to hear some new ideas from <a href="https://alejandrocamposmagencio.com/" target="_blank">Alejandro Campos</a> on how to build a community and spark interest in technology. I can’t wait to put these ideas into action to foster the local community and build a network of like-minded people here in Mallorca.</p>
<h2>Things I’ve learned</h2>
<p>Apart from networking, there are obviously technical sessions. While I found that most sessions were rather introductory, I do want to highlight the session by <a href="https://weblogs.asp.net/ricardoperes" target="_blank">Ricardo Peres</a> on ElasticSearch. While also an introductory session, this one was particularly interesting as I just started a project with heavy usage of ElasticSearch. I definitely learnt a lot in that session and hope to soon apply that knowledge in real life. </p>
<h2>The warning signs</h2>
<p>If there’s one thing I was a bit wary about, I’d say it’s the effect of the <a href="https://en.wikipedia.org/wiki/Echo_chamber_(media)" target="_blank">echo chamber</a>. This has nothing to do with the organizers or the sessions, but with the very nature of a vendor-specific event. Since all attendees are Microsoft MVP’s, the focus naturally lies on Microsoft technology. Even though I’m mostly Microsoft oriented, I like to venture into related technologies to compare, contrast and learn from them. This doesn’t mean I find MS tech worse (or better) than other tech stacks; it just means that, while attending an event that focuses on a particular vendor, it’s important not to get soaked up by it and to keep an open mind.</p>
<p>On the other hand, I’d also like to mention that there are a lot of MVPs who don’t specialize exclusively in Microsoft tech. As an example, <a href="http://nicolaiarocci.com/" target="_blank">Nicola Iarocci</a> is a Python specialist on the server, but works with MS tech on the client side. His perspective was particularly interesting as it shows that it isn’t necessary to be “devoted” only to MS tech to become an MVP. It shows that the movement towards openness from Microsoft is not just hollow words.</p>
<h2>Conclusion</h2>
<p>All in all, I’d say that my first Open MVP day was a great success and I can’t wait to attend my first MVP summit later this year. As I’ve been told, it’s a fantastic opportunity to meet peers from all over the world as well as get insight into “how the sausage is made” at Microsoft.</p>
<p>I want to thank the organizers and everybody I’ve met for making this a great first experience and hope to see everyone at the next gathering.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/05/02/impressions-as-a-rookie-microsoft-mvp/">Impressions as a rookie Microsoft MVP</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2016/05/02/impressions-as-a-rookie-microsoft-mvp/</feedburner:origLink></item>
		<item>
		<title>Javascript sandbox pattern</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/1avf07oQT7g/</link>
		<comments>https://www.kenneth-truyers.net/2016/04/25/javascript-sandbox-pattern/#comments</comments>
		<pubDate>Mon, 25 Apr 2016 13:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1396</guid>
		<description><![CDATA[<p>A few years ago I wrote a post about Javascript namespaces and modules. In that post I discussed a pattern for isolating your code from outside code. I also promised to write up another pattern, the javascript sandbox pattern. I never did though. Lately I received a few emails about this and decided to write [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/04/25/javascript-sandbox-pattern/">Javascript sandbox pattern</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>A few years ago I wrote a post about <a href="https://www.kenneth-truyers.net/2013/04/27/javascript-namespaces-and-modules/">Javascript namespaces and modules</a>. In that post I discussed a pattern for isolating your code from outside code. I also promised to write up another pattern, the javascript sandbox pattern. I never did though. Lately I received a few emails about this and finally decided to write it up. While 3 years have passed since then, and a lot has happened in the Javascript world, I still think this is a valuable pattern, if only for historical purposes. If you’re using ES6, there are probably better alternatives, but it’s still a good way to understand the semantics of Javascript.</p>
<p>The namespace pattern described in my other post has a few drawbacks:</p>
<ul>
<li>It relies on a single global variable to be the application’s global. That means there’s no way to use two versions of the same application or library on the same page: since they both need the same global name, they would overwrite each other.
<li>The syntax can become a bit heavy if you have deeply nested namespaces (e.g. myapplication.services.data.dataservice)</li>
</ul>
<p>In this post I want to show a different pattern: a javascript sandbox. This pattern provides an environment for modules to interact without affecting any outside code. </p>
<h2>Sandbox constructor</h2>
<p>In the namespace pattern, there was one global object. In the javascript sandbox this single global is a constructor. The idea is that you create objects using this constructor to which you pass the code that lives in the isolated sandbox:</p>
<pre class="brush: js; auto-links: false;">new Sandbox(function(box){
    // your code here
});
</pre>
<p>The object box, which is supplied to the function, will have all the external functionality you need. </p>
<h2>Adding Modules</h2>
<p>In the above snippet, we saw that the sandboxed code receives an object box. This object will provide the dependencies we need. Let’s see how this works. The Sandbox constructor is a function, and functions in Javascript are also objects, so we can add static properties to it. In the sample below we’re adding a static object modules. This object contains key-value pairs where the key is the module name and the value is a function that returns the module.</p>
<pre class="brush: js; auto-links: false;">Sandbox.modules = {
    dom: function(){
        return {
            getElement: function(){},
            getStyle: function(){}
        };
    },
    ajax: function(){
        return {
            post: function(){},
            get: function(){}
        };
    }
};</pre>
<p>With this in place, let’s now look at how we pass the modules to the sandboxed code. For that, we’ll have a look at a first version of the Sandbox constructor:</p>
<pre class="brush: js; auto-links: false;">function Sandbox(callback){    
    var modules = [];
    for(var i in Sandbox.modules){
        modules.push(i);
    }
    for(var i = 0; i &lt; modules.length; i++){
        this[modules[i]] = Sandbox.modules[modules[i]]();
    }
    callback(this);
}</pre>
<p>First we iterate over all the modules and push the names of all of them into an array. Next, we get each module from the static modules object and assign it to the current instance of the box. Lastly we pass the instance to the sandboxed code. That ensures the box has access to those modules.</p>
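<p>As a quick, hypothetical check of this first version (condensed into a single loop, with a stubbed dom module purely for illustration), the box instance ends up with every registered module as a property:</p>
<pre class="brush: js; auto-links: false;">function Sandbox(callback){
    // assign each registered module to the box instance
    for(var name in Sandbox.modules){
        this[name] = Sandbox.modules[name]();
    }
    callback(this);
}

Sandbox.modules = {
    dom: function(){
        // stubbed return value for illustration
        return { getElement: function(){ return "element"; } };
    }
};

new Sandbox(function(box){
    box.dom.getElement(); // "element"
});
</pre>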
<h2>Improvements</h2>
<p>While the above is a good proof of concept, it requires some modifications to make it more versatile and safer to use. </p>
<h3>Enforce constructor usage</h3>
<p>First of all, let’s make sure that it’s always called as a constructor and not just with a regular function-call:</p>
<pre class="brush: js; auto-links: false;">function Sandbox(callback){
    if(!(this instanceof Sandbox)){
        return new Sandbox(callback);
    }
}</pre>
<h3>Allow module specification</h3>
<p>Next, we want to be able to define which modules we are going to use so that only those modules will be initialized and passed. We do this by accepting an array of module names and then only adding those modules to the box, instead of iterating over all the modules. That makes our constructor a bit simpler:</p>
<pre class="brush: js; auto-links: false;">function Sandbox(modules, callback){    
    if(!(this instanceof Sandbox)){
        return new Sandbox(modules, callback);
    }

    for(var i = 0; i &lt; modules.length; i++){
        this[modules[i]] = Sandbox.modules[modules[i]]();
    }
    callback(this);
}
</pre>
<h3>Optional arguments</h3>
<p>We want to make the modules argument optional. If it’s not provided, we will use all the modules. We also want to add the ability to pass in the modules one by one as strings, instead of in an array. For that we need to do a bit of argument parsing and again iterate over all the modules:</p>
<pre class="brush: js; auto-links: false;">function Sandbox(){
    // transform arguments into an array
    var args = Array.prototype.slice.call(arguments); 
    // the last argument is the callback
    var callback = args.pop();
    // modules is either an array or individual parameters
    var modules = (args[0] &amp;&amp; typeof args[0] === "string") ? args : args[0];

    if(!modules){
        modules = [];
        for(var i in Sandbox.modules){
            modules.push(i);
        }
    }
}</pre>
<h3>Common instance properties</h3>
<p>Since we’re passing in the instance of the box to the sandboxed code, we can add some predefined properties to each instance so that all sandboxed code has access to these:</p>
<pre class="brush: js; auto-links: false;">function Sandbox(){
    // ...

    this.sandboxVersion = "1.0.1";
    callback(this);
}
</pre>
<h3>Arguments destructuring</h3>
<p>Currently, the client-code has to access the modules through the box-instance. It would be nicer if we could pass in the modules as separate arguments. This makes the dependencies even more explicit. To do so, instead of calling the callback directly, we can use apply to execute the callback. Also, instead of initializing the modules as properties on the sandbox, we save them in an array:</p>
<pre class="brush: js; auto-links: false;">function Sandbox(){
    // ...
    var moduleInstances = modules.map(function(m){
        return Sandbox.modules[m]();
    });

    callback.apply(this, moduleInstances);
}
</pre>
<h2>The complete Javascript sandbox</h2>
<p>When we put everything together, our constructor looks like this:</p>
<pre class="brush: js; auto-links: false;">function Sandbox(){
    // parse the arguments    
    var args = Array.prototype.slice.call(arguments),
    callback = args.pop(),
    modules = (args[0] &amp;&amp; typeof args[0] === "string") ? args : args[0];

    // ensure constructor call
    if (!(this instanceof Sandbox)){
        return new Sandbox(modules, callback);
    }

    // add properties for all sandboxes
    this.sandboxVersion = "1.0.1";

    // add all modules if no modules were passed
    if(!modules){
        modules = [];
        for(var i in Sandbox.modules){
            modules.push(i);
        }
    }

    // initialize and add all modules to the sandbox
    var moduleInstances = modules.map(function(m){ 
        return Sandbox.modules[m](); 
    }); 

    // execute the code
    callback.apply(this, moduleInstances);
}

Sandbox.modules = {
    dom: function(){
        return {
            getElement: function(){},
            getStyle: function(){}
        };
    },
    ajax: function(){
        return {
            get: function(){},
            post: function(){}
        };
    }
};
</pre>
<p>With the sandbox in place, here are a few example usages:</p>
<pre class="brush: js; auto-links: false;">new Sandbox('dom', function(dom){
    console.log(this.sandboxVersion);
    var element = dom.getElement();
});

new Sandbox(function(dom, ajax){
    console.log(this.sandboxVersion);
    var element = dom.getElement();
    ajax.post();
});

new Sandbox(['ajax', 'dom'], function(ajax, dom){...});
</pre>
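<p>Because sandboxed code only receives the modules it asks for, a test can register a stub under the same module name and the code under test never knows the difference. Here’s a hypothetical sketch (the recording stub is made up for illustration, with a condensed array-only Sandbox for brevity):</p>
<pre class="brush: js; auto-links: false;">function Sandbox(modules, callback){
    if(!(this instanceof Sandbox)){
        return new Sandbox(modules, callback);
    }
    var moduleInstances = modules.map(function(m){
        return Sandbox.modules[m]();
    });
    callback.apply(this, moduleInstances);
}

Sandbox.modules = {
    // in a test, "ajax" is a stub that records calls
    // instead of touching the network
    ajax: function(){
        var calls = [];
        return {
            post: function(url){ calls.push(url); },
            calls: calls
        };
    }
};

new Sandbox(['ajax'], function(ajax){
    ajax.post('/save');
    // the stub recorded the call: ajax.calls.length is now 1
});
</pre>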
<h2>Conclusion</h2>
<p>The Javascript sandbox lets you isolate code from outside factors. It also allows you to explicitly define dependencies, which reduces coupling and makes the code easier to test. While there are other patterns that achieve this, and certain frameworks have it built in, it can be a good pattern to use if you’re still working with ES5 without frameworks.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/04/25/javascript-sandbox-pattern/">Javascript sandbox pattern</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.kenneth-truyers.net/2016/04/25/javascript-sandbox-pattern/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		<feedburner:origLink>https://www.kenneth-truyers.net/2016/04/25/javascript-sandbox-pattern/</feedburner:origLink></item>
		<item>
		<title>Technical debt: managing code quality</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/AgrIv9QyvYY/</link>
		<comments>https://www.kenneth-truyers.net/2016/04/13/technical-debt-managing-code-quality/#comments</comments>
		<pubDate>Wed, 13 Apr 2016 10:06:57 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1390</guid>
		<description><![CDATA[<p>Technical debt is usually seen as a negative factor in a development process. While having too much technical debt is indeed a good indicator for a project gone bad, technical debt is not always a bad thing. What is technical debt? When you start writing code you usually have a choice: either do it quick [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/04/13/technical-debt-managing-code-quality/">Technical debt: managing code quality</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Technical debt is usually seen as a negative factor in a development process. While having too much technical debt is indeed a good indicator for a project gone bad, technical debt is not always a bad thing.</p>
<h2>What is technical debt?</h2>
<p>When you start writing code you usually have a choice: either do it quick and messy, or do it, as we developers tend to call it, “the right way”. In the short term, the quick way is better for the business, as it delivers value earlier.</p>
<p>From a business point of view, no one really cares in what state the code is in. If it works, the business is happy. So then, considering that a healthy code base takes more time and is thus more expensive, why should the business have to pay more for a healthy code base, when it doesn’t really concern them?</p>
<p>Although a bad or untested code base may deliver business value today, left uncontrolled it will hurt you in the long run. An unhealthy code base is hard to maintain and will develop stability issues, which affects future efforts to add more business value.</p>
<p>In layman’s terms, technical debt is the amount of mess left behind in a code base by quick fixes. </p>
<h2>Benefits of technical debt</h2>
<p>From the previous description, it might seem obvious that technical debt is a bad thing and should be avoided at all costs. That’s not entirely true though. Sometimes the cost of accruing technical debt is less than the cost of having to release later. Debt in the financial world has obvious advantages, and if managed correctly it can be a tool for growth.</p>
<p>Taking out a mortgage on a house, when you have a plan for paying it back, can be a good investment. Waiting until you have saved the entire amount would be impossible for most people, so not borrowing carries an opportunity cost. On the other hand, using a credit card as a free-for-all pass and buying anything you desire is probably not a good way of managing your financial situation.</p>
<p>The same can be said in software development. When you need to push out that release before Christmas, it may well be a good idea to implement something quickly so it’s there while the opportunity is big. On the other hand, pushing out all features as soon as possible without regard for code quality, architecture and tests will sooner or later result in a slower process and reduced application quality.</p>
<h2>Managing technical debt</h2>
<p>As with financial debt, technical debt can be beneficial. And, just as in finance, the key to balancing costs and advantages is a good strategy for controlling the debt. Here are a few practices that have helped me keep technical debt under control.</p>
<h3>Default to avoiding technical debt</h3>
<p>The default way of writing code in your organization should be to write well-factored, flexible code with decent test coverage. Acquiring debt is not a decision to take lightly: when it happens, it should happen deliberately, not by accident because someone didn’t feel like writing good code that day.</p>
<h3>Communicate the consequences</h3>
<p>There’s a typical conversation between a developer and a manager. If you are a developer, I’m sure you have heard it as well. It goes something like this:</p>
<p><font face="Courier New"><strong>Manager</strong>: I need feature X. How much time do you think it will take you?<br /><strong>Developer</strong>: 1 week<br /><strong>Manager</strong>: Hmm, it needs to go online in two days though, as big event X is in two days<br /><strong>Developer</strong>: OK, I’ll see what I can do<br /><strong>Manager</strong>: OK<br /><strong>Developer</strong>: OK</font></p>
<p>Familiar? I thought so. This happens all the time and the problem here is not bad standards or bad employees. The problem is communication. Here is the same conversation, but with the thoughts of both in brackets:</p>
<p><font face="Courier New"><strong>Manager</strong>: I need feature X. How much time do you think it will take you?<br /><strong>Developer</strong>: 1 week (<em>1 day of thinking, 2 days coding and testing, 1 refactoring, 1 days extra testing</em>)<br /><strong>Manager</strong>: Hmm, it needs to go online in two days though, as big event X is in two days<br /><strong>Developer</strong>: OK, I’ll see what I can do (<em>I’ll cut down on the thinking and testing</em>)<br /><strong>Manager</strong>: OK (<em>I’m a great manager, I just managed to get something done in 2 days which normally takes a week</em>)<br /><strong>Developer</strong>: OK (<em>ugh, always the same, we just can’t write decent code here</em>)</font></p>
<p>It’s important to realize that various factors are at play here:</p>
<ul>
<li>The manager wants feature X in two days, not because he’s a tyrant, but for a good reason: it earns more money if implemented earlier.
<li>The manager walks away with the idea that it can be done in two days. If this happens often, he will conclude that pushing developers works; after all, in two days he will see that it does. He doesn’t know what happens in the code base, so it’s natural that next time he’ll try to negotiate the estimate again.
<li>The developer wants to satisfy the need of the business, but feels incapable of doing so in both short and long term</li>
</ul>
<p>As a developer we have an obligation to communicate better to the business. Here’s my improved version of this conversation:</p>
<p><font face="Courier New"><strong>Manager</strong>: I need feature X. How much time do you think it will take you?<br /><strong>Developer</strong>: 1 week<br /><strong>Manager</strong>: Hmm, it needs to go online in two days though, as big event X is in two days<br /><strong>Developer</strong>: It’s impossible to do this feature well in 2 days, it needs a week to be done properly.<br /><strong>Manager</strong>: OK, but we don’t have a week. If it takes a week, there’s no point as we won’t earn as much money from it.<br /><strong>Developer</strong>: What I can do is take a shortcut and do it quickly. That would require me to rearrange some things and I need to go back later to fix it. <br /><strong>Manager</strong>: OK, that sounds reasonable (<em>great, I’m going to get it done in time</em>)<br /><strong>Developer</strong>: OK (<em>great, I will make sure the feature gets implemented soon and then I’ll need to go back to make sure nothing gets left behind that can cause trouble in the future</em>)</font></p>
<p>In this conversation, a middle ground is found. The feature will be implemented, and the technical debt is accounted for and managed. The consequences are well understood by the business and can be dealt with accordingly.</p>
<h3>Track</h3>
<p>Just as with financial debt, you want to know what debt you have and how long it would take to get rid of it (even if you’re never going to get rid of all of it). In the story above, it would be wise to create a task for cleaning up the code base and writing tests after the feature is implemented. This way, it’s visible to everyone that technical debt was acquired. </p>
<p>The way you track technical debt depends on your process. In an agile process, we once introduced a new story type: apart from user stories, tasks and bugs, we’d have “technical debt” stories. This gives a quick view of how much technical debt there is and whether it’s becoming a problem.</p>
<p>Apart from tracking technical debt as and when you create it, it’s sometimes also necessary to track technical debt that you spot. This could be either legacy code or it could be some big refactoring that you and the team feel is necessary for the code base to be flexible towards future developments.</p>
<p>Depending on your situation, you could dedicate a certain percentage of your time to working on technical debt stories. Make sure the business is aware of this and approves.</p>
<h3>Repay your debt</h3>
<p>Communication and tracking don’t serve any purpose if you don’t repay your debt. You have to make sure that technical debt stories are dealt with on a regular basis. When deciding which stories to tackle you need to factor in a few properties:</p>
<ul>
<li>Age: Just as with financial debt, technical debt comes with an interest. The longer you leave code in a bad state, the bigger the impact it will have: it’ll create more bugs and people will forget why and how something was implemented (remember, it’s bad code, so it’s probably obscure by nature)
<li>Impact: There’s a difference between a class that has some formatting issues and an entire subsystem that doesn’t have any tests. Tackle the issues with the biggest impact first (to continue the analogy: pay off the credit card with a 20% interest rate before you pay off the one with a 2% interest rate).</li>
</ul>
<p>Apart from the need to repay your debt, there are also different ways you can choose to repay it:</p>
<ul>
<li>Repay it completely: replace the code or refactor it to a good solution
<li>Partially repay it: Instead of implementing a good solution, implement a different solution that has less interest
<li>Don’t repay it at all: just deal with the “interest”. This can be a good option if the cost is minimal, the code is hardly ever changed and replacing it would be very costly</li>
</ul>
<h3>Dealing with legacy code</h3>
<p>Considering that we default to writing good code and all technical debt is communicated and dealt with appropriately, a normal project should never have so much technical debt that it impacts the business. However, there are situations where we don’t have these values from the beginning of the project: legacy projects.</p>
<p>Dealing with legacy projects is a totally different ball game. Often it’s difficult to identify the good code (or worse, there is no good code). Furthermore, it’s difficult to identify which parts of the code are causing most problems and how to solve them (aka: it’s difficult to estimate the interest). </p>
<p>To deal with this situation it’s best to create a metaphorical fork in the road, from which point you default to writing good code. All legacy code should be isolated as much as possible. Once legacy code is isolated, you can decide that every modification to it should leave the code in a better state. A good book on this topic is <a href="http://www.amazon.com/gp/product/0131177052/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=0131177052&amp;linkCode=as2&amp;tag=kennethtruyer-20&amp;linkId=EPN6346OL6664AY3">Working Effectively with Legacy Code</a> by Michael Feathers.</p>
<h2>Conclusion</h2>
<p>Technical debt is inevitable in software projects. Instead of trying to avoid it, we should try to manage it as effectively as possible. When managed correctly, technical debt can be a powerful tool to help your business grow faster without impacting the long term goals.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/04/13/technical-debt-managing-code-quality/">Technical debt: managing code quality</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.kenneth-truyers.net/2016/04/13/technical-debt-managing-code-quality/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		<feedburner:origLink>https://www.kenneth-truyers.net/2016/04/13/technical-debt-managing-code-quality/</feedburner:origLink></item>
		<item>
		<title>Code Reviews: why and how?</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/fIziq-eVwfU/</link>
		<pubDate>Thu, 07 Apr 2016 23:07:51 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1373</guid>
		<description><![CDATA[<p>Of all the practices implemented to improve code quality, such as unit testing, continuous integration, continuous deployment, daily stand-ups, I find the most important one is doing proper code reviews. Code reviews have a lot of advantages: It’s much easier to spot problems with other people’s code than with your own Knowledge of the codebase [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/04/08/code-reviews-why-and-how/">Code Reviews: why and how?</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<div> <a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/code_review-2.jpg"><img title="code_review" style="border-left-width: 0px; border-right-width: 0px; border-bottom-width: 0px; margin: 0px; display: inline; border-top-width: 0px" border="0" alt="code_review" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/code_review_thumb-2.jpg" width="240" align="right" height="125"></a>
<p> Of all the practices implemented to improve code quality (unit testing, continuous integration, continuous deployment, daily stand-ups), I find the most important one to be proper code reviews.</p>
<p>Code reviews have a lot of advantages:</p>
<ul>
<li>It’s much easier to spot problems with other people’s code than with your own
<li>Knowledge of the codebase is spread through the team, thereby avoiding the pitfall where certain developers have sole knowledge of a subsystem
<li>General knowledge of the team is improved as new methods or practices are visible to everyone
<li>Adherence to coding standards is enforced</li>
</ul></div>
<p>It can be a bit awkward in the beginning, but once you have found a good flow, you’ll notice a big change in how software is developed, not only in terms of quality but also in terms of the morale and coherence of the development team.</p>
<h2>Rules for reviewing code</h2>
<p>For code reviews to work in your benefit, a certain set of rules needs to be followed. It’s easy to just start using some software, but if you aren’t reviewing code consistently, it won’t bring you all the benefits listed above. Here are a few of the rules that, in my experience, positively influence both the gains of code reviews and satisfaction among developers.</p>
<h3>Small commits, good commit messages</h3>
<p>Having a lot of fine-grained commits with decent commit messages makes it much easier to review code, as everything is explained step by step. It also makes it easier for the reviewer to see the thinking behind the implementation. </p>
<p>As an added benefit, git blame will serve as your living documentation for all your code.</p>
<h3>Review the code, not the developer</h3>
<p>Since you’re reviewing someone’s work, it might be tempting to critique the person. However, if such a culture persists, you’ll end up with unhappy developers, or developers who start hiding their not-so-nice code. </p>
<p>This doesn’t mean you should let code pass that doesn’t live up to the standard, it just means you should critique the code instead of the person. This can be a subtle difference. Instead of saying “You didn’t follow the standard here”, which can sound accusatory (intentional or not), say “this code should be formatted differently”. Note the difference in tone.</p>
<p>This responsibility also falls on the developer. As a developer you have to get into the mindset that you are not your code. Even if a comment might sound accusatory, don’t take it personally and remember that it’s a comment on the code, not on you or your behavior.</p>
<h3>Multiple reviewers</h3>
<p>Always assign more than one person to review the code. That way, any disagreement can be settled by majority vote. If you assign only one reviewer, you might end up in an endless discussion between the developer and the reviewer. </p>
<h3>Have a checklist</h3>
<p>Having a list of things to look out for, makes it easier to conduct code reviews systematically. Here’s my personal list of things to check for (in order of priority):</p>
<ul>
<li>Function: does it do what it needs to do? (A good spec goes a long way here) </li>
<li>Bugs </li>
<li>Test coverage </li>
<li>Adherence to architecture and patterns </li>
<li>Duplicate code </li>
<li>Code style and formatting</li>
</ul>
<p>Having a checklist also makes it easier for the developer to run through it before submitting the code for review.</p>
<h3>When to review</h3>
<p>This depends a bit on the confidence of the team and how well they work together. </p>
<p>In teams with low to medium confidence, I would opt for a feature-branch strategy where code is reviewed first and only then integrated into the main line. In this case, you have to make sure that code is reviewed as soon as possible, since you don’t want branches to live for a long time only to find that your properly reviewed code can no longer be integrated without merge conflicts.</p>
<p>In teams with a high confidence level, I would opt to integrate directly into the main line and do reviews after the fact. There are several reasons this only works in high-confidence teams:</p>
<ul>
<li>It might open you up to code reviews that are never completed
<li>A developer can ignore the code review
<li>Bad code can be committed </li>
</ul>
<h3>Review process</h3>
<p>Whether you sit down together in front of a screen or use dedicated software, make sure reviewing code is as accessible as possible. The last thing you want is for developers to come to see it as a chore. A fast code review process will yield more code reviews and better results. I had very good experiences with <a href="https://www.fogcreek.com/kiln/features/code-reviews/">Kiln</a>. It does more than just code reviews, but I particularly liked its interface (YMMV).</p>
<blockquote><p>Quick tip: Review only tests<br />If you have limited time and have good test coverage (or it’s enforced by your build process), you can choose to only review the tests. The reasoning is that if the tests are good, the implementation will be good as well. By reviewing the tests you will see what the public API looks like, so you’ll get a good feel about the code. Use this tip sparingly though, a full review is still useful.</p></blockquote>
<h2>Pitfalls</h2>
<p>Apart from following the above rules, there are a few pitfalls and anti-patterns to look out for.</p>
<h3>Gaming the system</h3>
<p>If you don’t have complete buy-in from the whole team, this issue might come up. It happens when two or more devs on the team decide to approve each other’s code without properly reviewing it. They’re technically following the rules, but in practice the code is not reviewed at all. A possible remedy is to supplement the two-reviewer rule with a requirement to rotate reviewers, so the same pair can’t sign off on each other’s code every time.</p>
<h3>Review gate</h3>
<p>This problem can manifest itself in a few ways:</p>
<ul>
<li>One dev never approves any review, so code is unnecessarily blocked in review. If this happens frequently, find out what the underlying issue is.
<li>All code needs to pass through one developer before it hits the main line. While this can sometimes be acceptable for a short while, it’s never a good plan long-term. If it’s really necessary, then at least that role should be rotated regularly. Otherwise developers might stop caring about their code at all.
<li>Complete lockdown: this happens when developers have their permission to touch the main line revoked entirely. If you really have big code quality issues it can be a good temporary measure, but otherwise this is the fastest way to sink your team’s morale.</li>
</ul>
<h2>Conclusion</h2>
<p>Code reviews are, in my experience, the most valuable tool for improving software quality, distributing knowledge and enforcing a common coding standard. </p>
<p>If you’re just starting out with code reviews, keep in mind that the process needs perfecting. Don’t worry if it doesn’t immediately bring the huge change you expected. If you improve the process bit by bit and gain experience, you’ll soon see the benefits.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/04/08/code-reviews-why-and-how/">Code Reviews: why and how?</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2016/04/08/code-reviews-why-and-how/</feedburner:origLink></item>
		<item>
		<title>Build 2016 announcements</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/eMcIAeFLVmk/</link>
		<pubDate>Sat, 02 Apr 2016 18:15:51 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1350</guid>
		<description><![CDATA[<p>Build 2016 is finished and as always it was great to see Microsoft bringing new opportunities to businesses and developers. Unfortunately I wasn’t able to attend, but luckily, the live stream of all the important sessions, especially for the keynotes, made up for that. These are the announcements that excited me the most. Microsoft Bot [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/04/02/build-2016-announcements/">Build 2016 announcements</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/build.jpg"><img title="build" style="border-top: 0px; border-right: 0px; border-bottom: 0px; border-left: 0px; display: inline" border="0" alt="build" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/build_thumb.jpg" width="800" height="400"></a></figure>
<p>Build 2016 is finished and as always it was great to see Microsoft bringing new opportunities to businesses and developers. Unfortunately I wasn’t able to attend, but luckily, the live stream of all the important sessions, especially for the keynotes, made up for that. These are the announcements that excited me the most.</p>
<h2>Microsoft Bot Framework</h2>
<p>Completely unexpected, but a very cool way of building new applications and solutions. The idea behind the bot framework is to use conversations as an application framework. The big challenge for developers is to make the interaction with bots as natural as possible. To that end, Microsoft is offering a framework plus a set of intelligence services, such as speech and text recognition, and a wide variety of other cognitive services. This should enable developers to build clever bots that can automate the things we do on websites at the moment. I’m not sure it will replace websites anytime soon, as Microsoft claims, but it definitely has some benefits over traditional applications. User interface design becomes largely obsolete. And if you think about it, we have been using language to communicate our intentions forever; if we can properly crack that, we could see very interesting applications.</p>
<p>Obviously Microsoft wouldn’t be Microsoft if they didn’t connect their existing services to this new framework. Skype and Cortana will be tied in and soon you’ll see new integrations pop up in these tools.</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/bot_framework.jpg"><img title="bot_framework" style="border-top: 0px; border-right: 0px; border-bottom: 0px; border-left: 0px; display: inline" border="0" alt="bot_framework" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/bot_framework_thumb.jpg" width="800" height="384"></a></figure>
<h2>Bash on Windows</h2>
<p>A few years ago, the good old April Fools’ Day joke would be Microsoft releasing a Linux distro or working with Linux and open source in general. The news is the same, only this time it’s for real. Through an integration with native Ubuntu binaries, Windows developers can now use long-established bash tools such as grep, awk, sed, … This definitely opens up a lot of possibilities, not least for making it easier to follow online tutorials. </p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/bash_on_windows.jpg"><img title="bash_on_windows" style="border-top: 0px; border-right: 0px; border-bottom: 0px; border-left: 0px; display: inline" border="0" alt="bash_on_windows" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/bash_on_windows_thumb.jpg" width="856" height="468"></a></figure>
<h2>HoloLens</h2>
<p>Already announced earlier this year, but now it’s for real: the HoloLens dev-kit is now going out to developers. We’ve seen some impressive demos from Microsoft so far, but now it will be interesting to see what the rest of the world can do with it. This is the first real test for HoloLens. If it really is as impressive as the demos we saw from Microsoft, we’re in for some mind-blowing applications in the next couple of months. Furthermore, thanks to a few design changes to the actual headset, users who tried it out reported a better field of view, which was one of the main points of criticism up until now.</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/hololens.jpg"><img title="hololens" style="border-top: 0px; border-right: 0px; border-bottom: 0px; border-left: 0px; display: inline" border="0" alt="hololens" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/hololens_thumb.jpg" width="800" height="498"></a></figure>
<h2>Azure is getting bigger</h2>
<p>Azure already consists of a huge set of services that make the life of developers easier. At Build 2016, Microsoft added a bunch of new services to grow Azure even more. These were all announced:</p>
<ul>
<li>Azure IoT Starter Kits are now available for purchase from partners
<li>Azure IoT Hub device management and Gateway SDK will be available later in Q2
<li>A new service, Azure Functions is now in preview
<li>DocumentDb supports a MongoDb protocol now
<li>Azure Developer Tools
<li>Microsoft Cognitive Services is in preview</li>
</ul>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/azurebuild2016.jpg"><img title="azure build 2016" style="border-top: 0px; border-right: 0px; border-bottom: 0px; border-left: 0px; display: inline" border="0" alt="azure build 2016" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/azurebuild2016_thumb.jpg" width="800" height="438"></a></figure>
<h2>Xamarin</h2>
<p>Probably the most awaited announcement. As everyone was hoping, Xamarin will now come bundled with Visual Studio. That’s great news for developers who were using the paid version before, as it was quite expensive. Not only does it come with the paid versions of Visual Studio, but also with the free Community edition. To top it off, Microsoft also announced it is open-sourcing the Xamarin core SDK. These announcements were certainly above expectations: while everyone was hoping for the Visual Studio bundling, no one dared to hope for inclusion in the free product, let alone for the SDK to be available as open source.</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/xamarin.jpg"><img title="xamarin" style="border-top: 0px; border-right: 0px; border-bottom: 0px; border-left: 0px; display: inline" border="0" alt="xamarin" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/xamarin_thumb.jpg" width="963" height="580"></a></figure>
<h2>Desktop App Converter</h2>
<p>While one of the big disappointments of the last months was the discontinuation of the Android porting project, Project Astoria, Microsoft did release another porting tool, this time to convert Win32 applications to UWP. Any app based on Win32 and .NET can be converted to the AppX format. Furthermore, work is still continuing on Project Islandwood, the porting tool for iOS apps. Let’s hope these converters can make a dent in the app gap.</p>
<p>What are you planning to do with these new services?</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/04/02/build-2016-announcements/">Build 2016 announcements</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2016/04/02/build-2016-announcements/</feedburner:origLink></item>
		<item>
		<title>Dependency management: strategies and pitfalls</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/6qtm4tfz7mY/</link>
		<comments>https://www.kenneth-truyers.net/2016/03/25/dependency-management-strategies-pitfalls/#comments</comments>
		<pubDate>Thu, 24 Mar 2016 23:55:04 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1334</guid>
		<description><![CDATA[<p>In the wake of the issue with NPM, I wanted to share my view and experience with dependency management. First of all, what happened? I’m not going to go too deep on what actually happened (there’s plenty of information about that), but essentially, because of a copyright dispute a package was removed and replaced with [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/03/25/dependency-management-strategies-pitfalls/">Dependency management: strategies and pitfalls</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In the wake of the issue with NPM, I wanted to share my view and experience with dependency management.</p>
<p>First of all, what happened? I’m not going to go too deep into the details (there’s plenty of information about that), but essentially, because of a trademark dispute, a package was removed and later replaced with a different one. Because of this, a whole slew of popular packages that depended on it broke. All projects depending on those consequently broke as well, resulting in broken builds all over the world.</p>
<p>To add insult to injury, the package that caused most issues was left-pad, a seemingly trivial module that should be easy for any developer to implement themselves.</p>
<p>The incident has caused a lot of reflection on what the best way is to do dependency management. The discussion basically broke down into two camps:</p>
<p>In one camp you have people saying that developers should not include dependencies for trivial things or on modules that consist of a one-liner. They argue that:</p>
<ul>
<li>Functions or one-liners are too small to include as dependencies
<li>There’s no guarantee that what someone else has written is correct
<li>Developers who are not capable of writing a trivial method like the one mentioned should not be writing code at all
<li>Modules this trivial and this widely used should be part of the base framework (Node.js in this case)</li>
</ul>
<p>The other camp sees the above problem as an issue with the registry (NPM) and that there’s no problem in having small packages. They argue that:</p>
<ul>
<li>We shouldn’t reinvent the wheel for common functionality
<li>Small modules allow for more modular design and composability of applications</li>
</ul>
<p>All good points, and I certainly agree with every argument made from both camps.</p>
<h2>Granular or coarse composition?</h2>
<p>I have experienced a similar problem in one of my previous projects. We were using internally distributed modules, so we didn’t have any of the aforementioned issues with registries and authorship.</p>
<p>The project I’m talking about was built in .NET and we were using NuGet packages. Regardless, the problem space is still the same: package and dependency management.</p>
<p>We were torn between two different styles of dependency management:</p>
<ul>
<li>Either create a single package with multiple utilities, the typical &lt;companyname&gt;.Utils package so to speak.
<li>Or create a single package for every single utility: &lt;companyname&gt;.LeftPad (for example)</li>
</ul>
<p>Our problem was related to the authoring and the management of these packages:</p>
<p>In the first option, it was easy to publish new versions of the package. On the other hand, if you wanted only one part of it, you had to import everything, kitchen sink included.</p>
<p>The second option was easier from a client point of view as you could just import what you needed, but it made authoring and maintaining all these different packages quite difficult.</p>
<h2>Dependency management and visibility</h2>
<p>Another problem, which holds for both solutions but is magnified by the second approach, was debugging these modules. If you encountered a problem in one of the packages, it was impossible to step into them to see what they were doing. This shouldn’t be a problem in stable packages, but that’s not always the case. If you have a single package, you can publish the symbol files (in .NET) and use them for debugging. For multiple modules this is also possible, but it creates more overhead (at authoring time, but also in the start-up time of your application and the loading of symbols).</p>
<blockquote><p>Side note: We experienced first hand that SymbolSource, the repository for .NET symbols, is also not the most stable solution for storing symbols. </p>
</blockquote>
<p>While this is a problem specific to .NET, I do think that hiding away dependencies in a node_modules folder is kind of the same, you lose visibility.</p>
<p>The main issue I have with traditional package management solutions (NPM, NuGet, …) is that you lose visibility of what’s happening in your code base. I don’t like the idea of having to depend on external developers, let alone when there’s no guarantee that these packages will always remain in the same hands (a change of ownership breaks the trust you have in a package).</p>
<p>Not only is visibility a problem, but when there’s an actual bug in a dependency, you have to hope that the author will fix it or accept your pull request if you decide to fix it yourself. If you can’t do this, you might get stuck.</p>
<h2>Our solution</h2>
<p>We came up with an intermediate solution (based on a <a href="https://nikcodes.com/2013/10/23/packaging-source-code-with-nuget/" rel="nofollow" target="_blank">post by Nik Molnar</a>) for package management: Instead of depending on a compiled package that is stored in the packages folder (equivalent to a package that lives in node_modules), we decided to publish source-only packages. What does this mean?</p>
<p>Say we have a package LeftPad. Instead of creating a package that distributes a DLL with that function, we would distribute a package that creates a class LeftPad.cs in your solution. This solves a few problems:</p>
<ul>
<li>You don’t need to reinvent the wheel, you can reuse existing modules that are used by many others (and thus potentially vetted)
<li>The code is available inside your project, so it’s visible and modifiable at all times
<li>If an update comes along, it will overwrite the class. Source control will show you exactly what was modified and you have great visibility over what could potentially be a breaking change
<li>You can make modifications to it very easily, again, any updates will highlight where your local modifications would be overwritten and it’s quite easy to manage</li>
</ul>
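<p>To make this concrete, here is a minimal sketch (the naming is purely illustrative) of what such a source-only LeftPad.cs might look like once the package has dropped it into the consuming project:</p>
<pre class="brush: csharp;">
using System;

// Installed as source by the package rather than as a compiled DLL,
// so it is visible, debuggable and modifiable like any other file.
internal static class LeftPad
{
    public static string Pad(string input, int totalWidth, char paddingChar = ' ')
    {
        if (input == null) throw new ArgumentNullException(nameof(input));
        return input.PadLeft(totalWidth, paddingChar);
    }
}
</pre>
<p>Because the class ships as source, a consumer can read it, step into it and patch it locally; any package update simply rewrites the file and shows up as a normal diff in source control.</p>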
<p>Another key point is that these packages are marked as development dependencies, which means they won’t be installed in subsequent levels of the dependency chain. This makes the dependency chain a lot flatter and more manageable.</p>
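<p>As a sketch of how such a package could be declared (the id and file names are made up for illustration), the .nuspec marks it as a development dependency and ships the class as content instead of a compiled assembly:</p>
<pre class="brush: xml;">
&lt;?xml version="1.0"?&gt;
&lt;package&gt;
  &lt;metadata&gt;
    &lt;id&gt;MyCompany.LeftPad&lt;/id&gt;
    &lt;version&gt;1.0.0&lt;/version&gt;
    &lt;authors&gt;MyCompany&lt;/authors&gt;
    &lt;description&gt;Source-only left-pad helper.&lt;/description&gt;
    &lt;!-- Keeps this package out of the dependency chain of consumers --&gt;
    &lt;developmentDependency&gt;true&lt;/developmentDependency&gt;
  &lt;/metadata&gt;
  &lt;files&gt;
    &lt;!-- The .pp transform drops LeftPad.cs into the target project
         with $rootnamespace$ substituted --&gt;
    &lt;file src="LeftPad.cs.pp" target="content\LeftPad.cs.pp" /&gt;
  &lt;/files&gt;
&lt;/package&gt;
</pre>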
<p>A potential disadvantage is that you have to take ownership of the external code. I don’t see that as a problem, though, but rather as an advantage: it improves visibility, and in the end you’re the one responsible for your application. </p>
<p>This approach is not new and has been used in previous projects:</p>
<ul>
<li>NETFx: <a title="https://netfx.codeplex.com/" href="https://netfx.codeplex.com/" rel="nofollow">https://netfx.codeplex.com/</a>
<li>Quarks: <a title="https://github.com/shaynevanasperen/Quarks" href="https://github.com/shaynevanasperen/Quarks" rel="nofollow">https://github.com/shaynevanasperen/Quarks</a> (this is the project that grew out of our project)</li>
</ul>
<p>In my opinion this approach could work for NPM as well. Package authors could have the option of outputting certain files to the application directory instead of the node_modules folder, so they can increase visibility.</p>
<p>What do you think about this approach? We haven’t experienced any downsides with it (apart from some naming collisions, that were easily solved). Sound off in the comments!</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/03/25/dependency-management-strategies-pitfalls/">Dependency management: strategies and pitfalls</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.kenneth-truyers.net/2016/03/25/dependency-management-strategies-pitfalls/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		<feedburner:origLink>https://www.kenneth-truyers.net/2016/03/25/dependency-management-strategies-pitfalls/</feedburner:origLink></item>
		<item>
		<title>Vertical slices in ASP.NET MVC</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/iP-VQaTXo5k/</link>
		<pubDate>Tue, 02 Feb 2016 01:22:54 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[.NET]]></category>
		<category><![CDATA[asp.net]]></category>
		<category><![CDATA[c#]]></category>
		<category><![CDATA[patterns]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1242</guid>
		<description><![CDATA[<p>Why? In ASP.NET MVC, applications are divided into horizontal layers, which is reflected in the project structure: Controllers Views Models Scripts Style It’s a good idea to divide you application into logical parts. While the idea of horizontal slices might look like a good idea, in practice I have noticed that it’s not necessarily the [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/02/02/vertical-slices-in-asp-net-mvc/">Vertical slices in ASP.NET MVC</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<h2>Why?</h2>
<p>In ASP.NET MVC, applications are divided into horizontal layers, which is reflected in the project structure:</p>
<ul>
<li>Controllers
<li>Views
<li>Models
<li>Scripts
<li>Style </li>
</ul>
<p>It’s a good idea to divide your application into logical parts. While the idea of horizontal slices might look appealing, in practice I have noticed that it’s not the only way, and more often than not it’s not the best way. Why not?</p>
<p>The following is a diagram of how a standard MVC application is structured:</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/02/horizontalslices.jpg"><img title="horizontal-slices" style="border-left-width: 0px; border-right-width: 0px; border-bottom-width: 0px; display: inline; border-top-width: 0px" border="0" alt="horizontal-slices" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/02/horizontalslices_thumb.jpg" width="600" height="479"></a></figure>
<p>From the diagram above, you can see that our application has 5 layers (the standard MVC ones) and then probably a few other ones, depending on the application. Cutting through these layers are the features: all of them have styles, scripts, views, controllers and models.</p>
<p>There are a few reasons why this separation does not make sense:</p>
<ul>
<li>Changes usually happen vertically, not horizontally. As an example, if you need to add a field to your database, you need to add it to your model, do some validation in the controller, display it in the view, style it in your CSS and do something funky with it in JavaScript.
<li>When you make horizontal slices, you automatically limit your application to the same model over all the slices. It’s weird to have one controller that accesses the database directly and another that first goes through another layer. Because you have layered the application, everything is suddenly layered. In a vertically structured application you can choose which paradigm to use for each part of the application. Even though it doesn’t physically limit you from mixing it up, it certainly pushes you in that direction (and I have seen those exact guidelines on several projects).
<li>From a purely practical point of view, it’s very inconvenient to have all the files that relate to the same feature scattered across different folders.
<li>It’s an arbitrary structure. From the diagram above, you can see that the natural structure our application wants to have is vertical (all our features are vertical), but the actual structure is horizontal. </li>
</ul>
<p>A better way to structure an application would be in vertical layers. We want all code belonging to a feature to sit together in a vertical slice (also called a feature slice) so it’s easy to work on that specific feature. That allows us to work in a very tight scope and makes our solution explorer work for us, instead of having to swap between folders all the time. A more optimal structure would look like this:</p>
<figure><a href="https://www.kenneth-truyers.net/wp-content/uploads/2016/02/verticalslices.jpg"><img title="vertical-slices" style="border-left-width: 0px; border-right-width: 0px; border-bottom-width: 0px; display: inline; border-top-width: 0px" border="0" alt="vertical-slices" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/02/verticalslices_thumb.jpg" width="600" height="615"></a></figure>
<p>In the above diagram, you can see that our slices are following the natural structure of the application. Furthermore, if we go one level deeper, we see that not all slices have controllers, not all of them depend on a domain (why do you need a domain model for content rendering?). That means we can tailor our code to exactly what is needed for that part of the application. A folder structure could look like this:</p>
<pre class="brush: csharp;">Features
    -&gt; Users
        - users.css
        - users.js
        - UserController.cs
        - Index.cshtml
        - Detail.cshtml
    -&gt; Search
        - search.css
        - search.js
        - SearchController.cs
        - Index.cshtml
    -&gt; Content
        - about-us.html
        - content.css
    -&gt; Invoicing
        - invoices.css
        - invoices.js
        - InvoiceController.cs
        - InvoiceViewModel.cs
        - Invoice.cs
Styles
    - layout.css
Scripts
    - app.js
</pre>
<p>In the above folder structure you can see that:</p>
<ul>
<li>Working on a feature can be done by just opening one folder and modifying only the files in that folder, so you have less context switching going on
<li>Some features may have more or fewer horizontal layers
<li>Even scripts and CSS live in the same folder as the server-side code </li>
</ul>
<p>I have used this structure in several projects and even though the code complexity is the same (you still need the same amount and type of code), I found that the perceived complexity is dramatically lower. Since the scope seems smaller, it feels like the application is just a bunch of smaller apps working together (under a shared structure, but still).</p>
<h2>How?</h2>
<p>Even though ASP.NET MVC comes standard in a horizontal flavor, it’s actually relatively simple to change this.</p>
<h3>Controllers</h3>
<p>If you use attribute routing, controllers are easy: you can move them wherever you want and they will still work. Yay, that was easy.</p>
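<p>As a sketch (the controller and routes below are invented for illustration), a controller living under the Features folder only needs its routes declared via attributes; just make sure attribute routing is enabled once in your <font face="Courier New">RouteConfig</font> with <font face="Courier New">routes.MapMvcAttributeRoutes()</font>:</p>
<pre class="brush: csharp;">// Lives in /Features/Users/UserController.cs:
// with attribute routing the physical folder no longer matters,
// the [Route] attributes determine the URLs
public class UserController : Controller
{
    [Route("users")]
    public ActionResult Index()
    {
        return View();
    }

    [Route("users/{id:int}")]
    public ActionResult Detail(int id)
    {
        return View();
    }
}
</pre>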
<h3>Models</h3>
<p>Another easy one: move them around at will, change a couple of namespaces and you’re done.</p>
<h3>Views</h3>
<p>Views are a bit trickier. By convention MVC looks for them under <font face="Courier New">Views –&gt; ControllerName</font> or under <font face="Courier New">Views –&gt; Shared</font>. To change this, we need to swap out the <font face="Courier New">ViewEngine</font>. That may sound difficult, but it’s actually rather simple. For the folder structure above, you can create the following class:</p>
<pre class="brush: csharp;">public class FeatureViewLocationRazorViewEngine : RazorViewEngine
{
    public FeatureViewLocationRazorViewEngine()
    {
        var featureFolderViewLocationFormats = new[]
        {
            "~/Features/{1}/{0}.cshtml",
            "~/Features/Shared/Views/{0}.cshtml",
        };

        ViewLocationFormats = featureFolderViewLocationFormats;
        MasterLocationFormats = featureFolderViewLocationFormats;
        PartialViewLocationFormats = featureFolderViewLocationFormats;
    }
}
</pre>
<p>Once you have this class, you can substitute the standard <font face="Courier New">ViewEngine</font> with this one in your <font face="Courier New">global.asax</font>:</p>
<pre class="brush: csharp;">ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new FeatureViewLocationRazorViewEngine());
</pre>
<h3>Scripts</h3>
<p>For scripts, we will have a shared part, which will still live in the default location (think about common modules for the entire application) and then there will be scripts that are specific to each feature. Those will live in the feature folders. Using ES6 modules makes this easy since you can use a bundler to bundle and minify everything. Because of the way modules work, it will automatically bundle all the scripts that are referenced from your entry point. Here’s an example of how something like that might work:</p>
<p>In the <font face="Courier New">/scripts</font> folder I have a file called <font face="Courier New">app.js</font>, which is my main point of entry. I trigger general modules from here and then based on the URL, I also trigger specific JS-files:</p>
<pre class="brush: js;">// Need to go one level up and into the feature folders
import ContactForm from '../Features/Contact/contact.js';
import Search from '../Features/Search/search.js';
import Users from '../Features/Users/logon.js';
import Invoices from '../Features/Invoices/invoices.js';

// this is a shared module that lives in the same directory
import Analytics from './analytics.js';

// This module always gets called, regardless of the page
new Analytics();

// Based on the path we activate a different module
var path = document.location.pathname.toLowerCase();
if(path.indexOf('contact') !== -1){
    new ContactForm();   
}
if(path.indexOf('search') !== -1){
    new Search();   
}
if(path.indexOf('invoices') !== -1){
    new Invoices();   
}
if(path.indexOf('logon') !== -1){
    new Users();   
}</pre>
<p>Once you have the main entry point in place, you can use <em>Gulp</em> to bundle and minify all of this into a single file:</p>
<pre class="brush: js;">gulp.task('js', function () {
    return gulp.src('scripts/app.js')
             .pipe(jspm({ selfExecutingBundle: true }))
             .pipe(rename('app.min.js'))
             .pipe(gulp.dest('scripts'));
});</pre>
<p>I’m using JSPM here, but it would work with any other bundler.</p>
<h3>Styles</h3>
<p>For our style sheets, I’m also going to use <em>Gulp</em>. In this example, I will be using <em>SCSS</em> because it allows me to include other files. First of all, I’m going to define my main entry point, let’s call it <font face="Courier New">style.scss</font>, which will live under the <font face="Courier New">/styles</font> folder. In this file, I only include references to other files:</p>
<pre class="brush: css;">// These are shared files that live under the same directory
@import "base/*";
@import "layout/*";

// using a glob I also include all .scss file in the feature-folders
@import "../Features/**/*.scss";
</pre>
<p>Note that I have used globs to include all files that are in the feature folders. This is not natively supported by most SCSS-compilers, but there’s a plugin (<em>sassGlob</em>) for <em>Gulp</em> that makes this possible (I’m sure there are plugins for <em>Grunt</em> as well, if you prefer to use <em>Grunt</em>). The following Gulp-task will make sure that all SCSS-files are combined, compiled to CSS and minified into one file:</p>
<pre class="brush: js;">gulp.task('sass', function () {
    return gulp.src('styles/style.scss')
               .pipe(sassGlob()) // Need to use a gulp plugin to make sure globs work
               .pipe(sass()) // compile the combined SCSS to CSS
               .pipe(rename({ suffix: '.min' }))
               .pipe(gulp.dest('styles'));
});</pre>
<h2>Conclusion</h2>
<p>I find feature slices to be a huge improvement over the standard structure of an MVC-application. With a bit of infrastructure code this is easy to set up, and it makes working with large applications a lot easier.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/02/02/vertical-slices-in-asp-net-mvc/">Vertical slices in ASP.NET MVC</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2016/02/02/vertical-slices-in-asp-net-mvc/</feedburner:origLink></item>
		<item>
		<title>Testing REST clients</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/jg4aj4m16vU/</link>
		<pubDate>Thu, 28 Jan 2016 23:11:21 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[.NET]]></category>
		<category><![CDATA[REST]]></category>
		<category><![CDATA[unit testing]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1225</guid>
		<description><![CDATA[<p>With the proliferation of REST API’s, external ones and internal ones (think microservices), we very often find ourselves depending on these external services in our applications. Usually we have some designated class in front of the access to such a REST API. That class takes care of authentication, serialization and other plumbing. Testing this part [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/01/29/testing-rest-clients/">Testing REST clients</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<figure><img style="margin-left: 0px; display: inline; margin-right: 0px; border-width: 0px;" title="Testing REST clients with NancyFX" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/01/testing_rest_clients_small1.jpg" alt="Testing REST clients with NancyFX" width="400" height="281" align="right" border="0" /></figure><p>With the proliferation of REST APIs, both external and internal ones (think microservices), we very often find ourselves depending on external services in our applications. Usually we have a designated class in front of the access to such a REST API; that class takes care of authentication, serialization and other plumbing. Testing this part of the application is a bit difficult, though, and it often gets left out.</p>
<p>In this post I’ll show how you can create an in-memory web server by using a NancyFX module so you can simulate the API’s responses and you can test all the details of the connection.</p>
<h2>Set up</h2>
<p>To set this up we need two components:</p>
<ol>
<li>The host that will serve our fake web app</li>
<li>The fake web application</li>
</ol>
<blockquote><p>To be clear: Only the test project uses NancyFX, it’s not necessary to use NancyFX as your web framework.</p></blockquote>
<h3>1. The host</h3>
<p>In your test project you will need the package <a href="https://www.nuget.org/packages/Nancy.Hosting.Self/" target="_blank">Nancy.Hosting.Self</a> . This package allows you to run a web server in-memory. The following code shows how to create a host (and how to dispose of it):</p>
<pre class="brush: csharp;">public class Host : IDisposable
{
    NancyHost _host;
    public Host()
    {
        _host = new NancyHost(new HostConfiguration { RewriteLocalhost = false }, 
                              new Uri("http://localhost:50001"));
        _host.Start();
    }

    public void Dispose()
    {
        _host.Stop();
        _host = null;
    }
}
</pre>
<p>Important to note here is the <span style="font-family: 'Courier New';">RewriteLocalhost</span> option in the <span style="font-family: 'Courier New';">HostConfiguration</span>. This value determines whether localhost URLs are rewritten to <span style="font-family: 'Courier New';">http://+:port/</span> style URLs to allow listening on all hostnames. If you do this, you either need a namespace registration or admin access. Since we want our tests to be independent of the environment, we disable it. The default value is true, so we explicitly set it to false. For more information, check the <a href="https://github.com/NancyFx/Nancy/wiki/Self-Hosting-Nancy" target="_blank">NancyFX documentation</a>.</p>
<p>We can now use this class in our unit tests. Since starting the host is a rather expensive operation (and we want our unit tests to be lightning fast, see <a href="https://www.kenneth-truyers.net/2012/12/15/key-qualities-of-a-good-unit-test/" target="_blank">key qualities of a good unit test</a>), I tend to set it up only once per test run. The following shows how to set it up once for a test run in xUnit. For this particular case (shared context between multiple test classes), we need to use <a href="https://xunit.github.io/docs/shared-context.html#collection-fixture" target="_blank">collection fixtures</a>:</p>
<pre class="brush: csharp;">[CollectionDefinition("Host")]
public class HostCollection: ICollectionFixture&lt;Host&gt; { }
</pre>
<p>This marker class allows us to group test classes into a collection. xUnit will construct the host before the first test in the collection executes and dispose of it after the last test in the collection finishes. Once we have this marker class, we can declare the tests that rely on the host as follows:</p>
<pre class="brush: csharp;">[Collection("Host")]
public class RestClientTests
{
    …
}</pre>
<p>Different testing frameworks have different methods of setting up shared contexts, so this can be different depending on the framework you’re using.</p>
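<p>In NUnit 3, for example, the same once-per-run setup could be sketched with a <font face="Courier New">SetUpFixture</font> (this class is my illustration, not part of the original set-up):</p>
<pre class="brush: csharp;">// Runs once for all tests in this namespace and below:
// the host starts before the first test and is disposed after the last one
[SetUpFixture]
public class HostSetup
{
    public static Host Host { get; private set; }

    [OneTimeSetUp]
    public void StartHost()
    {
        Host = new Host();
    }

    [OneTimeTearDown]
    public void StopHost()
    {
        Host.Dispose();
    }
}
</pre>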
<h3>2. The fake web application</h3>
<p>Now that we have an in-memory server, we can start building our fake web application that will respond to our web requests. A Nancy web app is a fairly simple module:</p>
<pre class="brush: csharp;">public class FakeApp : NancyModule
{
    public FakeApp()
    {
        Get["/products/{id}"] = _ =&gt;
            Response.AsJson(new Product());
    }
}
</pre>
<p>You can have multiple classes inside your test project that inherit from <span style="font-family: 'Courier New';">NancyModule</span>. You apply the route to any of the HTTP methods in the constructor to emulate the REST API and then return the values the API would normally return.</p>
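<p>For instance, a write operation could be emulated like this (a hypothetical sketch; the <font face="Courier New">/orders</font> route is made up):</p>
<pre class="brush: csharp;">public class FakeOrdersApp : NancyModule
{
    public FakeOrdersApp()
    {
        // Emulate a POST endpoint: return the body and status code
        // the real API would produce for a created resource
        Post["/orders"] = _ =&gt;
        {
            var response = Response.AsJson(new { Id = 1 });
            response.StatusCode = HttpStatusCode.Created;
            return response;
        };
    }
}
</pre>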
<h2>Tying it together</h2>
<p>Now that we have an in-memory web server and a fake web app, we can use it to test our REST client.</p>
<p>Suppose we have the following REST client (our system under test):</p>
<pre class="brush: csharp;">public class RestClient
{
    IRestClient _client;
    public RestClient(string url)
    {
        _client = new RestSharp.RestClient(url); // fully qualified to avoid constructing this wrapper recursively
    }

    public Product GetProductById(int id)
    {
        return _client.Get&lt;Product&gt;(new RestRequest($"/products/{id}")).Data;
    }
}
</pre>
<p>In this example, I’m using RestSharp, but this would work with WebClient, HttpClient or any other library.</p>
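<p>To illustrate that claim, an equivalent client built on <font face="Courier New">HttpClient</font> with Json.NET might look like this (a hedged sketch; the class name is made up):</p>
<pre class="brush: csharp;">public class ProductHttpClient
{
    readonly HttpClient _client;

    public ProductHttpClient(string url)
    {
        _client = new HttpClient { BaseAddress = new Uri(url) };
    }

    public async Task&lt;Product&gt; GetProductByIdAsync(int id)
    {
        // Same endpoint as the RestSharp version; only the plumbing differs
        var json = await _client.GetStringAsync($"products/{id}");
        return JsonConvert.DeserializeObject&lt;Product&gt;(json);
    }
}
</pre>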
<p>To test this class, we can write the following test:</p>
<pre class="brush: csharp;">[Collection("Host")]
public class RestClientTests
{ 
    [Fact]
    public void When_getting_a_product_it_correctly_deserializes_it()
    {
         new RestClient("http://localhost:50001").GetProductById(1).ShouldNotBeNull();
    }
}
</pre>
<p>The Collection attribute ensures that our host is up and our fake app is running. By passing in the local URL, we make sure the client targets our fake web application.</p>
<h2>Improving our fake app</h2>
<p>To be able to run some more interesting tests, we need to make sure that our fake web app emulates the REST API as closely as possible. We don’t want to spend too much time replicating the real API, so I’ve come up with an implementation that matches URLs with predefined JSON responses. To do this we will store canned JSON responses as files in the assembly and then use a convention to match routes to those responses.</p>
<p>To add a JSON-file to your assembly, add a new file with the .json-extension and set the Build Action to Embedded Resource in the properties.</p>
<p>Now we’ll match routes with these predefined responses. Instead of defining separate routes for all the API’s operations, we will define only one. Based on the route this method was called with, we will retrieve the JSON from the assembly’s resources and return that result:</p>
<pre class="brush: csharp;">// Use a greedy route
Get[@"/{route*}"] = _ =&gt;
{
    // 1. Read the url and replace all slashes with dots
    var resourcePath = _.route.ToString().Replace("/", ".");

    // 2. Combine the filename with the path of the assembly and the directory
    var resourceName = $"&lt;assemblyname&gt;.&lt;directory&gt;.{resourcePath}.json";
                
    // 3. Read the content of the json file
    Response response;
    using (var stream = Assembly.GetExecutingAssembly()
                                .GetManifestResourceStream(resourceName))
        using (var reader = new StreamReader(stream))
            response = reader.ReadToEnd();
                
     // 4. Set the content type as JSON and return the response
     response.ContentType = "application/json";
     return response;
};
</pre>
<p>This method consists of 4 parts:</p>
<ul>
<li>First we get the route that the method was called with. We replace all slashes with dots (see the next step for why this is important).</li>
<li>Then we translate this path into a path to a resource. To read a resource you have to specify the path to it in the format displayed above. If you have directories they should be separated with dots instead of slashes. This allows us to match routes with a directory structure.</li>
<li>Next we read the content of the file and (implicitly) cast it to a NancyFX Response.</li>
<li>Lastly, we set the content-type and return it.</li>
</ul>
<p>With this in place, we don’t have to modify the server anymore. We can now add responses based on routes. As an example, we can simulate the responses to the following routes by creating the directory structure below:</p>
<ul>
<li>/products/{id}</li>
<li>/deals/{id}</li>
<li>/deals/{id}/products</li>
</ul>
<pre class="brush: csharp;">- deals
    - 1994
        -&gt; products.json    // matches /deals/1994/products
    -&gt; 1994.json            // matches /deals/1994
    -&gt; 2441.json            // matches /deals/2441
- products
    -&gt; 4245.json            // matches /products/4245
</pre>
<p>The above method only works for GET-requests, but you could devise a similar strategy for the other HTTP methods as well.</p>
<h2>Conclusion</h2>
<p>With a bit of infrastructure code, which is reusable across projects, we can set up a quick way of testing our REST clients.</p>
<p>In this post I showed a basic infrastructure set-up and some improvements to make working with the fake application easier. There are still more improvements to make, such as a mechanism to match POST requests with canned responses, setting response headers and status codes, and the ability to look into our test server to see what was sent to it.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/01/29/testing-rest-clients/">Testing REST clients</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2016/01/29/testing-rest-clients/</feedburner:origLink></item>
		<item>
		<title>New features in C# 7, part 2</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/8DOgfeevm44/</link>
		<pubDate>Mon, 25 Jan 2016 01:25:16 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[.NET]]></category>
		<category><![CDATA[c#]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1220</guid>
		<description><![CDATA[<p>In my previous post about probable new features in C# 7, I talked about Tuples, Record Types and Pattern Matching. These are the most obvious candidates for inclusion. In this post I want to highlight a few more new features that are not getting as much attention but are also very useful features. C# 7 [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/01/25/new-features-in-c-sharp-7-part-2/">New features in C# 7, part 2</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In my previous post about probable new features in C# 7, I talked about <strong>Tuples</strong>, <strong>Record Types</strong> and <strong>Pattern Matching</strong>. These are the most obvious candidates for inclusion. In this post I want to highlight a few more new features that are not getting as much attention but are also very useful features.</p>
<h2>C# 7 Non-nullable reference types</h2>
<h3>What?</h3>
<p>Nullable value types were introduced in C# 2.0. Essentially they’re just syntactic sugar around the <font face="Courier New">Nullable&lt;T&gt;</font> struct. Non-nullable reference types are the reverse of that feature: they let you declare a reference type that is guaranteed never to be null.</p>
<h3>Why?</h3>
<p>The null reference has been called “the billion dollar mistake” by its inventor, <a href="https://en.wikipedia.org/wiki/Tony_Hoare" target="_blank">Tony Hoare</a>. <font face="Courier New">NullReferenceException</font>s are all too common. The problem is two-fold: either you don’t check for null and risk runtime exceptions, or you do check and your code becomes verbose, littered with statements that have little to do with what you’re actually trying to achieve. The ability to declare a reference type as non-nullable overcomes both problems.</p>
<h3>How?</h3>
<p>NOTE: The syntax here is still in flux and will probably change. There are various proposals floating around and it’s still unclear what the definitive form will be. Also, where I mention “error”, it’s still unclear whether it will be a compilation error or just a warning.</p>
<p>First of all, the ideal syntax would be to default to non-nullable reference types. This would provide symmetry between reference and value types: </p>
<pre class="brush: csharp;">int a;     //non-nullable value type
int? b;    //nullable value type
string c;  //non-nullable reference type
string? d; //nullable reference type
</pre>
<p>However, there are millions of lines of C# out there that would break if non-nullable types would become the default, so unfortunately it has to be designed differently to keep everything backwards compatible. The currently proposed syntax is as follows:</p>
<pre class="brush: csharp;">int a;     //non-nullable value type
int? b;    //nullable value type
string! c; //non-nullable reference type
string d;  //nullable reference type
</pre>
<p>Using nullable and non-nullable types will then affect the compiler:</p>
<pre class="brush: csharp;">MyClass a;  // Nullable reference type
MyClass! b; // Non-nullable reference type

a = null;   // OK, this is nullable
b = null;   // Error, b is non-nullable
b = a;      // Error, a might be null, b can't be null

WriteLine(b.ToString()); // OK, can't be null
WriteLine(a.ToString()); // Warning! Could be null!

if (a != null) { WriteLine(a.ToString()); } // OK, you checked
WriteLine(a!.ToString()); // OK, if you say so

</pre>
<p>Using this syntax is OK, but it would become problematic for generic types:</p>
<pre class="brush: csharp;">// The Dictionary is non-nullable but string, List and MyClass aren't
Dictionary&lt;string, List&lt;MyClass&gt;&gt;! myDict;   

// Proper way to declare all types as non-nullable
Dictionary&lt;string!, List&lt;MyClass!&gt;!&gt;! myDict;</pre>
<p>The above is a bit difficult to read (and type) so a shortcut has also been proposed:</p>
<pre class="brush: csharp;">// Typing ! in front of the type arguments makes all types non-nullable
Dictionary!&lt;string, List&lt;MyClass&gt;&gt; myDict;</pre>
<h2>C# 7 Local Functions</h2>
<h3>What?</h3>
<p>The ability to declare methods and types in block scope.</p>
<h3>Why?</h3>
<p>This is already (kind of) possible by using the <font face="Courier New">Func</font> and <font face="Courier New">Action</font> types with anonymous methods. However, they lack a few features:</p>
<ul>
<li>Generics</li>
<li>ref and out parameters</li>
<li>params</li>
</ul>
<p>Local functions would have the same capabilities as normal methods but would only be scoped to the block they were declared in.</p>
<h3>How?</h3>
<pre class="brush: csharp;">public int Calculate(int someInput)
{
    int Factorial(int i)
    {
        if (i &lt;= 1)
            return 1;
        return i * Factorial(i - 1);
    }
    var input = someInput + ... // Other calcs

    return Factorial(input);
}

</pre>
<h2>C# 7 Immutable Types</h2>
<h3>What?</h3>
<p>An immutable object is an object whose state cannot be modified after its creation. </p>
<h3>Why?</h3>
<p>Immutable objects offer a few benefits:</p>
<ul>
<li>Inherently thread-safe</li>
<li>Makes it easier to use and reason about code</li>
<li>Easier to parallelize your code</li>
<li>References to immutable objects can be cached, as they won’t change</li>
</ul>
<p>Currently it’s already possible to declare immutable objects:</p>
<pre class="brush: csharp;">public class Point
{
    public Point(int x, int y)
    {
        X = x;
        Y = y;
    }

    public int X { get; }
    public int Y { get; }
}</pre>
<p>While the above is definitely an immutable object, the problem is that the <em>intent</em> is not clearly visible. One day, someone might add a setter and consumers of this type, expecting immutability, could experience different results.</p>
<h3>How?</h3>
<p>NOTE: Again, the syntax here is still in flux. The initial proposal suggests adding an <font face="Courier New">immutable</font> keyword:</p>
<pre class="brush: csharp;">public immutable class Point
{
    public Point(int x, int y)
    {
        X = x;
        Y = y;
    }

    public int X { get; }
    public int Y { get; }
}

</pre>
<p>When you have immutable types, a nice addition is language support for creating new instances based on a different instance:</p>
<pre class="brush: csharp;">var a = new Point(2, 5);
var b = a with { X = 1 };
</pre>
<h2>Conclusion</h2>
<p>As I said before, it’s still early days, so the above syntax can (and probably will) change, but these features are very exciting and will make C# even more enjoyable to work with. I encourage everyone to have a look on GitHub to follow the current discussions around these features:</p>
<p><a href="https://github.com/dotnet/roslyn/issues/2136" target="_blank">Work List of Features</a></p>
<p>The full details of the above proposals can also be found here:</p>
<p><a href="https://github.com/dotnet/roslyn/issues/227" target="_blank">Non-nullable reference types</a><br /><a href="https://github.com/dotnet/roslyn/issues/259" target="_blank">Local functions</a><br /><a href="https://github.com/dotnet/roslyn/issues/159" target="_blank">Immutable types</a></p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/01/25/new-features-in-c-sharp-7-part-2/">New features in C# 7, part 2</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2016/01/25/new-features-in-c-sharp-7-part-2/</feedburner:origLink></item>
		<item>
		<title>C# 7: New Features</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/GTItP9Nlnr4/</link>
		<pubDate>Wed, 20 Jan 2016 00:20:04 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[.NET]]></category>
		<category><![CDATA[c#]]></category>

		<guid isPermaLink="false">https://www.kenneth-truyers.net/?p=1215</guid>
		<description><![CDATA[<p>It seems like only yesterday we got C# 6, but as it goes in software development land, the next thing is already on its way. In this post I want to describe the most likely new C# 7 features, what they look like and why they’re useful. C# 7 Tuples Update 22/07/2016: Tuples are planned [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/01/20/new-features-in-c-sharp-7/">C# 7: New Features</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<figure><img style="border-left-width: 0px; border-right-width: 0px; border-bottom-width: 0px; display: inline; -webkit-box-shadow: none; -moz-box-shadow: none; box-shadow: none; border: none; margin-right: 10px;" title="C# 7" src="https://www.kenneth-truyers.net/wp-content/uploads/2016/04/csharp-3.jpg" alt="C# 7" width="80" height="77" align="left" border="0" /> It seems like only yesterday we got C# 6, but as it goes in software development land, the next thing is already on its way. In this post I want to describe the most likely new C# 7 features, what they look like and why they’re useful.</p>
<h2>C# 7 Tuples</h2>
<blockquote><p>Update 22/07/2016: Tuples are planned to be a part of C# 7</p></blockquote>
<h3>What?</h3>
<p><span style="font-family: 'Courier New';">Tuples</span> are a temporary grouping of values. You could compare a <span style="font-family: 'Courier New';">Tuple</span> to a POCO-class, but instead of defining it as a class you can define it on the fly. The following is an example of such a class:</p>
<pre class="brush: csharp;">class PropertyBag
{
    public int Id {get; set;}
    public string Name {get; set;}
}
var myObj = new PropertyBag { Id = 1, Name = "test" };
</pre>
<p>In the above example it wasn&#8217;t really necessary to name the concept we&#8217;re working with, as it is probably a temporary structure that doesn&#8217;t need naming. <span style="font-family: 'Courier New';">Tuples</span> are a way of temporarily creating such structures on the fly without the need to create a class.</p>
<h3>Why?</h3>
<p>The most common reason for having a group of values temporarily grouped are multiple return values from a method. Currently, there are a few ways of doing that in C#:</p>
<p><strong>Out parameters</strong></figure>
<pre class="brush: csharp;">public void GetLatLng(string address, out double lat, out double lng) { ... }

double lat, lng;
GetLatLng("some address", out lat, out lng);
Console.WriteLine($"Lat: {lat}, Long: {lng}"); 
</pre>
<p>Using out-parameters has several disadvantages:</p>
<ul>
<li>It cannot be used for <span style="font-family: 'Courier New';">async</span>-methods</li>
<li>You have to declare the parameters upfront (and you can’t use <span style="font-family: 'Courier New';">var</span>, you have to include the type)</li>
</ul>
<p><strong>Tuple-type</strong></p>
<p>There currently already is a <span style="font-family: 'Courier New';">Tuple</span>-type in C# that behaves like a native tuple. You could rewrite the previous method as follows:</p>
<pre class="brush: csharp;">public Tuple&lt;int, int&gt; GetLatLng(string address) { ... }

var latLng = GetLatLng("some address");
Console.WriteLine($"Lat: {latLng.Item1}, Long: {latLng.Item2}"); 
</pre>
<p>This does not have the disadvantages of out-parameters, but the resulting code is rather obscure, since the tuple exposes meaningless property names like <span style="font-family: 'Courier New';">Item1</span> and <span style="font-family: 'Courier New';">Item2</span>.</p>
<p><strong>Class / struct</strong></p>
<p>You could also declare a new type and use that as the return type:</p>
<pre class="brush: csharp;">struct LatLng{ public double Lat; public double Lng;}
public LatLng GetLatLng(string address) { ... }

var ll= GetLatLng("some address");
Console.WriteLine($"Lat: {ll.Lat}, Long: {ll.Lng}"); 
</pre>
<p>This has none of the disadvantages of out-parameters or the tuple type, but it’s rather verbose: declaring a separate type just to return two values is meaningless overhead.</p>
<h3>How?</h3>
<p>There are a few different use cases for tuples that will be available in C# 7:</p>
<p><strong>Tuple return types</strong></p>
<p>You can specify multiple return types for a function, in much the same syntax as you do for specifying multiple input types (method arguments)</p>
<pre class="brush: csharp;">public (double lat, double lng) GetLatLng(string address) { ... }

var ll = GetLatLng("some address"); 
Console.WriteLine($"Lat: {ll.lat}, Long: {ll.lng}");</pre>
<p><strong>Inline tuples</strong></p>
<p>You could also create tuples inline:</p>
<pre class="brush: csharp;">var ll = new (double lat, double lng) { lat = 0, lng = 0 };
</pre>
<p><strong>Tuple deconstruction</strong></p>
<p>Because the bundling of the values is not important as a concept, it’s quite possible that you don’t want to access the bundle at all, but get straight to the internal values. Instead of accessing the tuple properties as in the example of <em>Tuple Return Types</em>, you can also destructure the tuple immediately:</p>
<pre class="brush: csharp;">(var lat, var lng) = GetLatLng("some address");
Console.WriteLine($"Lat: {lat}, Long: {lng}");</pre>
<h2>C# 7 Record types</h2>
<blockquote><p>Update 22/07/2016: Records are probably not coming in C# 7, but will have to wait until the next version (supposedly c# 8)</p></blockquote>
<h3>What?</h3>
<p><em>A record type</em> is a simple bag of properties, a data type with only properties</p>
<h3>Why?</h3>
<p>Often <span style="font-family: 'Courier New';">classes</span> or <span style="font-family: 'Courier New';">structs</span> are merely collections of properties. They still need a full declaration, which is quite verbose. The following example shows that even a class with three properties requires quite a bit of code to declare:</p>
<pre class="brush: csharp;">class MyPoint
{
    int _x;
    int _y;
    int _z;
    public MyPoint(int x, int y, int z){
        this._x = x;
        this._y = y;
        this._z = z;
    }
    public int X {get{ return this._x;}}
    public int Y {get{ return this._y;}}
    public int Z {get{ return this._z;}}
}
</pre>
<h3>How?</h3>
<p>With record types you could write the above in a single line:</p>
<pre class="brush: csharp;">class Point(int X, int Y, int Z);
</pre>
<p>You will get a few more things out of this:</p>
<ul>
<li>The class will automatically implement <span style="font-family: 'Courier New';">IEquatable&lt;Point&gt;</span>, which means you can compare two record types based on their values instead of reference.</li>
<li>The <span style="font-family: 'Courier New';">ToString</span>-method will output the values in the record</li>
</ul>
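<p>Until record types arrive, you can approximate by hand what the compiler would generate; the following is a rough, hand-written sketch of what <span style="font-family: 'Courier New';">class Point(int X, int Y, int Z);</span> might expand to (not the actual generated code):</p>

```csharp
using System;

public sealed class Point : IEquatable<Point>
{
    public int X { get; }
    public int Y { get; }
    public int Z { get; }

    public Point(int x, int y, int z) { X = x; Y = y; Z = z; }

    // Value-based equality: two points with the same values are equal
    public bool Equals(Point other) =>
        other != null && X == other.X && Y == other.Y && Z == other.Z;

    public override bool Equals(object obj) => Equals(obj as Point);
    public override int GetHashCode() => X ^ (Y << 8) ^ (Z << 16);

    // ToString outputs the values in the record
    public override string ToString() => $"Point(X: {X}, Y: {Y}, Z: {Z})";
}
```

<p>With this in place, <span style="font-family: 'Courier New';">new Point(1, 2, 3).Equals(new Point(1, 2, 3))</span> is true, even though the two references differ.</p>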
<h2>C# 7 Pattern Matching</h2>
<blockquote><p>Update 22/07/2016: Pattern matching is planned to be partially supported in C# 7. In C# 7 only switching on types will be available. Full support for pattern matching will come in the next version (supposedly c# 8)</p></blockquote>
<h3>What?</h3>
<p>With record types in play, we can now get pattern matching built-in. Pattern matching means that you can switch on the type and shape of the data you have and execute different statements accordingly.</p>
<h3>Why?</h3>
<p>Although pattern matching looks a lot like if/else, it has certain advantages:</p>
<ul>
<li>You can do pattern matching on any data type, even your own, whereas with if/else you always need primitives to match</li>
<li>Pattern matching can extract values from your expression</li>
</ul>
<h3>How?</h3>
<p>The following is an example of pattern matching:</p>
<pre class="brush: csharp;">class Geometry();
class Triangle(int Width, int Height, int Base) : Geometry;
class Rectangle(int Width, int Height) : Geometry;
class Square(int width) : Geometry;

Geometry g = new Square(5);
switch (g)
{
    case Triangle(int Width, int Height, int Base):
        WriteLine($"{Width} {Height} {Base}");
        break;
    case Rectangle(int Width, int Height):
        WriteLine($"{Width} {Height}");
        break;
    case Square(int Width):
        WriteLine($"{Width}");
        break;
    default:
        WriteLine("&lt;other&gt;");
        break;
}
</pre>
<p>In the sample above you can see how we match on the data type and immediately destructure it into its components.</p>
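<p>Since, per the update above, C# 7 is only planned to support switching on types (not positional deconstruction), the same idea restricted to type patterns would look like this; the property-based classes are an assumption standing in for the record syntax, which won&#8217;t be available yet:</p>

```csharp
using System;

string Describe(Geometry g)
{
    switch (g)
    {
        // Type patterns: match on the runtime type and bind it to a variable
        case Rectangle r:
            return $"{r.Width} {r.Height}";
        case Square s:
            return $"{s.Width}";
        default:
            return "<other>";
    }
}

Console.WriteLine(Describe(new Square { Width = 5 }));

// Plain classes with public fields stand in for the record types
abstract class Geometry { }
class Rectangle : Geometry { public int Width; public int Height; }
class Square : Geometry { public int Width; }
```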
<h2>C# 7 Non-nullable reference types</h2>
<blockquote><p>Update 22/07/2016: Non-nullable reference types are probably not coming in C# 7, but will have to wait until the next version (supposedly c# 8)</p></blockquote>
<h3>What?</h3>
<p>Nullable value types were introduced in C# 2.0. Essentially they’re just syntactic sugar around the <span style="font-family: 'Courier New';">Nullable&lt;T&gt;</span> struct. Non-nullable reference types are the reverse of that feature: they let you declare a reference type that is guaranteed not to be null.</p>
<h3>Why?</h3>
<p>The null reference has been called “The billion dollar mistake” (by the inventor: <a href="https://en.wikipedia.org/wiki/Tony_Hoare" target="_blank">Tony Hoare</a>). <span style="font-family: 'Courier New';">NullReference</span> exceptions are all too common. The problem is two-fold: either you don’t check for them and then you might get runtime exceptions or you do check for them and then your code becomes verbose or littered with statements that have little to do with what you’re actually trying to achieve. The ability to declare a reference type as non-nullable overcomes these problems.</p>
<h3>How?</h3>
<p>NOTE: The syntax here is still in flux and will probably change. There are various proposals floating around and it’s still unclear what the definitive form will be. Also, where I mention “error”, it’s still unclear whether it will be a compilation error or just a warning.</p>
<p>First of all, the ideal syntax would be to default to non-nullable reference types. This would provide symmetry between reference and value types:</p>
<pre class="brush: csharp;">int a;     //non-nullable value type
int? b;    //nullable value type
string c;  //non-nullable reference type
string? d; //nullable reference type
</pre>
<p>However, there are millions of lines of C# out there that would break if non-nullable types would become the default, so unfortunately it has to be designed differently to keep everything backwards compatible. The currently proposed syntax is as follows:</p>
<pre class="brush: csharp;">int a;     //non-nullable value type
int? b;    //nullable value type
string! c; //non-nullable reference type
string d;  //nullable reference type
</pre>
<p>Using nullable and non-nullable types will then affect the compiler:</p>
<pre class="brush: csharp;">MyClass a;  // Nullable reference type
MyClass! b; // Non-nullable reference type

a = null;   // OK, this is nullable
b = null;   // Error, b is non-nullable
b = a;      // Error, a might be null, b can't be null

WriteLine(b.ToString()); // OK, can't be null
WriteLine(a.ToString()); // Warning! Could be null!

if (a != null) { WriteLine(a.ToString()); } // OK, you checked
WriteLine(a!.ToString()); // OK, if you say so

</pre>
<p>Using this syntax is OK, but it would become problematic for generic types:</p>
<pre class="brush: csharp;">// The Dictionary is non-nullable but string, List and MyClass aren't
Dictionary&lt;string, List&lt;MyClass&gt;&gt;! myDict;

// Proper way to declare all types as non-nullable
Dictionary&lt;string!, List&lt;MyClass!&gt;!&gt;! myDict;
</pre>
<p>The above is a bit difficult to read (and type) so a shortcut has also been proposed:</p>
<pre class="brush: csharp;">// Typing ! in front of the type arguments makes all types non-nullable
Dictionary!&lt;string, List&lt;MyClass&gt;&gt; myDict;</pre>
<h2>C# 7 Local Functions</h2>
<blockquote><p>Update 22/07/2016: Local functions are planned to be a part of C# 7</p></blockquote>
<h3>What?</h3>
<p>The ability to declare methods and types in block scope.</p>
<h3>Why?</h3>
<p>This is already (kind of) possible by using the <span style="font-family: 'Courier New';">Func</span> and <span style="font-family: 'Courier New';">Action</span> types with anonymous methods. However, they lack a few features:</p>
<ul>
<li>Generics</li>
<li>ref and out parameters</li>
<li>params</li>
</ul>
<p>Local functions would have the same capabilities as normal methods but would only be scoped to the block they were declared in.</p>
<h3>How?</h3>
<pre class="brush: csharp;">public int Calculate(int someInput)
{
    int Factorial(int i)
    {
        if (i &lt;= 1)
            return 1;
        return i * Factorial(i - 1);
    }
    var input = someInput + ... // Other calcs

    return Factorial(input);
}

</pre>
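<p>To illustrate the advantages over <span style="font-family: 'Courier New';">Func</span>/<span style="font-family: 'Courier New';">Action</span>, here is a sketch of a local function that is both generic and takes a <span style="font-family: 'Courier New';">params</span> argument, neither of which an anonymous method can do:</p>

```csharp
using System;

string Example()
{
    // A generic local function with a params parameter:
    // impossible to express with a Func/Action delegate
    string Join<T>(string sep, params T[] items) => string.Join(sep, items);

    return Join(", ", 1, 2, 3);
}

Console.WriteLine(Example());  // prints "1, 2, 3"
```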
<h2>C# 7 Immutable Types</h2>
<blockquote><p>Update 22/07/2016: Immutable types are currently not on the planning for C# 7, nor the next version</p></blockquote>
<h3>What?</h3>
<p>An immutable object is an object whose state cannot be modified after its creation.</p>
<h3>Why?</h3>
<p>Immutable objects offer a few benefits:</p>
<ul>
<li>Inherently thread-safe</li>
<li>Makes it easier to use and reason about code</li>
<li>Easier to parallelize your code</li>
<li>References to immutable objects can be cached, as they won’t change</li>
</ul>
<p>Currently it’s already possible to declare immutable objects:</p>
<pre class="brush: csharp;">public class Point
{
    public Point(int x, int y)
    {
        X = x;
        Y = y;
    }

    public int X { get; }
    public int Y { get; }
}</pre>
<p>While the above is definitely an immutable object, the problem is that the <em>intent</em> is not clearly visible. One day, someone might add a setter and consumers of this type, expecting immutability, could experience different results.</p>
<h3>How?</h3>
<p>NOTE: Again, the syntax here is still in flux but the initial proposal suggests adding an immutable keyword:</p>
<pre class="brush: csharp;">public immutable class Point
{
    public Point(int x, int y)
    {
        X = x;
        Y = y;
    }

    public int X { get; }
    public int Y { get; }
}

</pre>
<p>When you have immutable types, a nice addition is language support for creating new instances based on a different instance:</p>
<pre class="brush: csharp;">var a = new Point(2, 5);
var b = a with { X = 1};
</pre>
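<p>Until such a <span style="font-family: 'Courier New';">with</span> operator exists, a common workaround is a hand-written copy method on the immutable type (a sketch; <span style="font-family: 'Courier New';">With</span> is a hypothetical name, not a language feature):</p>

```csharp
using System;

var a = new Point(2, 5);
var b = a.With(x: 1);   // b is (1, 5); a is left untouched

Console.WriteLine($"{b.X}, {b.Y}");

public class Point
{
    public Point(int x, int y) { X = x; Y = y; }
    public int X { get; }
    public int Y { get; }

    // Copies the instance, replacing only the values that were passed
    public Point With(int? x = null, int? y = null) =>
        new Point(x ?? X, y ?? Y);
}
```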
<h2>Conclusion</h2>
<p>For more information about these and other new proposed features, head over to GitHub for the full list:</p>
<p><a href="https://github.com/dotnet/roslyn/issues/2136" target="_blank">Work List of Features</a></p>
<p>NOTE: It’s still early days: the syntax of all of these can (and probably will) change, and some features might not even make it into C# 7, in which case we&#8217;ll have to wait until C# 8 or later. I encourage everyone to take a look at the GitHub page to examine and learn about these features; the discussion is quite lively there. If you have another good proposal, post it on the same forums and maybe your feature makes the cut. It’s really nice that Microsoft has opened up all these channels, so we’d better make use of them!</p>
<p>Update 22/07/2016 If you want to try out these features, you can now download the preview version of <a href="https://www.visualstudio.com/en-us/news/releasenotes/vs15-relnotes">Visual Studio “15” Preview 3</a> (which is different from Visual Studio 2015).</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2016/01/20/new-features-in-c-sharp-7/">C# 7: New Features</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
<div class="feedflare">
</div><img src="http://feeds.feedburner.com/~r/KennethTruyers/~4/GTItP9Nlnr4" height="1" width="1" alt=""/>]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2016/01/20/new-features-in-c-sharp-7/</feedburner:origLink></item>
		<item>
		<title>Programming in the zone</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/PDFW9jKRtos/</link>
		<pubDate>Mon, 05 Oct 2015 17:22:29 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[best practices]]></category>
		<category><![CDATA[development]]></category>
		<category><![CDATA[simplicity]]></category>

		<guid isPermaLink="false">http://www.kenneth-truyers.net/?p=1212</guid>
		<description><![CDATA[<p>Much has been written and said about programming in “the zone”. Most articles give you tips on how to get in the zone and stay there. I haven’t found any article though that challenges the usefulness of ‘being in the zone’ (If I have missed some, please let me know). What is the zone? The [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2015/10/05/programming-in-the-zone/">Programming in the zone</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Much has been written and said about programming in “the zone”. Most articles give you tips on how to get in the zone and stay there. I haven’t found any article though that challenges the usefulness of ‘being in the zone’ (if I have missed some, please let me know).</p>
<h2>What is the zone?</h2>
<p>The zone is usually described as a state-of-mind where one feels productive and hyper-focused. These are certainly good traits to have when you are programming. However, the zone also comes with tunnel-vision and a sense of being infallible. These are not good traits to have.</p>
<p>Being in the zone is also often described as a pleasurable feeling.</p>
<p>Furthermore, the zone is also related to left-hemisphere vs. right-hemisphere thinking (note that these don’t refer to the actual location of measured brain activity, but are just placeholders for certain types of brain activity). Generally speaking, the left side of the brain tends to control many aspects of language and logic, while the right side tends to handle spatial information, visual comprehension and creativity. With that in mind, do we really want to be in a state of mind where logic is less prominent and we have a tendency to be more creative? While creativity is a good trait to have when programming, it depends a lot on what type of creativity.</p>
<h2>Negative effects</h2>
<p>I have found personally that being in the zone is counter-productive. When I’m in the zone, I do feel very productive, write more code and find complex solutions. When I look at the same code afterwards though, I often find that it’s overly complex. My ‘creative’ solutions are not always as clear as they should be. Because of the tunnel vision you get when programming in the zone, you can lose the big picture. Code that I write in the zone comes out faster, but I need to go back to it more often. Even if it doesn’t need any changes, it’s harder to read and in the end costs more time than something that took a bit longer to write but is more readable and understandable.</p>
<p>One of my favorite quotes about programming is by Brian W. Kernighan:</p>
<blockquote><p>Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it </p>
</blockquote>
<p>This certainly applies to code written in the zone. When I’m in the zone I’m more focused, so the code I write is the most clever code I can write. By the above definition then, I will not be smart enough to debug it unless I’m in the zone again, and thus your abilities as a developer become dependent on being in the zone. I don’t think that’s a good idea.</p>
<h2>Interruptions</h2>
<p>The role of software developers has changed significantly over the past 10 or 20 years. Twenty years ago, the stereotype of a developer was a geek in a dark room in front of a bright screen, cranking out code. Today, developers are part of an organization, a team and a development process, where interaction with customers and colleagues is often as important, if not more important, than cranking out code.</p>
<p>Something developers often complain about is interruptions. It’s true that an interruption gets you out of the zone, and being pulled out of a focused state can really be annoying. But a developer’s job is not just about cranking out code; it’s about supporting your business, and coding is just a part of that. If someone needs help, should they really wait until you ‘feel like answering them’? It’s part of the job, so we should help people out when needed. After all, we expect that treatment from other colleagues, don’t we?</p>
<p>Obviously there’s a limit to the amount of interruptions you can handle without losing productivity, but I don’t think an interruption should get anyone upset.</p>
<h2>Using it correctly</h2>
<p>As with most ‘tools’ we have available, saying that a tool is great or bad at everything is usually short-sighted. While I’m certainly opposed to thinking that being in the zone all the time is optimal, I also think it can be used effectively, given the correct circumstances.</p>
<p>When you’re learning a new tool or paradigm, it can be very beneficial to be in the zone, as you will be hyper-focused and can absorb much more information in a shorter time span. Coding katas are a perfect example where being in the zone can be very beneficial. A kata is a very focused exercise where you don’t need to keep the big picture in mind (there is none) and it often requires some creativity to solve the puzzle. These puzzles are excellent to train your mind.</p>
<p>Essentially, the zone CAN be used for a very tightly scoped and particularly hard problem, but I avoid it as a general state to develop software.</p>
<h2>Conclusion</h2>
<p>While many will certainly disagree with me (that’s perfectly fine), I would like to invite you to take a step back and think about whether you see being in the zone beneficial because it improves your productivity or whether you just enjoy the feeling of being in the zone.</p>
<p>I tend to steer away from being in the zone. When I feel myself getting into that state, I usually get up for a few minutes and do something else. I only allow it to happen when I want to practice some skills, because then there’s no downside to being in the zone. In the end, I think awareness of what the zone is, realizing when you’re in the zone and the ability to use it effectively is a valuable skill. What do you think? Sound off in the comments!</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2015/10/05/programming-in-the-zone/">Programming in the zone</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
<div class="feedflare">
</div><img src="http://feeds.feedburner.com/~r/KennethTruyers/~4/PDFW9jKRtos" height="1" width="1" alt=""/>]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2015/10/05/programming-in-the-zone/</feedburner:origLink></item>
		<item>
		<title>The test pyramid</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/T6eZJ4-8cNw/</link>
		<pubDate>Sat, 27 Jun 2015 00:51:19 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[.NET]]></category>
		<category><![CDATA[Acceptance Testing]]></category>
		<category><![CDATA[c#]]></category>
		<category><![CDATA[Continuous Delivery]]></category>
		<category><![CDATA[development]]></category>
		<category><![CDATA[unit testing]]></category>

		<guid isPermaLink="false">http://www.kenneth-truyers.net/?p=1208</guid>
		<description><![CDATA[<p>The test pyramid is a concept that was developed by Mike Cohn. It states that you should have an appropriate amount of each type of test. In the pyramid he distinguishes different types of tests: Exploratory tests: Performed manually by a tester System tests: Executed by a program or script that automates the UI (also [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2015/06/27/the-test-pyramid/">The test pyramid</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>The test pyramid is a concept that was developed by Mike Cohn. It states that you should have an appropriate amount of each type of test. In the pyramid he distinguishes different types of tests:</p>
<ul>
<li>Exploratory tests: Performed manually by a tester
<li>System tests: Executed by a program or script that automates the UI (also known as Acceptance Tests or UI tests)
<li>Integration tests: Executed against a layer just beneath the UI (sometimes referred to as subcutaneous tests)
<li>Component tests: Executed against a single component of the application
<li>Unit tests: Test a single “unit” in the software (sometimes a class, sometimes a function)</li>
</ul>
<p>In this post I’ll be talking about automated testing, so the exploratory tests are not part of this discussion. Because of their similar nature, I will also be grouping the integration tests and component tests in one category. The theory for the testing pyramid says that you should have a good coverage (almost complete) for your unit tests, a decent coverage for the integration tests and a small coverage for system tests. </p>
<figure><a href="http://kennethtruyersnet.blob.core.windows.net/wordpress/2015/06/test_pyramid.png"><img title="test_pyramid" style="border-left-width: 0px; border-right-width: 0px; border-bottom-width: 0px; display: inline; border-top-width: 0px" border="0" alt="test_pyramid" src="http://kennethtruyersnet.blob.core.windows.net/wordpress/2015/06/test_pyramid_thumb.png" width="644" height="435"></a>
<p>This is widely accepted as the way to implement automated testing. While the general idea is sound, I think it shouldn’t be applied blindly. When you ask yourself the question, “how should we organize our testing efforts?”, the answer, as often in software, is “it depends”. For any given software product, there are various factors in play that could skew the pyramid.</p>
<p>Before we look at what influences our pyramid, we have to look at the characteristics of these tests. As you go towards the top of the pyramid, your tests will instill more confidence, as they are end-to-end tests. Conversely, the further towards the bottom you go, the less confidence the tests will bring you, since they only test a small part in isolation. On the other hand, at the top of the pyramid you will have slower, more brittle tests that are harder to write and maintain. A unit test tends to be more deterministic, faster and easier to write.</p>
<p><a href="http://kennethtruyersnet.blob.core.windows.net/wordpress/2015/06/test_characteristics.png"><img title="test_characteristics" style="border-left-width: 0px; border-right-width: 0px; border-bottom-width: 0px; display: inline; border-top-width: 0px" border="0" alt="test_characteristics" src="http://kennethtruyersnet.blob.core.windows.net/wordpress/2015/06/test_characteristics_thumb.png" width="644" height="307"></a></figure>
<p>In an ideal world, we would like our tests to be end-to-end, fast, deterministic and easy to write and maintain. Unfortunately, that’s not possible (yet). Instead of focusing our test efforts on following the pyramid to the letter, however, we should take this ideal as our goal and strive towards a test suite that matches that description.</p>
<h2>Tooling</h2>
<p>When this theory was presented, we didn’t have all the tools we currently have (or they were not as easily accessible as they are now). A few of the characteristics have been skewed as a result. To give a few examples:</p>
<ul>
<li>Cloud infrastructure (VMs, containers) provides us with cheap “hardware” on demand. This can solve part of the problem of slow UI tests: instead of trying to write fast UI tests, we can just throw hardware at the problem. (That doesn’t mean you should write slow tests, but sometimes it’s more cost effective to add hardware than manpower.)
<li>Services like BrowserStack and SauceLabs became available, allowing you to spin up tests on a variety of platforms, without a big investment.
<li>Testing frameworks have improved. BDD has become quite popular and as a result a lot of the frameworks have been adding features.
<li>Application frameworks have been adapted to be more loosely coupled and more flexible in the way we run them. As an example, any ASP.NET application can now be self-hosted, through the use of OWIN. If you combine that with an in-memory database, it can speed up integration tests.</li>
</ul>
<p>Knowing that the tools have improved, we can already see that our pyramid should perhaps be a bit taller, with a little less focus on the bottom part and some more focus on the top (confidence!). This is still on a global level though. Depending on the application, you can still adapt the pyramid to the needs of the application.</p>
<h2>Application</h2>
<p>The nature of the application plays a big role in where you should put your emphasis in testing. Each layer of the pyramid is influenced by the application type.</p>
<h3>Unit testing</h3>
<p>Steve Sanderson wrote a great <a href="http://blog.stevensanderson.com/2009/11/04/selective-unit-testing-costs-and-benefits/" target="_blank">blog post</a> about the costs and benefits of unit testing. I advise you to read it, as he makes great points in there. The summary of his post is that depending on what code you have, the costs and benefits differ. </p>
<figure><a href="http://kennethtruyersnet.blob.core.windows.net/wordpress/2015/06/steve_sanderson_testing.png"><img title="steve_sanderson_testing" style="border-left-width: 0px; border-right-width: 0px; border-bottom-width: 0px; display: inline; border-top-width: 0px" border="0" alt="steve_sanderson_testing" src="http://kennethtruyersnet.blob.core.windows.net/wordpress/2015/06/steve_sanderson_testing_thumb.png" width="644" height="467"></a></figure>
<p>The reasoning is as follows:</p>
<ul>
<li>Algorithmic code with few dependencies is easy to test, because you have a fixed set of outcomes for a fixed set of inputs. It’s also very beneficial, because such code tends to be non-obvious and mistakes are easily made.
<li>Trivial code is easy to test, but there’s little to no benefit in doing so, since the chances of catching a mistake are slim.
<li>Coordinators (code that “glues” together a bunch of dependencies) are difficult to test, because you’d need a lot of mocks, stubs and fakes, and when you change the implementation you usually have to change the test. There’s also very little value in testing them, because this code usually doesn’t really do anything itself; it just delegates.
<li>Finally, there’s non-trivial code with a lot of dependencies. This code is difficult to test because of those dependencies, but it would be good to add some tests, as the code can be non-obvious.</li>
</ul>
<p>Taking this into account, when your application has a lot of algorithmic code, you should probably opt for a thicker layer of unit tests. In practice though, most of the applications I see consist mainly of coordinators and trivial code (and sadly also overcomplicated code). So, unless you have a specific application that is very algorithmic in nature, I’d tend to write fewer unit tests. The objective is not test coverage; the objective is confidence when making changes to the software.</p>
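<p>To make this concrete, here is the kind of algorithmic code that pays off under unit test. Both the <em>MortgageCalculator</em> class and its simple-interest formula are invented for illustration, not taken from a real code base:</p>
<pre class="brush: csharp;">// Algorithmic code: fixed inputs map to fixed outputs, no dependencies to mock
public class MortgageCalculator
{
    // Naive simple-interest schedule, purely for illustration
    public double YearlyPayment(double principal, double rate, int years)
    {
        return principal * (1 + rate) / years;
    }
}

[TestFixture]
public class MortgageCalculatorTests
{
    [TestCase(100000, 0.0, 10, 10000)]
    [TestCase(120000, 0.05, 1, 126000)]
    public void CalculatesYearlyPayment(double principal, double rate, int years, double expected)
    {
        var calculator = new MortgageCalculator();

        Assert.That(calculator.YearlyPayment(principal, rate, years), Is.EqualTo(expected).Within(0.01));
    }
}</pre>
<p>Tests like these are fast, deterministic and won’t need to change unless the formula itself changes.</p>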
<h3>Integration testing</h3>
<p>An integration test takes a few layers (or components or parts, whatever you want to call them) together and tests them as a whole. Depending on how your application is structured, this can be easy or difficult. If your application is very tightly coupled, it will probably be difficult to separate it from its UI. On the other hand, if your application is easily configurable and flexible, it might be easier to isolate a part of it. <br />An example of this could be a Web API back-end with a JavaScript front-end, hosted on OWIN with a database underneath. To test just the server part, you could self-host the Web API in your tests, use an in-memory SQLite database and run all your tests in-memory for a fast, deterministic and easy-to-write suite of tests. In that case, put some more weight on the integration layer. If you have a legacy application that doesn’t lend itself to isolation, write fewer integration tests (or refactor it first).</p>
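<p>As a minimal sketch of such a self-hosted integration test (this assumes the Microsoft.Owin.Testing package and a Startup class that configures your Web API; the /api/items route is invented for illustration):</p>
<pre class="brush: csharp;">[Test]
public async Task GetItems_ReturnsOk()
{
    // Spin the whole API up in-memory: no IIS, no network, no browser
    using (var server = TestServer.Create&lt;Startup&gt;())
    {
        var response = await server.HttpClient.GetAsync("/api/items");

        Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));
    }
}</pre>
<p>Because the request never leaves the process, a test like this runs at close to unit test speed while still exercising routing, serialization and the data layer.</p>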
<h3>UI testing</h3>
<p>The UI of your application is what end users see, so it’s very important to test. The three things holding us back from writing more UI tests are speed, brittleness and ease of writing. Let’s look at them one by one. </p>
<p>If you have a rather small application, even though the tests are relatively slow, the suite as a whole will not take very long, and you can probably afford to cover most of it with UI tests. For a very large application, this will be almost impossible, unless you start parallelizing your tests (but that works against the third principle, ease of writing). </p>
<p>Brittleness of tests comes from an application not being deterministic under different circumstances. For example, a web app might behave differently when there’s a slow connection, and a desktop app could behave differently depending on which other programs are installed.</p>
<p>Depending on how complex your UI is, writing UI tests can be more or less difficult. A UI where you have to go through ten different steps to execute a use case is harder to test than one where you only have to do two. Other than that, the type of application often dictates which tools you can use: there are plenty of test tools for web applications, but very few can script interactions for a mobile application.</p>
<p>The amount of UI testing you want to do depends largely on the size of the application, how deterministic it is, how easy the use cases are and what tools are available.</p>
<h2>The test-refactor cycle</h2>
<p>The factors described above only work in combination. You could easily find yourself in a situation where you have non-algorithmic code that doesn’t lend itself to isolation, is big and has difficult use cases. You can’t just slim down all three layers of the pyramid. If you get into a situation like this, the application simply does not lend itself to automated testing. When that is the case, you could rely primarily on non-automated testing. This is a bad idea (why deserves a whole different blog post) and should always be a temporary solution. </p>
<p>It’s better to refactor your code to make it more testable. That’s a catch-22: you can’t refactor because you need automated tests, and you can’t write automated tests because you need to refactor first. The best approach here is to get into a test-refactor cycle. The idea is that you make the smallest possible refactor that allows you to write an automated test. When that test passes, you can refactor more (because now you have a test). If you keep doing this and focus on the attributes mentioned in the previous section, you’ll soon notice that you get to a higher level of confidence. <br />Often it’s impossible to start these initial cycles at the unit test or integration level without doing a lot of refactoring upfront. The easiest way to start is through UI testing. When you have no tests, the first thing you need is confidence (the most important part of testing). UI testing gives you the most confidence and requires little to no refactoring upfront. Once you have that in place, you can start moving down the pyramid and isolate parts of the application for integration testing. When you have built up some confidence through these tests, it’s time to refactor the lower-level code into pure algorithms, coordinators and trivial code. The algorithms can then be unit tested.</p>
<h2>Summary</h2>
<p>The testing pyramid is a good starting point for structuring your test efforts, but we should skew the pyramid depending on the application, taking various factors into account. To summarize, here are a few guidelines:</p>
<ul>
<li>Algorithmic code should be tested with unit tests
<li>Do not unit test code that is trivial or just delegates tasks
<li>Make your application configurable so you can isolate parts of it and test them with integration tests
<li>For legacy applications, start with UI tests, refactor and then add integration tests. Another round of refactoring should expose the algorithms, which you can then unit test.</li>
</ul>
<p>To illustrate my point, here are a few examples of applications and what their test distribution could be (the numbers are totally made up; they’re just to give an idea of where to put the emphasis):</p>
<ol>
<li>A modern web app written in JavaScript, backed by a REST API, using Web API hosted on OWIN. The application uses Entity Framework as an ORM and serves to update an inventory of shopping items.
<ol>
<li>Unit tests: 10%. There’s very little algorithmic code, and you would be stubbing out the EF layer, which gives a false sense of security.
<li>Integration tests: 80%. You can self-host the API and attach an in-memory database. This will give you fast, reliable and fairly easy-to-write tests.
<li>UI tests: 10%. Since the integration tests cover a fair part of the stack, the UI tests only need to cover the main use cases. </li>
</ol>
<li>A legacy web app, built with ASP.NET Web Forms that is used for listing and browsing properties (real estate).
<ol>
<li>Unit tests: 5%. Web Forms is notoriously hard to unit test because of its inherent dependencies. Extract the algorithms and put those under test; leave the rest for higher layers.
<li>Integration tests: 5%. If there’s no possibility to bypass the UI, it will be very hard to write integration tests. Implement them for the parts that can be bypassed.
<li>UI tests: 90%. The application does not lend itself to other types of tests, because it wasn’t built with testability in mind. Your first job is to gain confidence in your refactors and changes to the code base. A UI test does not require refactoring and gives you a high level of confidence. Once you have that confidence, you can start refactoring the code and slowly add more weight to the integration and unit tests.</li>
</ol>
<li>A REST API for a bank, that allows you to send requests for calculating mortgages, loans and investments
<ol>
<li>Unit tests: 80%. The code is highly algorithmic (and the algorithms are stable). You can write tests that don’t need a lot of maintenance, because a change to an implementation detail won’t require a change to the test.
<li>Integration tests: 20%. Since you have a REST API, you can easily write some tests that exercise the API and check the result to see whether all units are composed correctly.
<li>UI tests: 0%. There is no UI, and the integration tests are end-to-end tests.</li>
</ol>
</li>
</ol>
<p>These examples are quite arbitrary, but if you look at the application you’re working on, you’ll surely see some similarities, and I hope that by applying these techniques you can gain more confidence in your code.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2015/06/27/the-test-pyramid/">The test pyramid</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2015/06/27/the-test-pyramid/</feedburner:origLink></item>
		<item>
		<title>Running SpecFlow Acceptance Tests in parallel on BrowserStack</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/EHolxbPZn8A/</link>
		<comments>https://www.kenneth-truyers.net/2015/01/03/running-specflow-acceptance-tests-in-parallel-on-browserstack/#comments</comments>
		<pubDate>Sat, 03 Jan 2015 10:22:01 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Acceptance Testing]]></category>
		<category><![CDATA[best practices]]></category>
		<category><![CDATA[Powershell]]></category>
		<category><![CDATA[unit testing]]></category>

		<guid isPermaLink="false">http://www.kenneth-truyers.net/?p=1189</guid>
		<description><![CDATA[<p>Automated acceptance tests play a vital role in continuous delivery. Contrary to unit tests though, they’re quite hard to get right. This is not only because end-to-end testing is harder than testing single units, but also because of the way they need to be executed. You need a fully working version of the application [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2015/01/03/running-specflow-acceptance-tests-in-parallel-on-browserstack/">Running SpecFlow Acceptance Tests in parallel on BrowserStack</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<figure><img title="i-heard-you-want-to-be-a-web-developer" style="border-left-width: 0px; border-right-width: 0px; border-bottom-width: 0px; margin-left: 0px; display: inline; border-top-width: 0px; margin-right: 0px" border="0" alt="i-heard-you-want-to-be-a-web-developer" src="http://kennethtruyersnet.blob.core.windows.net/wordpress/2015/01/iheardyouwanttobeawebdeveloper2.jpg" width="244" align="right" height="184"> Automated acceptance tests play a vital role in continuous delivery. Contrary to unit tests though, they’re quite hard to get right. This is not only because end-to-end testing is harder than testing single units, but also because of the way they need to be executed. You need a fully working version of the application under test and a client that represents a real-world scenario. When you apply this to a browser-based app, things can get complicated.</p>
<p>If we want to test our website fully, we need to be able to test it on a variety of browsers, devices and screens. Building all this infrastructure ourselves is very costly and time-consuming. But companies such as BrowserStack and SauceLabs have a solution for that: they provide VMs for all kinds of browsers, configurations and emulators. We can make use of their infrastructure by running our acceptance tests on their resources. </p>
<p>Another problem acceptance tests usually pose is that they are slow (compared to unit tests). By carefully maintaining your tests you can keep them as fast as they can possibly be, but they are slow by nature, so there’s only so much you can do. </p>
<p>In this post I’ll work through an example on how to run a SpecFlow test remotely on multiple BrowserStack VM’s at the same time. These are the goals:</p>
<ul>
<li>No need to build/buy infrastructure
<li>Run tests simultaneously, so that adding configurations doesn’t slow down the process
<li>Make it easy to add configurations
<li>Make tests executable from the command-line, by anyone, at any time</li>
</ul>
</figure>
<h2>1. The tests</h2>
<p>As an example I’m going to use a simple test that executes a Google search and then verifies that results were returned:</p>
<pre class="brush: gherkin;">Feature: Searching google

Scenario: Searching the web
Given I am on the google page
When I search the web
Then I get search results</pre>
<p>The step definitions are quite straightforward (intentionally so):</p>
<pre class="brush: csharp;">[Binding]
public class GoogleSteps
{
    readonly IWebDriver _driver;

    public GoogleSteps()
    {
        _driver = (IWebDriver)ScenarioContext.Current["driver"];
    }

    [Given(@"I am on the google page")]
    public void GivenIAmOnTheGooglePage()
    {
        _driver.Navigate().GoToUrl("http://www.google.com");
    }

    [When(@"I search the web")]
    public void WhenISearchTheWeb()
    {
        var q = _driver.FindElement(By.Name("q"));
        q.SendKeys("Kenneth Truyers");
        q.Submit();
    }

    [Then(@"I get search results")]
    public void ThenIGetSearchResults()
    {
        Assert.That(_driver.FindElement(By.Id("resultStats")).Text, Is.Not.Empty);
    }
}
</pre>
<blockquote>
<p>Note that here I’m accessing the UI directly in my step definitions. This is done for simplicity’s sake. In a real-world scenario, you probably want to add some indirection through the use of Page Objects. For more information on how to create maintainable SpecFlow tests, you can refer to my article about <a href="http://www.kenneth-truyers.net/2013/08/25/automated-acceptance-testing-with-cucumber-for-net-and-java/" target="_blank">Automated acceptance tests with Cucumber</a></p>
</blockquote>
<h2>2. The driver</h2>
<p>To drive the browser I will be using Selenium (as seen in the example above). Selenium drives the browser through an instance called a <em>WebDriver.</em> There are different WebDrivers available for IE, Chrome, Firefox, … but there’s also a driver called <em>RemoteWebDriver</em>: a driver that can drive a browser on a different machine. This is the one we will be using.</p>
<p>Nevertheless, we don’t want to tie any of our tests to a particular type of WebDriver. Luckily, they all implement the <em>IWebDriver</em> interface. </p>
<h3>2.1 Instantiating the driver</h3>
<p>If we could use constructor injection, we would inject an instance of IWebDriver into our step definitions. Unfortunately, SpecFlow doesn’t allow this. The next best thing is the <em>Service Locator</em> pattern. SpecFlow provides a dictionary-like object called the ScenarioContext, which contains objects available to the current scenario. In the example above you can see that upon instantiation of the GoogleSteps class, we get the driver from this dictionary. The next bit of code shows how we set up the driver at the beginning of a scenario:</p>
<pre class="brush: csharp;">[Binding]
public class Setup
{
    IWebDriver driver;

    [BeforeScenario]
    public void BeforeScenario()
    {
        driver = new FirefoxDriver();
        ScenarioContext.Current["driver"] = driver;
    }

    [AfterScenario]
    public void AfterScenario()
    {
        driver.Dispose();
    }
}</pre>
<p>Through the use of SpecFlow hooks, we create a driver before each scenario and tear it down after the scenario finishes. In this example we created a local Firefox driver. At this point we can execute our test and it will run successfully in a local Firefox instance. </p>
<h3>2.2 Configuring the driver to use BrowserStack</h3>
<p>The next step is executing the same test on a remote machine managed by BrowserStack. To do this, we need to create a RemoteWebDriver and configure it accordingly. When you create a RemoteWebDriver, you need to provide two arguments:</p>
<ul>
<li>The URL where the remote driver accepts commands (this is provided by BrowserStack)
<li>The capabilities: a loosely typed dictionary of parameters that will be sent to the remote driver. In this case, BrowserStack dictates which parameters you need.</li>
</ul>
<p>So first of all, let’s create an instance of the DesiredCapabilities class and set the properties accordingly:</p>
<pre class="brush: csharp;">[BeforeScenario]
public void BeforeScenario()
{
    if (Process.GetProcessesByName("BrowserStackLocal").Length == 0)
        new Process
        {
            StartInfo = new ProcessStartInfo
            {
                FileName = "BrowserStackLocal.exe",
                Arguments = ConfigurationManager.AppSettings["browserstack.key"] + " -forcelocal"
            }
        }.Start();

    var capabilities = new DesiredCapabilities();

    capabilities.SetCapability(CapabilityType.Version, "33");
    capabilities.SetCapability("os", "windows");
    capabilities.SetCapability("os_version", "8");
    capabilities.SetCapability("browserName", "firefox");

    capabilities.SetCapability("browserstack.user", ConfigurationManager.AppSettings["browserstack.user"]);
    capabilities.SetCapability("browserstack.key", ConfigurationManager.AppSettings["browserstack.key"]);
    
    capabilities.SetCapability("project", "Google");
    capabilities.SetCapability("name", ScenarioContext.Current.ScenarioInfo.Title);

    capabilities.SetCapability("browserstack.local", true);

    driver = new RemoteWebDriver(new Uri(ConfigurationManager.AppSettings["browserstack.hub"]), capabilities);
    driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(1));
    ScenarioContext.Current["driver"] = driver;
}</pre>
<p>I have separated the capabilities into four different sections:</p>
<ul>
<li>The first four capabilities determine which browser and platform you want to run the tests on.
<li>The next two indicate your BrowserStack username and key. You can get these from the BrowserStack interface after you create an “Automate” account (or a trial).
<li>The next two are optional and merely let you see which project and which test is running from the BrowserStack interface.
<li>The last capability, browserstack.local, indicates that the remote browser needs access to local resources; this is explained in section 2.3.</li>
</ul>
<p>With these capabilities set up, we can now create an instance of the RemoteWebDriver:</p>
<pre class="brush: csharp;">driver = new RemoteWebDriver(new Uri(ConfigurationManager.AppSettings["browserstack.hub"]), capabilities);
</pre>
<p>The driver is still of type IWebDriver, so we don’t need to change anything in our steps. Provided we have entered the correct username and key, we can now run our tests remotely on a VM managed by BrowserStack.</p>
<h3>2.3 Allowing BrowserStack to access local resources</h3>
<p>In the above example, I’m only accessing a public website. Chances are that in a real-world scenario you will be testing a QA environment, or even a dev environment on your PC. These are usually not publicly accessible. To give BrowserStack access to these resources, they let you set up a tunnel through your PC, so the remote browser can reach them through it. To set this up, you need to:</p>
<ul>
<li>Download the appropriate binary from their website (<a title="https://www.browserstack.com/local-testing#command-line" href="https://www.browserstack.com/local-testing#command-line">https://www.browserstack.com/local-testing#command-line</a>)
<li>Run the binary, providing your key as a command-line argument (ie: <font face="Courier New">browserstacklocal.exe &lt;<em>yourkeyhere</em>&gt;</font>)
<li>Indicate that you need local access in the capabilities</li>
</ul>
<p>So before we run our tests, we can run the executable from the scenario hook:</p>
<pre class="brush: csharp;">if (Process.GetProcessesByName("BrowserStackLocal").Length == 0)
    new Process 
    { 
        StartInfo = new ProcessStartInfo 
        { 
            FileName = "BrowserStackLocal.exe", 
            Arguments = ConfigurationManager.AppSettings["browserstack.key"] + " -forcelocal" 
        } 
    }.Start();</pre>
<h2>3. Running tests from the command-line</h2>
<p>The tests can now be run from Visual Studio. To run them from the command line, we can use the unit test runner. In this case I have used NUnit as the underlying test framework, so I’ll be using the NUnit console runner. To run the tests from the command line, you run the following command:</p>
<pre class="brush: bash;">nunit-console.exe /xml:nunit.xml /nologo /config:release &lt;pathtodll&gt;.dll
</pre>
<h2>4. Running tests in parallel</h2>
<p>Now that we have the tests running on BrowserStack from the command line, it’s time to start running them simultaneously on various configurations. For that we need a few things:</p>
<ul>
<li>Parameterize the capabilities (namely os, os_version, browserName and version)
<li>Run several tests at the same time with different parameters</li>
</ul>
<p>There’s one problem with this approach: when we run NUnit through the console, there’s no way to pass parameters to the tests. The general approach given on various forums is to set environment variables, but since we are running the tests simultaneously, that doesn’t help us. There’s one thing, however, that lets us parameterize how we run the tests: NUnit allows us to specify which build configuration to run the tests with. So I have come up with the following approach:</p>
<ul>
<li>Use one solution configuration for every configuration we want to test
<li>Extract os, os_version, browserName and version to the configuration
<li>Use config transforms to vary these parameters based on the solution configuration
<li>Use Powershell to run various instances of the NUnit-console process with different configurations</li>
</ul>
<h3>4.1 Creating different solution configurations</h3>
<p>For each configuration we want to test, we add a different solution configuration. Usually I start by deleting the standard Release configuration and renaming the Debug configuration to something more sensible (eg: Win8Firefox33). You can add as many solution configurations as you want. It’s best to copy them from the original Debug config, as this will have all the settings necessary to be able to debug your code.</p>
<h3>4.2 Extract the parameters to the configuration</h3>
<p>We also need to change the capabilities to fetch these values from the config:</p>
<pre class="brush: csharp;">capabilities.SetCapability("os", ConfigurationManager.AppSettings["os"]);
capabilities.SetCapability("os_version", ConfigurationManager.AppSettings["os_version"]);
capabilities.SetCapability("browserName", ConfigurationManager.AppSettings["browser"]);
capabilities.SetCapability(CapabilityType.Version, ConfigurationManager.AppSettings["version"]);</pre>
<h3>4.3 Add config transforms</h3>
<p>Once you have created the different solution configurations, you need to add a config transform for each one. This is an example for my Win8Firefox33 configuration (note that the name is not important; it’s merely a convention):</p>
<pre class="brush: xml;">&lt;?xml version="1.0"?&gt;
&lt;configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"&gt;
    &lt;appSettings&gt;
        &lt;add key="browser" value="firefox" xdt:Transform="Insert"/&gt;
        &lt;add key="os" value="windows" xdt:Transform="Insert"/&gt;
        &lt;add key="version" value="33" xdt:Transform="Insert"/&gt;
        &lt;add key="os_version" value="8" xdt:Transform="Insert"/&gt;
    &lt;/appSettings&gt;
&lt;/configuration&gt;
</pre>
<p>If you’re not sure how to add config transformations to a class library, you can use a tool such as <a href="https://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5" target="_blank">SlowCheetah</a> to do it for you. </p>
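<p>For completeness, the base App.config then only needs the settings that don’t vary per configuration. The hub URL shown here is the usual BrowserStack Automate endpoint, but verify it (and the placeholder credentials) against your own account settings:</p>
<pre class="brush: xml;">&lt;?xml version="1.0"?&gt;
&lt;configuration&gt;
    &lt;appSettings&gt;
        &lt;add key="browserstack.user" value="yourusername"/&gt;
        &lt;add key="browserstack.key" value="yourkey"/&gt;
        &lt;add key="browserstack.hub" value="http://hub.browserstack.com/wd/hub/"/&gt;
    &lt;/appSettings&gt;
&lt;/configuration&gt;
</pre>
<p>The transforms then only insert the four browser-related keys on top of this shared base.</p>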
<h3>4.4 Run various instances of the NUnit-console process with Powershell</h3>
<p>We can now run a different configuration on BrowserStack by varying the config parameter of the previous command. Here are two examples:</p>
<pre class="brush: bash;">nunit-console.exe /xml:nunit.xml /nologo /config:Win8Firefox33 &lt;pathtodll&gt;.dll 
nunit-console.exe /xml:nunit.xml /nologo /config:Win7Chrome38 &lt;pathtodll&gt;.dll 
</pre>
<p>The next step is executing all the configurations we have in parallel with PowerShell. To do this, we’ll execute the following steps:</p>
<ul>
<li>Get all the configurations in the solution
<li>Compile the project in each configuration
<li>Run all configurations in parallel</li>
</ul>
<p>To get all the configurations in the solution, we can use this small function:</p>
<pre class="brush: ps;">function Get-SolutionConfigurations($solution)
{
        Get-Content $solution |
        Where-Object {$_ -match "(?&lt;config&gt;\w+)\|"} |
        %{ $($Matches['config'])} |
        select -uniq
}
</pre>
<p>This opens the .sln file, uses a regex to find the available configurations, deduplicates them and returns them as an array.</p>
<p>The next step is compiling the project against the different configurations. We will use MSBuild for this:</p>
<pre class="brush: ps;">@(Get-SolutionConfigurations "&lt;path.to.sln&gt;") | foreach {
    msbuild &lt;path.to.csproj&gt; /p:Configuration=$_ /nologo /verbosity:quiet
}</pre>
<p>Now that the project is compiled in all configurations, we can run the tests in parallel. For this, we will use PowerShell jobs:</p>
<pre class="brush: ps;">@(Get-SolutionConfigurations "&lt;path.to.sln&gt;")| foreach {
    Start-Job -ScriptBlock {
        param($configuration)

        nunit-console.exe /xml:nunit_$configuration.xml /nologo /config:$configuration &lt;path.to.dll.for.this.config&gt;
    } -ArgumentList $_ 
}
Get-Job | Wait-Job
Get-Job | Receive-Job</pre>
<p>This snippet retrieves the configurations and starts a job for each one. When all jobs are started, it waits for all of them to complete and then receives and writes the output of each to the console, one by one.</p>
<h2>5. Reporting</h2>
<p>Once we have run all our tests, we want to report the results. First of all, you can watch your tests run live via the BrowserStack website. This is great and allows you to visually verify any errors if they occur. It also shows you a complete log of everything that has happened in the scenario. The image below shows what you can see while the tests are running:</p>
<figure><a href="http://kennethtruyersnet.blob.core.windows.net/wordpress/2015/01/browserstack_live.jpg"><img title="browserstack_live" style="border-top: 0px; border-right: 0px; border-bottom: 0px; border-left: 0px; display: inline" border="0" alt="browserstack_live" src="http://kennethtruyersnet.blob.core.windows.net/wordpress/2015/01/browserstack_live_thumb.jpg" width="804" height="512"></a></figure>
<p>You can see in real-time which tests are running, how far they are and even follow on-screen what they are doing. That’s pretty awesome for live debugging and monitoring.</p>
<p>Apart from live debugging, we also need to create reports when tests have finished so we can check them afterwards or even archive them for later review. To do this, you can run the specflow tool:</p>
<pre class="brush: ps;">specflow.exe nunitexecutionreport &lt;path.to.csproj&gt; /out:specresult.html /xmlTestResult:nunit.xml /testOutput:nunit.txt</pre>
<p>When we add this to our parallel test runner, the script becomes as follows (we need to parameterize the text-files because we’ll have one for each configuration):</p>
<pre class="brush: ps;">@(Get-SolutionConfigurations "&lt;path.to.sln&gt;")| foreach {
    Start-Job -ScriptBlock {
        param($configuration)

        try
        {
            nunit-console.exe /labels /out=nunit_$configuration.txt /xml:nunit_$configuration.xml /nologo /config:$configuration &lt;path.to.dll.for.this.config&gt;
        }
        finally
        {
            specflow.exe nunitexecutionreport &lt;path.to.csproj&gt; /out:specresult_$configuration.html /xmlTestResult:nunit_$configuration.xml /testOutput:nunit_$configuration.txt
        }

    } -ArgumentList $_
}
Get-Job | Wait-Job
Get-Job | Receive-Job</pre>
<p>Now every NUnit test run will dump two files: an XML-file with the test results and a TXT-file with more information. The SpecFlow runner will then parse these files and generate an HTML-report.</p>
<h2>6. Putting it all together</h2>
<p>Now that all the steps are in place, we can put everything together. To execute the tests we need the following:</p>
<ul>
<li>A parameterized setup of the driver before each scenario
<li>A config transformation for each configuration we want to run
<li>A PowerShell build script that can execute the configurations in parallel and report on them</li>
</ul>
<p>For completeness, here is the full code for each of these requirements. Alternatively, you can <a href="https://github.com/Kennethtruyers/SpecFlow.BrowserStack" target="_blank">clone the code from GitHub</a> and play with it yourself.</p>
<h3>6.1 Parameterized driver</h3>
<pre class="brush: csharp;">[BeforeScenario]
public void BeforeScenario()
{
    if (Process.GetProcessesByName("BrowserStackLocal").Length == 0)
        new Process
        {
            StartInfo = new ProcessStartInfo
            {
                FileName = "BrowserStackLocal.exe",
                Arguments = ConfigurationManager.AppSettings["browserstack.key"] + " -forcelocal"
            }
        }.Start();

    var capabilities = new DesiredCapabilities();

    capabilities.SetCapability(CapabilityType.Version, ConfigurationManager.AppSettings["version"]);
    capabilities.SetCapability("os", ConfigurationManager.AppSettings["os"]);
    capabilities.SetCapability("os_version", ConfigurationManager.AppSettings["os_version"]);
    capabilities.SetCapability("browserName", ConfigurationManager.AppSettings["browser"]);
    
    capabilities.SetCapability("browserstack.user", ConfigurationManager.AppSettings["browserstack.user"]);
    capabilities.SetCapability("browserstack.key", ConfigurationManager.AppSettings["browserstack.key"]);
    capabilities.SetCapability("browserstack.local", true);
    
    capabilities.SetCapability("project", "Seatwave.Websites.Consumer");
    capabilities.SetCapability("name", ScenarioContext.Current.ScenarioInfo.Title);

    driver = new RemoteWebDriver(new Uri(ConfigurationManager.AppSettings["browserstack.hub"]), capabilities);
    driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(1));
    ScenarioContext.Current["driver"] = driver;
}
</pre>
<h3>6.2 Config transformations</h3>
<pre class="brush: xml;">&lt;?xml version="1.0"?&gt;
&lt;configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"&gt;
    &lt;appSettings&gt;
        &lt;add key="browser" value="firefox" xdt:Transform="Insert"/&gt;
        &lt;add key="os" value="windows" xdt:Transform="Insert"/&gt;
        &lt;add key="version" value="33" xdt:Transform="Insert"/&gt;
        &lt;add key="os_version" value="8" xdt:Transform="Insert"/&gt;
    &lt;/appSettings&gt;
&lt;/configuration&gt;</pre>
<h3>6.3 Build script in Powershell</h3>
<pre class="brush: ps;">function Get-SolutionConfigurations($solution)
{
        Get-Content $solution |
        Where-Object {$_ -match "(?&lt;config&gt;\w+)\|"} |
        %{ $($Matches['config'])} |
        select -uniq
}

@(Get-SolutionConfigurations "&lt;path.to.sln&gt;") | foreach {
    msbuild &lt;path.to.csproj&gt; /p:Configuration=$_ /nologo /verbosity:quiet
}

@(Get-SolutionConfigurations "&lt;path.to.sln&gt;")| foreach {
    Start-Job -ScriptBlock {
        param($configuration)

        try
        {
            nunit-console.exe /labels /out=nunit_$configuration.txt /xml:nunit_$configuration.xml /nologo /config:$configuration &lt;path.to.dll.for.this.config&gt;
        }
        finally
        {
            specflow.exe nunitexecutionreport &lt;path.to.csproj&gt; /out:specresult_$configuration.html /xmlTestResult:nunit_$configuration.xml /testOutput:nunit_$configuration.txt
        }

    } -ArgumentList $_
}
Get-Job | Wait-Job
Get-Job | Receive-Job</pre>
<h2>7. Demo project</h2>
<p>In the code above, I have omitted a few things for brevity’s sake. In order to be able to run MsBuild, the NUnit console runner and SpecFlow, you need to ensure that all the paths are set up correctly (i.e. added to your PATH environment variable). Therefore, I have created a full working implementation, which you can find on <a href="https://github.com/Kennethtruyers/SpecFlow.BrowserStack" target="_blank">GitHub</a>.</p>
<p>To run it you need to do the following:</p>
<ul>
<li>Clone the repository to your PC</li>
<li>Create a trial account on BrowserStack</li>
<li>Go to Account =&gt; Automate and copy the username and access key into the app.config</li>
<li>Open a command prompt and run “<font face="Courier New">powershell -file build.ps1</font>”</li>
</ul>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2015/01/03/running-specflow-acceptance-tests-in-parallel-on-browserstack/">Running SpecFlow Acceptance Tests in parallel on BrowserStack</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.kenneth-truyers.net/2015/01/03/running-specflow-acceptance-tests-in-parallel-on-browserstack/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		<feedburner:origLink>https://www.kenneth-truyers.net/2015/01/03/running-specflow-acceptance-tests-in-parallel-on-browserstack/</feedburner:origLink></item>
		<item>
		<title>Simple code: a sample app</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/XJHaLnzJ1LA/</link>
		<pubDate>Thu, 20 Nov 2014 22:47:16 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[best practices]]></category>
		<category><![CDATA[patterns]]></category>
		<category><![CDATA[simplicity]]></category>

		<guid isPermaLink="false">http://www.kenneth-truyers.net/?p=1177</guid>
		<description><![CDATA[<p>In my last few posts I have hammered a lot on simplicity in software. In my first post (Simplicity in software) I explained what the difference is between easy and simple code (or hard and complex code). On the basis that introducing frameworks and libraries increases complexity, the following posts then touched on a few [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2014/11/20/simple-code-a-sample-app/">Simple code: a sample app</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In my last few posts I have hammered a lot on simplicity in software. In my first post (<a href="http://www.kenneth-truyers.net/2014/02/20/simplicity-in-software-what-you-see-is-not-what-you-get/" target="_blank">Simplicity in software</a>) I explained what the difference is between easy and simple code (or hard and complex code). On the basis that introducing frameworks and libraries increases complexity, the following posts then touched on a few common types of frameworks that I feel are often overused: </p>
<ul>
<li>Aspect oriented programming versus composition in <a href="http://www.kenneth-truyers.net/2014/06/02/simplify-code-by-using-composition/" target="_blank">Simplify code by using composition</a>
<li>Hand-written SQL or Micro-ORM’s versus ORM’s in <a href="http://www.kenneth-truyers.net/2014/11/15/how-to-ditch-your-orm/" target="_blank">How to ditch your ORM</a>
<li>DI containers versus Pure DI in <a href="http://www.kenneth-truyers.net/2014/11/18/how-to-use-pure-di/" target="_blank">How to use Pure DI</a>
<li>Partial application versus over-usage of interfaces in <a href="http://www.kenneth-truyers.net/2014/11/19/using-partial-application-for-dependency-injection/" target="_blank">Using partial application for dependency injection</a></li>
</ul>
<p>In this post I will reiterate these patterns by combining them all into a sample application. The application can be found on <a href="https://github.com/Kennethtruyers/SimpleCode" target="_blank">GitHub</a>. </p>
<blockquote><p>Disclaimer: the sample application serves to demonstrate patterns and practices. It is not intended to be fully tested or complete. I do not claim this architecture is the golden standard for development; I’m merely using it to show examples and alternative solutions to common, recurring problems I have faced. If you have any ideas or suggestions, feel free to comment, fork, send a pull request or engage with me on Twitter (@kennethtruyers). The discovery and learning part is what interests me.</p>
</blockquote>
<h2>The sample app</h2>
<p>The application is a REST API built with ASP.NET Web API (primarily because it saved me from writing a UI) on top of a SQL database. It allows you to create a user, update their profile and keep track of a list of friends. The key patterns that I want to illustrate are these:</p>
<ul>
<li>The separation of the read and write model.
<li>The use of an event-bus, query bus and command bus to interact between the endpoints, domain and the database.
<li>The use of partial application to do dependency injection.
<li>The use of composition to avoid attribute based programming.
<li>Testability</li>
</ul>
<p>This is a short overview of the app structure:</p>
<ul>
<li><strong>API</strong>: Contains the controllers. The controllers are very lightweight. They are just declarations of endpoints and <strong>dispatch</strong> either a query or a command
<li><strong>App_start</strong>: Contains the regular startup code for a Web API project and the code that bootstraps the <strong>dependency</strong> resolution mechanism
<li><strong>Application</strong>: The application is the part that <strong>coordinates</strong> all the input we receive, routing it to either the database or the domain.
<li><strong>Domain</strong>: This contains all our custom logic. Since it’s a sample app, there’s not much here, but in a large application this would be the beef of the app. It raises <strong>events</strong> about changes.
<li><strong>Infrastructure</strong>: Contains PetaPoco as a Micro-ORM to talk to the database and a logging component to demonstrate <strong>composition</strong>
<li><strong>ReadModel</strong>: Flat DTO’s that serve as output for <strong>query</strong> operations
<li><strong>SimpleCode.Tests</strong>: Contains tests for the domain logic inside the User class</li>
</ul>
<p>I’m not going to go over the code, since most of the patterns are discussed in the previous posts, and if you want you can have a look at the <a href="https://github.com/Kennethtruyers/SimpleCode" target="_blank">sample application</a>. I do want to highlight the main traits this app has.</p>
<h3>Separation of read and write model</h3>
<p>On the write-side, because our domain is free from read concerns, it allows our model to be very expressive. That means we don’t need public properties, but can expose only behavior. On the reading side, we can read directly from the database and project the data immediately into the read-model. This means that we don’t need any mapping between viewmodels and domain entities. A tool like AutoMapper is simply not necessary with this structure.</p>
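<p>As a sketch of what the reading side can look like (the DTO and SQL below are illustrative, not taken verbatim from the sample app), PetaPoco can project a query result straight into a flat read-model:</p>
<pre class="brush: csharp;">// Illustrative read-model DTO; the names here are hypothetical.
public class UserProfileDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int FriendCount { get; set; }
}

// PetaPoco maps the selected columns directly onto the DTO:
// no domain entity and no mapping layer in between.
var profiles = db.Fetch&lt;UserProfileDto&gt;(
    @"SELECT u.Id, u.Name, COUNT(f.FriendId) AS FriendCount
      FROM Users u LEFT JOIN Friends f ON f.UserId = u.Id
      GROUP BY u.Id, u.Name");</pre>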
<h3>Event, Query and Command bus</h3>
<p>The different buses are what composes the application while at the same time decoupling components from each other. They allow for a clean separation between domain, database and the public endpoints. An extra benefit is that our application becomes scalable: the sample app uses in-memory buses, but you could easily create a bus that communicates over a remote channel, thereby splitting up the read side, write side and endpoints into different physical tiers.</p>
<h3>Partial Application and dependency injection</h3>
<p>Because of the use of partial application and Pure DI as a mechanism for composing the application, we don’t need a DI-container. The entire application is configured inside the Bootstrap-method (WebApiConfig-class). A look at this method tells you exactly how the application is composed. It frees us from learning the details of a specific container, increases readability and reduces complexity. The Bootstrap-method can grow quickly once you start adding code to this project, but that’s OK, because that code has value: it tells you how the app works. It’s also simple code, because it’s plain object composition.</p>
<p>Another part of the equation is that we didn’t need to interfere with the way controllers are constructed. Because the buses have public static methods, we can use them from anywhere. This allows our controllers to be mere declarations of endpoints (which are not covered by unit tests) and places the dependency graphs one level lower (see Testability for more details about the ‘static’-ness of the buses).</p>
<h3>Composition</h3>
<p>Because we are using partial application, we can wrap different types of functions around other functions. This allows us to create cross-cutting concerns as separate elements and then apply them in the composition root (Bootstrap-method). An example of this can be seen where we wrap a logging-function around a handler function in the composition-root. This frees us of the burden of an AOP-framework and scattered attributes through our code base.</p>
<h3>Testability</h3>
<p>A concern that some people may have is that the Command, Query and Event bus are static classes with static methods, and that they are not replaceable in tests. That’s not an issue though: we can simply use them in our tests as well. If you look at the user test, you’ll see that we fire a few commands at the handler and then check whether we receive the correct events on the bus. Because commands and events are really domain concepts and are tied closely to the problem domain, this makes for a nice way of testing. It lets you write tests that say: if I create a user (command), then a user was created (event), which correlates closely with the domain language. We’re issuing commands and then testing whether the system reacted correctly (events).</p>
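<p>To make that style concrete, such a test could look roughly like this (a sketch only: the CreateUser command, UserCreated event and property names are hypothetical, and the Events-bus is assumed to expose the same Register-method as the Commands-bus):</p>
<pre class="brush: csharp;">// Sketch of the command-in/event-out testing style; names are hypothetical.
// Assumes the handlers have been registered (e.g. via the Bootstrap-method).
[Test]
public void Creating_a_user_raises_a_UserCreated_event()
{
    UserCreated raised = null;
    Events.Register&lt;UserCreated&gt;(e =&gt; raised = e);

    Commands.Dispatch(new CreateUser { Name = "Kenneth" });

    Assert.NotNull(raised);
    Assert.AreEqual("Kenneth", raised.Name);
}</pre>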
<h2>Conclusion</h2>
<p>The combination of these patterns allows us to create an architecture without depending on external tools. Because external tools introduce complexity, such an architecture yields simpler code. I want to stress again that when I say simple, I do not mean easy. The code this app showcases is not really easy (easy as in, a junior could write it on their first day out of school). It is not easy, but it is simple in that there is no framework magic involved (generating code at runtime, dynamically resolving factories, reflecting over properties to map from one type to another, …). If a bug shows up in this code, it will definitely be a visible one and troubleshooting will be rather simple.</p>
<p>This code is not an endpoint either; there are still things I don’t like about it:</p>
<ul>
<li>The Bootstrap-method is not testable, which means you could encounter a runtime error when a handler is not resolved. For commands and events, this could be covered by a unit test, by reflecting over all classes that implement ICommand or IEvent and trying to resolve a handler (although it’s rather ugly). For the query handlers it’s more difficult, as you could theoretically dispatch any combination of a requested result type and query type. I’m not sure how this could be made testable without an insane amount of reflection code (any ideas here are more than welcome).
<li>There’s still quite a bit of ugly syntax overhead. If you look at the command handlers, their signature is <font face="Courier New">public static readonly Action&lt;T&gt; NameOfHandler = command =&gt; </font>. That&#8217;s a lot of words to say you’re declaring an <font face="Courier New">Action&lt;T&gt;</font>, but I guess that is just how C# syntax works. F# would probably offer a huge improvement in this area.</li>
</ul>
<p>As I said, I look at this code more as an exploration of interesting patterns in order to reduce complexity. If you have any feedback, suggestions or improvements, feel free to contact me. Happy to discuss, defend my opinions and/or admit I’m wrong.</p>
<p><a href="https://github.com/Kennethtruyers/SimpleCode" target="_blank">Browse or clone the code on the GitHub</a></p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2014/11/20/simple-code-a-sample-app/">Simple code: a sample app</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2014/11/20/simple-code-a-sample-app/</feedburner:origLink></item>
		<item>
		<title>Using partial application for dependency injection</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/vt8R_CytDfc/</link>
		<comments>https://www.kenneth-truyers.net/2014/11/19/using-partial-application-for-dependency-injection/#comments</comments>
		<pubDate>Wed, 19 Nov 2014 00:19:21 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[best practices]]></category>
		<category><![CDATA[patterns]]></category>
		<category><![CDATA[simplicity]]></category>

		<guid isPermaLink="false">http://www.kenneth-truyers.net/?p=1137</guid>
		<description><![CDATA[<p>In my post on how to simplify code by using composition instead of an AOP-framework I showed a way of substituting attribute based programming with a more composable paradigm. The result of that transformation will be my starting point for today’s post. For reference, I’m including that code here again: public class TaskCommandHandlers : ICommandHandler&#60;CreateTask&#62;, [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2014/11/19/using-partial-application-for-dependency-injection/">Using partial application for dependency injection</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In my post on how to <a href="http://www.kenneth-truyers.net/2014/06/02/simplify-code-by-using-composition/" target="_blank">simplify code by using composition</a> instead of an AOP-framework I showed a way of substituting attribute based programming with a more composable paradigm. The result of that transformation will be my starting point for today’s post. For reference, I’m including that code here again:</p>
<pre class="brush: csharp;">public class TaskCommandHandlers : ICommandHandler&lt;CreateTask&gt;,
                                   ICommandHandler&lt;MarkTaskDone&gt;
{
    ITaskRepository _repo;
    INotificationService _notificationService;
    public TaskCommandHandlers(ITaskRepository taskRepository, INotificationService notificationService)
    {
        _repo = taskRepository;
        _notificationService = notificationService;
    }
 
    public void Handle(CreateTask command)
    {
        _repo.Insert(new Task(command.Name));
    }
 
    public void Handle(MarkTaskDone command)
    {
        var task = _repo.Get(command.TaskId);
        task.Status = TaskStatus.Done;
        task.ModifiedBy = command.UserName;
        _repo.Update(task);
        _notificationService.NotifyUser(command.UserName);
    }
}</pre>
<p>What we did was combine the different parameters so that we have a common interface. This allows us to compose our code in different ways using the decorator pattern. This has certain advantages, as discussed in that <a href="http://www.kenneth-truyers.net/2014/06/02/simplify-code-by-using-composition/" target="_blank">post</a>, but there are still a few things I don’t like about this code:</p>
<ul>
<li>There’s a lot of code that doesn’t really add value. In fact only 8 lines (the method bodies) out of a total of 25 are relevant. On top of that we have an interface (<font face="Courier New">ICommandHandler</font>).
<li>The dependencies are scoped incorrectly. They are not really dependencies for the entire class but localized to the separate methods. <em>(Although the repo is used in both methods, in any instance of this class, conceptually it belongs to each of the methods separately.)</em>
<li>A class has state and behavior, but this class does not really have any state apart from the dependencies, which are wrongly scoped. </li>
</ul>
<p>To solve this, we could pass the dependencies into the methods instead of in the constructor:</p>
<pre class="brush: csharp;">public class TaskCommandHandlers : ICommandHandler&lt;CreateTask&gt;,
                                   ICommandHandler&lt;MarkTaskDone&gt;
{ 
    public void Handle(CreateTask command, ITaskRepository repo)
    {
        repo.Insert(new Task(command.Name));
    }
 
    public void Handle(MarkTaskDone command, ITaskRepository repo, INotificationService notificationService)
    {
        var task = repo.Get(command.TaskId);
        task.Status = TaskStatus.Done;
        task.ModifiedBy = command.UserName;
        repo.Update(task);
        notificationService.NotifyUser(command.UserName);
    }
}</pre>
<p>This already looks a bit better, but there’s a problem: now we don’t have a <strong>common interface</strong> anymore. To solve this, we could apply the same technique we used to get to the common interface, i.e. combining the parameters into a new type. That wouldn’t be a good idea though. Instead, let’s first move away from the class-based approach and declare these code blocks as static <font face="Courier New">Action</font>s:</p>
<pre class="brush: csharp;">public class TaskCommandHandlers
{
    public static readonly Action&lt;CreateTask, ITaskRepository&gt; CreateNewTask = 
        (command, repo) =&gt; repo.Insert(new Task(command.Name));

    public static readonly Action&lt;MarkTaskDone, ITaskRepository, INotificationService&gt; MarkTaskAsDone = 
        (command, repo, notificationService) =&gt;
        {
            var task = repo.Get(command.TaskId);
            task.Status = TaskStatus.Done;
            task.ModifiedBy = command.UserName;
            repo.Update(task);
            notificationService.NotifyUser(command.UserName);
        };
}
</pre>
<p>In this code:</p>
<ul>
<li>There’s no constructor
<li>There are no interfaces
<li>Dependencies are correctly scoped
<li>We’ve reduced the ratio of important lines/overhead to 8/14 instead of 8/25. </li>
</ul>
<p>There’s still the issue of a non-conforming interface though. At some point we do need to call these methods, and we’d like to pass in just the command and have the other dependencies resolved automatically. Our client needs either an <font face="Courier New">Action&lt;CreateTask&gt;</font> or an <font face="Courier New">Action&lt;MarkTaskDone&gt;</font>, but what we have is an <font face="Courier New">Action&lt;CreateTask, ITaskRepository&gt;</font> and an <font face="Courier New">Action&lt;MarkTaskDone, ITaskRepository, INotificationService&gt;</font>. For this we are going to use partial application.</p>
<h2>Partial application explained</h2>
<p>Before I show how to do this, I want to give a short explanation of what partial application is. Partial application is a concept that stems from Functional Programming (FP), but since C# has some elements of FP, we can use it here as well.</p>
<blockquote>
<p><b>partial application</b> (or <b>partial function application</b>) refers to the process of fixing a number of arguments to a function, producing another function of smaller arity (source: wikipedia)</p>
</blockquote>
<p>Here we can see a sample of partial application in action:</p>
<pre class="brush: csharp;">Func&lt;int, int, int&gt; sum = (a, b) =&gt; a + b;

Func&lt;int, int&gt; add10To = a =&gt; sum(a, 10);

sum(5, 10);

add10To(5);
</pre>
<p>In the above example you can see how we create a <font face="Courier New">Func</font> (<font face="Courier New">sum</font>) which accepts two <font face="Courier New">ints</font> and returns an <font face="Courier New">int</font>. On the second line, we create a <font face="Courier New">Func</font> called <font face="Courier New">add10To</font>, which accepts a single <font face="Courier New">int</font> and returns an <font face="Courier New">int</font>. Inside that <font face="Courier New">Func</font>, we call the original function with the single parameter and the value 10. What we have done is fix one of the arguments of the original function to a constant value. This is exactly what we need to solve our problem above.</p>
<h2>Partial application as a DI mechanism</h2>
<p>Knowing what partial application is, we can now create an <font face="Courier New">Action&lt;CreateTask&gt;</font> as follows:</p>
<pre class="brush: csharp;">var repository = new TaskRepository();
var notificationService = new NotificationService();

Action&lt;CreateTask&gt; create = command =&gt; 
    TaskCommandHandlers.CreateNewTask(command, repository);

Action&lt;MarkTaskDone&gt; markDone = command =&gt;
    TaskCommandHandlers.MarkTaskAsDone(command, repository, notificationService);
</pre>
<h2>Partial application in the composition root</h2>
<p>To be able to resolve these actions we need to declare them in our composition root. Let’s look at how we would do that in an ASP.NET web API project. Because we don’t want to inject multiple <font face="Courier New">Action&lt;T&gt;</font>‘s in our controllers, we will be creating a dispatcher with static methods. A controller will look like this:</p>
<pre class="brush: csharp;">public class TaskController : ApiController
{
    [HttpPost]
    public void CreateTask(CreateTask command)
    {
        Commands.Dispatch(command);
    }

    [HttpPost]
    public void MarkTaskAsDone(MarkTaskDone command)
    {
        Commands.Dispatch(command);
    }
}
</pre>
<p>The Commands-class is a static class with public static methods (see below). The <font face="Courier New">TaskController</font> is in fact just a regular controller with an empty default constructor. We won’t ever need to inject anything in here, because the only thing a controller does is declare endpoints and from these endpoints we dispatch commands. For query operations, we can follow the same pattern as with the commands, but I’ve omitted that part for the sake of brevity.</p>
<p><em>Note about testing: Some will argue that you can’t substitute the dispatcher in unit tests. Personally, I wouldn’t test the controller. All you would be testing is whether it dispatches the command that you pass in, which is not a very valuable test.</em></p>
<p>The dispatcher needs to be able to register handlers and dispatch commands to the correct handlers:</p>
<pre class="brush: csharp;">public static class Commands
{
    static readonly Dictionary&lt;Type, Action&lt;ICommand&gt;&gt; handlers = 
        new Dictionary&lt;Type, Action&lt;ICommand&gt;&gt;();

    public static void Register&lt;T&gt;(Action&lt;T&gt; handler) where T : ICommand
    {
        handlers.Add(typeof(T), x =&gt; handler((T) x));
    }

    public static void Dispatch(ICommand command)
    {
        handlers[command.GetType()](command);
    }
}</pre>
<p>In the <font face="Courier New">Register</font> method we save all handlers in a dictionary, keyed by their command type. When we dispatch a command, we look up its type in the dictionary and invoke the corresponding action. Our composition root can then look as follows:</p>
<pre class="brush: csharp;">public class CompositionRoot
{ 
    public static void Bootstrap()
    {      
        // Create the services we need         
        var repository = new Lazy&lt;TaskRepository&gt;(() =&gt; new TaskRepository());   // use lazy here, because it’s expensive to construct (suggestion by FizixMan on Reddit)
        var notificationService = new NotificationService();
          
        // Use partial application to fix the extra parameters the handlers need
        // and create and register an Action&lt;CreateTask&gt; and an Action&lt;MarkTaskDone&gt;
        Commands.Register&lt;CreateTask&gt;(createTask =&gt; 
            TaskCommandHandlers.CreateNewTask(createTask, repository.Value));

        Commands.Register&lt;MarkTaskDone&gt;(markAsDone =&gt; 
            TaskCommandHandlers.MarkTaskAsDone(markAsDone, repository.Value, notificationService));
    }
 }
</pre>
<p>In the composition root we create and register the handlers we’re going to need. This is where we close the extra parameters that the handlers need and turn them into plain<font face="Courier New"> Action&lt;T&gt;</font>’s. We only need to call the <font face="Courier New">Bootstrap</font> method once when the application starts. This frees us from having to deal with Web API-specific code and this composition root is in fact portable to any other type of project.</p>
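<p>For completeness, here is a minimal, self-contained sketch of the whole round trip. Note that the <font face="Courier New">ICommand</font> marker interface and this particular <font face="Courier New">CreateTask</font> command are my assumptions (they aren’t shown in the post), and the registered handler is purely illustrative:</p>

```csharp
using System;
using System.Collections.Generic;

public interface ICommand { }                // assumed marker interface

public class CreateTask : ICommand           // hypothetical command
{
    public string Title { get; set; }
}

public static class Commands
{
    static readonly Dictionary<Type, Action<ICommand>> handlers =
        new Dictionary<Type, Action<ICommand>>();

    public static void Register<T>(Action<T> handler) where T : ICommand
    {
        handlers.Add(typeof(T), x => handler((T)x));
    }

    public static void Dispatch(ICommand command)
    {
        handlers[command.GetType()](command);
    }
}

public static class Demo
{
    public static string LastTitle;          // records what the handler saw

    public static void Main()
    {
        // Partial application: any extra dependencies would be closed over here
        Commands.Register<CreateTask>(cmd => LastTitle = cmd.Title);
        Commands.Dispatch(new CreateTask { Title = "write blog post" });
    }
}
```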
<h2>Managing lifetime with Partial Application</h2>
<p>A common trait of most containers is the ability to manage the lifecycle of a dependency. With partial application this is built-in, and it allows you to create custom lifecycles as well. There’s no need to learn special syntax; it’s just regular C#. To explain lifetime management, I’m going to use a slightly different graph:</p>
<pre>Handler
    - CreateTask
    - Repository
    - NotificationService
        - Repository
    - SomeService
        - Repository
</pre>
<p>In the following examples I will focus on the lifetime management of the repository.</p>
<h3>Singleton</h3>
<p>For a singleton we instantiate the repository inside the <font face="Courier New">Bootstrap</font> method, but outside of the actual handlers. The repository is created only once (when the method is called, at start-up) and is then reused every time we invoke the action.</p>
<pre class="brush: csharp;">public static void Bootstrap()
{                
    var repository = new TaskRepository();
    var notificationService = new NotificationService(repository);
    var someService = new SomeService(repository);
          
    Commands.Register&lt;CreateTask&gt;(createTask =&gt; 
        TaskCommandHandlers.CreateNewTask(createTask, repository, notificationService, someService));
}
</pre>
<h3>Transient</h3>
<p>Here every time the action is invoked, all services are instantiated.</p>
<pre class="brush: csharp;">public static void Bootstrap()
{               
    Commands.Register&lt;CreateTask&gt;(createTask =&gt; 
        TaskCommandHandlers.CreateNewTask(createTask, 
                                        new TaskRepository(), 
                                        new NotificationService(new TaskRepository()), 
                                        new SomeService(new TaskRepository())));
}
</pre>
<h3>Per request</h3>
<p>Notice the subtle difference with the singleton: the services are created inside the <font face="Courier New">Action&lt;T&gt;</font> instead of in the method itself. Every time we invoke the action we create a new repository, but that instance is shared by all services in that invocation. <em>(NOTE: Technically this is not per request but per graph; since we will model our controller to send a single command per request, in practice it amounts to the same. You could create a true per-request lifetime by using the <font face="Courier New">HttpContext.Current.Items</font> bag, but I’ll leave that as an exercise <img src="https://www.kenneth-truyers.net/wp-includes/images/smilies/simple-smile.png" alt=":-)" class="wp-smiley" style="height: 1em; max-height: 1em;" /> )</em></p>
<pre class="brush: csharp;">public static void Bootstrap()
{               
    Commands.Register&lt;CreateTask&gt;(createTask =&gt; 
    {
        var repository = new TaskRepository();
        var notificationService = new NotificationService(repository);
        var someService = new SomeService(repository);
        TaskCommandHandlers.CreateNewTask(createTask, 
                                        repository, 
                                        notificationService, 
                                        someService);

    });
}
</pre>
<h3>Custom lifetime</h3>
<p>Since we’re using pure C#, we can mix and match any instantiation policy we want. In the following (contrived) example:</p>
<ul>
<li><font face="Courier New">notificationService</font> and <font face="Courier New">someService</font> have a <strong>per request</strong> lifetime.
<li><font face="Courier New">notificationService</font> and <font face="Courier New">someService</font> share the same <strong>singleton</strong> instance of the repository.
<li>The handler gets a new repository every time (<strong>transient</strong>). </li>
</ul>
<pre class="brush: csharp;">public static void Bootstrap()
{               
    var repository = new TaskRepository();
    Commands.Register&lt;CreateTask&gt;(createTask =&gt; 
    {
        var notificationService = new NotificationService(repository);
        var someService = new SomeService(repository);
        TaskCommandHandlers.CreateNewTask(createTask, 
                                        new TaskRepository(), 
                                        notificationService, 
                                        someService);

    });
}</pre>
<h2>Composition</h2>
<p>In my post <a href="http://www.kenneth-truyers.net/2014/06/02/simplify-code-by-using-composition/" target="_blank">simplify code by using composition</a> I showed how to move from attribute based programming to a more flexible composition based programming. This <strong>composability</strong> is not lost when we use partial application instead of constructor injection. To add logging to the CreateTask handler, we create a new handler:</p>
<pre class="brush: csharp;">class LoggingHandlers
{
    public static readonly Action&lt;ICommand, Action&lt;ICommand&gt;&gt; Log = 
        (command, next) =&gt; 
        {
            // do custom logging here
            next(command);
        }; 
}</pre>
<p>Now we use partial application again to plug this in from within the composition root:</p>
<pre class="brush: csharp;">Commands.Register&lt;CreateTask&gt;(createTask =&gt; 
    LoggingHandlers.Log(createTask, command =&gt; 
        TaskCommandHandlers.CreateNewTask((CreateTask) command, repository)));
</pre>
<h2>Conclusion</h2>
<p>Partial application is a very flexible and powerful pattern. When applied to DI, it <strong>focuses</strong> the attention on code that matters, correctly <strong>scopes</strong> dependencies and <strong>removes the cruft</strong> that constructor injection sometimes brings with it. It allows the same usage patterns of a DI-container with regards to lifetime management and it does not inhibit composability.</p>
<blockquote>
<p>Update after comments on Reddit by FizixMan:</p>
<ul>
<li>The static Actions on the handler should be readonly, you don’t want other code to overwrite those</li>
<li>In case construction of a dependency is costly, you can consider using a Lazy&lt;T&gt; to speed up application start up.</li>
</ul>
</blockquote>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2014/11/19/using-partial-application-for-dependency-injection/">Using partial application for dependency injection</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
<div class="feedflare">
<a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=vt8R_CytDfc:SFU4kb2Cncs:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=vt8R_CytDfc:SFU4kb2Cncs:dnMXMwOfBR0"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=dnMXMwOfBR0" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=vt8R_CytDfc:SFU4kb2Cncs:D7DqB2pKExk"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?i=vt8R_CytDfc:SFU4kb2Cncs:D7DqB2pKExk" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=vt8R_CytDfc:SFU4kb2Cncs:F7zBnMyn0Lo"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?i=vt8R_CytDfc:SFU4kb2Cncs:F7zBnMyn0Lo" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=vt8R_CytDfc:SFU4kb2Cncs:7Q72WNTAKBA"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=7Q72WNTAKBA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=vt8R_CytDfc:SFU4kb2Cncs:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?i=vt8R_CytDfc:SFU4kb2Cncs:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=vt8R_CytDfc:SFU4kb2Cncs:qj6IDK7rITs"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=qj6IDK7rITs" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=vt8R_CytDfc:SFU4kb2Cncs:KwTdNBX3Jqk"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?i=vt8R_CytDfc:SFU4kb2Cncs:KwTdNBX3Jqk" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=vt8R_CytDfc:SFU4kb2Cncs:l6gmwiTKsz0"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=l6gmwiTKsz0" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=vt8R_CytDfc:SFU4kb2Cncs:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?i=vt8R_CytDfc:SFU4kb2Cncs:gIN9vFwOqvQ" 
border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=vt8R_CytDfc:SFU4kb2Cncs:TzevzKxY174"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=TzevzKxY174" border="0"></img></a>
</div><img src="http://feeds.feedburner.com/~r/KennethTruyers/~4/vt8R_CytDfc" height="1" width="1" alt=""/>]]></content:encoded>
			<wfw:commentRss>https://www.kenneth-truyers.net/2014/11/19/using-partial-application-for-dependency-injection/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		<feedburner:origLink>https://www.kenneth-truyers.net/2014/11/19/using-partial-application-for-dependency-injection/</feedburner:origLink></item>
		<item>
		<title>How to use Pure DI</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/2n5gtWlLx50/</link>
		<pubDate>Tue, 18 Nov 2014 22:06:03 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[best practices]]></category>
		<category><![CDATA[patterns]]></category>
		<category><![CDATA[simplicity]]></category>

		<guid isPermaLink="false">http://www.kenneth-truyers.net/?p=1135</guid>
		<description><![CDATA[<p>In my previous posts I talked about how you can decrease dependency on external libraries and frameworks while making your code simpler (not easier, simpler). In this post I want to continue on the same thread and show some of the benefits of Pure DI (as opposed to DI with a container). DI-containers are beneficial [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2014/11/18/how-to-use-pure-di/">How to use Pure DI</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In my previous posts I talked about how you can decrease dependency on external libraries and frameworks while making your code simpler (not easier, simpler). In this post I want to continue on the same thread and show some of the benefits of Pure DI (as opposed to DI with a container).</p>
<p>DI-containers are beneficial if you have a complex application where you can rely on convention over configuration. If your application is not complex (and you should strive for that) or does not rely on conventions, a simpler approach can be followed by using pure DI. Before I dive in on how we can do this, let’s first iterate over a few of the disadvantages a container brings with it:</p>
<ul>
<li><strong>Complexity</strong>: When you configure a container in your composition root, you will usually be relying on conventions to make sure that the right types are resolved at runtime. How simple or complex this is, is directly proportional to how simple or complex your conventions are. If conventions are very clear, it will be relatively easy to spot how a dependency will be resolved at runtime. If not, it will be difficult to see how the application is composed.
<li><strong>Compiler assistance</strong>: Since you’re resolving your dependencies at runtime, the compiler can’t assist you. If you forget to declare a service it will just fail at runtime. This does not only go for registration but also for lifetimes. You can introduce lifetime mistakes and the compiler won’t be able to help you. An example of this is captive dependency (more information see: <a href="http://blog.ploeh.dk/2014/06/02/captive-dependency/" target="_blank">Captive dependency</a> by Mark Seemann)
<li><strong>Learning curve: </strong>Every container has a different API. In order to use it effectively, you need to have quite a good knowledge of that API. With so many containers available, it’s possible you’ll encounter different containers in different projects, which means you need to learn a new API once in a while. </li>
</ul>
<p>The situation where you might want to choose a pure DI approach over a container approach is when you do explicit registration instead of convention over configuration or when your conventions are really complex.</p>
<h2>Pure DI in ASP.NET Web API</h2>
<p>In order to use pure DI, you need to build your dependency graph whenever your code is called. In an ASP.NET Web API application, this is when your controller is constructed. To intercept at this point, you need to implement the <font face="Courier new">IHttpControllerActivator</font> interface:</p>
<pre class="brush: csharp;">public class CompositionRoot : IHttpControllerActivator
{
    public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
    {
        // Resolve controller of Type here
    }
}
</pre>
<p>Inside this method, you will be passed the requested controller-type and you need to return an instance of that controller. A few things you need to know about this class:</p>
<ul>
<li>You need to register it on application start up: <br />&nbsp;&nbsp;&nbsp; <font face="Courier new">GlobalConfiguration.Configuration.Services.Replace(typeof(IHttpControllerActivator), new CompositionRoot());</font>
<li>This class will only be instantiated once and will be reused for the entire lifetime of the application
<li>The create-method will be called once per request </li>
</ul>
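<p>For reference, that registration usually lives in <font face="Courier new">Application_Start</font>. A sketch against a standard Web API 2 project (the surrounding <font face="Courier new">WebApiApplication</font> class and <font face="Courier new">WebApiConfig</font> are the usual project-template names, assumed here):</p>

```csharp
using System.Web.Http;
using System.Web.Http.Dispatcher;

public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        GlobalConfiguration.Configure(WebApiConfig.Register);

        // Replace the default controller activator with our composition root
        GlobalConfiguration.Configuration.Services.Replace(
            typeof(IHttpControllerActivator),
            new CompositionRoot());
    }
}
```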
<p>With this information, we can now infer how we should configure lifetime management:</p>
<ul>
<li>Singleton instances need to be instantiated in the constructor
<li>Per request instances need to be instantiated inside the method
<li>Transient instances need to be instantiated whenever necessary
<li>Any custom lifetime is easily configurable by instantiating it when necessary </li>
</ul>
<p>Let’s see how this looks:</p>
<pre class="brush: csharp;">public class SomeCompositionRoot : IHttpControllerActivator
{
    readonly SomeService singleton;
 
    public SomeCompositionRoot()
    {
        singleton = new SomeService(new SomeOtherSingletonService());
    }
 
    public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
    {
        var customLifeTime = new SomeCustomLifeTimeService();
        var perRequestService = new SomePerRequestService(customLifeTime);
 
        if(controllerType == typeof(MyController))
        {
            return new MyController(singleton,
                                    new TransientService(singleton, new SomeCustomLifeTimeService()),
                                    perRequestService,
                                    new SomeOtherTransientService(singleton, perRequestService));
        }
  
        throw new ArgumentException("Unexpected type!", "controllerType");
    }
}
</pre>
<h2>Pure DI in a console application</h2>
<p>In a console application, we don’t have the per-request lifetime, but the same principles apply. We need to construct the graph on application start, right inside <font face="Courier new">Main</font> (or extract it to a separate class):</p>
<pre class="brush: csharp;">public class Program
{   
    public static void Main()
    {
        var customLifeTime = new SomeCustomLifeTimeService();
        var singleton = new SomeService(customLifeTime);
 
        var entryPoint = new EntrypointClass(new TransientDependency(singleton),
                                             singleton,
                                             new OtherTransientDependency(customLifeTime),
                                             new SomeCustomLifeTimeService());
        entryPoint.Run(); // or something similar
    }
}

</pre>
<p>This is obviously a bit more typing work than registering services with a container, but it has certain advantages:</p>
<ul>
<li><strong>Compile-time safety</strong>: You cannot forget to register a service, because the compiler will tell you.
<li><strong>Lifetime configuration</strong>: You cannot make a lifetime mistake such as captive dependency. Because you are declaring the singletons in the constructor you can see that the services that get passed in their constructors will also be singletons. Again this makes it easier to read your configuration.
<li><strong>Explicit</strong>: Since you have very clearly marked places to declare singletons, per request and transient services, it makes the configuration very explicit. All types are also explicitly constructed so you can easily spot how the application is composed.
<li><strong>Learning curve</strong>: This is just object composition, so you don’t need to learn the specific API of a container. </li>
</ul>
<h2>Conclusion</h2>
<p>Pure DI leads to more explicit code. You’ll write more code, but again I apply the same mantra: it’s more code, but it’s simpler code, so I’m happy to type a bit more for the sake of simplicity. One thing this approach does not solve is another pet peeve of mine: the overuse of constructor injection. In my next post I’ll show an example of how to use partial application to tackle this problem.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2014/11/18/how-to-use-pure-di/">How to use Pure DI</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
<div class="feedflare">
<a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=2n5gtWlLx50:XwyesznIoVU:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=2n5gtWlLx50:XwyesznIoVU:dnMXMwOfBR0"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=dnMXMwOfBR0" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=2n5gtWlLx50:XwyesznIoVU:D7DqB2pKExk"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?i=2n5gtWlLx50:XwyesznIoVU:D7DqB2pKExk" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=2n5gtWlLx50:XwyesznIoVU:F7zBnMyn0Lo"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?i=2n5gtWlLx50:XwyesznIoVU:F7zBnMyn0Lo" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=2n5gtWlLx50:XwyesznIoVU:7Q72WNTAKBA"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=7Q72WNTAKBA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=2n5gtWlLx50:XwyesznIoVU:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?i=2n5gtWlLx50:XwyesznIoVU:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=2n5gtWlLx50:XwyesznIoVU:qj6IDK7rITs"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=qj6IDK7rITs" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=2n5gtWlLx50:XwyesznIoVU:KwTdNBX3Jqk"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?i=2n5gtWlLx50:XwyesznIoVU:KwTdNBX3Jqk" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=2n5gtWlLx50:XwyesznIoVU:l6gmwiTKsz0"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=l6gmwiTKsz0" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=2n5gtWlLx50:XwyesznIoVU:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?i=2n5gtWlLx50:XwyesznIoVU:gIN9vFwOqvQ" 
border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/KennethTruyers?a=2n5gtWlLx50:XwyesznIoVU:TzevzKxY174"><img src="http://feeds.feedburner.com/~ff/KennethTruyers?d=TzevzKxY174" border="0"></img></a>
</div><img src="http://feeds.feedburner.com/~r/KennethTruyers/~4/2n5gtWlLx50" height="1" width="1" alt=""/>]]></content:encoded>
			<feedburner:origLink>https://www.kenneth-truyers.net/2014/11/18/how-to-use-pure-di/</feedburner:origLink></item>
		<item>
		<title>How to ditch your ORM</title>
		<link>http://feedproxy.google.com/~r/KennethTruyers/~3/42r14D5ynSI/</link>
		<comments>https://www.kenneth-truyers.net/2014/11/15/how-to-ditch-your-orm/#comments</comments>
		<pubDate>Sat, 15 Nov 2014 01:06:08 +0000</pubDate>
		<dc:creator><![CDATA[Kenneth Truyers]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[.NET]]></category>
		<category><![CDATA[best practices]]></category>
		<category><![CDATA[patterns]]></category>
		<category><![CDATA[simplicity]]></category>

		<guid isPermaLink="false">http://www.kenneth-truyers.net/?p=1127</guid>
		<description><![CDATA[<p>In my previous post on how to simplify code by using composition I talked about how we can reduce complexity by removing an AOP-framework (or annotations-based programming). In this post I want to continue on the same line and talk about how we can reduce complexity by removing an ORM and replacing it by a [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2014/11/15/how-to-ditch-your-orm/">How to ditch your ORM</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In my previous post on how to <a href="http://www.kenneth-truyers.net/?p=1104" target="_blank">simplify code by using composition</a> I talked about how we can reduce complexity by removing an AOP-framework (or annotations-based programming). In this post I want to continue on the same line and talk about how we can reduce complexity by removing an ORM and replacing it by a simpler pattern. Before I show how we can get rid of the ORM I want to talk about why I think ORM’s introduce complexity.</p>
<p>ORM’s are not evil, they have certain advantages and disadvantages. These are some of their characteristics and how they may influence a project.</p>
<ul>
<li><strong>Simplicity</strong>: At the start of a project, an ORM is a real productivity booster, because you can load and save objects by writing very little code. There’s probably very little complexity in your domain model, so your model is very similar to your database structure and so mapping is very easy. When your model becomes more complex, mapping will get more complex. When this happens, you have a problem: either your complex domain model will be restricted by how your database is designed (in order to have simpler mapping) or your mapping will become very complex. (Here’s one example of how an ORM can limit your ability to model your application: <a title="http://stackoverflow.com/questions/17275030/how-to-map-a-value-type-which-has-a-reference-to-an-entity" href="http://stackoverflow.com/questions/17275030/how-to-map-a-value-type-which-has-a-reference-to-an-entity">http://stackoverflow.com/questions/17275030/how-to-map-a-value-type-which-has-a-reference-to-an-entity</a> )
<li><strong>Abstraction</strong>: An ORM provides an abstraction, and that abstraction is leaky. If you look at the documentation of any ORM, you will find a lot of references to SQL concepts. I have never been able to treat an ORM as just an object store; every time, I needed to know how the ORM does things in order to get the correct data. Think about it like this: if you didn’t know anything about SQL, would you be able to use an ORM?
<li><strong>Learning curve</strong>: Every ORM has a different API. That means that with every ORM, you’ll have a new learning curve. Since it’s a leaky abstraction, it doesn’t free you from learning SQL either so now you not only need to learn SQL, but also (N)Hibernate and later Entity Framework and later &#8230;
<li><strong>Efficiency</strong>: All ORM’s do admit that you will be giving up a bit of efficiency. For small projects, that’s not an issue. When it becomes an issue, you’ll need to bypass the ORM and access the database with plain SQL, again asserting the point that an ORM is a leaky abstraction. (See also the part on ORM’s in my post about <a href="http://www.kenneth-truyers.net/2014/02/20/simplicity-in-software-what-you-see-is-not-what-you-get/" target="_blank">simplicity in software</a>) </li>
</ul>
<p>So, ORM’s have certain disadvantages and in my opinion they are not a good fit for complex applications, because they tend to increase complexity. But they’re not useless either: they can increase productivity at the start of a project (and can be removed/replaced when necessary). If your application is small and is very CRUD oriented, they provide great value as well.</p>
<p>The reason that ORM’s can only provide a leaky abstraction is that object relational mapping is in fact very hard (also known as the <a href="http://blog.codinghorror.com/object-relational-mapping-is-the-vietnam-of-computer-science/" target="_blank">Vietnam of Computer Science</a>).</p>
<h2>The problem</h2>
<p>Object hierarchies are inherently very different from relational hierarchies. Relational hierarchies center around data, whereas objects gravitate towards behavior (at least it should). OO modeling is a lot more powerful than relational modeling and because software development is in fact very difficult, we want the most powerful tool at our disposal. The issue is that we try to create a mapping between a database and an object model. This has some consequences: we will either have a limited object model that is just a representation of our relational model (lowest common denominator), or we will have very complex mappings (which can break down as we continue to model).</p>
<h2>Changing the problem</h2>
<p>I’m not pretending that you should write yet another ORM, or that I’m creating a new revolutionary ORM. As I said, object relational mapping is hard, so instead of trying to solve this problem, we want to change the problem so we don’t have to deal with it.</p>
<p>A first step towards changing the problem is realizing that <strong>reading</strong> and <strong>writing</strong> are two very different operations. Typically when you write, you want to ensure <strong>consistency</strong>. To ensure consistency, you need a strong model (DDD is one approach towards a stronger model). When you read, you’re trying to <strong>display</strong> the saved data in a certain form. Two read operations on the same data, may want a different representation. Ideally, when you’re reading you want a simple, flat model. Thus, the requirements imposed on the model are different for reading and writing. If we create a model that caters to reading and writing it will be more complex.</p>
<p>The first step towards easier mapping is to split out our read and write model. This means we can have our <strong>simple</strong> models on the read side, but still have a strong model that ensures <strong>consistency</strong> on the write side.</p>
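<p>As a small illustration of the split (the type and member names here are mine, not from an actual project): the read side can be a flat, behavior-free projection, while the write side guards its invariants:</p>

```csharp
using System;

// Read side: a flat projection, shaped for display, no behavior
public class TaskSummary
{
    public int Id { get; set; }
    public string Title { get; set; }
    public bool Done { get; set; }
}

// Write side: a model that enforces consistency
public class TaskItem
{
    public int Id { get; private set; }
    public string Title { get; private set; }
    public bool Done { get; private set; }

    public TaskItem(int id, string title)
    {
        if (string.IsNullOrWhiteSpace(title))
            throw new ArgumentException("A task needs a title", nameof(title));
        Id = id;
        Title = title;
    }

    public void MarkDone() => Done = true;
}
```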
<h3>Tackling the read side</h3>
<p>On the read side models are relatively <strong>simple</strong>, so we don’t need any complex mapping. In fact, reading should just be about <strong>projecting</strong> data into our models. Because it’s just a projection, no ORM is needed and you can just write plain SQL queries. You could use one of the micro-ORM’s available (PetaPoco is my personal favorite on the .NET platform). Although technically these are also ORM’s, they don’t carry the same weight as their full-blown counterparts. The biggest difference is that they don’t try to abstract the database away. I prefer to think of them as <strong>SQL-libraries</strong> rather than ORM’s.</p>
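<p>To make this concrete, here is a sketch of such a projection using plain ADO.NET (the table, columns, and read-model type are hypothetical; a micro-ORM would shorten this further):</p>

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// A flat read model for this particular query
public class TaskSummary
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public static class TaskQueries
{
    // Project rows straight into the read model -- no mapping layer needed
    public static List<TaskSummary> GetOpenTasks(string connectionString)
    {
        var result = new List<TaskSummary>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "select Id, Title from Tasks where Done = 0", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    result.Add(new TaskSummary
                    {
                        Id = reader.GetInt32(0),
                        Title = reader.GetString(1)
                    });
                }
            }
        }
        return result;
    }
}
```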
<h3>Tackling the write side</h3>
<p>On the write side, we’ll usually have a complex model that enforces <strong>constraints</strong>. If we want to persist our entire entity (or aggregate root) at once, that means that we need to do some <strong>complex mapping</strong> or write <strong>complex queries</strong>. If you were to use a repository pattern, when you call the save method, that repository will somehow have to find out what has changed and how that relates to what is in the database. This is <strong>hard</strong> and is the root of most complex mapping. Let’s see an example of how we would persist a user:</p>
<p>Example 1: Persisting a user using a repository</p>
<pre class="brush: csharp;">public class User
{
    public int Id {get; set;}
    public string Name {get; set;}
    public List&lt;User&gt; Friends {get; set;}
}

public class UserRepository
{
    public void Save(User user)
    {
         // what has changed?
         // added, removed friends?
         // updated name of a friend
         // changed name?
    }
}
</pre>
<p>In the example above, when we save a user, so many things could have changed that the save method has a hard time figuring out what to persist. An ORM takes this work out of your hands, but then your mapping can become complex (is it a many-to-many? what happens if I change a friend’s name? if a friend does not have an ID, does it do an insert? &#8230;). This is a trivial example and most ORMs handle it fairly easily, but in more complex scenarios, mapping can become really <strong>difficult</strong> and <strong>obscure</strong>. Handling this manually is very difficult as well, since you need to track all these changes yourself (essentially, you’d be writing your own ORM).</p>
<p>In order to circumvent this complexity, we need a different strategy. We could let the model <strong>notify</strong> what has happened and then have a dedicated <strong>listener</strong> listen to those changes:</p>
<pre class="brush: csharp;">public class User
{   
    int id;
    public void AddFriend(User friend)   
    {
        EventBus.Raise(new FriendAddedToUser(id, friend));
    }

    public void RemoveFriend(User friend)   
    {
        EventBus.Raise(new FriendRemovedFromUser(id, friend));
    }

    public void ChangeName(string name)   
    {
        EventBus.Raise(new UserNameChanged(id, name));
    }
}

public class UserEventHandlers
{
    public void Handle(FriendAddedToUser @event)
    {
         // pseudo code
         // insert into user_friends (userid, friendid) values (@event.Id, @event.Friend.Id);
    }
    public void Handle(FriendRemovedFromUser @event)
    {
         // pseudo code
         // delete from user_friends where userid = @event.Id and friendid = @event.Friend.Id;

    }
    public void Handle(UserNameChanged @event)
    {
         // pseudo code
         // Update users set name = @event.Name where id = @event.Id;
    }
}
</pre>
<p>The User class does not have any public properties, only methods. We don’t need properties, since we’re not using this class to read and we’re not using an ORM. Whenever a method is called, the User class pushes an event onto a bus, and the bus then looks up one or more handlers for that event. In this case there’s just one handler per event. Because these events are very fine grained, the resulting SQL is very <strong>easy</strong> to write (again, you could use a micro-ORM to make life simpler; PetaPoco has an excellent SQL builder that makes this trivial. Did I say I like PetaPoco yet?). <br />An added advantage is that this makes it easier to enforce constraints on your model. Here the methods just raise an <strong>event</strong>, but they could do anything to enforce invariants. In the first sample, public properties are exposed, so there’s no way to control what happens. Of course you could add some checks, but if you’re going to use an ORM, you will need public properties. You do need a bit of infrastructure code to set up the event bus, but it’s fairly trivial and it’s a one-off investment.</p>
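<p>How small can that infrastructure be? The sketch below is an assumption about what such an event bus could look like, not a production implementation: handlers subscribe per event type, and Raise dispatches to whatever is registered.</p>
<pre class="brush: csharp;">using System;
using System.Collections.Generic;

// A minimal in-process event bus: one dictionary from event type to handlers.
// Not thread-safe; a one-off infrastructure sketch, as described in the text.
public static class EventBus
{
    static readonly Dictionary&lt;Type, List&lt;Action&lt;object&gt;&gt;&gt; handlers =
        new Dictionary&lt;Type, List&lt;Action&lt;object&gt;&gt;&gt;();

    public static void Subscribe&lt;TEvent&gt;(Action&lt;TEvent&gt; handler)
    {
        if (!handlers.TryGetValue(typeof(TEvent), out var list))
            handlers[typeof(TEvent)] = list = new List&lt;Action&lt;object&gt;&gt;();
        // Wrap the typed handler so all handlers share one signature.
        list.Add(e =&gt; handler((TEvent)e));
    }

    public static void Raise&lt;TEvent&gt;(TEvent @event)
    {
        if (handlers.TryGetValue(typeof(TEvent), out var list))
            foreach (var handle in list)
                handle(@event);
    }
}
</pre>
<p>Wiring up the handlers from the sample above is then one Subscribe call per event type, e.g. EventBus.Subscribe&lt;UserNameChanged&gt;(userEventHandlers.Handle).</p>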
<p>We’re replacing a repository pattern with a <strong>publish-subscribe</strong> pattern in order to <strong>decouple</strong> domain logic from data access logic.</p>
<p>One part that I didn’t include here is loading a user. This is a different use case from reading a user for display. To be able to enforce invariants, you need the whole aggregate to be loaded. Let’s say that a user can have a maximum of 10 friends: to enforce this, the AddFriend method needs to know how many friends there currently are. For that you can use the memento pattern:</p>
<blockquote>
<p>Memento pattern: Without violating encapsulation, capture and externalize an object’s internal state so that the object can be restored to this state later. <em>(Design patterns, elements of reusable Object-Oriented software)</em></p>
</blockquote>
<pre class="brush: csharp;">public class User
{   
    int id;
    List&lt;User&gt; friends;
    public void AddFriend(User friend)   
    {
        if(friends.Count &lt; 10)
        {
            friends.Add(friend);
            EventBus.Raise(new FriendAddedToUser(id, friend));
        }
        else
        {
             throw new InvalidOperationException("nope");
        }
    }

    public void RemoveFriend(User friend)   
    {
        EventBus.Raise(new FriendRemovedFromUser(id, friend));
    }

    public void ChangeName(string name)   
    {
        EventBus.Raise(new UserNameChanged(id, name));
    }
    public static User FromMemento(UserMemento memento)
    {
        var user = new User();
        user.id = memento.id;
        user.friends = memento.friends;
        return user;
    }
}

</pre>
<p>In this case, we’re using only half of the memento pattern (the <strong>restoring</strong> part), since the data is already <strong>captured</strong> in the database. Since the memento object is a simple bag of properties, it can easily be read from the database in the same way we project data into our read model.</p>
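<p>To make the restore concrete, here is a runnable sketch. The shape of UserMemento is an assumption (it isn’t shown above), and the User class is condensed from the earlier sample, with the event-raising left out so the sample stands alone:</p>
<pre class="brush: csharp;">using System;
using System.Collections.Generic;

// A plain bag of state, hydrated from the database like any read model.
public class UserMemento
{
    public int id;
    public List&lt;User&gt; friends = new List&lt;User&gt;();
}

public class User
{
    int id;
    List&lt;User&gt; friends = new List&lt;User&gt;();

    public void AddFriend(User friend)
    {
        // The restored state lets us check the invariant before persisting.
        if (friends.Count &gt;= 10)
            throw new InvalidOperationException("nope");
        friends.Add(friend);
        // here the full sample would raise FriendAddedToUser(id, friend)
    }

    public static User FromMemento(UserMemento memento)
    {
        var user = new User();
        user.id = memento.id;
        user.friends = memento.friends;
        return user;
    }
}
</pre>
<p>In practice the memento itself would be filled with a plain SQL query, just like the read-side projections.</p>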
<h2>Conclusion</h2>
<p>Using these patterns comes at the cost of more infrastructure code, although that code is relatively easy to write. In my experience it certainly pays off on complex domains; in simpler domains, it’s often not worth the overhead. Object-relational mapping is hard. ORMs help solve it, but they come at the cost of complexity. To reduce that complexity we can eliminate the ORM, but this is only possible if we can eliminate the object-relational mapping itself. By leveraging some well-known patterns, we can <strong>simplify</strong> our database access. This allows for a <strong>rich write model</strong> and a <strong>light read model</strong>, while still freeing domain logic from data-persistence concerns.</p>
<p>The post <a rel="nofollow" href="https://www.kenneth-truyers.net/2014/11/15/how-to-ditch-your-orm/">How to ditch your ORM</a> appeared first on <a rel="nofollow" href="https://www.kenneth-truyers.net">Kenneth Truyers</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.kenneth-truyers.net/2014/11/15/how-to-ditch-your-orm/feed/</wfw:commentRss>
		<slash:comments>2684</slash:comments>
		<feedburner:origLink>https://www.kenneth-truyers.net/2014/11/15/how-to-ditch-your-orm/</feedburner:origLink></item>
	</channel>
</rss>
