This article originally appeared on IG's blog
It is funny how things turn around. For fifteen years I have been preaching TDD, or at least for developers to write some unit tests. However, in recent times I have found myself saying more often, "Why did you write that test?" instead of, "You should write a test."
What is going on?
While walking around the office, I was asked by a developer to help him with some unit tests. It seems that he had trouble using Mockito to test the following piece of code:
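The original snippet is not reproduced here; as a purely illustrative sketch (every class, method and collaborator name below is invented), picture glue code of roughly this shape:

```java
// Hypothetical reconstruction of the kind of code under discussion
// (all names invented): no conditionals, no loops, no transformations,
// just delegation between two collaborators.
interface PriceRepository { double findLatest(String epic); }
interface PricePublisher { void publish(String epic, double price); }

public class PriceService {

    private final PriceRepository repository;
    private final PricePublisher publisher;

    public PriceService(PriceRepository repository, PricePublisher publisher) {
        this.repository = repository;
        this.publisher = publisher;
    }

    public void publishLatestPrice(String epic) {
        // Plain old glue: read from one collaborator, hand the value to the other.
        publisher.publish(epic, repository.findLatest(epic));
    }
}
```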

I think he was very surprised by my response: "You don't need to test that."
"But I have to!" he said. "How do I know then if the code works?!"
"The code is obvious. There are no conditionals, no loops, no transformations, nothing. The code is just a little bit of plain old glue code."
"But without a test, anybody can come, make a change and break the code!"
"Look, if that imaginary evil/clueless developer comes and breaks that simple code, what do you think he will do if a related unit test breaks? He will just delete it."
"But what if you had to write the test?"
"In that case, this is how I would test it:"

"But you are not using Mockito!"
"So what? Mockito is not helping you. Quite the opposite: it is getting in your way and it is not going to make the test more readable or simpler."
"But we decided to use Mockito for all the tests!"
Me: "…"
The next time I bumped into him, he proudly stated that he had managed to write the test with Mockito. I understand the mental satisfaction of getting it working, but nonetheless it made me sad.
Another example
I got pulled in by a developer all excited about the high code coverage of one of their new applications and their newfound love for BDD. Looking around the code, we found the following Cucumber test:
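The feature file itself is not reproduced; assume a scenario along these lines (the domain and wording are invented for illustration):

```gherkin
Feature: Country currencies

  Scenario: Look up the currency of a country
    Given the country "Spain"
    When I look up its currency
    Then the currency should be "EUR"
```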

If you have used Cucumber before, you will not be surprised by the amount of supporting code it needs:
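The step definitions are not shown either; a hypothetical sketch of the glue such a scenario needs (on top of the feature file and a runner class, and again with invented names):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Step definitions wiring the Gherkin text to the (invented) class under test.
public class CurrencyLookupSteps {

    private final CurrencyLookup lookup = new CurrencyLookup();
    private String country;
    private String currency;

    @Given("the country {string}")
    public void theCountry(String country) {
        this.country = country;
    }

    @When("I look up its currency")
    public void iLookUpItsCurrency() {
        currency = lookup.currencyFor(country);
    }

    @Then("the currency should be {string}")
    public void theCurrencyShouldBe(String expected) {
        assertEquals(expected, currency);
    }
}
```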


And all of that to test:
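As an invented stand-in for the real class, the code under test amounts to little more than:

```java
import java.util.Map;

public class CurrencyLookup {

    // A static map and a lookup: the entire behaviour under test.
    private static final Map<String, String> CURRENCIES = Map.of(
            "Spain", "EUR",
            "United Kingdom", "GBP",
            "Japan", "JPY");

    public String currencyFor(String country) {
        return CURRENCIES.get(country);
    }
}
```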

Yes, a simple map lookup.
I had enough trust with the developer to bluntly say, "That is a big waste of time."
"But my boss expects me to write test for all classes," he replied.
"At the expense of?"
"Expense?"
"Anyway, those tests have nothing to do with BDD."
"I know, but we decided to use Cucumber for all tests"
Me: "…"
I understand the mental satisfaction of bending the tools to your will, but nonetheless it made me sad.
Where is the tragedy?
The tragedy is that two bright developers (both of whom I would take to a team interview) are wasting time writing those kinds of tests, tests that are pointless, and that will need to be maintained by future generations of IG developers.
The tragedy is that instead of using the correct tool for the job, we decide to keep plugging away with the wrong ones, for no particular good reason.
The tragedy is that once a "good practice" becomes mainstream we seem to forget how it came to be, what its benefits are, and most importantly, what the cost of using it is.
Instead, we just mechanically apply it without too much thought, which usually means that we end up with at best mediocre results, losing most of the benefits but paying all (or even more) of the cost. In my experience, writing good unit tests is hard work.
So is 100% code coverage worth pursuing?
Yes, everybody should achieve it … in one project. I am of the opinion that you have to go to the extreme to know what the limit is.
We already have plenty of experience with one extreme: projects that have 0 unit tests, so we know the pain of working on those. What we are usually lacking is experience at the other extreme: projects where 100% code coverage is enforced and everything is TDD.
Unit testing (especially the test-first approach) is a very good practice, but we should learn which tests are useful and which ones are counterproductive.
But remember: nothing is free, and nothing is a silver bullet. Stop and think.
We decided to use screwdrivers for everything.
Finally got that nail in. I think screwdrivers are hard to use. For our next project let's use hammers for everything. They look simpler.
Rolf. Very accurate summary :)
😄👍🏻
This reminds me of what I read from Uncle Bob and DHH (linked from Uncle Bob's article): blog.cleancoder.com/uncle-bob/2017...
Recently I've come to the conclusion that we made an idol out of unit tests and greatly undervalue integration tests. The examples above clearly show code that could be tested as part of the bigger picture, because in itself it doesn't mean anything: it's glue code, and integration tests are supposed to check exactly that... the gluing of components together!
So the bottom line is: write a good mixture of unit and integration tests and, most of all, THINK! That's our job.
The most important thing you have pointed out is: "THINK".
Yaaas! Blindly following metrics and using tools without applying critical thinking of cost vs benefit is the worst! I especially hate when it causes other devs or management to get a bad taste in their mouth around quality and start pushing devs to skip things that are actually important.
I've found tools like Cucumber are awesome at the acceptance test level, but the cost vs benefit breaks down very quickly as you try to apply it at lower levels. You're basically maintaining this alternate human-readable text and its mapping to code in addition to your regular unit test code. But is any non-developer going to read that feature file? No. So why do it when developers can easily (and probably more easily) just read well-written unit test code?
Definitely going to refer my teams to this article. Thanks for writing it!
Thanks a lot for the feedback!
I haven't done much formal testing myself, and definitely not any unit testing. While trying to understand it a bit more, I came across a really good paper: rbcs-us.com/documents/Why-Most-Uni...
What are your thoughts?
Mr. Coplien is above my pay grade, so I will let Uncle Bob argue with him :)
m.youtube.com/watch?v=KtHQGs3zFAM
Personally, I always say that system level tests are the best and the only ones we should write, if they could be run in a few seconds, in isolation, by several developers at the same time.
In my personal experience, the systems that I work with are composed of several dozens of moving parts, so system level tests end up being too slow, brittle and hard to debug.
I wasn't there, but I think that is a big reason why unit tests were born, out of pragmatism.
As Stéphane says in the comments, you need a good mix of tests, but tests are not a substitute for thinking!
I would recommend Rich Hickey's talk "Hammock Driven Development" m.youtube.com/watch?v=f84n5oFoZBc (all talks by Rich Hickey are excellent!) and Uncle Bob's blog blog.cleancoder.com
Thanks for the question and the pdf!
100x this!
Oh, and don't forget: collaborate with the business representative (product manager, product owner, business analyst, whatever) on your high-level test cases! You will be surprised how good both of your understandings of the tested feature will become.
In my experience, code coverage metrics are rarely defined by the development team but by a management team. For every marginal increase in test coverage there is a less than proportional increase in value. Sounds like this organisation has more money than sense to be pursuing 100% coverage. Apply tests (both unit & integration) where they make the most sense.
This is where I draw the line between experience and knowledge of developer tools. Most developers feel that they must use every tool and apply every methodology without critically thinking through the use cases.
This article is on point; I will share it with my team.
Thanks for sharing!
Very informative, thanks!
Thank you for reading!
Loved this article.
Loved that you loved it! Thanks!
Docker and Testcontainers are changing the game. Pair that with Jenkins in the cloud with an auto-scaling group and you get what you wanted below: it's not seconds, but it's tens of seconds for a single test and minutes for a full test suite. Oh, and it's easy to debug.
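For anyone who has not seen the combination this comment describes, a minimal, assumed sketch of a Testcontainers integration test (JUnit 5; the PostgreSQL image and class names are illustrative, not taken from the thread):

```java
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class RepositoryIntegrationTest {

    // A throwaway database per test class, started and stopped by Testcontainers.
    @Container
    private static final PostgreSQLContainer<?> POSTGRES =
            new PostgreSQLContainer<>("postgres:15-alpine");

    @Test
    void connectsToTheContainerisedDatabase() {
        // In a real test, wire this JDBC URL into the repository under test.
        assertNotNull(POSTGRES.getJdbcUrl());
    }
}
```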
Achieving 100% coverage does not need to be all bad. From what I see, the problem is that the tests are forced into a frame - Cucumber or Mockito. Both great tools, but not for everything.
That said, personally I prefer spike and stabilize these days. More code in less time.
100% test coverage gives the illusion of complete QA. Senior management like easily measurable illusions.