This is a story about a developer; let's call him Leon. Leon was young, full of energy and good will. He wanted to write quality code and avoid regressions: to write tests, refactor verbose methods and remove unnecessary comments. Yet there he was, sitting in front of a complicated class, unable to write anything while the cursor kept blinking. As you may have guessed, most of the code was written well before Leon joined the team. He had read only the small part of the code he had worked on and knew little about the rest.
On Monday, September 18th, Leon was desperately looking for documentation or some tests that would help him understand what the HahaTryAgainLoser class was supposed to do, but he found none. Suddenly, a small icon appeared at the bottom of his screen that might end his struggle: an invitation to the 'Agile Testing Night' held at Société Générale La Défense, featuring a conference entitled 'Unit testing implementation on Legacy code'. He decided to give it a try.
Tuesday, September 25th. Leon's watch displays 7:08 pm. He is sitting on a comfortable chair; the event started a few minutes ago with a presentation of which he only caught a few words: agile, scrum, testing and so on. He is waiting for the main event. Then a keynote starts, entitled 'Mutation testing'. Leon has no idea what that is, so he starts listening.
The speaker, Nicolas CHAZEAUX, starts with a statement: 'Code coverage is a misleading KPI'. To prove it he writes a test case containing a single line that calls the tested method. The code coverage is indeed 100%, since all the code in the method gets executed; however, the test itself contains no assertions and challenges no logic, so it is completely useless. Writing unit tests defines a few rules and expected behaviors that the code must respect, rules that usually depend on business logic. But how can we be sure we have tested all conditions? This is where mutation testing comes into play. A mutant is a small change to the code, such as replacing a + with a - or changing a constant value. Consider the following code:
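The slide's exact code is not reproduced in these notes, but a plausible Python sketch, assuming a hypothetical monthly_salary method consistent with the test name used below, would look like this:

```python
# Hypothetical method under test (the talk's exact code is not shown here):
# a month is billed as 20 working days.
def monthly_salary(daily_salary):
    return daily_salary * 20

# A coverage-only "test": it executes every line (100% coverage)
# but asserts nothing, so it can never fail.
def useless_test():
    monthly_salary(100.0)

# A real test: it pins the business rule down with an assertion.
def should_return_20_times_daily_salary():
    assert monthly_salary(100.0) == 2000.0
```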
The test case should_return_20_times_daily_salary must fail and turn red if we replace the '*' with a '/'. If it does not, we say that the mutant has survived, and we probably missed something while writing the test: the code's behavior drifted away from the business specification without any test noticing. That is exactly what usually materializes later as a production issue or bug. The speaker goes on to present a few available frameworks that enable mutation testing:
- For .NET: Stryker.NET
- For Java: Pitest
- For Python: Mutmut
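What these frameworks automate can be sketched by hand; here is a minimal Python illustration, again assuming the hypothetical salary rule from earlier:

```python
# Original hypothetical rule and a hand-rolled mutant: '*' replaced with '/'.
def monthly_salary(daily_salary):
    return daily_salary * 20

def mutant_monthly_salary(daily_salary):
    return daily_salary / 20   # the mutation

def kills_mutant(fn):
    """Return True if the test fails (turns red) when run against fn."""
    try:
        assert fn(100.0) == 2000.0
        return False   # test passed -> the mutant survived
    except AssertionError:
        return True    # test failed -> the mutant was killed

# The test passes on the original code and kills the mutant:
assert not kills_mutant(monthly_salary)
assert kills_mutant(mutant_monthly_salary)
```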
To close the keynote, Nicolas points out the main shortcomings of mutation testing:
- Long running time, which makes it ill-suited to running on every commit or code change.
- Analysis becomes difficult when many mutants survive.
Leon looks at his watch again: 7:55 pm. He is on his way to the other room, where the conference is held. He now knows about mutation testing but still has no clue how to fix his legacy code issues, and he keeps processing what he has just learned while waiting for the conference to begin.
Patrick Giry starts the conference with the golden rule: 'never change code unless you have an improvement request or a bug regarding it'. He explains that while we do work in IT, we must also deliver added value to the business; otherwise it is wasted time. When changing code that has nothing to do with our current requests, we run the risk of breaking things without delivering any expected change. Do it anyway, and you can probably expect people yelling at you on the phone about some component acting weirdly in production that was not supposed to be impacted by the last release.
To improve the legacy code we have, there are two tasks to be done, testing and refactoring, and neither must conflict with the golden rule:
For testing, let us suppose we are working on some code that has no unit tests. We should start creating the tests at the shallowest branch. Consider the following code:
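The actual example from the talk is not reproduced here; the following is a hypothetical Python stand-in with the same shape: a condition_b guard, a call into another class, a while loop acting as 'condition_a', and an 'if' nested inside it:

```python
class MyError(Exception):
    pass

# Hypothetical stand-in for the untested legacy method (not the talk's code).
class LegacyBilling:
    def __init__(self, tax_service):
        # external class whose code gets called from this one
        self.tax_service = tax_service

    def total_due(self, amounts, condition_b):
        if condition_b:
            total, i = 0.0, 0
            while i < len(amounts) and amounts[i] > 0:    # 'condition_a'
                if amounts[i] > 100:                      # the 'if' inside the while
                    total += self.tax_service.with_tax(amounts[i])
                else:
                    total += amounts[i]
                i += 1
            return total
        else:
            raise MyError("condition_b not met")
```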
In this example, the shallowest branches are the nested 'if' branches. A good starting point is to write should_raise_MyError_when_not_condition_b(), testing the raise in the else branch. The same goes for the code inside the 'if condition_b' branch. However, we notice that code from some other class is getting called, which makes the unit test harder since it involves external logic. As a solution, we can either mock that code or redefine it to override its behavior in the unit test, making it return a value we expect. This isolates the tested class and avoids getting distracted by external issues. Keep going from there, isolating and writing tests for branches until the code is covered.
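Sketched in Python, assuming a hypothetical LegacyBilling stand-in (repeated compactly so the example is self-contained), such tests might look like this, with the external service mocked out:

```python
import unittest
from unittest.mock import Mock

# Compact hypothetical legacy class, repeated so the example stands alone.
class MyError(Exception):
    pass

class LegacyBilling:
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total_due(self, amounts, condition_b):
        if condition_b:
            total, i = 0.0, 0
            while i < len(amounts) and amounts[i] > 0:
                if amounts[i] > 100:
                    total += self.tax_service.with_tax(amounts[i])
                else:
                    total += amounts[i]
                i += 1
            return total
        else:
            raise MyError("condition_b not met")

class LegacyBillingTest(unittest.TestCase):
    def setUp(self):
        # Mock the external class so the unit test stays isolated from it.
        self.tax_service = Mock()
        self.tax_service.with_tax.side_effect = lambda amount: amount + 20.0
        self.billing = LegacyBilling(self.tax_service)

    # Start at the shallowest branch: the 'else' that raises.
    def test_should_raise_MyError_when_not_condition_b(self):
        with self.assertRaises(MyError):
            self.billing.total_due([50.0], condition_b=False)

    # Then the 'if condition_b' branch, with the collaborator overridden:
    # 150 gets the mocked 20.0 tax added (170), plus 50 gives 220.
    def test_should_total_amounts_with_mocked_tax(self):
        self.assertEqual(220.0, self.billing.total_due([150.0, 50.0], True))
```

Mocking keeps a failing test pointing at the class under test rather than at its collaborators.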
Regarding refactoring, we start at the deepest branch. Consider the same code example as before:
In this example, the deepest branch is the while loop on condition_a. We should start by extracting the while condition into a method, giving it a name that preferably expresses business logic. This leads us to the following:
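Sketched on a hypothetical LegacyBilling example (not the talk's actual code), that first extraction might read:

```python
class MyError(Exception):
    pass

# Hypothetical refactoring step: the inline while condition is extracted
# into a method whose name expresses the business rule.
class LegacyBilling:
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def has_remaining_positive_amount(self, amounts, i):
        # was inline: while i < len(amounts) and amounts[i] > 0
        return i < len(amounts) and amounts[i] > 0

    def total_due(self, amounts, condition_b):
        if condition_b:
            total, i = 0.0, 0
            while self.has_remaining_positive_amount(amounts, i):
                if amounts[i] > 100:
                    total += self.tax_service.with_tax(amounts[i])
                else:
                    total += amounts[i]
                i += 1
            return total
        else:
            raise MyError("condition_b not met")
```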
Now we move on to the next deepest branch, the 'if' inside the while, and extract it too. From there we keep refactoring the same way, moving from each branch to a shallower one.
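Continuing the same hypothetical sketch, extracting that 'if' leaves the loop body reading almost like the business rule itself:

```python
class MyError(Exception):
    pass

# Hypothetical second step: the 'if' inside the while is extracted too.
class LegacyBilling:
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def has_remaining_positive_amount(self, amounts, i):
        return i < len(amounts) and amounts[i] > 0

    def amount_with_applicable_tax(self, amount):
        # was inline: the 'if amounts[i] > 100' branch inside the while
        return self.tax_service.with_tax(amount) if amount > 100 else amount

    def total_due(self, amounts, condition_b):
        if condition_b:
            total, i = 0.0, 0
            while self.has_remaining_positive_amount(amounts, i):
                total += self.amount_with_applicable_tax(amounts[i])
                i += 1
            return total
        else:
            raise MyError("condition_b not met")
```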
It is now 9:05 pm. Leon is eating pizza and having a drink with the people he met at the conference. In his head he is going over what he has learned tonight; he wants to share it with the rest of the community. He now knows the following:
- NEVER change code unless you are working on an improvement or a bug.
- When writing tests, start at the shallowest branch.
- If a class calls external class code, either mock it or override its behavior.
- When refactoring, start at the deepest branch.
- Mutation testing is about challenging your tests' logic.
- Mutation testing is slower than unit testing, both in execution and in analysis.