I will share an experience with an alternative approach to unit testing.
Usually, unit testing (UT) works like this: write the code first, then write the tests until some coverage target is reached.
TDD prescribes the opposite: write the tests first, let them all fail, and then write code until all the tests pass.
An alternative is to mix the two during the coding phase.
First, a (say, skillful) programmer develops some code, including all the interfaces and an initial implementation that is as complete as possible (how complete? that is a judgment call). Once the whole system compiles (warning-free, of course!), the code is passed to a second (less skillful) programmer, who develops the unit tests by playing with the provided code. The goal is not only to reach a desired coverage level, but to test the functionality, in a grey-box style: this second programmer first sees only the interfaces and writes tests against their semantics, then looks at the implementation to make sure the tests exercise the most important (and most complex) functions. Along the way, the second programmer learns the advanced techniques used by the original developer.
Hopefully, some tests will fail. Then, the code goes back to the original developer, who fixes the implementation until all tests pass.
The outcome? The testing space gets partitioned: new code versus integration code.
Since all the unit tests pass, the new code (alone, in isolation) is known to work as expected, so the first developer can focus on the integration part, fixing the glue issues. Plus, the second developer is now more skilled.
So what is expected from the second developer? That he/she uses the new code (which should be provided as if it were a reusable library), plays with it, learns the techniques, and writes the unit tests.
We tried this while enhancing a Windows kernel driver that involved some algorithms and smart data structures, with the outcomes described above.
The name of this variant could be "CTC" (because of Code-Test-Code), but I prefer calling it GUT.
Give it a try and let me know how it went.
Bastard Electrons
Monday, January 2, 2017
Thursday, November 24, 2016
My alternative to large coding guidelines.
There are great coding guidelines and coding standards. Moreover, there are great guidelines (and standards!) for writing coding guidelines.
The problem is their deployment: developers have to read, understand and apply them. Static code analyzers typically have to be used to enforce the rules, and review cycles stretch out because the developers don't memorize all the rules.
Also, coding guidelines are usually shared and "reused". They are great because they capture good industry practices, but not all the rules apply to any particular project. So this is really a question of efficiency.
My previous job involved aeronautic, space and precision-agriculture projects. From those projects and from past experience, I noticed that the same coding mistakes were not repeated across all the projects. I believe the reason was a combination of factors at both the developer level and the context level.
At the developer level, each project required a particular skill set, particular seniority levels, and a specific mix of educational backgrounds (electronic engineers, computer scientists, telecommunications engineers, etc.).
At the context level, the projects differed in nature: embedded programming, compiler development, distributed systems, etc.
Within each project, though, more or less the same mistakes occurred again and again, even with high team rotation.
I then came up with this concept: rather than a large, generic, project-agnostic coding guideline for everyone, develop a culture-based tool that focuses on the specific mistakes of the project. I created a practice I named "frequent common errors" (FCE) lists, which (surprisingly for many people) started empty and were filled only with the recurring mistakes of that particular project.
I stated these rules and practices:
- Keep the list as short as possible.
- Don't feed it "just in case" or from past experiences. Let errors show up and repeat.
- Identify recurring errors during code reviews. Once an error is identified, ask the code author to add an entry to the FCE list (with an FCE [number] identifier), stating:
  - What shall not be done
  - The right way to do it
  - Optional: a short rationale
  - Optional: one or two bibliographic links (maybe to a coding guideline rule :-) )
- When doing code reviews, flag FCEs in the review tool by commenting just the FCE identifier and nothing else. This forces the author to look the error up in the FCE list and learn about it. After a few such look-ups, the author will finally learn it :)
- Only add coding errors, not style issues (e.g. bracing style).
- Maintain the FCE list in a wiki.
- All the developers should keep a bookmark to it in their browser.
- If the wiki supports it, all the developers should subscribe to changes and additions so that everybody stays up to date.
- From time to time, run an FCE test in the form of a code-review simulation: give the developers a piece of code (if possible, real code from the project) and ask them to find the FCEs.
Once we implemented the FCE lists in our projects, we noticed that they grew steadily at the beginning but then stabilized at about 20 items. Reviews became more effective, took less time and required fewer iterations.
The first FCE was always the same:
FCE 0: frequently read the FCE list.
In short: instead of a large document written by others, the FCE list is collaboratively written by the team: you and the people who sit next to you. It is short and easy to learn. Leave the coding guidelines to the writers of static-checker rules, and focus on the common errors that frequently show up in your particular project.
Of course, use static checkers as well.
Coding guidelines are for machines, FCE lists are for humans.