I will share an experience with an alternative way of doing unit testing.
Usually, unit testing (UT) works like this: code first, then write unit tests until some coverage target is reached.
TDD prescribes writing the tests first, watching them all fail, and then writing code until all tests pass.
An alternative is to mix both approaches during the coding phase.
First, a (say, skillful) programmer develops some code, including all the interfaces and an initial implementation, as complete as possible (how complete? that is a judgment call). Once the whole system compiles (warning-free, of course!), the code is passed to a second (not so skillful) programmer, who develops the UTs by playing with the provided code. The goal is not only to reach a desired coverage level, but to test the functionality, in a grey-box style: the second programmer first looks at the interfaces and writes tests for the semantics, then reads the implementation to make sure the tests exercise the most important (complex) functions. Along the way, this second programmer learns the advanced techniques used by the original developer.
Hopefully, some tests will fail. Then the code goes back to the original developer, who fixes the implementation until all tests pass.
Outcomes? The testing space gets partitioned into two parts: integration code and new code.
Since all UTs pass, the new code (alone, isolated) is known to work as expected, so the first developer can focus on the integration part, fixing the glue issues. Plus, the second developer is now more skilled.
So what is expected from the second developer? To use the new code (which should be delivered as if it were a reusable library), play with it, learn the techniques, and generate the UTs.
We tried this while enhancing a Windows kernel driver that involved some nontrivial algorithms and smart data structures, with the outcomes described above.
The name of this variant could be "CTC" (for Code-Test-Code), but I prefer calling it GUT.
Give it a try and let me know how it went.