The problem is their deployment: developers have to read, understand, and apply them. Static code analyzers are usually needed to enforce the rules, and review cycles get longer because developers don't memorize all the rules.
Coding guidelines are also usually shared and "reused". They are great because they capture good industry practices, but not every rule applies to a particular project. In short, this is a question of efficiency.
My previous job involved aeronautics, space, and precision-agriculture projects. From those projects and earlier experience, I noticed that not all coding mistakes were repeated across all projects. I believe the reason was a combination of factors at both the developer level and the context level.
At the developer level, each project required a particular skill set, particular seniority levels, and a specific mix of educational backgrounds (electronics engineers, computer scientists, telecommunications engineers, etc.).
At the context level, projects differed in nature: embedded programming, compiler development, distributed systems, etc.
Within each project, more or less the same mistakes occurred again and again, even with high team-member turnover.
I then came up with this concept: rather than a large, generic, project-agnostic coding guideline for everyone, develop a culture-based tool focused on the project's specific mistakes. I created a practice I named "frequent common errors lists", or FCEs, which, surprisingly for many people, started empty and were filled only with the mistakes that actually recurred in the project.
I established these rules and practices:
- Keep the list as short as possible.
- Don't feed it "just in case" or from past experiences. Let errors show up and repeat.
- Identify recurring errors during code reviews. Once an error is identified, ask the code author to add an entry to the FCE list (with an FCE [number] identifier), stating:
- What shall not be done
- The right way to do it
- Optional: a short rationale.
- Optional: one or two bibliographic links (maybe to a coding guideline rule :-) )
- When doing code reviews, flag FCEs in the review tool by commenting just the FCE identifier and nothing else. This forces the author to look the entry up in the FCE list and learn about the error. After having to look it up a few times, the author will finally learn it :)
- Only add coding errors, not style issues (e.g., bracing style).
- Maintain the FCE list in a wiki
- All developers should bookmark it in their browser.
- If the wiki supports it, all developers should subscribe to changes and additions so everybody stays up to date.
- From time to time, run an FCE test in the form of a code-review simulation: give the developers a piece of code (real project code, if possible) and ask them to find the FCEs.
Once we implemented FCE lists in our projects, we noticed that they grew steadily at first but then stabilized at about 20 items. Reviews became more effective, took less time, and required fewer iterations.
The first FCE was always the same:
FCE 0: frequently read the FCE list.
In short: instead of a large document written by others, the FCE list is collaboratively written by the team, you and the people who sit next to you; it is short and easy to learn. Leave the coding guidelines to the static-checker rule writers, and focus on the common errors that frequently show up in your particular project.
Of course, use static checkers as well.
Coding guidelines are for machines, FCE lists are for humans.