I was reading an interesting article on The Register today called “What Compsci textbooks don’t tell you: Real world code sucks”. Whilst the title of the post is quite brash, I found myself nodding along with pretty much everything in it. As a Lead Developer at a financial services company, code quality is something that bothers me constantly, both in the legacy systems I have inherited and in the new systems we are developing. I thought I would add my own comments on one of the points in the original post.
“A tremendous amount of source code written for real applications is not merely less perfect than the simple examples seen in school — it’s outright terrible by any number of measures. Due to bad design, sloppy or opaque coding practices, non-scalability, and layers of ugly “temporary” patches, it’s often difficult to maintain, harder still to modify or upgrade, painful or impossible for a new person joining the dev team to understand, or (a different kind of problem) slow and inefficient. In short, a mess.
Of course there are many exceptions, but they’re just that: exceptions. In my experience, software is, almost as a rule, bad in one way or another.”
I feel like I have been living this every day for the past 18 months. I have inherited some systems that are very challenging to work with: they contain code that is very hard to maintain, doesn’t scale and has virtually no unit test coverage. Pretty much all of the original authors have now moved on, but that still leaves a wasteland of code for my team to maintain.
I now have a decent team working under me. They are trying as best they can to tame this huge legacy beast, and they are doing a pretty good job with what they have got, but we still find ourselves having to make compromises that make us uneasy, due to time pressures, shifting priorities and plenty of other reasons.
I have found that if I look at this code too closely and judge it by how I would like it written, I just end up getting annoyed at what I find. The best way to drive up overall code quality isn’t to criticize the actual code all the time, as there are not enough hours in the day to go over everything. The best tactic I have found is to guide the team in the tools and techniques they use. If you get the working practices sorted out and ingrained in the team, then the code developed from that point on should start to sort itself out.

Everyone has their own way of implementing code; the way things are done reflects the personal style and experience of the developer in question. But if the following practices are adhered to, then the style in which someone has implemented a particular class worries me much less, because the code will have been built to defined standards. The practices we introduced to the team include:
- Introducing a continuous delivery pipeline – All systems have gated builds (we use TFS here), so every check-in gets built, the tests are executed and MSIs are generated. If the build fails for any reason, e.g. a compiler error or a failing test, then the check-in is rejected.
- Enforcing unit testing discipline – All developers now routinely write unit tests for their code. This was an uphill battle, but we are very close to winning it. We monitor build-over-time reports in TFS, which give a color-coded clue to test coverage. We encourage developers to aim for around 70-75% coverage on new code and on code they are maintaining.
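Our tests are written in C# and run under the TFS build, but the discipline looks the same in any language. As an illustrative sketch (the repayment function and its figures are hypothetical, not from our systems), a unit test pins down exactly the behaviour you care about:

```python
import unittest

def monthly_repayment(principal, annual_rate, months):
    """Standard fixed-rate amortisation formula (hypothetical example)."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

class MonthlyRepaymentTests(unittest.TestCase):
    def test_zero_interest_divides_principal_evenly(self):
        # 1200 over 12 months at 0% should be exactly 100 a month.
        self.assertEqual(monthly_repayment(1200, 0.0, 12), 100)

    def test_interest_makes_repayment_higher(self):
        # Any positive rate must cost more than the interest-free case.
        self.assertGreater(monthly_repayment(1200, 0.05, 12), 100)
```

Tests like these are what the coverage reports count, and they are also what makes the later refactoring work safe.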
- Use of standard Visual Studio code metrics – We encourage developers to keep an eye on metrics like cyclomatic complexity and the maintainability index. These give a good high-level indicator of any code smells brewing, and are aimed at helping developers keep their code readable by reducing complexity.
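As a hypothetical illustration of what cyclomatic complexity measures: every extra branch adds an independent path that has to be read and tested, so the same decision expressed as data instead of branches scores far better:

```python
# Branch-heavy version: cyclomatic complexity grows with every region added.
def shipping_cost_branchy(region):
    if region == "UK":
        return 5
    elif region == "EU":
        return 10
    elif region == "US":
        return 15
    elif region == "APAC":
        return 20
    else:
        raise ValueError(f"unknown region: {region}")

# Same behaviour, near-minimal complexity: the decision lives in data,
# so adding a region no longer adds a code path.
SHIPPING_COSTS = {"UK": 5, "EU": 10, "US": 15, "APAC": 20}

def shipping_cost(region):
    try:
        return SHIPPING_COSTS[region]
    except KeyError:
        raise ValueError(f"unknown region: {region}")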
- Static code analysis – All new code, and a lot of the legacy systems, have static code analysis rules enforced on local and TFS server builds. For new projects we have a custom rule set that is a subset of the ‘Microsoft All Rules’ set. This caused a lot of heated debate in the teams when we started enforcing it, but once people got used to the rules they settled into working with them. For the old legacy systems we start off by applying the ‘Microsoft Minimum Recommended Rules’ set and then work our way up from there.
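The rule sets themselves are Visual Studio configuration, but the principle behind a rule is easy to sketch. A minimal, hypothetical analyser (Python’s ast module standing in for the real tooling) that flags silently-swallowed exceptions looks like this:

```python
import ast

def find_silent_except_blocks(source):
    """Return line numbers of bare `except: pass` handlers, a classic
    code smell that a static-analysis rule catches on every build
    rather than relying on a reviewer to spot it."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler):
            if len(node.body) == 1 and isinstance(node.body[0], ast.Pass):
                findings.append(node.lineno)
    return findings
```

The value is the same as with the gated builds: the rule fires mechanically on every check-in, so the debate about whether something is acceptable happens once, when the rule set is agreed, not on every review.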
- Code productivity tools – We make all our developers use a code productivity tool. We settled on CodeRush, as it has a lot of extra tools for guiding less experienced developers, but tools like ReSharper and Telerik JustCode are just as good. What I like about these tools is the visual on-screen feedback they give you. You can enable a colored bar at the side of the code window that flags any code issues. These issues are driven from a rule set, so if you get the team into the mindset of clearing the colored blips as they work (the tool even does most of the work for you), then you are on the road to better code. The refactoring helpers provided by these tools are generally better than those built into Visual Studio too.
I won’t pretend that we now churn out systems so beautiful that angels weep tears of joy, but by enforcing these points we are driving up code quality standards, and the difference has been very noticeable.
I have also started using these tools to guide code reviews. Code reviews used to be a bunch of developers sitting around a projector picking holes in the code, and they were not very effective. Instead, I propose the following process for running a code review:
- Get the code out of source control fresh.
- Does it build? If yes, continue; if no, stop the code review.
- Run the unit tests.
- Do they all run and pass? If yes, continue; if no, stop the code review.
- Check the unit test code coverage.
- Is the coverage above the agreed threshold (around 60%)? If yes, continue; if no, stop the code review unless there is a good reason for the low coverage that the review team are happy with.
- Check the code metrics (cyclomatic complexity and maintainability index).
- Are the metrics within agreed boundaries? If yes, continue; if no, stop the code review.
- Run the static code analysis against the agreed rule set.
- Are there any warnings or errors? If yes, stop the code review; if no, continue.
- Once you get to this point, the development practices have been followed and you can proceed to review the actual code.
By following this basic template for formal code reviews, you can ensure that by the time you are looking at the actual code, the development practices have been followed and what you are about to review meets a certain quality level, so you can judge the code from an agreed baseline.
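The gate-then-review flow above is simple enough to automate, so nobody spends review time on code that would fail a gate anyway. A minimal sketch (the checks here are hypothetical lambdas standing in for the real build, test, coverage and analysis tools):

```python
def run_review_gates(gates):
    """Run each (name, check) gate in order, stopping at the first failure.

    Returns (passed, log); only when every gate passes should the
    reviewers go on to read the actual code.
    """
    log = []
    for name, check in gates:
        ok = bool(check())
        log.append((name, ok))
        if not ok:
            return False, log
    return True, log

# Hypothetical stand-ins for the real tools behind each checklist step.
gates = [
    ("fresh checkout builds", lambda: True),
    ("all unit tests pass", lambda: True),
    ("coverage above threshold", lambda: 72 >= 60),
    ("metrics within agreed bounds", lambda: True),
    ("no static-analysis warnings", lambda: True),
]
```

Because the gates short-circuit, the log also tells you exactly which practice broke down, which is far more useful feedback for the team than an hour of projector-driven nitpicking.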