
What is in a code review? Here is how Calcey Technologies does it.


Code reviews are an important recurrent gatepost in agile software development, and a good engineering practice we follow at Calcey. As most software development teams know, frequent code reviews keep poor code quality in check – inefficient unit-level design, say, or lack of adherence to coding standards. Historically, code reviews existed in methodologies like RUP, both as informal code walkthroughs and as the more formal Fagan inspection. At the onset of the agile revolution, code reviews were re-branded as peer reviews (which in practice meant peer code reviews), a necessary ingredient for building stable software in an evolving fashion. The bottom-line justification for the time spent on code reviews is that they are essential if we are to end up with a scalable and extensible piece of software, as opposed to a hack job that is both unstable (difficult to scale) and impossible to extend later on to meet emerging market needs.

I’d like to outline our approach to code reviews, and how we conduct them. We have a rule of thumb which developers and Scrum masters use to initiate code reviews – any new release to a test environment must be preceded by one. This simple rule gives Scrum masters the flexibility to plan the review, but binds them to conducting it within a given development sprint. Our review setting is an informal workshop, where the developer concerned projects the code on screen and walks through sections of it at the prompting of the reviewers. The review team consists of an architect and at least one other senior developer from outside the project under review, where possible with competency in the programming language and frameworks concerned. Other members of the project team are welcome to listen in and give their feedback. The Scrum master records the code defects in the task backlog and assigns them to the developer(s) concerned. A code review session can last anywhere from 30 to 90 minutes, depending on the scope of work accomplished during the sprint. We take our time, as faster is not better when it comes to an effective review; we inspect at most 300 lines of uncommented code per hour.
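
To make the pacing rule concrete, here is a minimal C# sketch – a hypothetical helper, not a tool we actually use – of how one might budget a session from the size of the change set:

```csharp
// A hypothetical sketch of the pacing rule above: at most 300 uncommented
// lines per hour, within a session of 30 to 90 minutes.
using System;

class ReviewPlanner
{
    // Suggested session length in minutes for a given change set,
    // clamped to the 30-90 minute window we use within a sprint.
    static int SuggestedSessionMinutes(int uncommentedLines)
    {
        const double LinesPerHour = 300.0;
        double minutes = uncommentedLines / LinesPerHour * 60.0;
        return (int)Math.Max(30.0, Math.Min(90.0, minutes));
    }

    static void Main()
    {
        Console.WriteLine(SuggestedSessionMinutes(150)); // 30
        Console.WriteLine(SuggestedSessionMinutes(450)); // 90 - consider splitting the review
    }
}
```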

The reviewers keep an eye out for all the typical code vulnerabilities during the review. We begin with readability, style and conventions – no code should remain incomprehensible to an experienced outsider after a brief explanation by the developer concerned. If it does, the code is likely to be poorly structured (design defects), poorly presented (style defects), or both. Calcey generally follows the industry-accepted coding style conventions for the major programming languages, such as the C# coding conventions from Microsoft. Unit tests are often a good place to assess the stability of newly implemented functionality, and the obvious presence of stringent unit tests can reduce the subsequent line-by-line review effort. We then move on to trapping major issues in earnest, checking for algorithmic inaccuracies, resource leaks, improper exception propagation, race conditions, magic numbers and suchlike. There are several online sources that closely portray the Calcey code reviewer’s mindset, such as this checklist from projectpatterns.org.
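
As an illustration, the hypothetical C# fragment below shows two such defects side by side with the version a reviewer would ask for – a magic number, and a stream that leaks if an exception is thrown:

```csharp
// Illustrative only: the "before" has a magic number, and the reader
// is never disposed if ReadToEnd() throws.
using System.IO;

class Before
{
    public string ReadHeader(string path)
    {
        var reader = new StreamReader(path);
        return reader.ReadToEnd().Substring(0, 128); // magic number, leaked reader
    }
}

// The reviewed version: the constant is named, and 'using' guarantees disposal.
class After
{
    private const int HeaderLength = 128;

    public string ReadHeader(string path)
    {
        using (var reader = new StreamReader(path))
        {
            return reader.ReadToEnd().Substring(0, HeaderLength);
        }
    }
}
```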

One of the biggest benefits of a workshop-style code review is that the authors of the code themselves spot defects and improvements, as a direct result of trying to explain how the code works to reviewers who might not be fully acquainted with the design. In situations where pair programming is not feasible, the code review mitigates the risk of “coding in silos” to a great extent.

Having said this, we also do our best to automate humdrum quality checks. Our .NET-based app development projects are integrated with StyleCop (downloadable from CodePlex) to check for style issues such as custom naming conventions and compulsory XML documentation comments. We also advocate enabling Code Analysis in Microsoft Visual Studio, which warns us of potential code defects at compile time from the viewpoint of the Microsoft .NET Framework Design Guidelines. Apple iOS development comes with its own set of code analysis tools – we use Instruments for performance and behavior analysis, profiling our code at runtime to identify memory leaks, a common pitfall when programming with Objective-C.
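
For example, StyleCop’s documentation rules (such as SA1600, ElementsMustBeDocumented) flag public members that lack XML doc comments; the illustrative method below passes that check, so nobody has to police it by hand in the review:

```csharp
// Illustrative example of the kind of check StyleCop automates: its
// documentation rules require XML doc comments on public members.
public class InvoiceCalculator
{
    /// <summary>
    /// Calculates the total payable for an invoice, including tax.
    /// </summary>
    /// <param name="subtotal">The pre-tax amount.</param>
    /// <param name="taxRate">The tax rate as a fraction, e.g. 0.08 for 8%.</param>
    /// <returns>The total amount payable.</returns>
    public decimal CalculateTotal(decimal subtotal, decimal taxRate)
    {
        return subtotal * (1 + taxRate);
    }
}
```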

Code review metrics such as code coverage and defect count are gathered from the individual reviews by the Scrum masters, and submitted to our principal architect for statistical analysis – strictly to improve the effectiveness of the review process, not for finger-pointing. Junior developers can hope to learn a lot from well-conducted code reviews, not only about the specific technologies and design principles involved, but also about working together as a team to engineer a quality product. After all, our aim is to practice what Jerry Weinberg, nearly half a century ago, called “egoless programming”.
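
As a hypothetical illustration of the kind of analysis this enables, defect density – defects per thousand lines reviewed – is one simple statistic that can be derived from each review’s numbers:

```csharp
// A hypothetical sketch of per-review data and a derived statistic;
// not the actual schema our Scrum masters use.
using System;

class ReviewMetrics
{
    public string Project;
    public int LinesReviewed;
    public int DefectsFound;

    // Defects per thousand lines of code (KLOC) reviewed.
    public double DefectDensity()
    {
        return DefectsFound * 1000.0 / LinesReviewed;
    }
}

class Demo
{
    static void Main()
    {
        var review = new ReviewMetrics { Project = "Sample", LinesReviewed = 450, DefectsFound = 9 };
        Console.WriteLine(review.DefectDensity()); // 20 defects per KLOC
    }
}
```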

“The objective is for everyone to find defects, including the author, not to prove the work product has no defects. People exchange work products to review, with the expectation that as authors, they will produce errors, and as reviewers, they will find errors. Everyone ends up learning from their own mistakes and other people’s mistakes.” – Jerry Weinberg, “The Psychology of Computer Programming”, 1971