Code Review Process
At Jemurai, we do a lot of code review. We do it for quality and we do it for security. We believe code review is one of the very best ways to understand the quality of your codebase, while providing contextual (it's your application) feedback to developers about how to write better, more secure code. We also believe that developers can learn to do awesome security code reviews. In this post, I'll outline the basic thought process and approach I use when doing code reviews, in the hope that it makes this kind of review easier for others to do.
Overview of the Code Review Process
Typically, a code review involves several manual passes through a large number of files, following different trains of logic and checking for many different kinds of issues. I almost never start at file A and work through to file Z based only on their names.
I use checklists extensively. Some of what I'm going to learn, I know right away looking at the code; that happens when you've been doing this as long as I have. But I still use checklists to ensure that I give each class of item appropriate attention. I also use tools to find specific potential issues. Grep is your friend, and it's easy to build a catalog of things to look for:
```sh
grep -n -C 2 -R 'innerHTML' *
```
Still, the manual part of the review is essential. Most importantly, it gives me the ability to communicate with developers about how they are doing things in terms they really understand. But there are also limits to automated analysis: ask yourself how you would automatically check for "Insecure Direct Object Reference".
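To make that concrete, the usual fix for an insecure direct object reference is an ownership check on every fetch by id. Here is a minimal sketch; the service, store, and names are hypothetical, not from any particular framework:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical service; the point is the ownership check on every fetch by id.
public class DocumentService {
    public record Document(long id, String ownerId, String body) {}

    private final Map<Long, Document> store = new HashMap<>();

    public void save(Document d) {
        store.put(d.id(), d);
    }

    // An automated tool can find the endpoint; it usually can't tell whether
    // this check exists, or whether it is the right check. A human can.
    public Document findForUser(long id, String callerId) {
        Document d = store.get(id);
        if (d == null || !d.ownerId().equals(callerId)) {
            // Same error either way, so we don't leak which ids exist.
            throw new SecurityException("document not found");
        }
        return d;
    }
}
```

The manual question is whether every access path goes through a check like this one, and that is exactly what the review paths below are for.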
Strategy: Review Paths
As I indicated above, I usually start by thinking from a particular angle: what am I looking for in this code? Here are some examples of the paths I follow.
- Overall design, abstraction, packaging, layering, etc.
- Attack surface (exposed web endpoints for example)
- Data flow
- Trust boundaries
- Data layer security
- View layer security - roles, encoding
- Resource usage (threads, apis, etc.)
- Random sampling
In some cases it is possible (and desirable) to manually review every line of code comprehensively.
Security Checklist Items
The following are some of the items I specifically look at in security code reviews.
- Input validation
- Output encoding
- SQL Injection
- Direct Object Reference
- Information Leakage
- Error/Exception Handling
- Secure Configuration
- Encryption / Transport Layer Protection
- Session Management
- Appropriate logging / auditing
With the security items, I first want to see that a broad solution is in place to handle the potential issue. For example, I want to see that output rendered in a browser goes through encoding by default. For direct object reference, I want to see that the application checks access to objects on every type of access. Again, first I'm checking whether there is a general approach or solution in place to solve the issue. If there is not, that's a big problem.
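To illustrate what "encoding by default" means, here is a hand-rolled HTML encoder. In a real codebase I'd expect a vetted library (the OWASP Java Encoder, for instance) rather than custom code like this; the sketch just shows the control the review is looking for, applied at the point where data meets markup:

```java
// Minimal output-encoding sketch. Escapes the five characters that matter
// for HTML element content so untrusted data can't break out into markup.
public final class HtmlEncoder {
    private HtmlEncoder() {}

    public static String encode(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&'  -> out.append("&amp;");
                case '<'  -> out.append("&lt;");
                case '>'  -> out.append("&gt;");
                case '"'  -> out.append("&quot;");
                case '\'' -> out.append("&#x27;");
                default   -> out.append(c);
            }
        }
        return out.toString();
    }
}
```

If the view layer calls something like this automatically, the review shifts to hunting for the places that opt out of it.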
Then, after verifying that a top-level control is in place, I'm going to dive in and try to find the examples where exceptions have been carved out and output is not being encoded properly. The details differ between technologies, but once you know how an exception might look, you have a good idea what to search for. In a system using annotations and Spring Security for authorization to modify objects, it becomes very straightforward to check all of the service methods for the appropriate annotations.
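That kind of annotation sweep can even be automated with a little reflection. This sketch uses a made-up @RequiresRole annotation as a stand-in for whatever your framework actually provides (Spring Security's @PreAuthorize, for example), and flags public service methods that lack it:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a real authorization annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface RequiresRole { String value(); }

// Example service under review: one protected method, one that slipped through.
class AccountService {
    @RequiresRole("ADMIN")
    public void closeAccount(long id) {}

    public void updateEmail(long id, String email) {} // missing annotation
}

public class AnnotationAudit {
    // Returns the names of declared methods lacking the authorization annotation.
    public static List<String> unprotectedMethods(Class<?> service) {
        List<String> missing = new ArrayList<>();
        for (Method m : service.getDeclaredMethods()) {
            if (!m.isAnnotationPresent(RequiresRole.class)) {
                missing.add(m.getName());
            }
        }
        return missing;
    }
}
```

A check like this makes a nice build-time gate, though it can't judge whether the role on each annotation is the right one; that part stays manual.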
With some classes of items, SQL injection for example, I will literally look at every query-generation method to try to catch potential issues. This is a function of prevalence and severity: these types of issues are common and can cause big problems!
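A crude way to build that worklist of query-generation sites is a grep-style scan for SQL assembled by string concatenation. The heuristic below is deliberately simple and will both miss things and flag safe code; it only narrows down what to read by hand:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Flags lines that look like SQL built with string concatenation,
// e.g.  String q = "select * from users where id = " + id;
public class SqlConcatScanner {
    private static final Pattern SUSPECT =
        Pattern.compile("(?i)\"\\s*(select|insert|update|delete)\\b.*\"\\s*\\+");

    public static List<Integer> suspectLines(List<String> sourceLines) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            if (SUSPECT.matcher(sourceLines.get(i)).find()) {
                hits.add(i + 1); // 1-based line numbers, like grep -n
            }
        }
        return hits;
    }
}
```

Every hit still needs a human to decide whether the concatenated value is attacker-controlled and whether a parameterized query (PreparedStatement) should replace it.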
The OWASP Code Review project serves as a point of reference for checklist items during security code review. As with any resource, it should be read critically and put in the proper context.
Quality Checklist Items
The following are items I look for in general quality reviews.
- Source code design
- Object Oriented principles (encapsulation, inheritance)
- Application layering
- Performance of code
- Appropriate use of 3rd Party libraries
- Consistency, readability, maintainability of code
- Use of current and appropriate language features
- Transaction strategy / caching
- Use of threads
- Query structure / performance
- Appropriate unit tests
It is impossible to identify all quality criteria up front. Experienced developers make judgments based on their experience, and the results of this review will be no different. The criteria above are baseline examples of things that will be checked.
Communication
Did you know that communication is part of code review? I've definitely had security code reviews done where the deliverable was a big PDF report that said my code was crappy. Often, I didn't understand the potential issues before the review, and it left a substantial amount of work to figure out what the real problem was and how to fix it.
In my code reviews, I establish a relationship with the developer early and work with them as I find things. I try to ask them whether my understanding of a potential issue is correct rather than point at a flaw. In some cases, I have missed a whole control layer that is being applied with an aspect or a framework I haven't seen before. By asking, the developer gets a chance to think through their understanding of how the control should work and either admit that they missed something or point me in a direction to clarify. Also, I work hard to make sure that the resulting report of items includes solutions already discussed with the developers. That way the result feels more like an agreement and a step forward.
"We found XSS, but here's how we're going to solve it, and this is the timeframe."
Metrics
Code review metrics make it easier to put the results in context. Metrics like lines of code (LOC) are commonly cited but don't mean much beyond the size of the project. Identifying a small set of actionable metrics can be a helpful part of a code review process.
At Jemurai, we are working to standardize the metrics we provide as output for a code review. We hope this may help clients evaluate comparative severity of different projects. In theory, if these metrics can be standardized and have value, then they may be used in code reviews of open source projects ... which itself might lead to an interesting broader code review project.
| Application 1 | |
| --- | --- |
| Lines of Code | 100,090 |
| Test LOC/Non-Test LOC | 23% |
| Major Design Issues | 3 |
| Trusted 3rd Party API | 1 |
| Trusted 3rd Party Libraries | 25 |
| Appropriate Complexity (CRC < 7) | 72% |
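One of the metrics above, the ratio of test LOC to non-test LOC, is trivial to automate once line counts are available (from a tool like cloc, say). A sketch, assuming that figure is simply test lines divided by production lines, expressed as a percentage:

```java
// Sketch of one automatable metric: test code as a percentage of production
// code. Gathering the raw line counts is the tooling's job; standardizing
// the calculation is what makes the number comparable across projects.
public class LocMetrics {
    public static int testRatioPercent(long testLoc, long productionLoc) {
        if (productionLoc == 0) {
            throw new IllegalArgumentException("no production code counted");
        }
        return Math.round(100f * testLoc / productionLoc);
    }
}
```

For the table above, roughly 23,000 test lines against 100,090 production lines yields the reported 23%.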
Next Steps: SDLC Integration
Not all code review items are amenable to easy automatic "metrification". That's why I recommend training developers and making code review an explicit part of the SDLC. Without that type of ongoing process, we can hardly blame developers for focusing on features to the exclusion of security and quality.
Although not all code review metrics are easily gathered automatically, some (LOC and test coverage, for example) are. Building a process to gather those metrics automatically can provide a useful measure of code maturity. Integration into the SDLC through continuous integration tooling like Jenkins and Sonar can help give developers the tools they need to measure and maintain the quality points raised during the review.