Wednesday, 23 May 2012
Phillips Scale of Code Quality
I created this scale of code quality as a bit of a joke many years ago. It is a generalization, so it is not always applicable to the real world. But you may find it useful for something.
Note that it is probably not a good idea to start comparing the code of different projects or different programmers using this scale. Nevertheless, I think it is useful for getting an idea of how the software you work with compares with others. I have been working with C (as well as C++ and C#) for almost 30 years, and the code I have worked with has varied from about a 1.5 to a 3.5.
At what level is your code on the scale?
0. Really Bad
I have only ever seen this sort of code written in BASIC and Fortran. It is characterised by lots of goto statements (ie, spaghetti code). It can be impossible to understand even a small routine of 20 lines without many hours (or days) of study.
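As an illustration (a contrived sketch, not taken from any real program), even a trivial loop becomes hard to follow when written in this goto-heavy style:

```c
/* A contrived sketch of spaghetti control flow: the reader must chase
 * every label just to discover this is a loop summing positive values. */
int sum_positives(const int *a, int n)
{
    int i = 0, total = 0;
    goto test;
loop:
    if (a[i] <= 0) goto skip;
    total += a[i];
skip:
    i++;
test:
    if (i < n) goto loop;
    return total;
}
```

An ordinary for loop with an if would express the same behaviour in a way that needs no study at all.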
This sort of code can be correct (ie behaves according to specifications), so some people may not rate it too badly. However, I rate it really bad because, first and foremost, code must be understandable if anyone is to have a chance of modifying it.
The code shows that whoever wrote it does not understand the language and/or simple low-level design. There are no comments. Very basic things like correct indentation may not be followed.
At the high-level there has been little or no thought put into decomposing the problem. Modules (if there are any) are tightly coupled.
This sort of software is characterised by many problems with reliability and maintainability. For example, there will be run-time errors and crashes, "unreproducible" bugs etc. It is usually impossible to enhance the code without rewriting large sections.
1. Bad
Generally, C code is better than this. This is basically the standard of most university students after an introductory programming course.
At the low-level the code may be verbose and inefficient, but understandable. There are probably problems with reliability, maintainability, portability etc. Cutting and pasting has resulted in duplicated code (which is bad for maintainability, among other reasons). There may also be redundant and unreachable code, which makes the code difficult to understand.
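A hypothetical sketch (function names invented for illustration) of how cut-and-paste duplication looks in practice - the trimming logic exists twice, so a bug fix applied to one copy is easily missed in the other:

```c
#include <string.h>

/* Two "different" functions produced by cut-and-paste: the
 * trailing-space trimming logic is duplicated verbatim. */
size_t trimmed_name_len(const char *s)
{
    size_t len = strlen(s);
    while (len > 0 && s[len - 1] == ' ')  /* copy 1 of the logic */
        len--;
    return len;
}

size_t trimmed_address_len(const char *s)
{
    size_t len = strlen(s);
    while (len > 0 && s[len - 1] == ' ')  /* copy 2: any fix must be made twice */
        len--;
    return len;
}
```

Extracting the shared loop into a single helper would remove the duplication.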
2. Poor
Some effort has been put into the design and separation into modules. However, the choice of module boundaries is poor and the communication between modules is not well understood. For old code the initial design will have been compromised over time. Decoupling is undermined by hidden interactions (such as the use of global variables).
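A minimal sketch of that kind of hidden interaction (names invented): two functions whose interfaces look independent but which secretly communicate through a global:

```c
/* The interfaces of parse_digit() and had_parse_error() do not reveal
 * that they are coupled through this global - callers must "just know". */
int g_last_error;

int parse_digit(const char *s)
{
    if (s == 0 || *s < '0' || *s > '9') {
        g_last_error = 1;   /* hidden side effect */
        return 0;
    }
    g_last_error = 0;
    return *s - '0';
}

int had_parse_error(void)   /* only meaningful straight after parse_digit() */
{
    return g_last_error;
}
```

Returning an error code, or taking an error out-parameter, would make the dependency explicit in the interface.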
There are still some problems with reliability such as occasional crashes and inexplicable behaviour.
The software is difficult to maintain. For example, minor changes may introduce unexpected bugs. It is also very hard to predict how long even minor enhancements or bug fixes will take to implement since seemingly simple changes may require major redesign.
3. Average
Initially the overall design was good, but the documentation on the design is lacking - either it is nonexistent, out of date, or hard to understand.
The code is probably fairly reliable, understandable, portable, reusable etc, and correct in most cases. However, at a high level the design is hard to understand and the system is difficult to maintain.
Generally there are some coding standards so that the code is consistent and understandable. There would be comments describing each source file, global function etc (as prescribed by the coding standard). At the low-level some variable declarations are commented and there may be comments throughout the code though they will often just reiterate what the code is doing rather than provide useful information.
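A small sketch of the difference (the function and the compiler anecdote are invented for illustration):

```c
/* A comment that reiterates the code adds nothing; a useful comment
 * records intent that the code cannot express on its own. */
unsigned scale_by_8(unsigned x)
{
    /* Reiterating (adds nothing):  shift x left by 3 bits. */
    /* Informative (records why):   multiply by 8; a shift is used here
     * because the original target compiler emitted a slow multiply. */
    return x << 3;
}
```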
Small code changes can be made easily as the low-level design is straightforward. Bugs introduced by code modifications are usually caught by testing (as the code is readily testable) and assertions.
It is hard to make significant changes as the overall behaviour is difficult to comprehend. For expedience (ie for fear of breaking something unexpectedly), code changes are often not done properly, which leads to further brittleness of the system. After many major enhancements the system will become as hard to maintain as the abovementioned "Poor" system.
4. Good
The overall design has decomposed the system into well-defined modules. The interfaces between modules are simple and well understood.
The best coding practices are followed. There are useful comments - for example, all modules and their interfaces are described; the purpose of every function is commented, including its parameters and return value; and difficult-to-understand code or algorithms are fully described.
The code uses many assertions which makes code changes less likely to introduce bugs. Moreover, assertions help to document exactly what the code is doing by explicitly declaring assumptions.
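For instance (a minimal sketch; the function is invented), each assertion below both guards against misuse and documents an assumption that a comment alone might let drift out of date:

```c
#include <assert.h>

/* The assertions state the contract explicitly: a non-null buffer and a
 * positive count are the caller's responsibility, not handled cases. */
int average(const int *vals, int count)
{
    assert(vals != 0);   /* assumption: caller supplies a buffer */
    assert(count > 0);   /* assumption: empty input is a caller bug */

    int sum = 0;
    for (int i = 0; i < count; i++)
        sum += vals[i];
    return sum / count;
}
```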
This sort of project has little difficulty coping with small changes or even anticipated major enhancements as long as the standards are maintained. There will probably be problems with significant unanticipated changes, which may contort the design causing subsequent problems.
5. Excellent
Finally, what I consider the ultimate level is the "Good" level but with the addition of automated regression testing of all modules (which also obviates the need for assertions). This form of testing is nowadays colloquially known as "unit testing".
Unit tests should be performed whenever any code is changed, to ensure existing behaviour has not been broken. Unit tests should have complete code coverage (a coverage analysis tool can ensure this, as well as detecting unreachable code), exercise all boundary conditions, and cover all code paths (if possible), including all error conditions.
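As a sketch of what exercising the boundary conditions means (the clamp function and its tests are invented for illustration):

```c
#include <assert.h>

/* Function under test: constrain x to the range [lo, hi]. */
int clamp(int x, int lo, int hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* A unit test covering both edges of the range as well as the three
 * regions around them - every path through clamp() is executed. */
void test_clamp(void)
{
    assert(clamp(-5, 0, 10) == 0);    /* below the range  */
    assert(clamp(0, 0, 10) == 0);     /* lower boundary   */
    assert(clamp(5, 0, 10) == 5);     /* inside the range */
    assert(clamp(10, 0, 10) == 10);   /* upper boundary   */
    assert(clamp(99, 0, 10) == 10);   /* above the range  */
}
```

Rerunning test_clamp() after every change to clamp() is the regression safety net described above.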
Unit tests allow the code to be completely refactored without breaking existing behaviour. Hence major changes to the system (though taking some time) can be handled without compromising the existing behaviour.