Friday, 23 November 2012

Verifiability

The verifiability (sometimes called testability) of software is how easy it is to check your code for bugs.  I often talk about the importance of making software maintainable, but verifiability is another attribute that does not get enough attention.  (See some of my previous posts for more on quality attributes, such as the Importance of Developer Quality Attributes.)

There are many ways to create software that is difficult to verify, and if software is hard to test then bugs will inevitably slip through.  I talk about how to make code more verifiable below.

But before I go on, I should mention TDD (test-driven development).  I will probably talk more about TDD in a later post, but I believe one of the main (though rarely mentioned) advantages of TDD is that it makes code more verifiable.

Testing

Verifiability plays an important part in testing.  After all, you can't test something properly if it is difficult to verify that it is correct (that is, to show that it has no bugs).  I give an example below of code that is hard to verify, which makes bugs far more likely.

In extreme cases, software changes can be impossible to test.  For example, I was recently asked to fix spelling mistakes in some dialogs.  The problem was that the software was old and nobody knew how to get the dialogs to display, or even whether the code that displayed them was reachable.  (Some of the dialogs were not used anywhere in the code as far as I could tell.)  It makes no sense to fix something if you cannot verify that it has been fixed!


Debugging

Verifiability is also important in the debugging process.  Removing bugs has three phases:
  1. detecting the presence of a bug
  2. tracking down the root cause of the bug
  3. fixing the bug
The last stage is associated with the maintainability (or modifiability) of the code.  The first two stages are usually associated with verifiability, though I prefer to use an extra quality attribute, which I call debuggability, for how easy it is to track down the root cause of a bug, reserving verifiability strictly for how easy it is to detect the presence of bugs.

How Do We Make Software More Verifiable?

I can think of three ways to increase verifiability:
  • external testing tools
  • extra facilities built into the software such as assertions and unit tests
  • how the application code is actually designed and written

Testing Tools

There are many software tools that can be used to assist software testing. Automated testing tools allow testers to easily run regression tests.  This is very useful to ensure that software remains correct after modifications.

One aspect of code that is not often well-tested is error-handling.  Test tools, mock objects, debuggers and simulators allow simulation of error conditions that may be difficult to reproduce otherwise.  For example, a debug heap library is invaluable in detecting memory allocation errors in C programs.
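
As a concrete illustration, here is a minimal sketch of allocation failure injection (test_malloc and fail_countdown are hypothetical names, not from any particular library):

#include <stdlib.h>

static int fail_countdown = -1;   /* -1 means never inject a failure */

/* Drop-in replacement for malloc() used in test builds */
void *test_malloc(size_t size)
{
     if (fail_countdown > 0 && --fail_countdown == 0)
          return NULL;            /* simulate out-of-memory */
     return malloc(size);
}

Setting fail_countdown to 3 before a test run makes the third allocation fail, so the error-handling path actually gets exercised.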

Debuggers obviously make software more debuggable, but they can also be useful for verifiability.  Stepping through code in a debugger is extremely useful for understanding code and finding problems that may not have otherwise been seen.

Assertions

Assertions are an old but crucial way for software to self-diagnose problems.  Without assertions, bugs can go unnoticed and the software may silently continue in a damaged state (see Defensive Programming).

In my experience, assertions are even more important for debuggability.  I have seen problems in C (almost always rogue pointers) that have taken days, if not weeks, to track down.  Careful use of assertions can alert the developer to a problem at its source.
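
To illustrate the sort of thing I mean, here is a minimal sketch (copy_name is just a made-up example function):

#include <assert.h>
#include <string.h>

/* Copy a string into a fixed-size buffer.  The assertions catch a bad
   call (rogue pointer or undersized buffer) at its source, rather than
   letting it silently corrupt memory. */
void copy_name(char *dest, size_t dest_size, const char *src)
{
     assert(dest != NULL);
     assert(src != NULL);
     assert(dest_size > strlen(src));    /* room for the terminating NUL */
     strcpy(dest, src);
}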

Unit Tests

Unit tests (and TDD) are important for verifiability.  If you have proper and complete unit tests then you have a straightforward and obvious indicator of whether the code is correct.
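
To give the flavour, a unit test need be nothing more than code that checks the code.  A minimal sketch (clamp is a hypothetical function under test):

#include <assert.h>

/* Function under test (a hypothetical example) */
int clamp(int value, int lo, int hi)
{
     if (value < lo) return lo;
     if (value > hi) return hi;
     return value;
}

/* The unit test: run it after every change for a quick correctness check */
void test_clamp(void)
{
     assert(clamp(5, 0, 10) == 5);       /* in range: unchanged */
     assert(clamp(-3, 0, 10) == 0);      /* below range: clamped to lo */
     assert(clamp(42, 0, 10) == 10);     /* above range: clamped to hi */
}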

In my experience most bugs that creep into released code are due to modifications to the original design.  Poor changes are made because the modifier does not fully understand how the code works -- even if the modifier was the original designer, they can still forget!  This is further compounded by the fact that the modified code is often less thoroughly tested than the original version.

Mock objects are useful by themselves, for example for testing error-handling code.  (When not used with unit tests they may be called simulators, debug libraries, automated test harnesses, etc.)  Mock objects are particularly useful with unit tests, as they allow testing of a module to be isolated from other parts of the system.

I will talk more about verifiability and unit tests in a future post, hopefully soon.

Software Design

Many aspects of good design that improve the maintainability of code are also good for verifiability.  For example, decoupling of modules makes the code easier to understand, but it also makes the modules much easier to test individually.  Testing several simple modules is always simpler than testing a single large module with the same features, even if only because the permutations to be tested increase dramatically with the number of inputs: two separate modules with three boolean inputs each need at most 2 x 8 = 16 test cases, whereas a combined module with six inputs needs 64.  (See SRP and other design principles in Software Design.)

Code

Perhaps the simplest way to increase verifiability is simply to write good code.  The same principles that a software architect or designer uses also apply at the coding level.

One of the worst programming practices is to use global variables to communicate between modules or functions.  This results in poorly understood coupling between different parts of the system.  For example, if the behaviour of a function depends on the value of a global variable, it becomes very difficult to verify that the function works correctly, since you can't ensure that some outside influence won't change that variable.
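
A contrived sketch of the difference (the functions and the global are hypothetical):

/* Hard to verify: behaviour depends on a global that any other part
   of the program might change at any time */
int g_units;                             /* 0 = metric, 1 = imperial */

double display_length(double metres)
{
     return g_units ? metres * 3.28084 : metres;
}

/* Easier to verify: the dependency is an explicit parameter, so every
   call can be checked in isolation */
double display_length2(double metres, int units)
{
     return units ? metres * 3.28084 : metres;
}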

Another problem is large functions that do too much (remember SRP).  Again, the more parameters (or other inputs) a function has, the harder it is to check that all combinations are handled correctly.

Similarly, code with many branches (ie, if statements) is hard to verify, since there may be a large number of different paths through the code.  Even ensuring that every line is executed (with a code coverage tool) is insufficient; to be sure the code has no bugs you may need to test every combination of code paths.
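
For example, in the contrived function below two test cases can achieve 100% line coverage, yet two of the four paths are never executed (and with n independent branches there are 2^n paths):

/* Two independent branches: tests for (1,1) and (0,0) execute every
   line, but the (1,0) and (0,1) paths remain completely untested */
int adjust(int x, int add_offset, int double_it)
{
     if (add_offset)
          x += 10;
     if (double_it)
          x *= 2;
     return x;
}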

Code Example

Finally, I will give an example of some C code that is virtually impossible to properly verify due to the way it was written.

Some time ago I worked on some C code for an embedded system.  A lot of the code made use of time (and date) values returned from the system.  Time was returned as the number of milliseconds since midnight.

One particular function was used to determine if a process had timed out.  Like the rest of the code it used this system "time" value. The code was very simple to implement except for the special case where the start time and the end time straddled midnight.  (Normally the code below would not be executing in the middle of the night but under unusual circumstances it could.)  Here is a simplified version of the code:


/* return 1 if we have already passed end time */
int timed_out(long start, long end)
{
     long now = get_time();
    
     if (start == end && end != now)
          return 1;

     if (end > start)
     {
          if (now > end || now < start)
               return 1;
     }
     else if (end < start)
     {
          if (now < start && now > end)
               return 1;
     }
     return 0;
}

Do you think the above code always works correctly?  Just looking at it, it is hard to tell.  It would be virtually impossible for testers to verify that it works correctly, even if they were aware that midnight was a special case in the code.  Setting up a unit test would also have been difficult, since the software was not permitted to change the system time (though a mock object could have been created to emulate the system time).
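
For instance, assuming get_time() could be replaced at link time with a mock version, a test for the midnight case might have been sketched like this (mock_time and the test function are made-up names):

#include <assert.h>

int timed_out(long start, long end);     /* the function under test */

static long mock_time;                   /* milliseconds since midnight */

long get_time(void)                      /* linked in place of the real one */
{
     return mock_time;
}

void test_timed_out_across_midnight(void)
{
     long start = 86395000L;             /* 23:59:55 */
     long end   = 5000L;                 /* 00:00:05, straddling midnight */

     mock_time = 86398000L;              /* 23:59:58: should not be timed out */
     assert(timed_out(start, end) == 0);

     mock_time = 6000L;                  /* 00:00:06: past the end time */
     assert(timed_out(start, end) == 1);
}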

You might also notice that the code suffers from poor understandability (and hence poor maintainability) as well as poor verifiability.

A much better way to handle such timing events is to use a type like time_t or clock_t, which does not reset at the end of the day.


#include <time.h>      /* for clock(), clock_t, CLOCKS_PER_SEC */
#include <assert.h>

/* return 1 if we have already passed end time */
int timed_out(clock_t end)
{
     clock_t now = clock();
     assert(now != (clock_t)-1);    /* clock() returns (clock_t)-1 on failure */
     return now >= end;
}

Note that the above code is only suitable for low-resolution timing, since the C standard says nothing about the resolution of clock_t (the value of CLOCKS_PER_SEC is implementation-defined).  However, in this example 1/10 second was sufficient, and CLOCKS_PER_SEC == 1000 for the compiler run-time being used.

This code is more verifiable.  First, it is fairly simple to verify manually, since anyone reading it can understand it easily.  It is also easy to step through in a debugger to make sure it works.  Moreover, it needs no special testing for midnight.
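
For completeness, a hypothetical caller simply computes the deadline once and then polls it (using the headers included above):

/* Hypothetical usage: a two-second timeout */
void wait_for_device(void)
{
     clock_t deadline = clock() + 2 * CLOCKS_PER_SEC;

     while (!timed_out(deadline))
     {
          /* poll the device, do other work, etc. */
     }
}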

Summary


It is important to create software that is easy to test. I have talked about using tools and extra code (such as unit tests) to enhance verifiability, but more important is writing good code in the first place. Now that I think about it, verifiability is probably affected most by the data structures that are used (such as the time-of-day format in the above example). The data structures define the code; get the structures right and the software is likely to be much more verifiable, and consequently less likely to have bugs.

Furthermore, in my experience, writing software with testing in mind produces better code. This is one of the (many) advantages of unit tests (and even more so of TDD). If you create unit tests as you create the code (or even before you create the code, in the case of TDD) then you will create code that is not only more verifiable but also better designed in general.

Tuesday, 13 November 2012

Reusability Futility

The software industry regularly gets carried away with some new fad or other. Sometimes these are good ideas, but because they are over-hyped and taken to extremes they can actually be counter-productive. I have seen many such fads in my career. A big one, which took off in the late 1980's, was the emphasis on making code reusable. It's even said that this led to the development, or at least the popularization, of object-oriented languages.

Now, I think reusability of code is great. It has had enormous benefits on productivity almost from the birth of the industry. However, many people have tried to take it too far. There are more important attributes of software, and pursuing reusability can be to the detriment of other attributes, such as understandability and efficiency.

Reuse is also often attempted on modules that are too large. My rule of thumb is that the larger a module, the less likely it is to be usefully reusable. I expound on this further below, but first I will explain software quality attributes.

Quality Attributes

You may recall that in February I mentioned the difference between two types of quality attributes of software which I gave the names user and developer to emphasize who directly deals with their effects (see Importance of Developer Quality Attributes). User quality attributes are things like correctness, reliability, usability, efficiency, security, compatibility, etc.

Developer quality attributes are things like understandability, maintainability, portability, verifiability, etc. Typically the users of the software (and other stakeholders) are not troubled by, or even aware of, these developer quality attributes but that does not mean they are not important to them. In fact, my contention has always been that, for most large-scale projects, maintainability (and the closely related understandability) are by far the most important attributes. (Yes, even more than correctness since if software is incorrect you can fix it, but if it is unmaintainable then, unless you never have to change it, it is worthless.)

Another attribute of concern to developers is reusability. It is important to understand that maintainability and reusability are very different, though there is generally some correlation between the two. I mention this because I worked with a colleague (AK - an OO "expert") in the late 1990's who could not even understand the distinction, so I will explain:
  • Maintainability (or sometimes modifiability) is how hard it is to fix or enhance the software. It depends somewhat on other attributes, most importantly understandability.
  • Reusability is how hard it is to make use of the same code in more than one place. It is important for the DRY principle (see Software Design).

The Rise of Reusability

In the 1970's a great debate began in the software industry about the software crisis. Rapid advances in computer hardware meant that more and more ambitious software projects were being attempted. Lack of experience with large software projects meant that most software was late, over-budget and of poor quality. It was thought that this situation would become much worse in the future due to the anticipated hardware advances (which, of course, did eventuate). In brief, the crisis was that there would be a huge shortfall in the number of experienced programmers required to develop the software that would make good use of the hardware.

At the same time, it was seen that there was a large amount of redundancy, as the same or similar software was being written over and over. To stop developers continually reinventing the wheel, ways were sought to increase the reusability of code. There was already a lot of reuse of low-level libraries (such as the C run-time library), but some people thought that high-level (ie, generally large) components could be created which could then simply be slotted together to create useful software with little or no programming involved.

This emphasis on reuse had reached ridiculous levels by 1995, when I remember seeing a full-page ad from Microsoft in many magazines I read at the time. It featured a (probably fictitious) recycling fanatic named Edwin Hoogerbeets, who lived in a house made from beer bottles.

An idea that a lot of people had was to emulate the advances in hardware by applying similar techniques to software. For example, a major hardware advance was the IC (integrated circuit, popularly known as the silicon chip), which led to a push for similar software components. Many people even used the term "Software IC" to describe these components.

Also around this time (the late 1980's), object-oriented languages started to gain popularity. To a large extent this was fuelled by the obsession with reusability, as it was believed that OO designs enhanced reusability. Indeed, many OO evangelists heavily emphasized this.

Many of these ideas were adopted by, or even originated in, educational institutions. The effect was a whole generation of software designers who were indoctrinated into the importance of making code reusable. This is the root cause of many problems with software created in the last 20 years (and probably still being created today).

YAGNI

When I first read about Extreme Programming (late last century :) I liked most of what I read, but I was not certain about the idea of YAGNI (You Ain't Gonna Need It). This principle says to only do enough to get things working according to current requirements. The idea is that developers should refrain from adding extra features for increased flexibility, perceived future needs, or just because they are cool.

My first thought was that it is stupid to design software without regard to any future requirements. On the other hand I could see where the idea was coming from since I had worked with many software components (such as the Windows API) that were tedious and time-consuming to use since they made you consider all sorts of irrelevant details. This problem was caused by a design that allowed you to accomplish almost any conceivable task (including many pointless ones) without regard to how easy it was to accomplish simple ones.

I don't want to go into a detailed discussion of YAGNI (maybe later). My main point here is that I believe YAGNI was a reaction to the behaviour of many software developers and architects. Instead of designing software components for the particular task at hand there was a push to make them as flexible and general purpose as possible in the name of reusability. In all companies I have worked for the software "architects" would often design the equivalent of a "Swiss Army Knife" when a simpler, more specific, tool was required.

This extra complexity in the design greatly detracts from understandability and hence maintainability (which I have already said is the most important "ility").

An Analogy

I am loath to analogize software design with the design of physical machines, as I believe the two are very different. However, I think there is some value in the analogy below.

But first I would like to point out that the history of invention is littered with failed designs that emphasized reusability. I recently read about a 19th-century American system of firearms built from interchangeable parts, which could be assembled to create various weapons (guns, rifles, etc.). It was an ingenious system, but it was never successful because machines designed for a specific purpose are easier to use and maintain than ones designed for flexibility and reusability.

The analogy I have chosen is aircraft design. If the reusability obsession of the software industry were applied to the aeronautical industry, a manufacturer would design a single wing "module" and attempt to fit it to all its aircraft, from a tiny twin-engine to a super-jumbo. Of course, the wing would have to be flexible enough to work with all sizes of aircraft. Perhaps large aircraft would use multiple wings (4, 8 or even 16), and small aircraft might get away with one.

We all know that planes are not designed like that. As far as I know (which is very little) every plane has its own custom wing design. Perhaps the same wing design could be used on two different models as long as they had similar size and performance characteristics, but what aircraft manufacturer is going to create two nearly identical models that will just compete with each other?

Of course, even in aircraft manufacture there is a lot of reuse. It's not inconceivable that all models from one manufacturer use the same rivets for the airframe, electrical wiring, seats, cables, pneumatic pumps, etc. This has also always been done in the software industry where the same low-level routines are used for a myriad of things.

The electronics industry is lucky in that fairly high-level components can be created that can be used in different products. Other industries, like the aircraft and software industries, are not so lucky. In software design, as in many other areas of design, creating high-level reusable components results in cumbersome products that just do not work.

Summary

Emphasizing reusability was one of the big mistakes of the software industry in the last few decades. I remember almost 20 years ago telling my colleagues "forget reusability, concentrate on maintainability". The trouble is software designers have been taught to make even high-level components flexible and reusable. This flexibility comes at the cost of increased complexity. Reusable components are usually harder to understand (and often also less efficient) which leads to software that is hard to build and maintain.

Fortunately, many in the industry are now saying the same thing. I believe it was overly complex software that led to the idea of YAGNI. More recently the criticism of reusability has become more open -- for example, Kevlin Henney's rule Simplicity Before Generality, Use Before Reuse, from the recent book 97 Things Every Software Architect Should Know. I couldn't agree more!