Saturday 26 October 2013

Change (Unit Tests Intro)

A perennial problem in software development is coping with change.
It's happened again!! I started writing a post about Unit Tests (see my next post, coming soon) and ended up writing a lengthy introduction about change. So I have split the intro into this separate post, so I don't overload you and so you can skip it if you're not interested. (However, you should probably read the Conclusion below anyway.)
This post explains the types and reasons for change and the costs involved. It also looks at how change is traditionally handled and how Agile development and Unit Tests can help.

The Cost of Change

First, it may be helpful to understand one aspect of change - how much it costs, and particularly how the cost varies over time. This has been extensively studied and documented under the Waterfall development model - for example, search for Barry Boehm's articles on the subject if you are interested.

However, I can save you the trouble by giving this summary: the longer it takes to find and fix a mistake, the greater the cost. As the graph below shows, the cost grows much faster than linearly, in fact exponentially.



Diagram 1. Cost of Change (Waterfall model)

The rule of thumb I was given (about 20 years ago) is that for every phase of development the cost goes up one order of magnitude. So an analysis mistake that is not picked up until testing costs roughly 1000 times more to fix, since it has passed through three phase boundaries (analysis > design > coding > testing), while a coding bug picked up in testing costs only about 10 times more to fix than if it had been found straight away.
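To make the arithmetic concrete, here is a tiny Python sketch of that rule of thumb (the phase names and the factor of 10 are just my rule of thumb, not measured data):

    # A sketch of the "order of magnitude per phase" rule of thumb.
    # The phase list and the factor of 10 are assumptions from the rule above.
    PHASES = ["analysis", "design", "coding", "testing"]

    def cost_multiplier(introduced, found, factor=10.0):
        """Relative cost of fixing a defect 'found' some phases after it was 'introduced'."""
        return factor ** (PHASES.index(found) - PHASES.index(introduced))

    print(cost_multiplier("analysis", "testing"))  # 1000.0 - analysis mistake found in testing
    print(cost_multiplier("coding", "testing"))    #   10.0 - coding bug found in testing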

I believe there are two fundamentally different reasons for this, which I have called the cost of delay and the cost of rework.

Cost of Delay

Experienced programmers know that if you find a bug while working on the code, you can often correct it quickly or even immediately, but fixing the same bug a month or two down the track can take hours or even days, and even then the fix may not be done properly. This is primarily due to limitations of the human brain - you will have forgotten exactly how the code works. (Of course, if the original developer has left the company and there are no Unit Tests or comments/documentation then it can take much longer.)
The Cost of Delay is ...
due to limitations of the human brain.

Worse, you may not even realize you have forgotten how the code works and may make changes based on a misapprehension. In my experience this is the worst source of bugs. I have often written, thoroughly tested and understood a module that was working perfectly, only to have a simple change made later cause all sorts of problems. Again, as I discuss later, Unit Tests help here.

The cost of delay is also due to other tedious and time-consuming problems, like setting up an environment for testing and debugging. You may not even be able to reproduce the problem in the latest version of the code, so you need to find and rebuild the older version in which the problem occurred in the field - and that may require tracking down old versions of compilers, tools, libraries etc, if that information was even written down somewhere.

All this should convince you that it is best to find bugs as soon as possible - something I previously discussed in JIT Testing. Actually this is a good example of the principle of DIRE (Don't Isolate Related Entities). In this case we are talking about not isolating the coding from the testing in time. This is one advantage of using Unit Tests, and particularly TDD (Test Driven Development).

More generally, reducing the cost of delay is one of the principal advantages of Agile methodologies. The continuous feedback from the customer/users (eg at least at the end of every Sprint in Scrum) means that problems are found and fixed much more quickly when the context is still fresh in everyone's mind.

Cost of Rework

The other problem with delaying changes is that in the meantime a lot of work may have been done, based on the original software. This problem is not due to the length of time that has passed but simply due to the fact that this work has to be repeated. For example, extensive testing may have been performed that would need to be redone to ensure that no new bugs were introduced.

Another cost, for software already released, would be notifying and updating users of the problem. Rolling out a new release, even just for a bug fix, can be costly.

There can be coding costs too, since the change may require a major internal re-design. In this case, the behaviour of the code must be understood once more, the code modified, and once again a great deal of regression testing performed to make sure nothing has been broken.

Unit Tests can help reduce the cost of rework too. If the original software has comprehensive Unit Tests then changes can be made to the code easily. Generally, if all the Unit Tests pass, it is safe to assume that no bugs have been introduced by the changes. This can reduce the costs associated with coding, debugging and even regression testing.
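As a minimal (and entirely hypothetical) Python example of what I mean, the function and tests below are my own invention; the point is simply that a later change which breaks existing behaviour is caught the moment the existing Unit Tests are run:

    import unittest

    def apply_discount(price, percent):
        """Return price reduced by the given percentage (hypothetical example)."""
        return price * (1 - percent / 100.0)

    class TestApplyDiscount(unittest.TestCase):
        # These tests pin down the existing behaviour, so any later change
        # that accidentally alters it will fail immediately.
        def test_ten_percent(self):
            self.assertAlmostEqual(apply_discount(200.0, 10), 180.0)

        def test_zero_percent(self):
            self.assertAlmostEqual(apply_discount(200.0, 0), 200.0)

    if __name__ == "__main__":
        unittest.main()

If someone later "improves" apply_discount() and gets the formula wrong, the failing test points straight at the problem, instead of the bug surfacing weeks later in regression testing (or in the field).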

But there is an even greater advantage to Unit Tests. It is not uncommon to find that, due to the large cost of rework (as explained above), software changes are not done properly but in a manner that minimises these costs. That is, changes are made in the way that minimises the risk of introducing new bugs, not in the way that is best for the code.

I can give many examples of how this occurs, but a common one is to duplicate a complete function or module and modify the copy, leaving the original untouched (so that the original handles the commonly used existing functionality in exactly the same way). This strategy results in code duplication (often on a massive scale), which contravenes the principle of DRY (see Principles of Software Design).
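Here is a small Python sketch of that practice (the function names and numbers are made up for illustration), alongside the DRY alternative that becomes safe once Unit Tests cover the existing behaviour:

    # The risk-averse anti-pattern: copy the whole function and tweak the copy,
    # so the original (and commonly used) code path cannot possibly change.
    def calculate_shipping(weight):
        return weight * 1.20

    def calculate_shipping_for_new_customers(weight):   # near-duplicate of the above
        return weight * 1.20 + 5.0                       # only this line differs

    # The DRY alternative: one function, with the variation made explicit.
    # This looks like the riskier change, but with Unit Tests covering the
    # existing behaviour it is actually the safer one in the long run.
    def calculate_shipping_dry(weight, new_customer=False):
        cost = weight * 1.20
        if new_customer:
            cost += 5.0
        return cost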
I briefly discussed this in early 2012 (see When Good Programs Go Bad) and a reader responded with a quote from Bruce Eckel:

"Management's reluctance
to let you tamper with
a functioning system
robs code of the resilience
it needs to endure."

This is very insightful, but Unit Tests can help to assuage that reluctance.

This sort of practice is so ingrained in the industry that most developers do not even realise they are doing it. However, developers are not entirely to blame here since it is often the managers that react very badly when bugs are found (and it was probably the same managers who imposed draconian deadlines that precluded creation of Unit Tests).

If you haven't seen the light yet, here is a summary: Unit Tests allow software to be changed without compromising the original design and without fear of introducing new bugs. The code can then adapt and evolve and never needs to be completely rewritten or discarded.

Reasons for Change

Why do we need to change software? Ask most people and you get two answers: fixing bugs and adding features. But there is a lot more to it than that! More generally, changes are made to improve the quality of the software (of which bug fixing is just one aspect) and, yes, to add enhancements.

Enhancements

I really don't have much to say about adding functionality, except that there is often a reluctance to add new features due to the cost and the possibility of breaking existing features. Unit Tests help by making the code more modifiable (see my next post) and by catching bugs caused by the changes.

Fixing Defects etc (User quality attributes)

The other reason for change is to improve the quality of the software. There are many aspects to code quality, not just correctness (which is what fixing bugs is all about). The software might also need to be improved if specific problems have been identified in areas such as performance, usability, reliability and other "user" quality attributes.

Refactoring (Developer quality attributes)

What are often neglected are changes that improve developer quality attributes, especially maintainability (but also verifiability, portability, reusability, etc). See my previous post on The Importance of Developer Quality Attributes for an explanation of the difference between user and developer quality attributes. Changing the software to improve developer quality attributes is known as refactoring, and many people are realising that it is essential to the long-term viability of any software.

There are actually many benefits to software that is easily modified (and hence easily refactored) as I discuss later. I will mention Unit Tests again here as they allow you to improve the software without fear of introducing bugs.
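For example (a sketch of my own, not from any real project), a refactoring changes how the code does something without changing what it does, and a Unit Test written against the old behaviour tells you immediately whether the new version still does the same thing:

    import unittest

    # Original, deliberately clumsy implementation.
    def total_owing(invoices):
        total = 0.0
        for inv in invoices:
            if inv["paid"] == False:
                total = total + inv["amount"]
        return total

    # Refactored implementation: same behaviour, clearer code.
    def total_owing_refactored(invoices):
        return sum(inv["amount"] for inv in invoices if not inv["paid"])

    class TestTotalOwing(unittest.TestCase):
        INVOICES = [{"amount": 100.0, "paid": True},
                    {"amount": 40.0,  "paid": False},
                    {"amount": 60.0,  "paid": False}]

        def test_refactoring_preserves_behaviour(self):
            self.assertAlmostEqual(total_owing(self.INVOICES), 100.0)
            self.assertAlmostEqual(total_owing_refactored(self.INVOICES), 100.0)

    if __name__ == "__main__":
        unittest.main()

In practice you would simply replace the body of the original function and rerun the same tests; I show the two versions side by side only to make the comparison visible.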

Change Management

Before Agile methodologies came along, change was thought of as something to be avoided. Of course, considering the cost of change graph (above), this was completely understandable. I will first look at how change has traditionally been managed and then look at the Agile approach.

Eliminate Mistakes

If you don't make mistakes then you don't have to fix them. This is the attitude epitomized by the SQA motto Do It Right the First Time, or simply Right First Time. Over many decades, starting sometime in the 1960s, a huge number of software projects have failed, and the reason given has invariably been inadequate, poorly documented, or continually changing requirements. (Some studies have found poor estimation as the primary cause, but that is just a result of poor requirements.) In other words: we didn't really know what we were doing.
“ Right First Time ”

So the thinking was always that more time should have been spent on analyzing the problem in order to eliminate all the mistakes and omissions in the requirements and anticipate any "unanticipated" changes.

This is why there has been a huge amount of research, and even more debate, on how to avoid the mistakes including:
  • better and more thorough analysis techniques
  • better communication with the customer
  • the invention of various estimation techniques
  • using prototypes so the customer better understands what is specified
  • modelling languages and diagrams
  • formal proofs of correctness
  • etc
The problem is that, with any reasonably large, real-world software project, you can never get it right the first time! Moreover, spending a lot of time and effort trying to do so is itself costly and time-consuming. The customer or sponsor also becomes concerned when nothing appears to be happening except a lot of people trying to understand the problem.
You can never get it right first time!

Further, not all change comes about from mistakes. Even if you can avoid making mistakes (which you can't) there will be other reasons for change. You can't anticipate unanticipated changes. Some change you just can't avoid (such as regulatory changes).

Discourage Change

Whether deliberate or not, another strategy commonly used in a large project (under Waterfall) is to do everything possible to prevent changes from being made.

First, a complex and tedious procedure, with lots of forms, is set up for the approval of all changes. All proposals go to a change review board consisting of managers, analysts and architects with the knowledge and ability to give reasonable grounds for rejecting almost any proposal.

In this sort of environment refactoring the code is never even considered. This leads to a snowball effect; if code is not refactored to make it more maintainable then it becomes more and more expensive to make changes for other reasons.

In the end only the worst bugs and the most desirable new features are approved. It is hard to quantify, but the resulting lost opportunities can make a huge difference to the long-term viability of the product. Advances in software and hardware are accelerating -- if the software cannot be adapted to use them then it will be at a disadvantage to competitors who can.

Software that does not adapt to change will eventually atrophy and die.

Minimize Risk

OK, a change has been approved -- a major new feature or perhaps a serious bug needs to be fixed. But the problems don't stop there. There is the even more pernicious problem of how the change is made.

Most of the time changes are made, not in the best way, but in a way that reduces short-term risks. This may be a conscious decision of the designers but more often is due to the way the programmers work. (See my example above under The Cost of Rework).

Why do programmers work like this? It is sometimes due to laziness or fear (which is the main motivation behind the XP idea of Courage). But it's more often due to conditioning by poor management practices (see Why Good Programs Go Bad for a full explanation):
  •  programmers do things the easy way (Code Reviews and Unit Tests can help here)
  •  unrealistic deadlines are imposed, with no time allocated to refactor later
  •  management is intolerant of bugs caused by making changes properly (Unit Tests help here)
The end result is software that degenerates into the classic unmaintainable Ball of Mud.

Agile Approach

Agile methodologies take a completely different approach, by recognizing that when creating software you can't even get close to getting it right first time. The Agile catch cry is instead Embrace Change. Many people think this simply means we have to accept change (and its costs), but Agile actually questions the Waterfall assumptions and tries to find ways to genuinely reduce the costs of change.

In other words, rather than just coping with the horrible cost of change curve above (associated with Waterfall methodologies), Agile changes the shape of the curve. There has been a lot of debate about how the curve looks under Agile, but it might be something like this:

Diagram 2. Cost of Change (Agile)

However, in all the debate about the shape of the curve, an important point is missed. Mistakes are caught sooner due to continuous feedback from the customer, so we don't get so far along the curve before finding and fixing defects. For example, many problems are found under Scrum in the Sprint Review which would not be found until months later under the Waterfall model.

Further, Unit Tests, which I consider a fundamental part of Agile, have an even greater benefit. If Unit Tests are written at the same time as the code (or better still TDD is practiced) then many bugs will be found straight away that would not be found till later. This reduces the costs of delay.
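As a rough sketch of the TDD rhythm (again, the example is hypothetical), the test is written first and fails, then just enough code is written to make it pass:

    import unittest

    # Step 1 (red): write the test first. It fails until is_leap_year() exists
    # and behaves correctly. (The test can refer to the function here because
    # Python only looks the name up when the test actually runs.)
    class TestIsLeapYear(unittest.TestCase):
        def test_typical_years(self):
            self.assertTrue(is_leap_year(2012))
            self.assertFalse(is_leap_year(2013))

        def test_century_rules(self):
            self.assertFalse(is_leap_year(1900))
            self.assertTrue(is_leap_year(2000))

    # Step 2 (green): write just enough code to make the tests pass.
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    if __name__ == "__main__":
        unittest.main()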

Even more important is that, if changes are required, Unit Tests allow them to be made more easily and reliably. This reduces the cost of rework.

Finally, the disincentive to refactor the code is greatly reduced by Unit Tests, which can greatly reduce the cost of lost opportunities and extend the life of the software.

Conclusion

Using a Waterfall development methodology the cost of change is prohibitive, so changes are either avoided altogether or made in a way that reduces short-term risk (eg, the risk of introducing new bugs or of needing a full regression test). Software developed and maintained like this becomes ever more expensive to change. To remain competitive (unless you have a nice cushy monopoly on your market) it will eventually need to be discarded and rewritten from scratch.

An Agile development methodology turns this problem on its head by embracing change and reducing the costs of change. For example, the continuous customer feedback means problems are found much more quickly.

Further, the Agile approach to change is greatly enhanced by use of Unit Tests since they:
  • make code more maintainable and verifiable
  • reduce the cost of change by allowing changes to be made easily
  • allow code to be refactored to take advantage of better design/new technology
  • facilitate making changes properly, thus avoiding a maintenance nightmare
  • allow changes to be made without fear of bugs or lots of regression testing
I will elaborate on this and other advantages of Unit Tests next...
