Saturday, 25 February 2012

Scrum Problems

Having used Scrum and read a lot about it, I have found it has some problems.  If you try to explain these problems to a proponent of Scrum the reply is always "you don't understand it" or "you aren't doing it properly".  To an extent this is true, but some teams try and try and eventually give up on Scrum.  There must be problems inherent in Scrum -- even if it is just that some people can't learn it or can't do it properly.

Here I give an account of some problems I have experienced with Scrum or similar agile methodologies.  For some I give methods to fix or avoid the problem.  For the others I am open to suggestions.

Management

The common problem that Scrum (or any improvement requiring cultural change in an organization) encounters is lack of management commitment.  Often management think all they have to do is spend some money on Scrum training and they will get an immediate boost in productivity and responsiveness.

Many companies spend a small fortune on Scrum training for the team (the "pigs") but give managers and other "chickens" no guidance.  It is most important that management understand how Scrum works.  In particular, managers (and even the scrum master) need to refrain from interfering with the process and let the team "self-organise".

Management also needs to be responsive to requests from the scrum master.  If the scrum team perceive that management are not committed to the process then Scrum will be doomed to failure.

Product Owner

Most product owners do a good job but they can be the weak link.  There can be major problems if the product owner cannot do their job properly.

One problem is that the client may be unable or unwilling to provide a product owner.  In this case it is up to the team to provide their own proxy product owner - typically someone with a title like "business analyst".  Even when the product owner is provided by the client, they can be so far removed from the actual users of the software that they are not in touch with what is really required.

Of course, the whole point of Scrum is to make things visible.  If the product owner is not doing a good job then, even as early as the first sprint, problems should be obvious.  In practice, however, everybody tends to trust the product owner and nobody else (ie, the chickens) checks how the team is progressing.  The project may proceed for many sprints with the product owner perfectly happy, but it's not until a version is released to real users that anybody realises the product owner's understanding of the project is very different from that of the users and other stakeholders.  This problem may have been visible but there was nobody there to see it.  Again, if managers (see above) take an interest (eg, by attending sprint reviews) then this situation can be avoided.

A related problem is when the product owner cannot make proper decisions due to conflicting priorities from stakeholders.  It is common for different employees of a customer's organization to have different agendas.  (Of course, this is not a problem of Scrum but may occur with any type of project.)  It is the responsibility of the product owner to communicate with all stakeholders and resolve any differences.  If necessary, the customer's senior management may need to be called in to resolve any conflicts.

Finally, the most common problem I have encountered with product owners is their tendency to vacillate.  I am not sure why this happens, but I suspect there is an element of megalomania in having the team bend to their every whim.  Some product owners are oblivious to how their decisions can result in major headaches for the team.

The main trouble here is that *when priorities keep changing the team has no clear focus*.

As an example of how a bad product owner can cause chaos I will relate a story that happened to me.  The product owner had created a product backlog which was topped by some simple but crucial tasks, followed by a large (but less crucial) enhancement.  Since the crucial tasks and the large task would have taken too long to complete in one sprint we resolved to finish the crucial tasks and a few other lower-priority tasks and to do the large task in the next sprint.

When it came time for the next sprint the product owner had reorganized the backlog and the task we had all been expecting had been pushed way down the list.  The first problem was that we had been mentally preparing ourselves for this large task by thinking about it and even doing a bit of research (grooming).  Worse was the demotivating effect of the product owner making seemingly arbitrary changes.

It's important to make sure the product owner understands Scrum and the development process.  The scrum master does have the authority to "fire" the product owner but this would typically be due to their interactions with the team.  The scrum master could be largely oblivious to the above-mentioned problems.

Backlog

Another problem related to the product owner is the prioritization of the product backlog.  A major rule of Scrum is that the product owner orders the product backlog according to value to the business or business functionality.  I think this is a bad approach.

What I really hate about this approach is that it tends to ignore any user interface niceties that make the program enjoyable to use.  Many Scrum teams produce software that could at best be described as Spartan, and that I would just call plain ugly.

However, a worse problem is that most product owners have no appreciation of the development process.  Because the backlog is ordered by business functionality there is no room in the process for the most important work, such as refactoring to maintain the integrity of the design and prevent it from becoming an unmaintainable "ball of mud".

The whole emphasis in Scrum is on "user quality attributes" not the more important "developer quality attributes".  This is covered in more detail in my previous blog post "The Importance of Developer Quality Attributes".

Team

As I have learnt more about the "proper" way to do Scrum I have discovered many things that surprised me ...

The first time I used Scrum all the team members were basically programmers.  We did not have separate analysts, designers, testers, etc.  Everyone contributed to the design to some extent, though of course, the more experienced team members did most of the architecture and high-level design.  Analysis was done by the team together with the product owner.  Everyone tested their own code (though the customer had acceptance tests and other tests).

Everyone in the team was involved in all the steps towards delivering a product to the customer ready for their rubber stamp (acceptance tests) and subsequent release.  This was very empowering and motivating.

I have since learnt that the team in Scrum is supposed to be *multi-disciplinary*, which most people take to mean that the traditional roles of analysts, designers, architects, programmers, testers, etc are preserved.  This to me goes against the principles of empowerment and reverts to the "production line" mentality of Waterfall models.

It also means that there is far less flexibility for team members to self-organise.  If all team members have assigned roles then the work is effectively assigned to each member simply by the nature of the task - the programmer(s) write the code, the tester(s) test it, etc.  And what happens if your tester(s) are away - who does their work?  It also makes sprints like "mini-waterfalls", where at the start of the sprint there is nothing for the testers to do, and by the end the analysts and designers are twiddling their thumbs.

Another problem I have seen is that Scrum teams can become insular.  Although communication within the group can be greatly improved, communication with people outside the group can decline.  This is a common misunderstanding of how Scrum works.  Though Scrum restricts how outside influences affect the team, this does not mean that the team should not talk to people outside the group.  In fact, for the sake of visibility, a fundamental tenet of Scrum, they should do so.

Other

I think those are the major problems I have encountered.  Please feel free to comment on why these are not really problems or how they could be avoided.

There are other problems which I will have to leave for a later time, but I will briefly mention them here.  One is simply that some types of projects are not suitable for an agile approach.

Another difficulty is deciding on the amount and type of documentation that needs to be produced in Scrum (or any agile methodology).  Generally, I favour code over technical documentation for many reasons, but mainly because code is much more likely to be correct and complete.  I will explain all of this in an upcoming article on documentation.

One thing I just remembered is what I have always felt is a major failing of Scrum: it fails to discuss any sort of technical techniques and processes - the sort of things that XP goes into in depth.  The most important technique (and one I have been promoting since 1996) is the idea of unit tests.  With agile methodologies the use of unit tests is crucial but, amazingly, none of the books I have read on Scrum even mention unit testing.

Thursday, 23 February 2012

Shotgun Initialization

Today I had a debate with a colleague about initializing local variables in C.  I have had this debate many times over the last 20 or more years and thought I should clarify what is the best thing to do, or at least, my opinion of the best thing.

The Problem

Uninitialized variables are a perennial problem in C.  Writing through an uninitialized (but not necessarily NULL) pointer is particularly nasty, as it can overwrite memory almost anywhere (notwithstanding hardware memory protection).  Of course, any variable can be uninitialized, so we want to ensure that all variables are set to some (correct) value before they are actually used.
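
As a contrived sketch of the problem (my own example, not from any real code), writing through a pointer before it has been made to point at anything is undefined behaviour:

#include <stdio.h>

int main(void)
{
  int *p;             // uninitialized: holds whatever garbage was left on the stack
  int x;

  // *p = 42;         // undefined behaviour: could scribble on memory almost anywhere,
                      // crash, or appear to work, depending on that garbage

  p = &x;             // only now does p point somewhere valid
  *p = 42;
  printf("%d\n", x);  // prints 42
  return 0;
}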

Years of bad experience with this problem mean that experienced C programmers often use a form of defensive programming, where steps are taken to avoid the possibility of using memory that has not been set to some value.  For example, I have seen malloc() #defined so that the real malloc() is bypassed in favour of calloc().  The standard library function calloc() is useful because it initializes the allocated memory to zero, but that is not always appropriate.
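
The trick usually looks something like this (my sketch of the pattern; make_counters() is just an invented example, and redefining a standard library name like this is asking for trouble, but it shows the intent):

#include <stdlib.h>

// route every "malloc" in this file through calloc so the memory is
// always zero-filled; calloc(1, size) allocates size bytes of zeroed memory
#define malloc(size)  calloc(1, (size))

int *make_counters(size_t n)
{
  // expands to calloc(1, n * sizeof(int)), so all the counters start at zero
  return (int *)malloc(n * sizeof(int));
}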

Defensive Programming

Defensive programming is part of the culture of C.  For example, it is common to see code like this:

  for (i = 0; i < 10; ++i)   // defensive
    a[i] = f(i);

rather than:

  for (i = 0; i != 10; ++i)  // non-defensive
    a[i] = f(i);

Now these should do the same thing unless, by some strange action, the variable 'i' somehow skips having the value 10, in which case the non-defensive code will go into an infinite loop, corrupting memory until the program terminates due to a memory exception or something else nasty happens.  How could 'i' possibly do this?  Well, stranger things have happened.  It may be due to errant behavior of the function f().

Note that I use the literal 10 for simplicity.  In real code you would not use a magic number but something like sizeof(a)/sizeof(a[0]).

So which is better?  Well, at least when debugging, the non-defensive code is definitely better as the defensive code will mask the symptoms of the bug in f() and you may never be aware of it.

For a release build the defensive code is arguably better.  But surely it is better to find and fix the bug (a nasty one probably lurking in f()) than to try to recover (perhaps unsuccessfully) from it?

Don't get me wrong.  I think defensive programming is useful.  It is preferable to use software that might behave a little strangely once in a while, as long as it doesn't crash and lose all your data.  However, as we saw, defensive programming can hide bugs.  So you might find that you were using a word processor which did some odd things but seemed to recover and allowed you to save your document - but you might not discover till much later that the document you saved was actually corrupted.

The best strategy is to be defensive and (at least in debug builds) also try to detect the bugs you are defending against.  I use code like this:

  for (i = 0; i < 10; ++i)   // defensive condition ...
    a[i] = f(i);
  assert(i == 10);           // ... but a debug build still detects if 'i' went astray

Uninitialized Variables

To get back to the point of this article consider a function like this:

void g()
{
  int i;
  ....
  i = f();
  ...
  use(i);
  ...
}

The worry is that the initialization of 'i' is bypassed somehow and then 'i' is used with an invalid value.  Personally, I think that, as far as possible, variables should be initialized with their proper value when they are declared:

  int i = f();
  ...
  use(i);
  ...

However, this is not always possible in conventional C (at least in the more commonly used C89), which requires you to declare variables at the start of a compound statement.  Luckily, this is not required in C++ and is even actively discouraged.  (You can still have the problem in C++ if the initial value of all members is not known at the time of construction.)  A greater part of the problem is that convention (and many C coding standards) requires all local variables to be declared at the start of the function.

Debugability

The result is that many C programmers take the shotgun approach:

void g()
{
  int i = 0; // shotgun initialization
  ....
  i = f();
  ...
  use(i);
  ...
}

The trouble is that zero is not the correct initial value for 'i'.  The only good to come from this is that it makes the value of 'i' deterministic; otherwise it could contain any random value that happened to be left over in the memory used for the local variables.

Making the code deterministic makes it more debugable (see my previous blog post on software quality attributes).  So it can save time in tracking down a bug, because it is easy to reproduce.  However, it can also make the code less verifiable since it can hide the symptoms of the bug.  It may be better to initialize the variable to some strange value so that it is obvious when it has not been initialized properly.
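
For example (my own sketch -- BAD_INT and the helper functions are invented for illustration), a debug-only check makes a skipped assignment scream instead of quietly looking like a plausible zero:

#include <assert.h>

#define BAD_INT ((int)0xCCCCCCCC)   // obviously-wrong sentinel value

static int f(void) { return 7; }    // stand-in for the real computation
static void use(int i) { (void)i; } // stand-in for whatever consumes i

void g(void)
{
  int i = BAD_INT;       // unlike 0, never mistaken for a correct value

  i = f();               // if this assignment is ever skipped by mistake ...

  assert(i != BAD_INT);  // ... a debug build complains loudly right here
  use(i);
}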

This is the reason that the Microsoft VC++ debug code initializes all local variables with the hex value 0xCC (uninitialized heap memory is filled with 0xCD).  This obviates the need to do shotgun initialization in that environment since the run-time behavior cannot randomly change.

Disadvantages

We saw that shotgun initialization has the advantage of increasing code debugability, but what are the disadvantages?

First, it can make the bug harder to detect (the software is less verifiable).  If you are debugging and 'i' has the (incorrect) value of zero you may think that it has been set correctly.  But if it contained 0xCCCCCCCC your interest should be immediately aroused, and it's more likely to trigger an assertion or a run-time error.

Another problem is that when you initialize a variable to some arbitrary value it is harder for the compiler to tell you that something is wrong.  Many compilers will attempt to warn you if you use a variable that has not been initialized, but the redundant initialization silences that warning.
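
A contrived sketch of what gets lost (the function names and values are invented; compilers such as gcc and clang can do this sort of analysis when warnings are enabled):

int risky(int flag)
{
  int i;        // no initializer
  if (flag)
    i = 42;
  return i;     // a good compiler can warn that i may be used uninitialized here
}

int hidden(int flag)
{
  int i = 0;    // shotgun initialization
  if (flag)
    i = 42;
  return i;     // no warning now, but if 0 is not a valid value the bug is
                // still there - it is just silent
}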

Further, if the code that uses 'i' is removed at a later date, the redundant initialization will prevent many compilers from warning of an unused local variable.  Because 'i' is initialized, it is not flagged as being unused.

However, my biggest concern is that the code becomes confusing.  The initialization of 'i' to zero serves no purpose and may confuse someone reading the code.  One of the biggest code smells is finding redundant code.  I have often found that such redundancy inhibits changes to the code and can itself lead to bugs.

Last (and least), it is less efficient to write the same memory twice - once with zero and then again with the real initial value.  One of the few things I dislike about C# is that you cannot avoid having an array initialized to zeros, even if you are going to set the elements to some other value.  For a large array this can take an appreciable amount of time.

The Bug is The Problem

The bottom line is that rather than trying to handle bugs gracefully, the first priority is not to have bugs.  Shotgun initialization can even hide bugs and makes the code harder to understand (and hence modify).  There are few, if any, advantages to using it and these are far outweighed by the disadvantages.

Wednesday, 1 February 2012

The Importance of Developer Quality Attributes

This is a brief discussion of different aspects of the quality of software and how I believe the development process (including newer agile methodologies) gets the focus wrong.

These quality aspects can be divided into two general categories: those that affect the users of the software and those that affect the developers.  To help you understand the discussion below, I first need to explain the different types of software quality.  If you are already familiar with the idea of quality attributes (often discussed in SQA literature) then you can skip the next section, where I present a shortened list with a brief explanation of each.

Quality Attributes

The quality of software depends on many inter-related attributes of the software.  Some of the more important ones are:

correctness - the extent to which the software does what it is meant to
reliability - how often it crashes or behaves erratically
efficiency - how well it uses resources (memory, CPU cycles, etc)
usability - how easy the software is to use
reusability - how easy the software is to adapt to other uses
verifiability - how easy it is to test (ie, verify its correctness)
understandability - how easy it is to grasp the design and source code
modifiability - how easy it is to make changes to the source code

One attribute that I think is very important, but that I have never seen mentioned before, is what I call debugability: how easy it is to track down the cause of a bug.  The debugging process consists of three steps:

1. detecting the presence of a bug (aided by verifiability)
2. tracking down the root cause of the bug (aided by debugability)
3. changing the code to fix the bug (aided by modifiability)

Sometimes the second step can be the most time-consuming; this is common in poorly written C code.  In C, debugability can be greatly improved by such things as assertions and unit tests (which also improve verifiability, modifiability, reliability, etc).
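
As a trivial illustration (my own example, not from the original text), an assert-based unit test in C pins a failure to an exact file and line, which is precisely the debugability benefit:

#include <assert.h>

// the code under test: clamp x into the range [lo, hi]
static int clamp(int x, int lo, int hi)
{
  if (x < lo) return lo;
  if (x > hi) return hi;
  return x;
}

// a minimal assert-based unit test: if clamp() is ever broken, the failing
// assertion reports the file and line, so tracking down the root cause
// (step 2 above) is largely done for you
int main(void)
{
  assert(clamp(5, 0, 10) == 5);
  assert(clamp(-3, 0, 10) == 0);
  assert(clamp(99, 0, 10) == 10);
  return 0;
}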

It is useful to think about these quality attributes for many reasons.  First, you may want to consider which attributes are important for your current project.  Generally, I think modifiability is the most important attribute but for a one-off program it may not be important at all.

You should also be aware of trade-offs between different attributes - for example, how fast the code runs (an aspect of efficiency) is often traded-off with one or more of the other attributes.  On the other hand, some attributes are highly correlated - for example, modifiability is strongly related to understandability.

Note that there are also other attributes (eg, security, compatibility, portability, etc) that I have not mentioned.  The list above is enough for our purposes.

User vs Developer

I like to group the quality attributes into what I call user quality attributes, or those that directly affect the users of the software (correctness, usability, etc), and developer quality attributes, or those that directly affect the people creating the software (modifiability, portability, reusability, etc).  Another way to think of this is that user quality attributes can only be measured by running the working software, whereas developer quality attributes may be determined by evaluating the source code.

Now we come to the main point.  My contention is that developers and the processes they use tend to focus on the user quality attributes, whereas their primary focus should be on the developer quality attributes.

Now before you dismiss this idea, I will introduce an analogy to try to get my point across...  A few decades ago a new car model had been on the market for several years.  This was a nice car: it had all the latest features, was easy to drive, was very safe and had very good reviews.  The only problem was that nobody was buying it.  The reason was that it was very difficult to repair, and this fact had become well known to many people, especially car mechanics.  Nobody is going to buy a car that costs a fortune to fix, no matter how nicely it handles.  For this reason car manufacturers now spend more effort on making cars easy to maintain than they do on other niceties like the interior design.

Unfortunately, in the software industry the users are not so savvy.  They push the user quality attributes with the effect that developer quality attributes are neglected or at best downgraded.  Unless the developers are aware of this influence and actively resist it the long term effect is to the detriment of the software product.

We have probably all seen legacy applications where making small changes takes an inordinate amount of time due to the original design and subsequent modifications.  This is the equivalent, going back to our analogy, of removing the engine to change the spark plugs.

The software industry is light-years behind the automobile industry in this regard.  Even modern software practices like Scrum emphasize giving the customer what they want.  In Scrum the Product Owner (sort of a representative of the users of the software) has absolute power over what the team implements.  Sure the team can propose things for the backlog like refactoring for maintainability but generally Product Owners have little appreciation for how software is created and the importance of developer quality attributes.

Correct Software is Not the First Priority

The idea that removing bugs from software is not important would induce heart-palpitations in every software manager I have ever known.  Their attitude is typically that they have to keep the developers focused on delivering user quality attributes to keep the customer happy.

About 25 years ago I came across a book nestled amongst some technical books on the bookshelf at work (a company called Encom).  It was called the Psychology of Computer Programming by Gerald Weinberg.  This was a great book in many ways but sometimes I think it fell short.  [Interestingly I went back to work at Encom again about 12 years later and the book was still on their bookshelf, even though they had moved offices several times, and it still had an old monthly DJ's statement of mine in it as a bookmark.]

There was one anecdote in this book that I still recall.  It was about a large new project where the team basically were incapable of producing a solution that worked.  (I think it was something to do with configuring options for new cars and written for a large automobile manufacturer.)  The developers had created a design for the system which was very efficient but had no other quality attributes.  They spent many months trying to get the proper outputs for all inputs but changes in one area affected other areas so that they could never get the thing to work properly for all situations.

Eventually the team gave up trying to get their design to work.  On his way home on the plane the author came up with a completely new design which avoided the problems of the troublesome design.  He went back and explained his solution, but the team pointed out that it would be too slow.  Eventually they adopted the new design which actually worked even though it ran much more slowly.

The point that Mr Weinberg was trying to make with this anecdote is that the first priority is to have correct software.  That is, there is no point in having something that is blindingly fast that does not work.

One reason I actually don't like this story is that it implies there is generally a trade-off between efficiency and other quality attributes like correctness.  Though this can be true, it is rare.  Usually, buggy software is also inefficient; in contrast, most well-written software is both efficient and correct.

I like the story in that it highlights how a team can become sidetracked (by technical issues) from the real purpose of what they are trying to achieve.  However, I would go further than Mr Weinberg and say that there is no point in concentrating on quality attribute A when attribute B is more important.  In particular, for a typical software project there is no point in having correct software if it can't be modified.  After all, even if a program produces the wrong results, as long as it is easily modified it can always be fixed.

Software Rusts

Software is just information so it should not be subject to the wear and disintegration that physical objects suffer.  Nevertheless there is a perception that software decays or rusts over time.  Every program I have ever worked on has suffered to some extent as modifications were made.  If you are lucky you can start with a brilliant design that becomes a bit messy as unanticipated modifications are necessary; typically you start with a small mess that evolves into a really big mess.  The result is that some larger companies completely rebuild their products every ten years or so, but most can't afford this luxury.  (eg, Microsoft completely rebuilt Visual Studio for VS2002 and again for VS2010.)

What are the causes of this effect?  Well, I could write a whole article on that [later note: I did exactly that in March 2012], but here are some off the top of my head:
  •  programmers can't (or can't be bothered to) understand the original design
  •  quick and dirty fixes made due to time pressures
  •  avoidance of fixing things properly due to the risk of bugs
Which of these do you think is the most common reason for rust?  (Or, if you think the most common cause is something else please let me know what you think!)  After many years of being oblivious to it, I have slowly come to the belief that the last (risk avoidance) is the most common and most insidious cause.  Let me explain why.

First, managers are obsessed with defect counts.  They believe that every time the customer finds a bug everyone is horrified, and that the customer thinks less of them and their team.  Also, programmers themselves are ashamed to make a mistake and loath to incur the criticism of their supervisor.  The net effect is that code modifications are not done properly but in such a way as to minimise the likelihood of causing or exposing bugs.

This is a classic example of emphasizing user quality attributes over developer quality attributes.  The whole emphasis is placed on correctness at the expense of understandability and maintainability.  The effect is that in the long term the code becomes of lower quality, so becomes harder to modify (among other things) which in turn actually causes incorrectness/bugs (among other things).

Customer Management

Maybe the next buzzword will be this term I just thought of: customer management.  Actually, that sounds a little demeaning - what about customer re-education ... hmmm, maybe not.

Anyway, the point here is that we have to change the way customers (and managers) think about software development.  Buyers of software have to become as savvy as buyers of cars and remember that all the shiny bells and whistles are not as important as what is under the hood or more particularly how easily the developers can modify what is under the hood.

But don't forget unit tests.  (All my posts seem to come back to unit tests, because they are at the heart of all good development practices.)  With unit tests you can refactor the code and be confident that you have not introduced new bugs.

The first rule is: bugs are not bad.  If you know your developers are not lazy or incompetent, then the more bugs they create, the more refactoring they are doing.  This is a good thing, as down the track it will mean changes can be made more quickly and consistently, with fewer bugs and less unreliability.

Conclusion

OK, I know you don't agree with all I have said above (but please think about it and read it again later).  Everybody knows that the whole point of creating software is to create something that is useful to the users of the software - so bugs are bad and correctness, usability, etc are good, right?

I agree that the whole point of creating software is to create something that is useful to the users of the software, but the current emphasis on the user in software development works against this in the long term.  Does the user really want a shiny new sports car (not another car analogy! :) that turns into a Yugo after a few weeks?

Concentrating on the aspects that are important to the user, while ignoring aspects that affect developers (modifiability, verifiability, etc) is the worst thing you can do for the users of the software in the long term.