Thursday, 17 November 2016

Overview


Here are links to all of my posts so far, loosely categorized.

Note that there are a few bonus links to my Code Project articles - marked with [CP]


Software Design

Design Principles

Handling Software Design Complexity - what software design all boils down to
DIRE - an obvious thing we often forget
Developer Quality Attributes - or why fixing bugs is not important
Verifiability - software is useless unless you can verify its correctness
Why Good Programs Go Bad - risk avoidance causes software to "rust"
Book Review: 97 Things Every Architect Should Know

Design Practices

Fundamentals of Software Design - 8 ways to create a good design
Agile Design - how emergent design almost always works better than BDUF
Inversion of Control - IOC is a technique for better decoupling using DIRE
Dependency Injection - an example of IOC

Anti-Patterns

Gas Factory Anti-Pattern - a mistake even (or especially) good designers make
Reusability Futility - "Simplicity before Generality, Use before Reuse"
Shotgun Initialization - an example of the dangers of defensive programming
Layer Anti-Pattern - the problems of a common, obvious approach


Agile

Principles

Agile's Fifth Element - favor simple design over re-usability and generality
JIT (Just In Time) - an example of DIRE that is core to much of Agile
DIRE (Don't Isolate Related Entities) - how you divide and conquer is the key
Agile Design - evolving software one small step at a time
Agile and Code Reuse - all about YAGNI (you ain't gonna need it)
Software Quality Assurance & Agile - how Agile evolved from, but is different to, SQA
Lean is not Agile - applying "eliminate waste" to software design leads to BDUF
Software Development Methodologies [CP] - Agile and other methodologies by analogy

Team

Scrum Team Size - teams should be small to avoid social loafing and other phenomena
Scrum Team Composition - "feature" teams are the key
Collaboration - traditional development discourages collaboration + why Scrum works

Making Agile Work

Scrum Standup - it's more about visibility than communication
Developer Quality Attributes - what benefits developers eventually helps users
Agile Version Control - Agile requires the right version control practices & software (Git)
Scrum Problems - management "buy-in" & other things that help Scrum work properly
Why Scrum Fails - intransigence, non-collaboration, etc
Written vs Verbal - when, who, why, and how of Agile documentation
JIT Testing - testing as you go (continuous testing) is an example of JIT (Just In Time)

Unit Tests

Change - how Unit Tests help you to embrace change
What's so great about Unit Tests - Unit Tests are not about finding bugs
White Box Testing - the best Unit Tests use "good" white box testing
Personal Experiences with Unit Testing - it took me 20 years to truly appreciate them
Challenges - why getting started with Unit Tests seems, but is not, insurmountable
Unit Tests Best Practice - a few things to avoid
Arguments Against Unit Tests - common arguments and why most are invalid
Summary - Unit Tests concisely summarized


Coding

Essentials

Zero - bugs are less likely if you don't treat zero as a special case
Asymmetric Bounds - in code and GUI design this is an important way to avoid bugs
Book Review: Clean Code - a great book on creating the best code

C Coding

Best Practice in C for Modules - strong-coupling and other things to avoid
Defensive Programming - how it works and how it can hide bugs
Shotgun Initialization - a defensive programming practice to avoid
Alignment and #pragma pack - make structs "alignment agnostic" to avoid surprises
Making Code Testable - coding for testability improves correctness, reliability, etc
Ten Fallacies of Good C Code [CP] - 10 more things to avoid

C++ Coding

STL's Dark Secret - vectors are slower than they should be
Iterators Through the Looking Glass - subtleties of the STL reverse iterators
C++11 and Lambda Functions - lambda functions make STL so much better
Nested Functions using Lambdas - you can finally have nested functions in C++11

C# Coding

Overflow Checking using checked/unchecked [CP] - C# has some cool features
Nested Functions using Lambdas - includes an example of using C# lambdas

Maintainability

Long Identifiers make Code Unreadable - don't try to put too much information into a name
Self Describing Code - why it's a bad idea and why you should comment your code

Other

The Phillips Scale of Code Quality - how good is your code?
Version Control - Personal Experiences - hands-on version control

Version Control - Personal Experiences

Last month we looked at how to use version control with Agile development. My conclusion was that you should be using Git. This is simply because, with CI (Continuous Integration), there is a lot of branching and merging going on, and Git is the only version control system that allows a version to have two parents. This is not to say that you can't use other version control systems (in fact I like SVN better in many ways - see below), just that Git keeps track of what needs to be merged for you.

This month I take a leisurely stroll back through time and look at all the version control systems I have used. I have a long personal history with version control systems (generally as the administrator of those systems). I have used the best (and worst), but note that there are some excellent systems (like the proprietary Perforce and the open-source Mercurial) that I have not used (yet?).

UNIX

I first experimented with version control while at Sydney University in the early 1980's, using the Computer Science department's VAX 11/780. This ran a variant of UNIX that included a primitive version control system which, as I recall, was SCCS (Source Code Control System).

PVCS

I first used version control for my C source code in several MSDOS/C jobs during the mid-1980's. At the time the only serious option for MSDOS was PVCS (Polytron Version Control System), which I used at several companies.

I can't say I loved PVCS but it did the job. It efficiently stored changes to text files as "reverse deltas" and had all the basic features like branching and tagging.

CVS, etc

In the late 1980's I moved back to UNIX, where I was a system administrator and system programmer. Under UNIX I tried SCCS, RCS (Revision Control System) and an early version of CVS (Concurrent Versions System), all of which worked but were difficult to use in some way.

TLIB

When I moved back to MSDOS/MSWindows systems in the early 1990's I used TLIB. This was similar to PVCS, but quite a bit better. However, it was still a command-line-driven system, which I found tedious to use.

VSS

In the mid-1990's Microsoft included a GUI-based version control system with their Windows IDE (Developer Studio). This seemed like a great idea to me after my experiences with command-line version control systems. However, Visual SourceSafe (VSS) turned out to be by far the worst product I have ever used - it was not only poorly designed and very confusing, but also had a tendency to lose and corrupt files and even whole repositories! Moreover, it made multi-site development impossible due to poor performance. There were third-party extensions to VSS (I later used one called VSSConnect) developed purely to improve performance over the Internet, but even then the performance was barely acceptable.

ClearCase

In my next job I used ClearCase (originally developed by Rational before being bought by IBM). This is the sort of product you would expect from IBM - thorough, but confusing due to its plethora of features and options, and requiring a lot of work to maintain. Luckily, I got to work on a new project where I had the opportunity to try a new open-source version control system called Subversion (SVN).

SVN (Subversion)

I set up SVN as an Apache module on one of the company's servers and was amazed at the performance. Using an Apache server allowed SVN to work easily over the Internet since it used HTTP/WebDAV. (SVN also provides its own protocol and server called svnserve, but the Apache option has advantages.)
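To give a flavor of it, here is roughly what working with the repository looked like from the client side via the Apache module (the server name, repository path and commit message are, of course, made up):

    svn checkout https://svn.example.com/svn/myproject/trunk myproject
    cd myproject
    # ... edit some files ...
    svn update                            # pull down everyone else's changes
    svn commit -m "Fix buffer overrun"    # send your changes back over HTTP/WebDAV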

The team for this project was split between Australia and Belgium but the two developers in Belgium got great performance (through VPN over the Internet) even though the server was in Sydney. Generally we spent about 10 minutes a day updating and committing changes.

This success with SVN encouraged me to use it for my own personal projects. I put my HexEdit open-source software (see http://www.hexedit.com) into an SVN repository hosted on SourceForge.

SVN was the first version control system I actually enjoyed using. One reason was that there was a Windows shell extension called TSVN (TortoiseSVN) that allowed you to easily do all your version control tasks using Windows Explorer.
"SVN was the first version control system I enjoyed using"

Another favorite thing is that, even if you are disconnected from the repository (eg, if your Internet connection is lost), you can still compare your current changes with the repo. This is because SVN keeps a local copy of all files as they were when you last updated from the repository.
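For example, with no network connection at all you can still do things like the following in your working copy (the file name is just for illustration):

    svn status           # lists your locally modified files - no server needed
    svn diff main.c      # compares your changes against the pristine copy kept in .svn
    svn revert main.c    # throws away your local changes, again without the server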

TFS

In my next job I found that I was again dealing with the horrible VSS. Luckily, the company decided they had had enough problems with VSS and moved to TFS. Now TFS is much, much better than VSS but still inferior in many ways to SVN. TFS does provide "shelving", which is a good idea, but I have not found it all that useful in practice.
"TFS does not conform to the Observer Pattern"

TFS is more of a "centralized control" system than SVN. For example, it keeps track of all the files you have checked out into your WC (working copy) in its central database, whereas SVN only stores the actual files (the repo) in its central database and tracks things to do with the WC locally. To me the SVN approach makes more sense (conforming to the "Observer Design Pattern") and indeed many developers encounter problems when the local WC becomes inconsistent with TFS's idea of what it should contain.

Git

Finally, I came to try Git a few years ago, as I was intrigued by its branching model. This solved the only annoying thing I found with SVN - the problem of merging changes between the trunk and a long-term branch. I like to merge often (as Agile and CI say you should) but SVN forced you to manually keep track of which versions you had already merged between branches. Git automatically tracks your merges so you can't forget a merge or merge the same thing twice.
"Git makes it easy to branch and merge"
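As a rough sketch of what this looks like in practice (the branch name is just for illustration), keeping a long-lived branch up to date is simply:

    git checkout big-feature    # switch to the long-term branch
    git merge master            # Git works out which commits are new since the last merge
    # ... later, after more changes on master ...
    git merge master            # only the new commits come across - nothing is merged twice

When the feature is finished, merging it back the other way (git checkout master; git merge big-feature) is just as painless.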

There is a lot to like about Git but, in all honesty, I do not find it as enjoyable to use as SVN. First, there is a plethora of confusing commands and options. For example, I have never found the ability to "stage" changes before actually committing them all that useful - it just adds another layer of complexity.
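For those who haven't seen it, staging means every commit is built in two steps, although you can bypass it (as I usually do):

    git add file1.c file2.c          # "stage" just these changes for the next commit
    git commit -m "Fix bug 42"       # commit only what was staged

    git commit -a -m "Fix bug 42"    # or skip staging and commit all modified tracked files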

But the worst thing about Git is that it is all command-line driven. I always find it much easier to remember how to use GUI software than to remember obscure command names and options. Luckily, Atlassian provides a GUI interface to Git in a free product called SourceTree.

One good thing about Git is that it has an excellent book called "Pro Git" that explains in detail how to use it. However, the book does get a little evangelical in its praise of Git at times. For example, it goes on about atomic commits (SVN also has atomic commits), how fast it is to clone a repo (an SVN checkout is faster) and its supposed killer feature of lightweight branching (SVN has that too).

Then there is the fact that Git is distributed whereas SVN is centralized. People rave on and on about the advantages of distributed version control but I really don't see it. Sure, if you have an open-source project with one or more "forks" then it's probably useful. Personally, I prefer one central "master" copy of the source into which changes are merged as soon as possible. I think having multiple repositories floating around would lead to a merge nightmare and contravenes the idea behind CI.

Anyway, I don't want to go into too much depth on the "centralized vs distributed" debate here (I may later). So that's all for now. Bye.

Monday, 26 September 2016

Agile Version Control

Introduction

A mistake often made when adopting Agile is insisting on certain Agile practices and outcomes without adopting the necessary tools and techniques (see the CASE STUDY below for an example). This is one major deficiency of Scrum or, at least, of using Scrum by itself. Scrum does not mandate the development tools (or even some essential processes) that allow Agile to work. I have talked about this previously (eg the Summary of the November 2013 post).

A crucial practice in Agile is Continuous Integration (CI). CI is difficult, if not impossible, without certain tools and practices, such as automated builds (ye olde build box), Agile (JIT) Design, etc. I will also mention Unit Tests here (again :) as without their safety net you cannot hope to make CI work. CI also depends on using a modern version control system, like Git, and using it in the right way. This is what I want to talk about.

CASE STUDY
A few years ago I was working on a project where management insisted on a move to Agile with the aim of creating new software releases every few weeks, instead of every few months, as was previously done (ie, about 4 to 6 times more frequently). However, no new tools or development infrastructure were introduced to facilitate this. Moreover, essentially the same procedures were used. The development procedures alone were onerous, but not as bad as the testing and release procedures (of which I had little understanding and will make no comment).

For an unlucky developer there was a tedious and error-prone procedure for every new release. It was bearable when done a few times a year but far less so when it had to be done every few weeks. This was a typical Waterfall development approach where the project was branched for the new release so that bug fixes could be made on the branch without affecting ongoing development. (I will explain this sort of approach in detail below.)

The major steps were essentially:
• Branch the project in VSS, then delete some of the unneeded branched files
• Branch and move some global headers shared between projects
• Manually modify project files to handle VSS problems and change global header locations

This whole process usually took one developer at least a day if everything went well. This is not an exaggeration, though the whole process was exacerbated by the use of VSS and a large manual process that should have been automated.

I will get to the point of this post in a moment but first I give a brief overview of how version control relates to the development process and how it was used before Agile came along.

NOTE: If you are familiar with version control concepts then you can skip to the Continuous Integration section below.

Version Control

All version control systems allow you to keep track of the changes made to source files. One advantage of this is that you can see how the software has evolved over time. This can provide a deeper understanding of code than can be obtained by just looking at the current state. Being able to compare source files from different times is invaluable when investigating why a change was made, how bugs were introduced, etc.

Moreover, you can get a snapshot from any point in time. For example, in the diagram below you could use the version control system to "checkout" the source as it was at the time of Release 1.0. You can then build that specific version if you need to investigate its behavior.

Diagram 1. Basic Version Control

Each box in the diagram represents a check-in of one or more files. Of course, this is a simplified diagram - real projects have many more check-ins (hundreds or even thousands).
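For example, retrieving the Release 1.0 snapshot is a one-liner in most systems (the repository URL and tag name below are invented for illustration):

    svn checkout https://svn.example.com/svn/myproject/tags/release-1.0
    git checkout release-1.0    # or, in Git, assuming the release was tagged "release-1.0"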

Another essential facility of a version control system is branching. This allows variations to be developed from a common base version. Traditionally, branching has two uses:
  • release branching - a branch is created when a new version is released
  • feature branching - a branch for an experimental or long-term development

Release Branching

Release branching (sometimes called fix branching) is very common (if not ubiquitous) in pre-Agile development.  It allows released versions to be quickly fixed while not interfering with ongoing development. For example, consider a software project with two releases: versions 1.0 and 1.1, with ongoing development on version 2.0.
Version Control Jargon

Repository (repo) = historical storage of all the files
Checkin = add or update file(s) in the repo
Checkout = obtain a local copy of file(s), usually in order to update and checkin
Commit (v) = checkin
Commit (n) = a set of files that were checked in
Merge = combine changes from 2 sources
Working Copy (WC) = local copy of files
HEAD = pointer into the repo for the WC, usually the most recent commit on the trunk
Branch = fork in the version history
Trunk = the ongoing development "branch"

Now imagine that a user has found a critical bug in version 1.0 (Bug 2 in the diagram below). You can't reproduce the bug in the latest version but you can reproduce it in 1.0 (and 1.1). Of course, you can't simply give the customer a copy of 2.0 as they have not paid for the new features and, in any case, it is not ready for release. You need to provide a fix for version 1.0.

You check out the code for 1.0 to view and debug it and quickly find the problem. Now you can check in your fix to the branch for version 1.0. You also port and check in the fix to the version 1.1 branch. (For completeness you also check why the bug no longer occurs in 2.0 - it may simply be hidden by other changes or obviated by some later development.)

Diagram 2. Release Branching
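In SVN, for example, the release branch and the ported fix might look something like this (the URLs, paths and revision numbers are all invented):

    svn copy https://svn.example.com/svn/myproject/trunk \
             https://svn.example.com/svn/myproject/branches/1.0 \
             -m "Create the 1.0 release branch"
    # ... fix Bug 2 in a working copy of the 1.0 branch and commit it (say it becomes r1234) ...
    svn checkout https://svn.example.com/svn/myproject/branches/1.1 wc-1.1
    cd wc-1.1
    svn merge -c 1234 https://svn.example.com/svn/myproject/branches/1.0 .
    svn commit -m "Port the fix for Bug 2 from the 1.0 branch"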

Feature Branching

Feature branching is traditionally used for a development that needs to be separate from the main ongoing development. This may happen for various reasons:
  • the development is experimental and may not prove to be viable
  • the development is not certain to be needed (eg, for proposed legislation)
  • the development is for a large feature that overlaps with other release(s)
Diagram 3. Feature Branching

These branches are always intended to be merged back into the trunk, but it can happen that the branched code is not required and so is discarded, eg if the experimental development is found not to be viable.

I have been involved with a few feature branch developments and they are notoriously tedious and troublesome. The first problem is that, by the time the feature branch is merged back into the "trunk", there can be so many incompatibilities caused by the divergent code that it is difficult or even impossible to merge the differences. In that case a great deal of work is required to integrate the changes, and often this involves workarounds and kludges that corrupt the integrity of the software design. It's not uncommon for the feature to have to be completely rewritten to be compatible with the current ongoing project.
“feature branches can be difficult or impossible to merge”


Because of the above problem, developers have learnt to "merge early and often". That is, changes on the trunk should be regularly merged into the feature branch to avoid divergence. Of course, this is a tedious and time-consuming process that tends to get skipped due to more urgent tasks. It also often requires discussions between members of the feature and maintenance teams to understand what the code does and how best to merge the differences.
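In SVN (at least as I used it) that regular merge also meant manually noting which trunk revisions had already been brought across - roughly like this, with the revision numbers and URL invented for illustration:

    cd feature-wc    # a working copy of the feature branch
    svn merge -r 1500:1560 https://svn.example.com/svn/myproject/trunk .
    svn commit -m "Merged trunk r1500-1560 into the feature branch"
    # ... a week later you must remember that r1560 was where you got up to ...
    svn merge -r 1560:1620 https://svn.example.com/svn/myproject/trunk .
    svn commit -m "Merged trunk r1560-1620 into the feature branch"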

Diagram 4. Merging Trunk Changes

Diagram 5. The completed feature is merged into the trunk

Continuous Integration

These sorts of merging and integration problems (as well as others) led to the practice of Continuous Integration (CI), which is core to the Agile approach to software development. But even without Agile, CI avoids integration headaches, improves shared understanding and communication in the team and generally results in a better design and fewer bugs. It is an example of DIRE since you are not isolating the new features from the rest of the code as it evolves.

Agile Approach

CI enables the Agile approach of delivering small improvements that slowly but surely move the development towards the target. The target, of course, is the PO's (Product Owner's) understanding of what is needed, which may itself be moving.

Each atomic development task, called a User Story, needs to be small enough to be completed in a few days (and certainly within the current sprint). If the task is larger than that, then it needs to be split up.
What is a User Story?   

User Stories are used in Agile as a replacement for "specs". A User Story is a simple statement about a change or enhancement to the software. This is often written on a small card in the format:

As <A> I want <B> so I can <C>, where:

<A> = the person/group requiring the enhancement - often a software user, but can be anyone
<B> = a simple description of the enhancement from the perspective of <A>
<C> = the purpose or benefit of the enhancement - this can be skipped but I highly recommend it

A User Story is almost all the written documentation you need to specify all changes to the software.  Of course, for a large feature you will have many User Stories grouped into an Epic.

The other written documentation you need is a handful of Acceptance Criteria written on the back of the related User Story card. These explain how you can check that a User Story is complete.

Example:

As an administrator I want to be able to change my password so I can ensure the security of the system

Acceptance Criteria:
1. old password must be entered first
2. new password must be entered twice to catch typos
3. new password must be different to old password

The common argument against this approach is that it is inefficient - that it's better to understand the problem, come up with a solution and implement it all in a controlled manner. In theory this sounds like a good argument; in practice it doesn't work (see the May 2014 post on Agile Design for more on the evils of BDUF). If BDUF ever did work as it is supposed to (which it very rarely - if ever - does) it would be more efficient. But even then the Agile approach is more reassuring to the PO/users/stakeholders; even in that worst case it still gives the perception of greater productivity since everyone can see progress being made.

A stronger argument against the Agile approach is that there are some complex tasks that cannot be decomposed into simpler ones - they cannot be tackled at all with an evolutionary approach. Again, this may be theoretically possible but I have never encountered such a situation in practice. Once you get the hang of it, it's easy to find a way to work towards a goal while keeping the software usable and useful at every point along the way (or at least at the end of every sprint).

The crucial point is that User Stories are designed such that at every stage the software can be used. At the end of every sprint the PO will have a working, bug-free piece of software that can be tested and even delivered to real users. To make this work you need a certain type of version control system.

So what sort of version control do you need for Agile?

In the end many things in Agile - short sprints, small User Stories, JIT Design, feature teams, and CI - work together and depend on a version control system that allows easy branching and (especially) merging. Having a clumsy or manual merging process is not an option as User Stories are continually being merged back into the trunk.

Conventionally version control systems treat the relationship between versions as a tree. If you look back at all the above version control diagrams (ignoring the dashed arrows) you will see that they are all tree diagrams. (I know, it's obvious that you need branches to form a tree.) Modern version control systems help you merge code between branches (the dashed arrows leading into the blue boxes) but you still need to manually keep track of where the merge comes from and which bits have been merged already.

This is where Git comes in.

Git

In my opinion, Git is the only version control system that should be used for Agile development. Git has one killer feature - a version can have two parents. Git can automatically merge versions, always keeping track of things so that it does not miss changes or try to merge the same thing more than once.

This means that the version "tree" instead becomes a "DAG" (directed acyclic graph), because each version can have two parents, not just one.

Before I discovered Git I used another fine version control system called SVN (short for Subversion), starting about 10 years ago, and found it a joy to use except for one thing - on occasion I would need a long-term branch, which was painful to keep updated with trunk developments. To avoid a nasty surprise when the branch had to be merged back into the trunk, I regularly merged trunk code into the branch (as in Diagram 4 above). However, to make sure that changes were not missed, or the same change merged more than once, I had to manually keep track of which versions from the trunk had been merged into the branch. This was tedious and error-prone - and something that Git does for you.
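In Git, the bookkeeping SVN made me do by hand is simply part of the history: every merge creates a commit with two parents, and you can see them (the branch name here is illustrative):

    git checkout master
    git merge big-feature        # creates a merge commit recording both parents
    git log --graph --oneline    # draws the DAG, showing where the two lines of history join
    git cat-file -p HEAD         # a merge commit lists two "parent" lines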

Agile Version Control

Agile version control using Git is simple. A developer branches the code to work on a User Story. Git makes it easy to merge the branch back into the trunk. A simple example is shown in the following diagram where all User Story branches are merged back into the trunk by the end of each sprint.

Diagram 6. Agile Version Control
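A sketch of that workflow from the developer's point of view (the story and branch names are invented):

    git checkout -b story-change-password    # start a branch for the User Story
    # ... implement the story, running the unit tests and committing as you go ...
    git commit -a -m "Add change password screen"
    git checkout master
    git merge story-change-password           # back into the trunk well before the sprint ends
    git branch -d story-change-password       # the short-lived branch is no longer needed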

However, you generally need control over which features are delivered to "production". This is often accomplished by having dual streams - an on-going "development" stream (or branch) and a separate "delivery" stream (trunk) - allowing control over when features are delivered.

Diagram 7. Dual Streams

This is very different from traditional version control where branches are eventually discarded (after possibly having been merged back into the trunk) - instead you have two on-going streams. This approach is only possible with a version control system such as Git where a version (ie, a node in the diagrams) can have two parents - in the diagrams this is any node with two outgoing arrows.
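With Git the two streams are just two long-lived branches. A cut-down sketch, using the common (but here purely illustrative) branch names "develop" and "master":

    git checkout develop
    git merge story-change-password    # User Stories land on the development stream
    # ... when the PO decides the accumulated features should go out ...
    git checkout master
    git merge develop                  # the delivery stream picks up the approved work
    git tag release-2.1                # and the delivered version is tagged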

For a large project with multiple teams I have even seen the suggestion of multiple on-going "development" branches (eg, see Version Control for Multiple Agile Teams). I have not tried this but I have reservations, because code merges between the teams would occur irregularly and might easily be forgotten (remember the rule of merge early and often). The two teams might create conflicting changes which are not discovered until the conflicting code is merged from the trunk into the other team's stream.


Diagram 8. Multiple Development Streams

Summary

Agile version control is very different to traditional version control. It uses many small feature branches that are continually merged back into the trunk (or main development stream). This is necessary for the practice of Continuous Integration (CI), which is a core part of the Agile approach.

CI is an example of JIT (and hence DIRE) allowing problems to be found as soon as possible. It also supports other Agile practices such as short sprints and evolving the software using small, simple, user-centric User Stories. Use of CI depends on a version control system that allows easy branching and merging.

Most Agile teams also have two ongoing code streams (see Diagram 7) - the development "branch(es)" and the delivery "trunk". Again, this relies on a version control system that supports easy merging.

As far as I know Git is the only version control system currently available where a version node in the repository can have two parents. In other words Git allows you to automatically and safely merge code from different sources.

Although Git is not without its problems (which I will discuss next month), I think using it is essential for Agile development to work smoothly. I will also discuss the day-to-day use of different version control systems (including Git) next month.