Sunday, 20 August 2017

The Essence of Go

The Go programming language - what do you know about it? I was looking for my next favorite language a couple of years ago.  I tried and rejected Go based on a fair bit of research. However, I have been working on a project for the last few months using Go and I must admit my initial impressions were wrong. Here I hope to describe what using Go is really like for anyone thinking of trying it so they don't jump to the same erroneous conclusions that I did.
“… what using Go is really like …”

Last year I mentioned here another new language, Rust, that I am very excited about (and which I hope to talk more about soon). Go is similar to Rust in many ways but Go is not a low-level language like Rust (despite what Google and others say). I would classify Go as a medium-level language on a par with C#. For one thing a language can't be low-level unless it allows control of memory allocations, but Go (like C#, Java and many interpreted languages) uses GC - ie, memory management is handled by the run-time using a garbage collected heap (I might talk more about that later).

Initially I decided to steer clear of Go, for several reasons but mainly due to its lack of support for templates (generics) - which BTW Rust does have. But now that I have used Go I think one particular aspect is brilliant - it makes programming simple (in a myriad of ways), even more so than C# (which I talked about last time).  And if you have read much of my blog you may have noticed my love of simplicity. I believe complexity is at the root of all problems in software development (see my very first post Handling Software Complexity).


Whenever I try a new language I quickly get a feel for its style - what the designer(s) think is important (performance, flexibility, safety, etc). I can also usually pick what other languages influenced it. But Go's style and influences are confusing.
Go's style and influences are confusing

First, its overriding premise seems to be to keep things simple (and most of the time it does that very well), but the motivation for this is a mystery. The initial impression I got was that everything was done "on the cheap" - that is, the Go creators just found it too hard, or too time-consuming, to think about adding exception handling or templates or whatever. This is clearly wrong when you consider some of the other features that are present. I next formed the opinion that it was based on an intense dislike of C++, so anything that Bjarne Stroustrup invented was simply rejected out of hand. My current (final?) impression is that the creators of Go consider the average programmer to be lazy and incompetent, so everything is kept as simple and safe as possible (though sometimes making things safer detracts from simplicity, as we will see).
“ …making things safer detracts from simplicity…”

Go's influences are many, but as a generalization I would say that Go is a bit like a simplified C#, with all the C++-derived stuff removed, plus some additional features that support concurrency. For example, it has features similar to, or the same as, these from C#: GC (memory management), lambdas (called closures in Go), metadata, reflection, interfaces, zero initialization, immutable strings, partial classes, extension methods, implicit typing (var), etc. And like C#, Go also tries to avoid many of the problem areas of C/C++, such as expressions with undefined behavior.

Go is like C# with the C++ stuff removed
On the other hand, Go completely discards almost all the things that C# inherited from C++ - generics, exception handling, operator overloading, implicit conversions, and even classes and inheritance.

However, there are a few more things in Go that seem to be new or come from elsewhere.  One of the most interesting is its support for concurrency, including channels, which I first encountered in Occam about 25 years ago. And, of course, like all popular modern languages Go is heavily influenced by C and C++ (though in the case of C++ the influence is in the form of avoidance).  Let's look at these influences just a bit more.

Safer C

Go has strong links to C, ostensibly because Google wanted a modern alternative to C. I think a big reason is that one of the Go architects is Ken Thompson who (along with Dennis Ritchie) created the C language (and the Unix operating system) and even precursors to C in the 1960's.

The syntax of Go is obviously derived from C, but there are many "improvements" to avoid problems that beginners in C often encounter. A good example is that there is no automatic "fall through" between cases in a switch statement, a perennial source of bugs in C. On the other hand, side-effects in expressions have been removed (eg, no assignment chaining), which is bad (see below).
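For instance, here is a minimal sketch (the function names are my own illustrations) showing that a Go switch never falls through by accident, and that fall-through must be requested explicitly with a keyword:

```go
package main

import "fmt"

// classify labels n; in Go each case breaks automatically,
// so there is no accidental fall-through as in C.
func classify(n int) string {
	switch {
	case n < 0:
		return "negative"
	case n == 0:
		return "zero"
	}
	return "positive"
}

func main() {
	fmt.Println(classify(-3), classify(0), classify(7))

	// Fall-through must be requested explicitly with the keyword.
	switch n := 2; n {
	case 2:
		fmt.Println("two")
		fallthrough
	case 3:
		fmt.Println("two or three")
	}
}
```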

Most of the problems with using pointers that C is notorious for have been eliminated by disallowing pointer arithmetic and by automatic memory handling using the garbage-collected heap.
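A small sketch of what this means in practice (the type and function are my own illustrations) - you can freely take and share addresses, but not do arithmetic on them, and returning the address of a local variable is safe because the compiler moves it to the garbage-collected heap:

```go
package main

import "fmt"

type point struct{ x, y int }

// newPoint returns the address of a local variable. This is safe in Go:
// the compiler notices the value escapes and allocates it on the
// garbage-collected heap.
func newPoint(x, y int) *point {
	p := point{x, y}
	return &p
}

func main() {
	p := newPoint(1, 2)
	p.x = 10              // fields are reached through pointers without explicit dereferencing
	fmt.Println(p.x, p.y) // 10 2
	// p++ or p + 1 will not compile: Go has no pointer arithmetic.
}
```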

Another common problem for beginners in C is due to uninitialized variables which can lead to bugs which are almost impossible to track down due to unreproducibility. Go's solution is the "shotgun" approach where everything is guaranteed to be initialized to a "zero" value.  This is good for reproducibility but introduces its own problems as I discussed in Shotgun Initialization.
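For example (a sketch with made-up types), every variable is usable immediately with its zero value - no initializer required:

```go
package main

import "fmt"

type config struct {
	name    string
	retries int
	debug   bool
}

func main() {
	var c config // no initializer needed: every field gets its "zero" value
	fmt.Printf("%q %d %t\n", c.name, c.retries, c.debug) // "" 0 false

	var p *config         // pointers are zeroed too (to nil)
	fmt.Println(p == nil) // true
}
```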

Finally, I should mention that Go tries to address perhaps the worst problem of most C code (yes worse than the above problems!). This is the problem that there is an enormous amount of C code that ignores error conditions or does not handle them correctly. Go has a more systematic approach to error handling but, in all honesty, it is only marginally better than the C approach. Now I am not saying that Go should have exceptions and I understand and agree with the decision not to have them (even though it effectively does have them in the form of panic/recover). All I am saying is that Go should do something about relieving the tedium and complexity of C-like error-handling and in fact I have a proposal to do just that - more on this later.
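The "more systematic approach" is the convention of returning an error as the last result, which the caller is expected to check at every call site. A sketch (atoiSum is my own illustrative function) of both the pattern and the tedium it produces:

```go
package main

import (
	"fmt"
	"strconv"
)

// atoiSum shows the pattern Go enforces by convention: any call that can
// fail returns an error as its last result, and the caller checks it.
func atoiSum(fields []string) (int, error) {
	sum := 0
	for _, f := range fields {
		n, err := strconv.Atoi(f)
		if err != nil {
			return 0, fmt.Errorf("bad field %q: %v", f, err)
		}
		sum += n
	}
	return sum, nil
}

func main() {
	if sum, err := atoiSum([]string{"1", "2", "3"}); err == nil {
		fmt.Println(sum) // 6
	}
	if _, err := atoiSum([]string{"1", "x"}); err != nil {
		fmt.Println("got error:", err)
	}
}
```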
“Go error-handling is
only marginally better than C”

In summary, the designers of Go have honored the age-old tradition of finding ways to improve on C.  Some of the improvements are actually useful, but some are just condescending to competent programmers.

Occam

Go is the first language I have encountered since Occam to use the brilliant idea of communication between concurrent processes using "channels".  Apparently channels are heavily used in Erlang and other languages that I have not had the fortune to use.  However, at one time (more than 25 years ago) I was very interested in the Inmos Transputer, which you can read more about here, but basically it was designed to allow multiple CPUs to be hooked together easily.

Occam had other features which I thought were going to be very important such as the ability to say whether instructions could be parallelized or must be run sequentially.
I became interested in the Transputer as it was a little like my own (even more way-out) idea for a bit-oriented processor. At the time single chip CPUs were 8 or 16 bit.

My thinking was that there were large benefits in moving to single-bit processors which read from memory, and from other CPUs, serially - ie, one bit at a time.  Each CPU would be like a neuron, and they could be hooked together to make something more powerful.

The industry successfully advanced in the opposite direction, and soon 32-bit processors (and now 64-bit CPUs) became the norm. I'm not sure that this was the right direction. Maybe one day single-bit computers will find a place.

At the time I thought the software industry needed to embrace multi-threading (or multi-tasking as it was called then). Back then even the major operating systems from Microsoft and Apple did not support pre-emptive multi-tasking (instead using a kludge called "co-operative multi-tasking"). Unfortunately, it was another decade before mainstream operating systems and languages supported multi-threading.

It's a shame that the Transputer and Occam never became popular. I think they were ahead of their time.
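For reference, the channel idea that Occam pioneered looks like this in Go - a minimal sketch (the squares function is my own example):

```go
package main

import "fmt"

// squares sends 1*1, 2*2, ... n*n down a channel from a separate
// goroutine, then closes the channel to tell the receiver it is done.
func squares(n int) <-chan int {
	ch := make(chan int)
	go func() {
		for i := 1; i <= n; i++ {
			ch <- i * i
		}
		close(ch)
	}()
	return ch
}

func main() {
	// The range loop receives until the channel is closed; the channel
	// itself provides all the synchronization between the goroutines.
	for v := range squares(3) {
		fmt.Println(v) // 1, 4, 9
	}
}
```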

Not C++

As I said above I think one of the main influences on Go is that it avoids anything invented by C++. Here is a list and my opinion of whether that is good or bad.
  1. No support for inheritance, ie no vtables etc as used in compiled OO languages. This is not a bad thing, since inheritance is over-used, badly used, and even when used well there are simpler alternatives. When required, Go has mixins (struct embedding) and interfaces, which are perfectly adequate.
  2. No templates. Templates (which made possible the amazing STL) are the thing I like most about C++.  This is a bad thing.
  3. No exceptions. I appreciate the argument for no exceptions, but I wish the alternative provided by Go was better. (Rust has no exceptions, but handles errors better using algebraic data types.) Go tries to improve on C's error-handling (which is very bad) but it is almost as easy to ignore errors in Go as in C, and the tedium of creating masses of error-handling code is back. Not being one to simply criticize, I propose how to improve Go's error-handling below.
“I propose how to
improve Go error-handling”
  4. No namespaces. Go has the similar concept of packages (sort of a combination of the assembly and namespace of C#). Unfortunately, packages can't be nested, which is limiting.
  5. No const. Const was one of the simplest and most brilliant inventions of C++. I can't believe Go (and other modern languages) even consider leaving it out.
  6. No public/private/protected keywords. Go has a system where capitalized names are public, but a lower-case first letter means the name is private to the package (a bit like internal in C#). This is simple but I am undecided whether it is sufficient.
  7. No user-defined operator-overloading. (Of course, like most languages there is operator-overloading for built-in types.) I don't really miss this.
  8. No function overloading (except that you can overload methods with different receiver types - otherwise interfaces would not work). I miss this ability but I guess it is not essential.
  9. No optional parameters. I understand the problems of optional parameters, but those problems are rarely seen in practice. Not a biggy, but the alternatives in Go are all error-prone and/or tedious to implement.
  10. Go eschews the C++ philosophy that you should be able to create user-defined types that are as simple and flexible to use as the built-in types (eg, see point 7 above). Some people hate this about Go, but personally I am not that fussed as long as it gives me everything I need to do the job simply and effectively (which it usually does).
  11. There is nothing as flexible and useful as the STL.  This is bad.
  12. One thing Go does use that comes from C++ is the // end-of-line comment. (Maybe the Go designers were not aware where that came from :)
In summary, leaving all the C++ stuff out greatly simplifies things (and I don't like a lot of the complexity of C++ either, as you may have guessed from some of my previous posts). However, I think the designers of Go may have thrown out the baby with the bath-water. In particular, I think templates (generics) and const are essential for any modern language.
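To make the inheritance alternatives concrete, here is a minimal sketch (all the names are my own illustrations) of struct embedding and implicit interfaces standing in for inheritance:

```go
package main

import "fmt"

// Logger is a "mixin": any struct that embeds it gains its Log method.
type Logger struct{ prefix string }

func (l Logger) Log(msg string) string { return l.prefix + ": " + msg }

// Server gets Log via embedding - composition rather than inheritance.
type Server struct {
	Logger
	addr string
}

func (s Server) Describe() string { return "server at " + s.addr }

// Server satisfies this interface implicitly; no "implements" declaration.
type Describable interface{ Describe() string }

func main() {
	s := Server{Logger: Logger{prefix: "srv"}, addr: "localhost:8080"}
	fmt.Println(s.Log("starting")) // srv: starting
	var d Describable = s          // implicit interface satisfaction
	fmt.Println(d.Describe())      // server at localhost:8080
}
```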

What I Like About Go

Leaving out lots of features found in C++ and other languages makes Go programming simpler, but there are a lot of other niceties. Here is a list of all the things I like about Go.
  • Simple Builds Probably the best thing about Go is how easy it is to build a project. In the past (on C projects) I have spent hours or even days trying to sort out problems with header file inclusion, linking or even run-time problems due to incompatible struct member alignment, not to mention DLL Hell. C++ improved on C by using name-mangling so the linker could catch many mistakes. Then C# (actually .Net) further alleviated build problems using metadata to completely describe how to attach to an assembly. Go takes this a step further and makes building software quick, painless and trouble-free.
  • Integration I have always found integrating an open-source library into my own project or even downloading and building a complete application time-consuming and frustrating. (For example, I have spent weeks trying to streamline the build process for my own project which uses several open-sources libraries - see HexEdit.) In Go you simply use go get which finds and installs an open-source project and dependent projects so you can start using it straight away.
  • Compilation Speed Go's build speed is amazing (mainly because the metadata for a package also stores all info about its dependent packages). I never used to think build speed was that important since you can take a break while the project is building. However, Go's almost instant builds makes things possible like the IDE checking your code for all errors (not just syntax errors) as you type, which is great.
  • Type Safety Because of the quick build speed developing with Go has the feel of using an interpreted language like Python. The problem with an interpreted language, though, is that a lot of errors aren't found until you run the program. Go on the other hand catches many errors at compile-time resulting in greater confidence that the code will run without error.
  • Memory Management Another huge tedium in many C programs is keeping track of heap memory and knowing who has to free it. And if you get it wrong you can have memory-leaks, double-frees and even nastier problems such as overwriting memory that is used for something else. Go avoids all of this by using a garbage collected heap while somehow avoiding any major performance hit. (My guess is it makes typical code about 30% slower, but at least it does not suffer from the large pauses that you see in multi-generational GC systems like C#'s.) Rust, BTW, does not use GC, having an even better system that allows it to track and free memory (and other resources) automatically.
  • OO One of the best things about Go is its support for inheritance - there is none. I have spent decades trying to remember all the intricacies of C++ inheritance (and still forget things). Go is simple but has all you need in interfaces and mixins for most modern design patterns. There are some inheritance-based design patterns, such as the template pattern (see Template Method Pattern) which you would not implement in Go but in my opinion these are not good design patterns anyway.
  • Concurrency Many forms of concurrent programming are greatly simplified with the use of go-routines and channels, as I mentioned above.
  • Closures Some narrow-minded people claim that Go is not a modern language because it does not support modern features like OO, generics and exceptions. This claim is discredited by its support for closures - something that did not even make it into C++ (as lambdas) until 2011. Though not as powerful (ie, complicated) as lambdas in C++, they are simple and adequate for use with modern design patterns.
  • Interfaces Interfaces in Go are simpler and more flexible than other languages like Java and C# or the abstract base type of C++, since a type does not need to explicitly state that it implements an interface. People have mentioned complications with this system but in reality it works fine.
  • Information Hiding Go has a very simple system of denoting that a value is visible outside a package. If the identifier starts with a capital letter it is visible (called public in C++, C# etc), otherwise it is only usable inside the package (like internal in C#). Once again this is simple and effective.  I have at times found this cumbersome when I want to hide parts of a package from other parts but I think a refinement that avoids this problem (and may be a better design in general) is to have small "internal" packages.
  • Unit Tests If you have read my blog you know that I love Unit Tests (see my posts from the end of 2013 and the start of 2014, such as this). I have tried different C# and C++ Unit Test frameworks and they can be tedious to understand, set up and start using. In contrast, the unit test framework that comes with Go is very easy to use. Note that in Go the name "automated package tests" is preferred (I'm not fond of the term unit test either, but it seems to be the term commonly used).
  • Code Coverage A very good thing to do is run code coverage tests on your Unit Tests to ensure that all or most of the code is tested. (Note that even 100% code coverage does not mean you have no bugs but it is a good indicator of how thorough your tests are.)  Despite the benefits, I rarely ran code coverage checks until I started using Go simply because it was too hard. Go makes it easy to get simple code coverage statistics and not that much harder to do even more sophisticated analysis.
  • Benchmarks I use benchmarks a lot, for example to check the performance of a function or compare algorithms and have created my own C++ classes for this purpose. Go comes with a simple, flexible, and effective benchmarking system which can even be used for timing concurrent algorithms.
  • Profiling Profiling allows you to find performance bottlenecks in your software.  Go has built-in support for profiling too, though I have not yet needed to try it.
  • Identifiers Finally, this is more the culture of Go than the language per se, but I really like the emphasis on short, simple (but meaningful) identifiers instead of some of the monstrosities you see in Java and other languages. See Long Identifiers Make Code Unreadable for more info.
What I Don't Like About Go

Despite its emphasis on simplicity Go does not always succeed. Many features seem to be designed to make things simpler for "beginners" but end up making things more complex for those who know what they are doing. I am all in favor of making things safer but it should not get in the way of competent developers.  Here are a few examples:
  • Side effects Expressions in Go are based on C expressions but (most) side effects have been removed - ++, --, =, += and friends are statements in Go, not operators that can be used inside an expression. I guess this is to avoid the expressions with undefined behavior that novices often write in C (like a[i] = i++). The consequence is that things that are easily and sensibly expressed in C/C++ cannot be in Go, often requiring temporary variables and more lines of code. This makes code more complex and harder to read. [Note that function calls can still cause side effects in Go expressions, resulting in unspecified behavior - eg, see this.]
  • Operators Similarly, the ternary (?:) and comma operators are not supported (plus maybe others I have not noticed yet). This cuts down the large and confusing precedence table of C, but makes it painful to produce concise and easily scanned code.
  • Map Iteration Order If you iterate over the elements of a map in Go the order of elements is deliberately unpredictable to prevent someone depending on the order. When I first read of this I thought it was an interesting idea, but what idiot would depend on the order of elements from a hash table?!!  I didn't really foresee any problems with it until I came to debug a problem with data from a map. In that circumstance it is an absolute pain to restart a debug session and find your way back to the same point.
  • Unused Variables One of the biggest annoyances is that the Go compiler won't under any circumstances allow the code to build if there are unused variables or unused imports. This is great for finished code but a complete pain in the q*** when you are writing, testing or debugging.  The justification for using errors and not just warnings is (from the Go website) "if it's worth complaining about, it's worth fixing in the code".  I agree entirely that it's worth fixing, just NOT RIGHT NOW. Perhaps this problem can be addressed with a good IDE and better debuggers, but it needs addressing urgently.  Here are some examples to demonstrate the problem:
    • I accidentally paste some code into the wrong source file and the IDE (Gogland in this case) "kindly" adds a dozen imports at the top of the file. After I delete the accidental paste, the code no longer builds and I have to manually go and delete the bogus imports.
    • I add an import I know I will need and save the file, and the IDE (VSCode in this case) "kindly" deletes the unused import for me - annoying.  Later on, when I use the imported function, the IDE can't seem to determine how to re-add the import, and I have to try to remember where it was and add it again manually.
    • I declare a variable and the IDE draws a red squiggle under it to indicate an error. So I click on it to find it's just trying to tell me the variable is unused. Of course it's unused - I just declared it!
    • I want to see the value of an expression while testing so I assign to a temp variable so I can inspect it in the debugger but then the compiler won't even build the code.
  • Shadowing A source of bugs (in C/C++, Pascal/Delphi, Java, Python, Go and other languages that allow it) is shadowing, where an identifier in an inner scope hides the same name in an outer scope. For example, you might add some code with a new variable which changes the behavior of code at the same level that uses an outer variable of the same name.  In decades of programming in the above languages I have rarely encountered bugs caused by this, but Go seems to make it much more likely because of the special if statement and how easy it is to use ":=" instead of "=". Go would be safer if it disallowed shadowing, as C# does, or if the compiler at least warned about it (as many C compilers do).
  • Assertions One of the first things I discovered about Go is that it doesn't have assertions.  (There is even a dubious explanation of this in the official Go FAQ.) I love assertions as they have saved me (literally) years of debugging time but I admit with the advent of exceptions and unit tests they have less use. Luckily, Go actually does have them under the name of panic.
    • C: assert(a > 0);  // meaningful explanation
    • C#: Debug.Assert(a > 0, "Meaningful explanation");
    • Go: if a <= 0 { panic("meaningful explanation") }
  • Exceptions Another thing there have been a lot of complaints about is Go's lack of exceptions. Admittedly, there are many complexities caused by exceptions (such as very subtle undefined behaviors in C++) but I think it is more a reaction against the rampant misuse of exceptions in much Java code (either that, or because modern exception handling was essentially a C++ invention and we don't want anything from C++ in Go :). But again we discover that Go does have exceptions - it just uses the keywords panic and recover instead of throw and catch.
  • Error Handling So Go does have exceptions (ie panic/recover) but according to the Go ethos (ie, autocratic pronouncement) this mechanism is only supposed to be used for handling software failures (ie, bugs) not errors. The problem with this is that the distinction between a bug and an error is not always straightforward (eg, see Error Handling vs Defensive Programming in my Defensive Programming post); moreover the Go standard library itself often uses panic in situations that are clearly errors, not bugs. Go does have a few tricks (like defer, multiple-return values, the error type, etc) that make error-handling a bit easier than in C, but the simple fact is that in many situations (and without using panic) error-handling in Go can be a mass of complex, repetitive code. (Of course, Rob Pike and others deny this and demonstrate approaches that are unconvincing, being either just as complex or not universally applicable.)
  • Reference Types Go has pointers but it also has reference types (slices, maps, interfaces, chans, and functions) and the way reference types work is complex. For example, a nil slice is treated exactly the same as an empty slice (except when you compare the slice to nil, of course) - and these sorts of "split personality" behaviors lead to confusion (like arrays and pointers in C). You can also treat a nil map the same as an empty map, except that you can't assign to an element of a nil map.
  • Value/Pointer Receivers The rules about when receivers are automatically converted between value and pointer are confusing. Moreover, there seem to be contradictory conventions: (1) use a value receiver for "const" methods and a pointer receiver for methods that modify the receiver (in lieu of "const" methods - see Not C++ point 5) and (2) use a pointer receiver for large types (such as a large struct or an array).
  • Implicit Typing Implicit typing is achieved very simply in Go by using := instead of = for the first use of a variable. (Implicit typing has become very popular - in recent years it has been added to C# [var], C++ [auto], Rust [let], etc.) However, my experience is that I am continually adding/removing explicit declarations, converting between = and :=, and moving statements into or out of the initialization part of if statements.  This (along with the shadowing mentioned above) makes refactoring code tedious and error-prone.
  • Memory Allocation Performance One thing that makes me a little uncomfortable is not knowing where the memory for a variable is coming from. In other languages you know if memory is being allocated from the stack or the heap; but in Go you just have to trust the compiler. I must admit that I have not yet encountered any performance problems but I can't shake the nagging feeling that I (or someone else) may make some innocuous code change and suddenly a lot more memory allocations need to be done from the heap. (My benchmarks have shown that this could slow them by more than 20 times!!)
  • Unit Tests Tests are so easy to write but there is one fatal flaw - by convention, tests are part of the package. Anyone with any experience of unit testing knows that tests should only test through the public interface (ie, Go automated tests should not be part of the package), otherwise test maintenance becomes far too burdensome (see White Box Testing). (To be fair, Go does allow tests in a separate package with a _test suffix in the same directory, but in-package tests remain the norm.)
  • Formatting Having programmed in C and syntactically similar languages for almost 4 decades, I have encountered a lot of different code formatting and debates on the merits thereof. Personally, I have always strictly conformed to K&R styling rules (even including the slight variations with the release of the 2nd edition of the book) with one important exception - I always place an opening brace on a line by itself. I started doing this in C in the mid-1980's and have been doing it ever since (in C/C++/C#, etc). It makes the code easier to scan, and I believe doing so has avoided numerous bugs and build problems.
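To make the "split personality" of reference types mentioned above concrete, a small sketch:

```go
package main

import "fmt"

func main() {
	var s []int  // nil slice
	t := []int{} // empty but non-nil slice
	// The two behave identically almost everywhere - except compared to nil:
	fmt.Println(len(s), len(t), s == nil, t == nil) // 0 0 true false
	s = append(s, 1) // appending to a nil slice works fine
	fmt.Println(s)   // [1]

	var m map[string]int      // nil map
	fmt.Println(m["missing"]) // reading from a nil map is fine: prints 0
	// m["key"] = 1           // but this would panic: assignment to entry in nil map
	m = make(map[string]int)
	m["key"] = 1 // fine once the map has been made
	fmt.Println(m["key"])
}
```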

Conclusion

I started writing this post many months ago and it keeps changing as I learn more about Go. I have now been using Go for almost six months. I have really found that Go makes life so much easier than any other language I have used, in many many areas. On occasion I yearn for an STL container or algorithm but generally you can do a lot of useful stuff in Go.

That said, there are a few things that I really don't like. Possibly the main one is the condescending attitude of the creators of the language. (I have also detected this to different extents in Smalltalk, Pascal, and Java.) In fact one of the things that I believe turned programmers away in droves from Pascal and towards C in the 1980's (me included) was treating programmers like idiots. Note that I am all in favor of making languages safer, as everyone makes mistakes, but not if it gets in the way of doing things properly.

I guess a good example of this is the fact that it is mandated exactly how code should be formatted. If you get the formatting wrong the code won't even compile. To be honest, I actually like the formatting rules, except for one biggy - a left brace cannot be on a line by itself, but must be at the end of the preceding line. This convention (from K&R) is an anachronism from a time of paper terminals. In my experience it makes code hard to read and leads to bugs. (In fact I seem to recall Brian Kernighan stating in an interview that the only reason they chose this convention was to keep down printing costs in the original edition of The C Programming Language.)

On a practical note, Go does a good job of avoiding many errors that occur in C. However, in one area, it greatly magnifies the problem - Go makes it extremely easy to create bugs by accidentally shadowing a variable. (In fact just last Friday I discovered another bug of my own due to this.) Something urgently needs to be done about this.
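Here is a sketch of the kind of shadowing trap I mean (the function is my own contrived example): the ":=" in the if statement silently declares a new variable rather than assigning to the outer one.

```go
package main

import (
	"fmt"
	"strconv"
)

// lastError demonstrates the trap: the ":=" in the if statement declares
// a NEW err that shadows the outer one, so the outer err is never set
// and the failure is silently lost.
func lastError() error {
	var err error
	if n, err := strconv.Atoi("not a number"); err == nil {
		fmt.Println(n)
	}
	return err // still nil - the inner err was a different variable
}

func main() {
	fmt.Println(lastError()) // <nil>, even though the Atoi call failed
}
```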

Finally, I just realized that I was going to explain how Go error-handling could be improved, without resorting to exception-handling. Writing error-handling code in Go, like C, is tedious - the sort of job a computer should do. I have a proposal to relieve the tedium but I've rambled on long enough for now so I will have to leave you in suspense until next time.

Next Month: A Proposal for Better Go Error-Handling

Monday, 8 May 2017

Why I like C#

I was going to talk a bit more about version control and Git after my posts on Agile version control late last year but I have been very busy with a new job.

“it was 20
years ago today”
However, I have had a draft of this post for many years and significantly it was 20 years ago today (actually last Thursday) that I invented C#. I remember it distinctly as a colleague suggested we go out and celebrate "Star Wars Day" after work. I had never heard of Star Wars Day (May the Fourth) before then.

It was during a jovial conversation over a few drinks that I mentioned my idea that Microsoft should invent their own language which, as it turned out, was rather like C#. The reaction was that I was a complete idiot - after all, Microsoft were not known for that sort of thing. The closest they had come was COM (Component Object Model), which we had all used and agreed was an unwieldy mess. Microsoft had also invested heavily in their own dialect of Basic called VB (Visual Basic), as well as their C compiler, so why would they want a new language? This was in the days when it was novel for a company to create a language, begun, I guess, with Sun and Java.

My idea was oddly triggered by my problems with the latest Visual C++ 4.0 (which we had been using for a few months). I had written a new SCSI library in C++ for some 32-bit code but was greatly dismayed to find that VC++ was not designed to create 16-bit code (for MSDOS, Windows 3.1) or even Win32s - I wanted to use the C++ code with some legacy C applications.

We had also encountered lots of frustrations with C++.  We had done a lot of work in C previously but needed some modern language features. Other languages around, like Delphi and Java, were not suitable either.

I decided that what was really needed was a language that Microsoft would properly support (C++ support was not good at the time) and that had modern features C was lacking (like templates and exceptions, and even a few niceties from Java, like GC). There was also a technological development (see next), where Microsoft was at the forefront, which was crucial for my new language.


I had been greatly intrigued by p-code since 1978, when I first used Pascal at University. Pascal (at least the compiler I used) compiled to p-code, which then required a p-code interpreter for whatever CPU you were using. (This idea was later used by Java in byte-code and by .Net in MSIL.) Not long after that I did a lot of programming in Forth (sort of a p-code assembler) on my Commodore 64.

The problem with p-code is that it is slow. Running p-code (via interpreter) is about an order of magnitude slower than the same program compiled to native instructions. A lot of people thought that this would become less of a problem as computers got faster. The trouble with this argument is that while the interpreted code got faster so did the native code. (The idea of p-code also seemed to be fading until Sun created Java, but a lot of people did not like the poor performance of Java.)

However, some people at Microsoft realized that once processors were fast enough you could compile on the fly (later called JIT-compilation) rather than interpret the p-code. Sooner or later the compilation time would become negligible.
"once processors
were fast enough
you could compile
on the fly"

Microsoft had done a lot of research on JIT-compilation for their (then) flagship language, VB. (Basic is interpreted, like p-code.) During the mid-1990s I did some tests and found that VB6 was similar in speed to C++ in many areas due to its use of JIT-compilation. All the C++ programmers I told this to just did not believe me (or did not want to).

Why Microsoft needed a new language in 1997

  1. C (and an emerging C++) were the driving force behind almost all the successful MSDOS and Windows products, and most C/C++ programmers, myself included (perhaps unfairly), avoided VB like the plague.
  2. C/C++ development was painful and slow. COM was supposed to address this but made it worse!
  3. There were a lot of error-prone areas in C/C++ that could be fixed by new syntax and semantics.
  4. There were a lot of things that could be taken from C++ like exceptions, templates and STL containers.
  5. They could wrap/hide the horrendous Windows API (like MFC did for C++).
  6. There was a lot of interest in Java (especially its innovative memory management system) but Microsoft were impeded from using Java for their own ends by a litigious Sun. (Sun had already successfully sued Microsoft for not properly implementing Java.)
  7. A language like Java generating intermediate code (p-code) would be slow but this problem could be alleviated by Microsoft's research into JIT-compilation from the early-mid 1990's.

.Net and C#

I did not really think much more about it but a few years later I started hearing about NGWS, VB7 and later C#. In 2001 I downloaded and tried the C# public beta. It was then that I realized that C# was very similar to the language that I had previously been advocating. My only disappointment was that C# lacked templates and STL-like containers, though C# 2.0 later added generics.

To be fair to Microsoft (and Anders) there were a hell of a lot more great things in C# than I could ever have come up with, though many came from Java and C++. And even since then, C# has added some brilliant new things of its own, like lambdas and LINQ.

C# Problems

You may have noticed from previous posts that I like C# in many ways. I guess this may be partly due to the fact that I feel I independently invented it. However, there are a few things that it got wrong, but remember that these are very small in number compared to the large number of things it got right.  (Also note that some of these things are due to the .Net CLR upon which C# depends.)
  1. One thing that I really thought C# should have had from the start was (what came to be known as) generics. In 1997 I had become a huge fan of templates, and especially the STL, in C++. I remember reading that they would be added later (and were added in C# 2.0). Why not delay the release of C# 1.0 until they were ready? This has caused a lot of problems of maintaining backward compatibility, especially when implementing generic interfaces.
  2. I really hate the containers in C# compared to C++ STL containers or even those of other languages. Related to the previous problem is the later addition of generic containers to replace the earlier ones.
  3. Apart from the containers, most of the C# run-time library is excellent. But there were some simple, obvious mistakes which I picked up straight away. One was that int.Parse() throws an exception if it encounters a non-digit, rather than the more sensible behavior of something like C's strtol(). This was later addressed with int.TryParse().
    Another one I encountered almost straight away is that String.Substring() throws an exception if the string is not long enough. This might be good behavior sometimes, but more commonly you would just want a shorter string returned rather than an exception thrown.
  4. One of the stupidest things in C# is the Decimal type. (Anders seems to have an irrational penchant for stupid numeric types such as the Delphi real48.) As soon as I saw it I thought it would be better, and much simpler, just to add fixed-point facilities to existing integer types, attached as metadata (ie, using an attribute attached to an integer variable).

    I wrote at length about this over a decade ago - eg see Why do we need Decimal?
  5. Const is one of the simplest and most useful additions to C++. I do not know why C# (and other languages) continue to ignore it.
  6. The fact that all static and heap variables are cleared (zeroed) at run-time is sometimes unnecessary and inefficient. What's the point of a huge array being initialized to zeros only to immediately have all its values set to something else? (Note that the security argument is a furphy.)
  7. I previously mentioned that the behavior of the default test for equality (Object.Equals()) is flawed. (See the C# section in my post on Zero.) Actually, having recently used Go (the language from Google), I now realize that all the "object-oriented" stuff that C# copies almost exactly from Java is unnecessary and actually encourages poor designs (but at least there is no multiple inheritance!). I may talk about this more in a future post on Go.


C# made things so much easier after using C (and then C++) for decades. There are so many things I could mention but a few immediately spring to mind:
  • the metadata system, which avoids all sorts of configuration problems that plague C due to header files, linking issues, DLL hell, etc
  • the garbage-collected heap, which frees you from the tedium of tracking who allocated what and who needs to free it, and from making sure there are no memory leaks, double-frees, etc
But there were many other little things - for example, see the Code Project article that I wrote in 2004 called C# Overflow Checking.

Sure, C# borrowed a lot of things from Java, but that is de rigueur in language design (and Java got a lot from C++).

Thursday, 17 November 2016


Here are links to all of my posts so far, loosely categorized.

Note that there are a few bonus links to my Code Project articles - marked with [CP]

Software Design

Design Principles

Handling Software Design Complexity - what software design all boils down to
DIRE - an obvious thing we often forget
Developer Quality Attributes - or why fixing bugs is not important
Verifiability - software is useless unless you can verify its correctness
Why Good Programs Go Bad - risk avoidance causes software to "rust"
Book Review: 97 Things Every Architect Should Know

Design Practices

Fundamentals of Software Design - 8 ways to create a good design
Agile Design - how emergent design almost always works better than BDUF
Inversion of Control - IOC is a technique for better decoupling using DIRE
Dependency Injection - an example of IOC


Gas Factory Anti-Pattern - a mistake even (or especially) good designers make
Reusability Futility - "Simplicity before Generality, Use before Reuse"
Shotgun Initialization - an example of the dangers of defensive programming
Layer Anti-Pattern - the problems of a common, obvious approach



Agile's Fifth Element - favor simple design over re-usability and generality
JIT (Just In Time) - an example of DIRE that is core to much of Agile
DIRE (Don't Isolate Related Entities) - how you divide and conquer is the key
Agile Design - evolving software one small step at a time
Agile and Code Reuse - all about YAGNI (you ain't gonna need it)
Software Quality Assurance & Agile - how Agile evolved from, but is different to, SQA
Lean is not Agile - applying "eliminate waste" to software design leads to BDUF
Software Development Methodologies [CP] - Agile and other methodologies by analogy


Scrum Team Size - teams should be small to avoid social loafing and other phenomena
Scrum Team Composition - "feature" teams are the key
Collaboration - traditional development discourages collaboration + why Scrum works

Making Agile Work

Scrum Standup - it's more about visibility than communication
Developer Quality Attributes - what benefits developers eventually helps users
Agile Version Control - Agile requires the right version control practices & software (Git)
Scrum Problems - management "buy-in" & other things that help Scrum work properly
Why Scrum Fails - intransigence, non-collaboration, etc
Written vs Verbal - when, who, why, and how of Agile documentation
JIT Testing - testing as you go (continuous testing) is an example of JIT (Just In Time)

Unit Tests

Change - how Unit Tests help you to embrace change
What's so great about Unit Tests - Unit Tests are not about finding bugs
White Box Testing - the best Unit Tests use "good" white box testing
Personal Experiences with Unit Testing - it took me 20 years to truly appreciate them
Challenges - why getting started with Unit Tests seems, but is not, insurmountable
Unit Tests Best Practice - a few things to avoid
Arguments Against Unit Tests - common arguments and why most are invalid
Summary - Unit Tests concisely summarized



Zero - bugs are less likely if you don't treat zero as a special case
Asymmetric Bounds - in code and GUI design this is an important way to avoid bugs
Book Review: Clean Code - a great book on creating the best code

C Coding

Best Practice in C for Modules - strong-coupling and other things to avoid
Defensive Programming - how it works and how it can hide bugs
Shotgun Initialization - a defensive programming practice to avoid
Alignment and #pragma pack - make structs "alignment agnostic" to avoid surprises
Making Code Testable - coding for testability improves correctness, reliability, etc
Ten Fallacies of Good C Code [CP] - 10 more things to avoid

C++ Coding

STL's Dark Secret - vectors are slower than they should be
Iterators Through the Looking Glass - subtleties of the STL reverse iterators
C++11 and Lambda Functions - lambda functions make STL so much better
Nested Functions using Lambdas - you can finally have nested functions in C++11

C# Coding

Overflow Checking using checked/unchecked [CP] - C# has some cool features
Nested Functions using Lambdas - includes an example of using C# lambdas


Long Identifiers make Code Unreadable - don't try to put too much info. into a name
Self Describing Code - why it's a bad idea and why you should comment your code


The Phillips Scale of Code Quality - how good is your code?
Version Control - Personal Experiences - hands on version control

Version Control - Personal Experiences

Last month we looked at how to use version control with Agile development. My conclusion was that you should be using Git, simply because with CI (Continuous Integration) there is a lot of branching and merging going on, and Git is the only version control system that allows a version to have two parents. This is not to say that you can't use other version control systems (in fact I like SVN better in many ways - see below), just that Git keeps track of what needs to be merged for you.

This month I take a leisurely stroll back through time and look at all the version control systems I have used. I have a long personal history of using version control systems (generally being the administrator for such systems). I have used the best (and worst) but you should note that there are some excellent systems (like the proprietary Perforce and open-source Mercurial) that I have not used (yet?).

SCCS
I first experimented with version control while at Sydney University in the early 1980's, using the Computer Science department's VAX 11/780. This ran a variation of UNIX that included a primitive version control system called, I think, SCCS (Source Code Control System).

PVCS
I first used version control for my C source code in several MSDOS/C jobs during the mid-1980's. At the time the only serious option for MSDOS was PVCS (Polytron Version Control System) which I used at several companies.

I can't say I loved PVCS but it did the job. It efficiently stored changes to text files as "reverse deltas" and had all the basic features like branching and tagging.

CVS, etc

In the late 1980's I moved back to UNIX where I was a system administrator and system programmer. Under UNIX I tried SCCS, RCS (Revision Control System) and an early version of CVS (Concurrent Versions System), all of which worked but were difficult to use in some way.

TLIB
When I moved back to MSDOS/MSWindows systems in the early 1990's I used TLIB. This was similar to PVCS, but quite a bit better. However, it was still a command-line driven system, which I found tedious to use.

VSS (Visual SourceSafe)
In the mid-1990's Microsoft included a GUI-based version control system with their Windows IDE (Developer Studio). This seemed like a great idea to me after my experiences with command-line version control systems. However, Visual SourceSafe (VSS) turned out to be by far the worst product I have ever used. It was not only poorly designed and very confusing, but also had a tendency to lose and corrupt files and even whole repositories! Moreover, it made multi-site development all but impossible due to poor performance - there were 3rd-party extensions to VSS (I later used one called VSSConnect) developed purely to improve performance over the Internet, but even then the performance was barely acceptable.

ClearCase
In my next job I used ClearCase (originally developed by Rational before being bought by IBM). This is the sort of product you would expect from IBM - thorough, but confusing due to its plethora of features and options, and requiring a lot of work to maintain. Luckily, I got to work on a new project where I had the opportunity to try a new open-source version control system called Subversion (SVN).

SVN (SubVersion)

I set up SVN as an Apache module on one of the company's servers and was amazed at the performance. Using an Apache server allowed SVN to easily work over the Internet, since it used HTTP/WebDAV. (SVN also provides its own protocol and server, called svnserve, but the Apache option has advantages.)

The team for this project was split between Australia and Belgium but the two developers in Belgium got great performance (through VPN over the Internet) even though the server was in Sydney. Generally we spent about 10 minutes a day updating and committing changes.

This success with SVN encouraged me to use it for my own personal projects. I put my HexEdit open-source software into an SVN repository hosted on SourceForge.

SVN was the first version control system I actually enjoyed using. One reason was that there was a Windows shell extension called TortoiseSVN (TSVN) that allowed you to easily do all your version control tasks from within Windows Explorer.
SVN was the first
version control system
I enjoyed using

Another favorite thing is that, even if you are disconnected from the repository (eg, if your Internet connection is lost), you can still compare your current changes with the repo. This is because SVN keeps a local copy of every file as it was when you last updated from the repository.

TFS
In my next job I found that I was again dealing with the horrible VSS. Luckily, the company decided they had had enough problems with VSS and moved to TFS. Now TFS is much, much better than VSS, but still inferior in many ways to SVN. TFS does provide "shelving", which is a good idea, but I have not found it all that useful in practice.
TFS does not
conform to the
Observer Pattern

TFS is more of a "centralized control" system than SVN. For example, it keeps track of all the files you have checked out into your WC (working copy) in its central database, whereas SVN only stores the actual files (the repo) in its central database and tracks things to do with the WC locally. To me the SVN approach makes more sense (conforming to the "Observer Design Pattern") and indeed many developers encounter problems when the local WC becomes inconsistent with TFS's idea of what it should contain.

Git
Finally, I came to try Git a few years ago as I was intrigued by its branching model. This solved the only annoying thing I found with SVN - the problem of merging changes between the trunk and a long-term branch. I like to merge often (as Agile and CI say you should) but SVN forced you to manually keep track of which versions you had already merged between branches. Git automatically tracks your merges so you can't forget to merge, or merge the same thing twice.
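A minimal sketch of this merge tracking (assumes Git 2.28+ on the PATH; the repo, file names and commit messages are all hypothetical): the trunk is merged into a long-term branch, and a second merge of the same trunk changes is a no-op because Git already recorded them.

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q -b main
git config user.email demo@example.com
git config user.name "Demo"
echo one > app.txt && git add app.txt && git commit -qm "trunk: first commit"
git checkout -qb long-term                  # a long-term branch
echo feature > feature.txt && git add feature.txt && git commit -qm "branch: feature work"
git checkout -q main
echo two >> app.txt && git commit -qam "trunk: second commit"
git checkout -q long-term
git merge -q -m "sync with trunk" main      # first sync: picks up "trunk: second commit"
before=$(git rev-parse HEAD)
git merge -q main > /dev/null               # second sync: Git knows there is nothing left
after=$(git rev-parse HEAD)
[ "$before" = "$after" ] && echo "second merge was a no-op"
```

With SVN (before its merge-tracking support) you would have had to note the merged revision numbers yourself.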
Git makes it easy
to branch and merge

There is a lot to like about Git but in all honesty I do not find it as enjoyable to use as SVN. First, there is a plethora of confusing commands and options. For example, I never found the ability to "stage" a commit before actually committing useful - it just adds another layer of complexity.

But the worst thing about Git is that it is all command-line driven. I always find it much easier to remember how to use GUI software than to remember obscure command names and options. Luckily, Atlassian provides a free GUI interface to Git called SourceTree.

One good thing about Git is that it has an excellent book, "Pro Git", that explains in detail how to use it. However, the book does get a little evangelical in its praise for Git at times. For example, it goes on about atomic commits (SVN has atomic commits), how fast it is to clone a repo (SVN checkout is faster) and its "killer feature" of lightweight branching (SVN has that too).

Then there is the fact that Git is distributed whereas SVN is centralized. People rave on and on about the advantages of distributed version control but I really don't see them. Sure, if you have an open-source project with one or more "forks" then it's probably useful. Personally I prefer one central "master" copy of the source where changes are merged as soon as possible. I think having multiple repositories floating around would lead to a merge nightmare and contravenes the idea behind CI.

Anyway, I don't want to go into too much depth on the "centralized vs distributed" debate here (I may later). So that's all for now. Bye.

Monday, 26 September 2016

Agile Version Control


A mistake often made when adopting Agile is insisting on certain Agile practices and outcomes without adopting the necessary tools and techniques (see the CASE STUDY below for an example). This is one major deficiency of Scrum or, at least, of using Scrum by itself: Scrum does not mandate the development tools (and even some essential processes) that allow Agile to work. I have talked about this previously (eg, the Summary of my November 2013 post).

A crucial practice in Agile is Continuous Integration (CI).  CI is difficult, if not all but impossible, without certain tools and practices, such as automated builds (ye olde build box), Agile (JIT) Design, etc. I will also mention Unit Tests here (again :) as without their safety net you cannot hope to make CI work.  CI also depends on using a modern version control system, like Git, and using it in the right way.  This is what I want to talk about.
CASE STUDY
A few years ago I was working on a project where management insisted on a move to Agile with the aim of creating new software releases every few weeks instead of every few months, as was previously done (ie, about 4 to 6 times more frequently). However, no new tools or development infrastructure was introduced to facilitate this. Moreover, essentially the same procedures were used. The development procedures alone were onerous, but not as bad as the testing and release procedures (of which I had little understanding and will make no comment).

For an unlucky developer there was a tedious and error-prone procedure for every new release. It was bearable when done a few times per year but less bearable when it had to be done more often. This was a typical Waterfall development approach where the project was branched for the new release so that bug fixes could be made on the branch without affecting ongoing development. (I will explain this sort of approach in detail below.)

The major steps were essentially:
• Branch the project in VSS, then delete some of the unneeded branched files
• Branch and move some global headers shared between projects
• Manually modify project files to handle VSS problems and change global header locations

This whole process usually took one developer at least a day if everything went well. This is not an exaggeration, though the whole process was exacerbated by the use of VSS and a large manual process that should have been automated.

I will get to the point of this post in a moment but first I give a brief overview of how version control relates to the development process and how it was used before Agile came along.

NOTE: If you are familiar with version control concepts then you can skip to the Continuous Integration section below.

Version Control

All version control systems allow you to keep track of the changes made to source files. One advantage of this is that you can see how the software has evolved over time. This can provide a deeper understanding of code than can be obtained by just looking at the current state. Being able to compare source files from different times is invaluable when investigating why a change was made, how bugs were introduced, etc.

Moreover, you can get a snapshot from any point in time. For example, in the diagram below you could use the version control system to "checkout" the source as it was at the time of Release 1.0. You can then build that specific version if you need to investigate its behavior.

Diagram 1. Basic Version Control

Each box in the diagram represents a check-in of one or more files. Of course, this is a simplified diagram - real projects have many more check-ins (hundreds or even thousands).

Another essential facility of a version control system is branching. This allows variations to be developed from a common base version. Traditionally, branching has two uses:
  • release branching - a branch is created when a new version is released
  • feature branching - a branch for an experimental or long-term development

Release Branching

Release branching (sometimes called fix branching) is very common (if not ubiquitous) in pre-Agile development.  It allows released versions to be quickly fixed while not interfering with ongoing development. For example, consider a software project with two releases: versions 1.0 and 1.1, with ongoing development on version 2.0.
Version Control Jargon      

Repository (repo) = historical storage of file versions
Checkin = add or update file(s) to the repo
Checkout = obtain a local copy of file(s)
  usually in order to update and checkin
Commit (v) = checkin
Commit (n) = files that were checked in
Merge = combine changes from 2 sources
Working Copy (WC) = local copy of files
HEAD = pointer into the repo for the WC,
  usually the most recent commit on the trunk
Branch = fork in version history
Trunk = ongoing development "branch"

Now imagine that a user has found a critical bug in version 1.0 (Bug 2 in the diagram below). You can't reproduce the bug in the latest version but you can reproduce it in 1.0 (and 1.1). Of course, you can't simply give the customer a copy of 2.0 as they have not paid for the new features and, in any case, it is not ready for release. You need to provide a fix for version 1.0.

You check out the code for 1.0 to view and debug it and quickly find the problem. Now you can check in your fix to the branch for version 1.0. You also port and check in the fix to the version 1.1 branch. (For completeness you also check why the bug no longer occurs in 2.0 - it may simply be hidden by other changes or obviated by some later development.)

Diagram 2. Release Branching
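The workflow of Diagram 2 can be sketched with modern Git commands (a hypothetical sketch only - the branch names, files and bug number are made up, and Git 2.28+ is assumed): branch at each release point, fix the bug on the 1.0 branch, then port the same fix to the 1.1 branch.

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q -b main
git config user.email demo@example.com
git config user.name "Demo"
echo code > app.txt && git add app.txt && git commit -qm "Release 1.0"
git branch release-1.0                      # branch at the 1.0 release point
echo more >> app.txt && git commit -qam "Release 1.1"
git branch release-1.1                      # branch at the 1.1 release point
echo wip >> app.txt && git commit -qam "2.0 development"
# Bug 2 is reported against 1.0: fix it on the 1.0 branch first...
git checkout -q release-1.0
echo patched > fix.txt && git add fix.txt && git commit -qm "Fix Bug 2"
fix=$(git rev-parse HEAD)
# ...then port the same fix to the 1.1 branch
git checkout -q release-1.1
git cherry-pick -x "$fix" > /dev/null
[ -f fix.txt ] && echo "Bug 2 fixed on 1.0 and 1.1"
```

Ongoing 2.0 development on the trunk is untouched by either fix.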

Feature Branching

Feature branching is traditionally used for a development that needs to be separate from the main ongoing development. This may happen for various reasons:
  • the development is experimental and may not prove to be viable
  • the development is not certain to be needed (eg, for proposed legislation)
  • the development is for a large feature that overlaps with other release(s)
Diagram 3. Feature Branching

These branches are always intended to be merged back into the trunk, but it can happen that the branched code is not required and so is discarded, eg if the experimental development is found not to be viable.

I have been involved with a few feature branch developments and they are notoriously tedious and troublesome. The first problem is that by the time the feature branch is merged back into the trunk there are so many incompatibilities caused by the divergent code that it can be difficult, or even impossible, to merge the differences. In this case a great deal of work is required to integrate the changes, and often this involves workarounds and kludges that corrupt the integrity of the software design. It's not uncommon for the feature to have to be completely rewritten to be compatible with the current state of the project.
“feature branches
can be difficult
or impossible
to merge”

Because of the above problem developers have learnt to "merge early and often". That is, changes on the trunk should be regularly merged into the feature branch to avoid divergence. Of course, this is a tedious and time-consuming process that tends to get skipped in favor of more urgent tasks. It also often requires discussions between members of the feature and maintenance teams to understand what the code does and how best to merge the differences.

Diagram 4. Merging Trunk Changes
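The trunk-into-branch merges of Diagram 4 might look like this in Git (a hedged sketch - branch and file names are hypothetical, and Git 2.28+ is assumed): each time the trunk moves, its changes are folded into the feature branch straight away so the final merge back stays small.

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q -b main
git config user.email demo@example.com
git config user.name "Demo"
echo base > core.txt && git add core.txt && git commit -qm "trunk: base"
git checkout -qb big-feature
echo feature > feature.txt && git add feature.txt && git commit -qm "feature work"
# The trunk moves on while the feature is in progress...
git checkout -q main
echo fix1 >> core.txt && git commit -qam "trunk: bug fix 1"
# ...so merge the trunk into the branch now, not months later
git checkout -q big-feature
git merge -q -m "sync trunk (early and often)" main
grep -q fix1 core.txt && echo "branch is up to date with trunk"
```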

Diagram 5. The completed feature is merged into the trunk

Continuous Integration

These sorts of merging and integration problems (as well as others) led to the practice of continuous integration (CI), which is core to the Agile approach to software development. But even without Agile, CI avoids integration headaches, improves common understanding and communication in the team, and generally results in a better design and fewer bugs. It is an example of DIRE, since you are not isolating the new features from the rest of the code as it evolves.

Agile Approach

CI enables the agile approach of delivering small improvements that slowly but surely move the development towards the target. The target, of course, is the PO's understanding of what is needed - which may itself be moving.

Each atomic development task, called a User Story, needs to be small enough to be completed in a few days (and certainly within the current sprint). If the task is larger than that, then it needs to be split up.
What is a User Story?   

User Stories are used in Agile as a replacement for "specs". A User Story is a simple statement about a change or enhancement to the software. This is often written on a small card in the format:

As <A> I want <B> so I can <C>  where:

<A> = the person/group requiring the enhancement -
  often a software user, but can be anyone
<B> = a simple description of the enhancement
  from the perspective of <A>

<C> = the purpose or benefit of the enhancement -
  this can be skipped but I highly recommend it

A User Story is almost all the written documentation you need to specify all changes to the software.  Of course, for a large feature you will have many User Stories grouped into an Epic.

The other written documentation you need is a handful of Acceptance Criteria written on the back of the related User Story card. These explain how you can check that a User Story is complete.


As an administrator I want to be able to change my password so I can ensure the security of the system

Acceptance Criteria:
1. old password must be entered first
2. new password must be entered twice to catch typos
3. new password must be different to old password

The common argument against this approach is that it is inefficient - that it's better to understand the problem, come up with a solution and implement it all in a controlled manner. In theory this sounds like a good argument; in practice it doesn't work (see my May 2014 post on Agile Design for more on the evils of BDUF). If BDUF ever worked as it's supposed to (which it very rarely - if ever - does) it would be more efficient. But even then the Agile approach is more reassuring to the PO/users/stakeholders; even in that worst case it still gives the perception of greater productivity, since everyone can see progress being made.

A stronger argument against the Agile approach is that there are some complex tasks that cannot be decomposed into simpler ones - that they cannot be tackled at all with an evolutionary approach. Again, this may be theoretically possible but I have never encountered such a situation in practice. Once you get the hang of it, it's easy to find a way to work towards a goal while keeping the software usable and useful at every point along the way (or at least at the end of every sprint).

The crucial point is that User Stories are designed such that at every stage the software can be used. At the end of every sprint the PO will have a working, bug-free piece of software that can be tested and even delivered to real users. To make this work you need a certain type of version control system.

So what sort of version control do you need for Agile?

In the end many things in Agile - short sprints, small User Stories, JIT Design, feature teams, and CI - work together and depend on a version control system that allows easy branching and (especially) merging. Having a clumsy or manual merging process is not an option as User Stories are continually being merged back into the trunk.

Conventionally, version control systems treat the relationship between versions as a tree. If you look back at the version control diagrams above (ignoring the dashed arrows) you will see that they are all tree diagrams. (I know, it's obvious that you need branches to form a tree.) Modern version control systems help you merge code between branches (the dashed arrows leading into the blue boxes) but you still need to manually keep track of where each merge comes from and which bits have already been merged.

This is where Git comes in.


In my opinion Git is the only version control system that should be used for Agile development. Git has one killer feature: a version can have two parents. Git can merge versions automatically, always keeping track of things so that it does not miss versions or try to merge the same thing more than once.

This means that the version "tree" becomes instead a DAG (directed acyclic graph), because each version can have two parents - not just one.
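You can see the DAG directly in Git itself. A minimal sketch (assumes Git 2.28+; repo and branch names are hypothetical): after merging a branch, `git rev-list --parents` shows that the new commit records two parent versions.

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q -b main
git config user.email demo@example.com
git config user.name "Demo"
echo a > f.txt && git add f.txt && git commit -qm "trunk"
git checkout -qb story
echo b > g.txt && git add g.txt && git commit -qm "story"
git checkout -q main
git merge -q --no-ff -m "merge story" story
# rev-list prints the commit id followed by each of its parent ids
parents=$(( $(git rev-list --parents -n 1 HEAD | wc -w) - 1 ))
echo "the merge commit has $parents parents"
```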

Before I discovered Git I used another fine version control system called SVN (short for Subversion), starting about 10 years ago, and found it a joy to use except for one thing: on occasion I would need a long-term branch, which was painful to keep updated with trunk developments. To avoid a nasty surprise when the branch had to be merged back into the trunk I regularly merged trunk code into the branch (as in Diagram 4 above). However, to make sure that changes were not missed, or the same change merged more than once, I had to manually keep track of which versions from the trunk had been merged into the branch. This was tedious and error-prone - and something that Git does for you.

Agile Version Control

Agile version control using Git is simple: a developer branches the code to work on a User Story, and Git makes it easy to merge the branch back into the trunk. A simple example is shown in the following diagram, where all User Story branches are merged back into the trunk by the end of each sprint.

Diagram 6. Agile Version Control
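The workflow of Diagram 6 can be sketched in a few Git commands (a hedged sketch - the story names and files are hypothetical, and Git 2.28+ is assumed): each User Story gets a short-lived branch that is merged back into the trunk, then deleted, before the sprint ends.

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q -b main
git config user.email demo@example.com
git config user.name "Demo"
echo start > app.txt && git add app.txt && git commit -qm "sprint start"
for story in story-1 story-2; do
    git checkout -qb "$story"               # branch for the User Story
    echo "$story done" > "$story.txt" && git add "$story.txt"
    git commit -qm "implement $story"
    git checkout -q main
    git merge -q --no-ff -m "merge $story" "$story"
    git branch -qd "$story"                 # the branch's job is done
done
merges=$(git log --merges --oneline | wc -l)
echo "trunk now contains $merges merged stories"
```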

However, generally you need control of what features are delivered to "production". This is often accomplished by having dual streams - an on-going "development" stream (or branch) and a separate "delivery" stream (trunk) allowing control over when features are delivered.

Diagram 7. Dual Streams
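A sketch of the dual-stream arrangement in Git (the stream names here are illustrative - many teams call them `develop` and `master`):

```shell
#!/bin/sh
# Sketch of dual streams: an on-going "develop" branch for day-to-day work,
# with the trunk acting as the "delivery" stream, merged only at release time.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

cd "$(mktemp -d)"
git init -q
git commit -q --allow-empty -m "release 1.0"
delivery=$(git symbolic-ref --short HEAD)        # the "delivery" trunk
git checkout -q -b develop                       # the on-going development stream
git commit -q --allow-empty -m "feature A"
git commit -q --allow-empty -m "feature B"
git checkout -q "$delivery"
git merge -q --no-ff develop -m "deliver release 1.1"  # deliver when ready
git log -1 --format=%s
```

Unlike a feature branch, neither stream is ever discarded - the next sprint simply continues committing on `develop`.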

This is very different from traditional version control, where branches are eventually discarded (after possibly being merged back into the trunk) - instead you have two on-going streams. This approach is only possible with a version control system such as Git, where a version (ie, a node in the diagrams) can have two parents - in the diagrams this is any node with two outgoing arrows.

For a large project with multiple teams I have even seen the suggestion of multiple on-going "development" branches (eg: see Version Control for Multiple Agile Teams). I have not tried this, but I have reservations, because code merges between the teams would occur irregularly and might easily be forgotten (remember the rule: merge early and often). The two teams might make conflicting changes that are not discovered until the conflicting code is merged from the trunk into the other team's stream.

Diagram 8. Multiple Development Streams


Agile version control is very different to traditional version control. It is performed using many small feature branches which are continually merged back into the trunk (or main development stream). This is necessary for the practice of Continuous Integration (CI), which is a core part of the Agile approach.

CI is an example of JIT (and hence DIRE) allowing problems to be found as soon as possible. It also supports other Agile practices such as short sprints and evolving the software using small, simple, user-centric User Stories. Use of CI depends on a version control system that allows easy branching and merging.

Most Agile teams also have two ongoing code streams (see Diagram 7) - the development "branch(es)" and the delivery "trunk". Again, this relies on a version control system that supports easy merging.

As far as I know Git is the only version control system currently available where a version node in the repository can have two parents. In other words Git allows you to automatically and safely merge code from different sources.

Although Git is not without its problems, I think using it is essential for Agile development to work smoothly. I will discuss the day-to-day use of different version control systems (including Git), and those problems, next month.