Introduction
In brief, this post is about a design idea that is often misused - that of structuring a design into layers. Layers can sometimes be a good design but often are not. Even the most experienced software designers are prone to misusing layers - I give some examples below.
Anti-Patterns
I first encountered the idea of anti-patterns when I stumbled on the Wikipedia page on Anti-patterns.
Though this list is a muddle of various ideas, many of them show some insight and it is worth a read.
Over the years I have encountered numerous design problems that can be traced back to the misuse of layers. Generally, having an understanding of good design, and being aware of our natural tendency to use layers, can help you avoid this anti-pattern.
Design Fundamentals
In my first draft of this post I wrote a summary of how to divide a problem into sub-problems by way of introduction. This summary became too long so I put it in a separate post - see my previous post. If you don't want to read that, here is a summary of that summary in the form of a few guidelines:
- Each module should just do one thing (SRP)
- Duplicate code is indicative of poor partitioning (DRY)
- Partition the problem at places that create simple, well-understood interfaces
- Try to minimize the number of other modules a module uses (loose coupling)
- Remove unnecessary information and control from interfaces (information hiding)
- Separate the parts that are likely to change from the rest of the system (encapsulate what varies)
- Create flexible interfaces where change is likely and forward/backward compatibility is required
- Provide separate read and modify interfaces
- Avoid unreachable code
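Two of these guidelines - information hiding and separate read and modify interfaces - can be made concrete with a minimal C sketch. All names here are hypothetical; in a real project the two interfaces would live in separate headers and the struct definition would be hidden from read-only clients.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* In a real project this struct would be defined privately and exposed
 * only as an opaque type (information hiding). */
typedef struct {
    char **items;
    size_t count;
} List;

/* --- Read interface: all that display code ever needs to see. --- */
size_t list_count(const List *lst)               { return lst->count; }
const char *list_item(const List *lst, size_t i) { return lst->items[i]; }

/* --- Modify interface: handed only to code that must mutate. ---
 * (Error checking and cleanup omitted for brevity.) */
void list_append(List *lst, const char *item) {
    lst->items = realloc(lst->items, (lst->count + 1) * sizeof *lst->items);
    lst->items[lst->count] = malloc(strlen(item) + 1);
    strcpy(lst->items[lst->count++], item);
}

/* Display code compiles against the read interface alone. */
static void show(const List *lst) {
    for (size_t i = 0; i < list_count(lst); i++)
        puts(list_item(lst, i));
}

int main(void) {
    List lst = { NULL, 0 };
    list_append(&lst, "alpha.txt");
    list_append(&lst, "beta.txt");
    show(&lst);
    return 0;
}
```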
Layers, Hierarchies and the Brain
One more thing before we get to the point of this post: the human brain. It seems to have an affinity for giving related objects an ordering. (Come to think of it, this is not just a human trait, as even chickens have a pecking order.) An ordering of objects is easily visualized using layers.
Not everything can fit into a linear ordering so the slightly more flexible concept of a hierarchy is used for organizing a myriad of things, from the directory structure on your hard disk to the classification of all lifeforms. Of course layers and hierarchies are closely related. Layers can be thought of as a linear hierarchy. And hierarchies are often organized into layers for simplicity.
Although layers may assist our understanding, they do not always make sense in the problem domain. As an example, the taxonomic classification in biology mentioned above is based on 7 layers (kingdom, phylum, class, order, family, genus, species). Despite the enormous effort that has been put into this classification, it is recognized that its practical usefulness is limited at best. For example, many lifeforms have evolved so little that there is little point in having that many levels, whereas others have branched so often that seven levels are insufficient.
Even hierarchies are not always the best way to represent information. This is why hierarchical databases have largely fallen out of use, having been superseded by relational databases. Though relational databases are in some ways more difficult to understand, they have great benefits in their ability to split data into tables and relate them to each other. In fact they afford many of the advantages that we are seeking in the design of software, such as eliminating duplication (DRY), SRP, etc.
Use Of Layers
Danish Approach
I remember an old TV ad promoting some sort of Danish pastry with the catchphrase "layer upon layer upon layer" (said in a supposedly Danish accent). This Danish approach may be good for creating delicate pastry but delicate software is a bad idea. Of course, we typically don't stretch and fold the code to make layers so how do they form?
The most common way that layers form is through a tendency to move
subservient code into a lower layer. It is a great idea to divide code into different modules but doing it this way is often too simplistic.
As we saw above the proclivity to use layers is related to how we think and also to how we are taught to write code. To most programmers it is obvious that when a function gets too large, you try to extract a portion of that function and
push it down into a sub-function. Most of my university training was in Pascal and this practice was heavily promoted by most, if not all, of my tutors and lecturers. (It is especially easy to do this in Pascal as a nested subroutine can access the local variables of its parent).
This is not always the wrong approach but often there are better ways to sub-divide a problem than by simply pushing the details into lower and lower layers.
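As a contrived sketch of this habit (all names hypothetical): a display function grows too large, so pieces of it are pushed down into sub-functions, each becoming a layer below its caller.

```c
#include <stdio.h>

/* The extracted pieces: each is a layer below the function that calls
 * it, and each may later sprout sub-layers of its own. */
static void fetch_names(void)  { puts("fetching file names..."); }
static void sort_names(void)   { puts("sorting file names..."); }
static void render_names(void) { puts("rendering file names..."); }

/* Top layer: tidier than one 100-line function, but still a vertical
 * stack rather than independent modules with well-chosen interfaces. */
void display_files(void) {
    fetch_names();
    sort_names();
    render_names();
}

int main(void) { display_files(); return 0; }
```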
Wrappers
Another way that layers are used is by adding a "wrapper" around an existing module. Again this can be useful but is often misused. First we will look at when wrappers should be used, then consider how they are misused.
By far the best reason to add a wrapper is to simplify an interface. By hiding irrelevant parts of the interface you aid decoupling and assist understanding of how the module is used.
A wrapper may also be used to restrict use of a module in some way. For example, you might need a read-only interface to a container class (see the dual interfaces discussion in my previous post). This is really just a special case of the above (ie, creating a simplified interface), but here the purpose is not to assist understanding but to enforce security.
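As a minimal sketch (hypothetical names): a read-only view wrapping an existing container. The wrapper exposes only the query functions, and because it holds a const pointer, mutation through it will not even compile.

```c
#include <stdio.h>

/* The existing, full-featured module being wrapped.
 * (No bounds checking, for brevity.) */
typedef struct { int data[16]; int len; } Vec;
void vec_push(Vec *v, int x)      { v->data[v->len++] = x; }
int  vec_get(const Vec *v, int i) { return v->data[i]; }
int  vec_len(const Vec *v)        { return v->len; }

/* Read-only wrapper: exposes a subset of the wrapped interface. */
typedef struct { const Vec *v; } VecView;
VecView vec_view(const Vec *v)     { return (VecView){ v }; }
int vecview_get(VecView vw, int i) { return vec_get(vw.v, i); }
int vecview_len(VecView vw)        { return vec_len(vw.v); }

int main(void) {
    Vec v = { {0}, 0 };
    vec_push(&v, 42);
    VecView vw = vec_view(&v);
    printf("%d of %d\n", vecview_get(vw, 0), vecview_len(vw));
    return 0;
}
```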
The Adapter pattern is used when you need a different interface to a module and have no control over the module being wrapped. For example, an interface may be precisely specified so that different modules can be used interchangeably (see the Strategy pattern) - if a third-party module needs to conform to that interface then you need to create a wrapper to do so.
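For instance, qsort() specifies its comparison interface precisely. Here is a minimal sketch of an adapter that makes a routine with a different signature conform to it (the third-party function here is invented for illustration):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Imagine this ships in a third-party library we cannot change. */
static int thirdparty_compare(const char *a, const char *b) {
    return strcmp(a, b);
}

/* Adapter: conforms the third-party routine to qsort's interface. */
static int compare_adapter(const void *p1, const void *p2) {
    const char *a = *(const char *const *)p1;
    const char *b = *(const char *const *)p2;
    return thirdparty_compare(a, b);
}

int main(void) {
    const char *names[] = { "zeta.txt", "alpha.txt", "mu.txt" };
    qsort(names, 3, sizeof names[0], compare_adapter);
    for (int i = 0; i < 3; i++)
        puts(names[i]);
    return 0;
}
```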
I can think of two ways in which wrappers are misused. There are "do-nothing" wrappers that have no real purpose. There are also wrappers that add features that are not directly related to the module being wrapped and hence violate SRP.
With regard to "do-nothing" wrappers, I should first explain that wrappers that simply add a layer of indirection can still be useful (see Adapter pattern above). However, when the reason for using them is not valid then they just add complexity (and inefficiency).
As an example, I recently encountered a wrapper for LibXML which was justified on the grounds of decoupling and good programming practice. (LibXML is an open source library for working with XML files. It is highly recommended as it is efficient and comprehensive, and it even provides interfaces for simplified use.) However, the wrapper did not provide a simplified interface any better than the cut-down interfaces already shipped with LibXML. It was also not an appropriate use of the Adapter or Strategy pattern, since the library was efficient (and open source) and was never going to be replaced.
Further, the wrapper by its design duplicated some error handling, which affected the performance of the system. It also added data conversion facilities, which is a bad idea too (see below). Both of these are violations of the DRY principle.
Even worse are wrappers that add features not directly related to the module being wrapped. It is not uncommon for a wrapper to perform data conversion operations. Often a better design can be found by moving these operations into a separate module that is independent of the called module. There is also a tendency for code unrelated to the module being wrapped to creep in, which is definitely a bad idea.
So before creating a wrapper, ensure there is a point to doing so. If you just want to create a wrapper to provide extra features then try to find a design where the extra features are encapsulated in a different module instead.
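A minimal sketch of this advice, with hypothetical names: rather than a wrapper that both calls the XML library and converts the data it returns, the conversion lives in its own module and the caller composes the two.

```c
#include <stdio.h>
#include <stdlib.h>

/* Conversion module: knows nothing about the XML library. */
static long text_to_long(const char *text) {
    return strtol(text, NULL, 10);
}

/* Stand-in for a call into the XML library itself. */
static const char *get_node_text(void) {
    return "1234";
}

int main(void) {
    /* The caller composes two independent modules; no wrapper layer
     * sits between it and the library. */
    long value = text_to_long(get_node_text());
    printf("%ld\n", value);
    return 0;
}
```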
Inheritance
One special case I feel compelled to mention is the misuse of inheritance in object-oriented languages. To be honest, I have never liked inheritance since the late 1980s when I read the first edition of Stroustrup's "The C++ Programming Language" (and even before that, when I read about Smalltalk in an edition of Byte magazine in 1981).
Inheritance has very little applicability to real-world software. It does have its uses, as demonstrated by hundreds of elegant examples in various textbooks, but I find that it is rare to want to model an "is-a" relationship in real-world software.
Now I am not talking about inheritance when applied to derivation from an abstract base class (in C++) - this is essentially implementing an interface, and is useful for the polymorphism that is central to many design patterns. I am talking about inheriting from a working (or at least partially implemented) base class, which is essentially the same as adding another layer (the derived class) on top of an existing module (the base class).
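The distinction can even be sketched in C, where an "abstract base class" is just a table of function pointers that each concrete type fills in (hypothetical names; in C++ this would be a class with pure virtual functions):

```c
#include <stdio.h>

/* The "interface": no implementation, just a slot to be filled in. */
typedef struct {
    void (*draw)(void);   /* the "pure virtual" function */
} Shape;

static void draw_circle(void) { puts("circle"); }
static void draw_square(void) { puts("square"); }

int main(void) {
    Shape shapes[] = { { draw_circle }, { draw_square } };
    for (int i = 0; i < 2; i++)
        shapes[i].draw();   /* polymorphic call through the interface */
    return 0;
}
```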
Of course, misuse of inheritance has been discussed at length for more than 20 years (even Stroustrup, in the 2nd edition of "The C++ Programming Language" [1991], says that "inheritance can be misused and overused") so I won't go into that, except to say that it is probably the most egregious thing since goto. It has also affected the design of languages (for example, multiple inheritance is not permitted in Java and C#, and the final (Java) and sealed (C#) keywords were added).
However, even what is considered acceptable use of inheritance can often be avoided, with consequent improvement to the design. One example would be the use of the Decorator pattern.
Examples
There are many, many examples of large projects falling victim to the Layer Anti-Pattern. Many operating systems have a layered architecture that can create problems and inefficiencies. For example, Windows NT (upon which Windows 2000 and all later versions of Windows are built) originally had a strict layering, but this caused problems (in particular for graphics performance) and the layering was later subverted so that the operating system could adequately run games and other graphically intensive software.
Here are two well-documented examples, which you can also read about elsewhere. I also give a C code example in the next section.
Example 1: OSI
Soon after TCP/IP was invented, another networking protocol suite was developed in an attempt to do it better. This was called OSI (Open Systems Interconnection) and was made a standard by the ISO (International Organization for Standardization). Some computer system manufacturers spent many, many millions of dollars in the 1980s developing and marketing it.
The problems with OSI and the reasons for its failure have been discussed extensively. I believe the fundamental problem was its layered design. OSI used a 7-layer model (as opposed to TCP/IP's 3 or 4 layers), which made it complicated to understand and difficult to implement. It also made it inefficient - for example, each layer had its own error handling, which meant a great deal of overhead and redundancy.
OSI shows how even experienced designers are subject to the layer anti-pattern. For more on this see the section Layering Considered Harmful in RFC 3439 at http://www.ietf.org/rfc/rfc3439.txt.
Example 2: Linux SCSI Midlayer Mistake
A device driver is software that essentially becomes part of the operating system in order to interact with a piece of hardware. In the Linux kernel, SCSI devices are handled specially, as there are a large number of devices that support the SCSI standard. SCSI support is split into two layers: a high-level layer that handles the generic SCSI protocol, and low-level drivers for specific hardware. This is an example of good layering, since it nicely separates concerns.
However, with this design it was soon noticed that a lot of the low-level drivers were using the same or similar code. In an attempt to remove this redundancy (remember DRY), shared code was moved upwards to become a midlayer. After a while it was recognized that this was the wrong approach, and it became known as the midlayer mistake. A better approach was found that moved the shared code into a library that any of the low-level drivers could link to. For more on this see Linux kernel design patterns - part 3.
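A sketch of the difference, with hypothetical names (this is not actual kernel code): in a midlayer, the shared code sits above the drivers and calls down into them; as a library, the shared code sits beside the drivers, which call it only where they need it.

```c
#include <stdio.h>

/* Library approach: a plain helper function that any low-level driver
 * may call.  Nothing here calls down into the drivers. */
static void scsi_lib_setup_command(const char *dev) {
    printf("common command setup for %s\n", dev);
}

/* A low-level driver stays in control and calls the helper only where
 * it needs it.  A midlayer inverts this: the shared code sits above
 * the drivers and calls down into them. */
static void acme_driver_queue_command(void) {
    scsi_lib_setup_command("acme0");
    puts("acme-specific queueing");
}

int main(void) {
    acme_driver_queue_command();
    return 0;
}
```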
Avoiding Layers
So how do you avoid this anti-pattern? First understand and be aware of it, then consider design alternatives. Knowledge of, and experience with, different design patterns is useful.
Waterfall Methodology and Layers
The waterfall development methodology, though not an approach to software design, can also be seen as an example of the layers anti-pattern. The phases in the waterfall model are really layers: the design layer is built on the analysis layer; the coding layer builds on the design layer; etc.
Agile methodologies break this layering effect. A great benefit is that they avoid the problems caused by dividing a project into rigid, sequential phases.
One thing to remember is that a good design often consists of many small interconnected modules. The layered approach starts with small low-level modules but tends to build larger and larger modules on top of them.
Code Example
To clarify all this I will give a real-world example. I was going to add a C code example here, but there is a good example in my recent July post on Dependency Injection. I include the diagrams again here, with a brief explanation. For the example code, and more details, see the Dependency Injection post.
The design was taken from a real application that used the standard C library function qsort() to sort an array of file names for display to the user.
Diagram 1.
The problem with this design was that the main display function (which obtained, sorted and displayed a list of files) was too large at almost 100 lines of code. It seemed that the sorting of the strings could be split out into a separate module, so I tried to push the sorting code down into a lower-level SORT module (as I had been trained to do).
Diagram 2. With Layer.
Is this an improvement? If you think so then you need to re-read this post from the start! This is a classic example of the layer anti-pattern. The new layer (SORT module) really adds nothing to the design whatsoever and has several problems.
For example, it does not make sense for the SORT module to be dependent on the OPTIONS module. Thinking more about the problem, I realized that the SORT module, even though it depends on the code that compares the strings, does not need to know how the comparison is actually performed.
A much better design was found where the OPTIONS module returns a pointer to the current comparison function. The function pointer is then used by the main DISPLAY module to tell qsort() how to sort the strings.
Diagram 3. Without Layer.
Note that this solution avoids the unnecessary layer of code that wraps the qsort() function. There are many advantages to this design - for example, if a new sort option is required then only the OPTIONS module needs to change, whereas the design using layers would require at least two modules to change. A good rule of thumb: the fewer modules that need to be modified for a simple change, the better the design.
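Here is a minimal sketch of that design, with hypothetical names (the real code is in the Dependency Injection post): the OPTIONS module owns the comparison functions and hands DISPLAY a pointer to the current one, which goes straight to qsort().

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef int (*CompareFn)(const void *, const void *);

/* --- OPTIONS module: owns the sort options and the comparisons. --- */
static int cmp_name(const void *a, const void *b) {
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}
static int cmp_name_desc(const void *a, const void *b) {
    return -cmp_name(a, b);
}

/* Adding a new sort option means changing only this module. */
CompareFn options_get_comparison(int descending) {
    return descending ? cmp_name_desc : cmp_name;
}

/* --- DISPLAY module: passes the comparison straight to qsort(). --- */
void display_files(const char **names, size_t n, int descending) {
    qsort(names, n, sizeof names[0], options_get_comparison(descending));
    for (size_t i = 0; i < n; i++)
        puts(names[i]);
}

int main(void) {
    const char *names[] = { "b.txt", "c.txt", "a.txt" };
    display_files(names, 3, 0);
    return 0;
}
```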
Conclusion
When I first started programming I remember that there was a great debate about which approach to design was better: top-down or bottom-up. Of course, the real problem with this debate is that it emphasized a layered approach. Do you create the top layer first and use stubs for the lower layers, or do you build up from the bottom layers? Unfortunately, the underlying assumption that you have to use layers at all is flawed.
I hope this post has convinced you that using layers is not always, and actually not often, the best approach.