The Core Four

If you are having trouble with your agile software development process, here’s something to consider. I would like to introduce The Core Four: the four most basic practices of an agile software development team. These practices have proven to work well together as the foundation of a solid agile software engineering practice, and it has also become evident that it is difficult to sustain an agile practice without them. Having these four practices in place gives you a return on your investment in agile software development and a solid base for the other valuable agile practices.

I think I first saw these practices together in a graphical representation of Extreme Programming by Ron Jeffries in his article “What is Extreme Programming?” I didn’t realize at the time how important that diagram would become to me. Extreme Programming recommends many more practices than four, and they are all great, but I could sense from my own experience that I didn’t have to start with all of them in order to get some of the benefits. It was also pretty obvious that I couldn’t be successful if I left some of them behind. It wasn’t until Uncle Bob Martin explained, in his video “Agile Programming, Clean Code, Episode 49”, that I understood Jeffries had documented an inner circle of practices in his illustration, which Martin called “The Agile Circle of Life”. What we, as agile practitioners, are discovering is that without these central practices it’s very difficult, if not impossible, to sustainably support the others, yet these are the very practices that are being overlooked by many teams that call themselves “agile.”

What are the Core Four? They are:

  • Test Driven Development
  • Refactoring
  • Simple Design
  • Pair Programming

Let’s take a quick tour of the benefits of these practices.

Test Driven Development

There has been a great deal of debate about Test Driven Development over the years. Some very smart people are actually against the practice. Even so, it is still the basis of good software engineering for many experienced developers like me. I have not been able to find an adequate replacement for it. It solves so many of software development’s most challenging problems.

One of software’s biggest problems is ambiguous requirements. Test Driven Development helps to remove this ambiguity early by forcing the requirements to be testable. Is your team having trouble pinning down the requirements? Is your team building the wrong thing? TDD could create a huge improvement in this area.
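
Here’s a minimal sketch of what I mean in Python, using a made-up discount rule as the requirement (the threshold, rate, and function names are invented for illustration). Writing the test first forces the exact numbers to be agreed on before any implementation exists:

    import unittest

    # Hypothetical requirement: orders of $100 or more get a 10% discount.
    # The tests are written first, which forces the exact threshold and
    # rate to be pinned down before any production code exists.
    class DiscountRequirementTest(unittest.TestCase):
        def test_orders_of_100_or_more_get_ten_percent_off(self):
            self.assertEqual(discounted_total(100.00), 90.00)

        def test_orders_under_100_pay_full_price(self):
            self.assertEqual(discounted_total(99.99), 99.99)

    # Written second: just enough code to make the tests pass.
    def discounted_total(subtotal):
        if subtotal >= 100:
            return round(subtotal * 0.90, 2)
        return subtotal

    if __name__ == "__main__":
        unittest.main()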

TDD is also a way of permanently documenting your software requirements. Unlike traditional written requirements documents, tests are executable, so they have to stay accurate in order to keep passing. Are your requirements documents useful? Do they describe the system that is actually in place? TDD makes a big difference here.

Test Driven Development also stops your software from leaking requirements. That’s when an “upgrade” removes or corrupts previously enjoyed features. How many times have you upgraded an app only to find that you lost a feature you really liked? It’s happened to me a lot, and it really makes you feel like the developers don’t care. It also makes it seem like someone lied to you when they said they had an “upgrade” available. We don’t want to do that to our users, and a complete, passing set of tests helps stop feature leaks between releases.

If you want to learn more about TDD from the one who brought the practice to light, check out my affiliate link to: “Test-Driven Development: By Example” by Kent Beck.

Refactoring

I think of refactoring as “agile architecture.” That’s because I believe the most productive architect is the one who designs architecture for a product that will actually be used. When you refactor, you are developing architecture based on code that is already in use.

In my own practice, I discovered that about 90% of refactoring is merely an attempt to remove code duplication. It was one of the biggest surprises of my career. By merely trying to remove duplication, I forced myself into using well-known and well-loved patterns. The main difference between just employing patterns and refactoring to them was that these patterns were actually proven to be necessary. Back in the 1990s, during the days of pattern mania, we tended to use patterns whether we needed them or not. The move to agile development, primarily by way of Extreme Programming, led to a more practical use of patterns, and refactoring was the underlying hero.
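
As a small, hypothetical illustration in Python (the shipping rule and every name here are invented), this is what that kind of duplication removal looks like; repeated extractions like this are what quietly nudge a codebase toward the well-known patterns, proven necessary by the call sites that already exist:

    # Before: the same shipping rule is duplicated in two functions.
    def print_invoice(order):
        shipping = 0 if order["subtotal"] > 50 else 4.99
        print(f"Invoice total: {order['subtotal'] + shipping:.2f}")

    def print_quote(order):
        shipping = 0 if order["subtotal"] > 50 else 4.99
        print(f"Quoted total: {order['subtotal'] + shipping:.2f}")

    # After: removing the duplication forces the rule into one named place.
    def shipping_cost(subtotal):
        return 0 if subtotal > 50 else 4.99

    def print_invoice_refactored(order):
        print(f"Invoice total: {order['subtotal'] + shipping_cost(order['subtotal']):.2f}")

    def print_quote_refactored(order):
        print(f"Quoted total: {order['subtotal'] + shipping_cost(order['subtotal']):.2f}")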

Refactoring can also be considered the software fountain of youth. Teams that are vigilant about the practice have code that looks brand new. It appears as if the code were typed out in a few minutes yesterday, even if it is many years old! This is something you almost need to experience to believe. A big contributor to this, other than the removal of duplication, is the basic effort of renaming things. As understanding of the system matures, old names are no longer relevant and can be replaced with more obviously fitting ones. Another big benefit is that refactoring makes the next core practice possible.
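
As a tiny, hypothetical example of the renaming effect (both names are invented), compare a function written under an early, vague understanding of the domain with the same function after the team learned the real vocabulary:

    # Before: names from an early, vague understanding of the domain.
    def proc(d):
        return sum(i["amt"] for i in d)

    # After: the same behavior, renamed once the domain language was clear.
    def monthly_invoice_total(line_items):
        return sum(item["amt"] for item in line_items)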

If you want to learn more about refactoring, make sure you check out the definitive work by Martin Fowler. Here’s my affiliate link to it: “Refactoring” by Martin Fowler.

Simple Design

A simple design is one that developers who read the code think is obvious. Simple designs like these are not easy to achieve, though. Rarely are our first designs adequate because we have so much to learn about what the system will have to do. A fundamental element of agile software design is to make the system as simple as possible and that usually means that we don’t do things like add code that we think we might need in the future. Because we refactor the system frequently, we don’t need to do that. We can just add the code we need when we need it and not sooner.
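
Here’s a hypothetical sketch of the difference in Python (the report exporter and its parameters are invented): the speculative version carries options nobody has asked for yet, while the simple version does only what today’s requirement needs and trusts refactoring to make tomorrow’s change cheap:

    # Speculative version: hooks and formats we think we "might need someday."
    # Every unused parameter is code that must be read, tested, and maintained.
    def export_report(rows, fmt="csv", delimiter=",", encoding="utf-8",
                      compress=False, cloud_upload=None):
        ...

    # Simple version: only what the current requirement calls for.
    def export_report_as_csv(rows):
        return "\n".join(",".join(str(field) for field in row) for row in rows)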

The opposite is also true. When code is no longer needed, we should just remove it. We shouldn’t leave code hanging around just because we might need to add it back in again. We should rely on our version control systems to retrieve old code if we ever need it, and keep the code itself simple.

Usually, removing duplication makes code simpler, but it’s also good to remember that there is a time to duplicate. You can imagine how ridiculous it would be to try to limit the letter “s” to as few places in a sentence as possible. You could try to write your sentences using a crossword style, but that would be harder to read, not easier. When it comes to communication, duplication can actually be a helpful thing. If duplication makes the system easier to read and understand, then it overrides the “don’t repeat yourself” rule.
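
A common place this shows up is in tests. In this hypothetical Python example (the function and test names are invented), the “DRY” version compresses three checks into one clever loop, while the “duplicated” version repeats itself and is easier to read and easier to diagnose when it fails:

    import unittest

    def is_weekend(day_name):
        return day_name in ("Saturday", "Sunday")

    class WeekendTest(unittest.TestCase):
        # "DRY" version: one clever loop. A failure points at the loop, and
        # the reader has to unroll it mentally to see what is being claimed.
        def test_all_days_compressed(self):
            for day, expected in [("Saturday", True), ("Sunday", True), ("Monday", False)]:
                self.assertEqual(is_weekend(day), expected)

        # Duplicated version: three near-identical tests, each readable on
        # its own and each failing with a name that says exactly what broke.
        def test_saturday_is_a_weekend(self):
            self.assertTrue(is_weekend("Saturday"))

        def test_sunday_is_a_weekend(self):
            self.assertTrue(is_weekend("Sunday"))

        def test_monday_is_not_a_weekend(self):
            self.assertFalse(is_weekend("Monday"))

    if __name__ == "__main__":
        unittest.main()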

Simple design is pretty subjective. What seems simple to me may seem pretty difficult to you and vice versa. I may also miss an opportunity to refactor. In fact, it’s pretty easy to make an important and costly mistake anywhere in the agile process. That’s one thing that makes the next core practice so important.

Pair Programming

This may be one of the most difficult agile software development practices to start, but without it, it’s difficult to consider yourself a truly agile development team. How do you know you really are an agile development team if you don’t know whether your team members are using agile practices? Worse than that, how can you really be a team if you never work together on things? How can you teach a new member of the team what they need to know to be a good contributor to the project?

I think the main reason Extreme Programming became so popular so quickly was that pair programming was a fundamental practice of the method. It tends to spread and balance out the knowledge on the team. There will still be experts, but there will also be people learning from those experts.

An important part of having an agile process is the ability to be predictable. The stakeholders begin to rely on the team’s velocity. It’s pretty obvious that this consistency depends on the developers, but what if the developer of a key part of the code is sick or on vacation? An agile team should be able to continue at the same velocity even when a developer isn’t available for part of the iteration or sprint. When developers work together, expertise is spread around. Even if the best person for the job isn’t available, someone else knows enough to get through it. Pair programming makes the team stronger by reducing the single points of failure caused by programmers working alone.

Pair programming ensures that all code is reviewed. Studies have shown that peer review reduces defects, and other studies have shown that defects are a major cause of late projects. Even though it seems like two programmers working on the same thing will go slower, that usually isn’t the case: when errors are reduced, software gets done faster. Even non-agile teams have learned to do peer reviews. Pair programming makes the practice efficient by keeping the team involved in the code on a day-to-day basis. Errors are found quickly and are less likely to slip through to be found later, when they are more expensive to fix.

If you want to do a deep dive into this important practice, here’s my affiliate link to a book I highly recommend: “Pair Programming Illuminated” by Laurie Williams and Robert Kessler.

Returning to the Core

It’s one thing to write and talk about these practices, and quite another to actually do them. That’s my challenge to you as a professing agile practitioner. These practices are far from new. They’ve been proven over and over in my experience and in the experience of many other developers over the last 20 years or so. Don’t be satisfied with an inadequate trial of agile software development. Make sure that you give The Core Four a try before assuming that you have failed. I think you will be very happy you did.

Why Should We Do Test-Driven Development? Part Four

When a team practices the discipline of test-driven development, it gains a new power. When all of the code’s intentions are preserved by the tests, anyone can change any of the code at any time without breaking it. As far as I know, there is no other way to get this. What this means to a developer is that it is possible to revise existing code whenever it needs to be revised.

The practice of revising existing code without changing its intended behavior is called “refactoring.” Over the last 15 years or so, I have witnessed the power of refactoring. One of the most obvious things refactoring does is help us remove duplicated code from the system. This not only makes future change easier, it reduces the lines of code that must be maintained. By way of refactoring, TDD makes the code itself more agile. Refactoring has become such a major tool in my practice that I now wonder whether I have written more lines of code than I have deleted in the last five years. When I come to a system that is completely covered by tests, I am able to remove duplicated code, which lets me remove duplicated tests as well. If a system hasn’t been refactored well, I may end up removing more lines of code because of duplication than I add when I build a new feature.

Refactoring improves the internal design by allowing us to do simple things like renaming variables and methods. I do realize that some of this can be done with modern refactoring tools without TDD, but there are many cases involving dynamic types, strings, and reflection that a refactoring tool cannot resolve. You can break a system with a rename when you don’t have tests.
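
Here’s a small, hypothetical Python example of the kind of rename a tool can’t protect (the class and method names are invented): the method is looked up by a string at runtime, so only a test notices when the name and the string drift apart:

    import unittest

    class ReportJob:
        def generate_summary(self):
            return "summary"

    def run(job, action_name):
        # The method is looked up by a string at runtime, so an automated
        # rename of generate_summary() would not update this string or any
        # config file that carries the name. Only a test catches the break.
        return getattr(job, action_name)()

    class RenameSafetyTest(unittest.TestCase):
        def test_summary_action_is_still_reachable_by_name(self):
            self.assertEqual(run(ReportJob(), "generate_summary"), "summary")

    if __name__ == "__main__":
        unittest.main()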

When I first switched to TDD, this power was a game changer. For the first part of my career, I had to be very, very careful not to change things. I looked for ways to avoid touching code in order to finish a new feature faster. When a design is good, that can still be a reasonable thing to do, but the fear of change usually leads to system deterioration. Doesn’t it make sense that we learn more as we spend more time in a problem domain? And don’t you think that knowing more about a problem domain helps us design code that matches that domain better? Then it only makes sense that we will have better ideas for the code’s design after some of the code has already been developed. The more experience we gain, the more we see how the design could be improved. TDD gives you the power to make those improvements.

As I mentioned in the previous article in this series, TDD also helps us design better interfaces because it usually forces us to build something we can imagine using. This helps us make small, single-responsibility classes. That’s the “S” in the SOLID principles. In fact, TDD tends to naturally lead us toward most of those principles. We tend to make small classes, so they tend to be closely related to a single feature, and that makes them more closed to change. That’s the Open-Closed Principle. We also tend to inject collaborating objects because classes are much easier to test that way, which reverses the dependencies naturally. That’s the Dependency Inversion Principle. When we near a boundary, the code becomes much harder to test, so we tend to introduce interfaces that represent those hard-to-test boundaries. That’s the Interface Segregation Principle. Interesting, isn’t it?
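
As a hypothetical sketch in Python (the checkout and gateway classes are invented for illustration), here’s how the pressure to keep a test fast and painless leads naturally to injecting a collaborator, which is the dependency inversion described above:

    import unittest

    class FakeGateway:
        """Stand-in for a real payment gateway; records charges instead of making them."""
        def __init__(self):
            self.charged = []

        def charge(self, amount):
            self.charged.append(amount)
            return True

    class Checkout:
        def __init__(self, gateway):
            # The collaborator is injected instead of constructed in here,
            # because the test made constructing a real gateway too painful.
            self.gateway = gateway

        def pay(self, amount):
            return self.gateway.charge(amount)

    class CheckoutTest(unittest.TestCase):
        def test_checkout_charges_the_gateway(self):
            gateway = FakeGateway()
            self.assertTrue(Checkout(gateway).pay(25))
            self.assertEqual(gateway.charged, [25])

    if __name__ == "__main__":
        unittest.main()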

Have you ever considered how you might make a spider web? I’m going to guess that it wouldn’t be trivial, especially if I gave you the constraints that actual spiders have, such as where certain objects are placed, how much wind there is, and so on. Now what if I were able to reduce your brain to the size of a spider’s and then ask you to make those same webs?

The reason I ask such a strange question is that it seems possible that some things we do can be done with less mental energy if we use simple, repeatable disciplines. TDD appears to be a discipline like that. When we practice it, it helps us naturally produce beautiful code in a variety of unusual circumstances, much like a spider is able to make a beautiful and functional web in many different places and conditions.