One of the significant changes in building a system in an agile way is the role of architecture and planning. Agile methodologies are often described in contrast to a “waterfall” methodology, where one of the criticisms is “big design up front”. The agile manifesto calls for responding to change over following a plan, and by the time that goes through a few cycles of interpretation it can become “don’t design up front”, or even “don’t design”. The problem here is that architecture will always emerge in a system, whether planned or otherwise. Spending a lot of time forming a master plan up front, and rigidly sticking to it regardless of feedback from contact with reality, can lead a project to ruin. Failing to plan at all, and leaving the outcome purely to chance, can be just as harmful. The principle in the agile manifesto is to respond to change over following a plan, and I’ve had good results from applying this principle to solution architecture.

At the start of any project, everyone on the team will know less than at any other point from then on. As the project progresses, the team should be learning more about both the problem and potential solutions, and having mechanisms to collect and share these learnings is vital – retrospectives are one way of doing this. Whereas a more traditional approach might try to form a full system specification, detailing every requirement, before starting development, an agile backlog should be constantly changing in response to the product’s (and hopefully customers’!) needs. System architecture is normally thought of as being hard to change, so a balance is needed here if we are to benefit from the learnings the team accumulates. For me, a key requirement of system architecture is that it must encourage malleability – it must help the system’s evolution rather than hinder it.

A really powerful approach to architecture in this environment is to strive to make every decision reversible if needed. The first step towards this is to identify when a decision is being made. The second is to encapsulate the decision and its artefacts, minimizing the contact between the decision and the rest of the solution. For example, if a decision is made to use a particular database type, as much of the system as possible should be completely ignorant of that decision. This approach builds on the principle of separation of concerns, and leads to outcomes like the “hexagonal” or “onion” architecture, or the “ports and adapters” pattern. In this model, the logical components of the system, or core domain, are isolated – normally drawn in the middle – with the infrastructure components at the edges. Getting the dependencies right here is really important; the infrastructure should provide the capabilities requested by the core domain – as opposed to the core domain consuming the capabilities exposed by infrastructure. The distinction is subtle, and often misunderstood – this is the “dependency inversion principle” in action. The dependency is said to be inverted in that, whereas the default approach is to bring dependencies into a system, the specification of dependencies is now contained within the system. I’ll write about this as a separate topic in its own right, to give some more concrete examples of what this looks like in practice.
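To make the inversion concrete, here is a minimal sketch of the ports-and-adapters idea in Python. All of the names (`OrderRepository`, `PlaceOrder`, `InMemoryOrderRepository`) are illustrative inventions for this sketch, not anything from a real system: the core domain declares the capability it needs as an abstract “port”, and the infrastructure supplies an “adapter” that satisfies it.

```python
from abc import ABC, abstractmethod

# Port: declared by the core domain, stating the capability it requires.
# The core never imports a database driver; it only knows this interface.
class OrderRepository(ABC):
    @abstractmethod
    def save(self, order_id: str, total: float) -> None: ...

# Core domain logic: depends only on the port it defined above.
class PlaceOrder:
    def __init__(self, repository: OrderRepository):
        self.repository = repository

    def execute(self, order_id: str, total: float) -> None:
        if total <= 0:
            raise ValueError("order total must be positive")
        self.repository.save(order_id, total)

# Adapter: lives at the edge, implementing the port. Swapping this for a
# real database adapter touches nothing inside the core domain.
class InMemoryOrderRepository(OrderRepository):
    def __init__(self):
        self.orders: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self.orders[order_id] = total

repo = InMemoryOrderRepository()
PlaceOrder(repo).execute("order-1", 42.0)
print(repo.orders)
```

The point of the sketch is the direction of the arrows: `InMemoryOrderRepository` depends on the core’s `OrderRepository` interface, not the other way round, so the database decision stays encapsulated and reversible.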

At the start of a product, we will typically have a high-level vision of what features the product will eventually offer. I like to start out by taking one of these features – one that is perceived as being hard or high risk – and spiking solutions to it. I work on the basis that if I design things in a way that makes the hard things possible, the easy things can be fitted in later. If we start with things that are easy, there is a risk we will over-optimize for these in a way that makes the system rigid and brittle, and by the time we get to the more difficult things we will have fewer options available to us. I find that things like experimentation sprints can provide a very large return on investment, by removing risk from the rest of the project. If something is not going to work, I want to know about it as soon as possible, and to experiment early on until viable approaches are identified. In the extreme case, we might find that the project is fundamentally not viable because of technical limitations, and if we discover this at the start we can minimize the investment in it and get the best outcome by directing resources elsewhere. The counter-proposal here – often seen where the delivery team are an external vendor – is a desire to be seen to deliver value straight away, focusing on low-hanging fruit to impress a customer quickly. There may be situations where this has commercial appeal, but a balance should be struck – perhaps there can be a couple of tangible deliverables to show an immediate impact, with the bulk of effort going into de-risking the main challenges.

I think it’s important early in a project to make as few decisions as possible, and to constantly re-evaluate them, at least until there is sufficient evidence that the initial experiment is working. I deliberately use the word “experiment” over “plan”, because at this stage it is just that – we have come up with an approach based on a hypothesis rather than actual results. The hypothesis should be designed to highlight problems if there are any, and we should not be attached to it. Hypotheses that prove to be invalid save us from investing in the wrong thing, which can be immensely valuable. At the start of a project I might encourage the team to huddle after every single feature, and talk openly about what worked well and what didn’t. Mob programming at this early stage is also a very powerful tool – we get everyone’s input continuously, and a continuous opportunity to adapt. The aim early on is to iterate as quickly and effectively as possible on the process and architecture, guided by features – when this stabilizes, the focus transitions to delivering features as quickly and effectively as possible, guided by the process and architecture.

Architecture of an agile project (of any project, for that matter) should respect the features required, rather than be its own separate set of goals. From talking to product owners or business analysts about the medium to long term product goals, we should see where the emphasis will be. There is no need to plan in detail for these goals, but it’s helpful to understand at a high level what the future will look like. Based on this, we can design an architecture that supports delivery of these features. If there will be lots of very similar functionality, we can factor this into the design, to make delivery of these features slick and efficient. One way that I often see teams struggle is where an architectural decision has been made very early on to divide a system into subsystems, perhaps labelled as microservices, but this decision has been based on a whim or an untested hypothesis. By splitting a system in this way, and prematurely deciding where boundaries “should” be, significant friction is added to crossing those boundaries. If the boundaries then turn out to be incorrect, they can heavily impair the team’s ability to work with the system, and be very expensive to adjust later. A preferable approach here is to build in a way that supports the product’s goals, and introduce this kind of architecture only in response to emerging problems as the system grows. Martin Fowler gives this approach a name – “sacrificial architecture”. The aim is to build something cheaply, perhaps even disposably, to gather the information about what should have been built, and then switch only if and when that information supports it.

One reason architecture in agile teams can be challenging, especially in large organizations with a more traditional setup, is that the traditional architect role can be viewed as higher-ranked within an organization than the rest of the team, and as a result there can be a political element to discussions. This is especially dangerous if the architect is not involved in the day-to-day operations of the team, and doesn’t eat her own dog food by working on the system within the constraints of its architecture. My view is that while one person may end up leading the technical team, with the ultimate casting vote on decisions and responsibility for their outcome, the role of architecture in an agile team is very much shared. The team benefits if ideas from every member are combined, rather than coming from a single source. If the organization’s structure is such that there is an architect role that outranks the rest of the team, the individuals in this position would be wise to show the team through their actions that they want their ideas challenged as an equal peer, and that deference is harmful to everyone in the long run. In a modern project ecosystem it is essential that design is done with a working knowledge of the system, as a collaborative exercise, rather than by an “ivory tower” architect or CTO.

If you’re at the start of a project, do yourself a favor and resist the temptation to come up with a “master plan” that you’ll stick to regardless – but make sure you at least have a plan. Find opportunities to expose this plan to the harshest tests rather than trying to protect it; proactively drive its evolution early on, with no attachment to hypotheses that aren’t supported by evidence. In the middle or later stages of a project, if the early architecture isn’t working for you, it’s never too late to improve and adapt – if you’re hampered by poor architecture, dealing with this as a priority can have a multiplying effect on your productivity and morale. Remember also that the best designs come from collaboration: taking the best parts of everyone’s ideas and combining them into more than the sum of their parts.
