One of the areas we’ve really been working on with one of my teams is their approach to planning, and to choosing what to work on next. The organization have – like many – a background in running projects in a way it would not be unfair to describe as “waterfall”: a heavy emphasis on defining the full scope and a detailed plan up front, then tracking progress against that plan. This is a very common approach, and this team, like many, had found that their initial adoption of Scrum looked very similar, just with the plan defined in terms of “Sprints”. There was still a map of what the team expected to be working on and when, stretching months into the future, up to a release date.

In this particular context, the organization’s core business is building machines – and so the release date is quite literally a “shipping date”: the first prototype machine will be put into crates and flown to a friendly customer’s site for a field validation exercise. This is a complex effort, with multiple teams working on the physical machine structure, the electronics, and multiple different software setups – which at the moment are run as essentially independent projects with a programme manager across them (more on that soon). The team have a fixed date and a fixed number of people with fairly fixed availability, so there are only two variables left to play with: scope and quality. Historically, scope had also been constrained, with results nobody wanted to see repeated – and that gave us a clear remit to try something different.

In the original plan, a lot of time and energy had gone into working out what order to build the parts of the system in – with the assumption that following this plan would give a “complete” result, and that it would then be a case of allocating time to each step in the plan and somehow turning that plan into reality. There was a complete set of detailed wireframes, a complete set of user stories forming a backlog, a tightly-defined scope for the first “MVP”, and a date set in the calendar. This gave the team a clear picture of a big challenge ahead of them, and that was a pretty daunting thing! The implicit message to the team was that they had only one chance to work on each area of the system, and so they needed to get it 100% right first time. Burned by the outcomes of previous projects, and faced with this challenge, the results weren’t so great.

Some key fallacies of the linear plan and “complete backlog” include the assumptions that all parts of the plan are needed, that all are equally needed, that all dependencies can be anticipated, that dependencies are independent of design and can’t be changed, and that delivery is all-or-nothing – you either “finish” and ship everything, or you fail.

Just make it better

The first change we put in place was intended to really emphasize experimentation, and to get people thinking across the breadth of the system rather than focusing on one small area and optimizing it. We suspended sprints, and put the backlog to one side. The team instead simply chose an area of the system “to improve” – whatever that meant – in a 3-day timebox. If it didn’t exist at all, just get it started. If it existed and had known issues, fix them – and if you can’t do that, then do what you can, which might be as simple as working out what to do next so that whoever follows has a clearer picture. From this it quickly became clear that there were key areas of the system that would need much more effort than others – and the priorities started to re-form around this.

The shift from “making complete things” to “improving things” came much more easily to some in the group than to others. One of the key things that helped was to set a series of small and clearly defined goals, and to tweak the interpretation from “work on this for a while” to “tick off as many of these as you can”. At the end of the timebox, however, the product had to be in decent shape – everything had to be merged back in, with the system better than it was before. How much better doesn’t matter; all that matters is forward progress at whatever rate is feasible given the real-world limitations and discoveries the team encounter.

What to improve next?

As part of finishing the timebox on each area, the team make notes for next time – what areas are in good shape, what must be improved, and what could be improved if there’s time. What emerges is a picture of what’s needed next in each area, from a range of perspectives. The product-focussed members of the team – business analysts, user experience, product owner and so on – have one perspective, and the more technically focussed members of the team have another. At the start of the exercise, the way people were used to working was very much that the product-focussed group “owned” prioritization: this group would collect inputs together and decide what would be worked on next. It was time to change that.

Now, the way we choose what to improve next is that the whole team get together in a room. We describe what we improved last time, we look at the system as a whole, together, and we talk about what is the next most important thing to improve. Everyone has an equal voice in this: we can all suggest anything, and we can all challenge anything. We have a single-sheet summary of our high-level goals on the wall, and we tie everything back to it. We normally write up more things than we have people to work on, so the next thing we do is collaboratively decide who works on what. This might be the people with the most knowledge of an area, or those with the least, so that they can learn it. We might find areas we’d like to work on but can’t, because there’s a dependency that’s not yet met – and so we follow up on that. When we’re done, everyone has a clear picture of where the system is at, what we’ve chosen to improve next, and – crucially – why.

We’re probably not going to get “everything” “completely finished” by the time the machine ships. But what we are doing is constantly making sure our efforts are focussed in the right areas, being 100% transparent about the decisions and trade-offs we’re making, and giving everyone an equal chance to share their input when they think a change of direction is needed. And the system keeps steadily improving!