What’s Wrong with Waterfall?

I used to assume this was obvious, but it’s not.

 

Software products are nearly always developed with imperfect knowledge in a changing world. Waterfall development assumes:

  • You can write a perfect specification,

  • Hand it off to someone else who can accurately estimate how much work it will take to execute that specification perfectly,

  • Hand it off to people who will break it into subtasks, write test cases, and make a project plan - all with perfect fidelity,

  • Hand that off to someone else who will code and test the specification perfectly, on time and on budget,

  • …and that nothing will change (or be learned) while you do it.

That works adequately for machine screws. When it comes to new product development, especially software, all of those assumptions are incorrect for non-trivial products.

Let's take a deeper look…

 

Perfect Specifications

A specification is a document that reflects the best understanding of its authors at a moment in time. There is no way to test it other than to implement it and see what the market makes of it. To be perfect, it needs to be complete, clear, easy to understand, valid, and stay that way over the course of the implementation. That is a very tall order, as several of those criteria conflict with one another. Waterfall requirements documents tend to ask for too much, resulting in long implementation cycles where the business is essentially running on hope that the spec will still be right at the end.

The biggest problem with waterfall is that it takes too long to get market validation. 

What if we were to shorten the cycle time? Write a smaller spec, write a little code to implement it, test it internally to make sure it does what we wanted, then demo or deploy it to customers and see how it lands - learn, and repeat, improving our fit each time. To maximize the rate of learning, check in with the market as frequently as possible. The quicker your cycle time, the less time and money you waste doing the wrong thing. Rely on working code and market feedback, rather than specs, as the source of knowledge.

The Perfect Estimate

Research suggests that experienced software people are good at judging the relative sizes of work items if they understand what is being asked of them. That is, they can rank-order them from easiest to hardest. Research also suggests that the most accurate estimates happen when estimators can compare the new thing to the effort it took to do something like it in the past. If they don’t have that point of comparison, neither research nor experience gives much reason to expect good estimates. What is a “good” estimate in software?

The best teams I’ve seen - those with a mature agile practice, working on a product they know well and adding incremental capabilities - can get to within 10% of the actual effort needed. More typical teams miss by around 30%, and misestimates grow very rapidly (they are always optimistic) as the time involved grows; a six-month estimate that turns out to be 30% optimistic means roughly eight months of actual work. This is again because of imperfect knowledge in a changing world. It is achievable to be OK at estimating things you’ve done before or things that don’t take too long. As complexity grows, your accuracy degrades rapidly, and experience in other fields bears this out.

It’s safer to work in relatively short timeboxes with a focus on delivering an integrated and demonstrable result at the end. Each timebox is a chance to learn and re-aim a small work increment. Attempting to know in advance what it will cost or how long it will take to complete a complex solution is folly, but you can set a pace, and understand what level of ongoing investment it will take to get to the next horizon. That is usually good enough for leadership to be able to decide whether to continue. 

The Perfect Plan

I hope that most readers will agree that there is no such thing. You begin with imperfect knowledge, and things change along the way, both outside and within your organization. It’s common for some of your initial assumptions to be proven wrong, for customers to change their minds (possibly frequently), for leaders to change your budget and team composition, and for force majeure events to change everything.

Planning is also expensive - it takes the time of your most knowledgeable people, and for many if not most companies, people are the single largest expense on the P&L. Wasting their time is not smart business. You want to plan only what you can use, rather than investing time in a plan and then throwing it away and starting on another one. That is depressingly common, demoralizing, and expensive. There are better ways - starting with fitting the plan to a shorter timebox.

More on Perfect Specifications

Necessary conditions for writing the "perfect" specification:

  • The problem is not changing over time, or you have perfect knowledge of the future

  • You know everything you need to know - no learning required

  • You can write it down clearly and unambiguously

  • You don't make any mistakes

  • The people who estimate the effort and write the code never make mistakes either

  • The estimates accurately anticipate attrition and turnover

  • No force majeure events happen - no cost cuts, changes in direction, new requirements, etc.

How do you test or validate a specification? You can’t. You can only test the code that purports to implement it. And even if the code matches the specification faithfully, there’s no guarantee that it is fit for use (see above).

A specification is only perfect if it leads to code that produces perfect results for the intended user. You can only know that after the work product gets into users’ hands. 

Working code is achievable. A perfect specification is not. Even imperfect working code can be evaluated by actual users so its fitness is known. And what's more, you can sell the software or the value it produces. Specifications are in a sense a waste by-product of software development, and should be minimized.

Software Is Always Being Changed

Production software loses value rapidly unless it is constantly updated. The market changes, the runtime environment changes, there are new security threats and performance concerns. Customer needs change, and components, tools, and platforms become obsolete. The team has to keep up with all of this for your software to remain competitive and useful. 

This is where “technical debt” comes from: software organizations over-focus on delivering features and fail to invest enough time in keeping their code and tools in good shape. That upkeep is the software equivalent of preventive maintenance. No successful trucking business thinks it can defer changing plugs, fluids, and tires for long without risking major expense down the line.

The Perfect Translation

Specifications are hard to write. They are even harder to read. To implement a requirements document of any complexity, someone will need to think about it architecturally and decide what infrastructure to create or change. Someone will need to break it down into a set of subprojects or parallel streams of effort that meet somewhere to create the product. Someone else will need to think about how to test the resulting components and the top-level deliverable.

In a waterfall approach, this happens monolithically - all those subgroups go off and code their parts, and at the end hope that it will integrate and work, which never happens. There are lots of handoffs and retranslations in this process, all of which leave room for further ambiguity and error. The humans or language models that write the code are imperfect too, and make mistakes both in interpretation and implementation. If your leadership focus is on checking off tasks on a Gantt chart, you will be rewarding teams for checking off their part without concern for how well it works in the larger context, and setting yourself up for unpleasant surprises, including quality problems and integration issues downstream.

Told you…

The Waterfall model amplifies the consequences of uncertainty. It takes uncertainty and change and turns them into schedule and budget overruns. Iterative development is our best alternative - work in smaller chunks, with fixed cycle times, learn and adapt along the way. Things won’t go perfectly, but you will get earlier warning and an opportunity to make course corrections.
