ProdFund 1.5: The Agile Precursors
Product Fundamentals is a podcast dedicated to spreading the core knowledge software product people need in order to succeed. Season 1 is structured as a history of how we make software.
Welcome to episode 5! In the 1990s, the advent of consumer software, the rise of the Internet, and mounting evidence of the failures of Waterfall to deliver results created space for new ideas and new ways of working. Put another way: A "righteous solution" was failing to handle a "wicked problem." Amid all these challenges, the work of two Japanese academics researching successful hardware products planted seeds that would eventually bear fruit in the Scrum and Extreme Programming approaches.
The audio is embedded below, and the episode transcript follows.
You can also find this episode of the Product Fundamentals podcast on the show website, on YouTube, and through all the usual podcast services.
Transcript
Hello friends, and welcome back to the Product Fundamentals podcast, episode 5: Agile Precursors.
In this season, we are tracking the evolution of how we came to make software in the weird way we do, from the earliest origins of our methods, through to today.
Last episode, we discussed the ascent of the Waterfall methodology to dominance in the software industry. The risk aversion of the Enterprise-driven business model, gatekeeping by the older generation of computer scientists, and adoption of heavy government standards made Waterfall the default and officially-sanctioned way to build software for decades.
This dominant position, however, could not be sustained indefinitely.
Freezing the methodology for how to work with a technology that improves at an exponential rate was almost certainly a doomed proposition from the start, and throughout the 1990s, pressure for change would mount and Waterfall's discontents would start advancing compelling alternatives.
In this episode, we'll cover how Waterfall's mounting shortcomings, combined with an evolving technological and economic landscape, created space for new ideas and practices that would profoundly change the way we work.
As we discussed last week, software leaders knew as far back as the 1970s that large software projects were troublingly slow, but reviews in the 1990s revealed that "slow" was transitioning into plain old failure.
A review of 36 billion dollars of US Defense Department spending on weapons systems software in the 1980s and 90s found that just 2% of the software procured was usable out-of-the-box. Another 3% of software was used after modest changes (reference).
20% of software took extensive rework after delivery to be useful, 29% was paid for but never delivered, and 46% was delivered but never successfully used.
Taken together, that means that just 5% of software projects procured by the DoD in the sample could be called successful. The remaining 95% were failures, either by failing to meet the needs, or by never being delivered at all.
The DoD wasn’t alone in this regard, nor were the issues new – a smaller report by the American GAO in 1979 found similarly depressing failure rates in federal spending on software.
These increasingly common and prominent failures helped create space for alternative ideas to be taken seriously, chipping away at the Waterfall consensus.
The New New Product Development Game
An important precursor to change came in 1986, when two Japanese academics – Hirotaka Takeuchi and Ikujiro Nonaka – published an article titled “The New New Product Development Game” in the Harvard Business Review. They start from the observation that newly-developed products account for a growing share of companies’ profits – 30% in the early 1980s – and take for granted that something is changing in how successful companies are developing new products. They combine case studies of several recent products into a set of best practices to increase the speed and flexibility of new product development.
Again, to be clear, we’re not talking about software products yet: the cases here include a Fuji-Xerox photocopier, a Canon copier, a Honda car, two Canon cameras, and an NEC personal computer. All of the products were Japanese, all released between 1976 and 1982.
Takeuchi and Nonaka highlight several methodological innovations of the projects to develop these products, and contrast them with the ever-popular bogeyman of consultants, “the traditional approach.”
The incumbent methodology that Takeuchi and Nonaka pick to contrast their innovators with is NASA’s Phased Project Planning, or PPP, which dates back to the 1960s, and which for our purposes, is a good analog for Waterfall software development. There are a series of phases, each of which is to be completed and closed in order before the next step begins. So, concept development would finish first, then a feasibility study would start and finish, then product design would start and finish, then prototyping, and so on.
The authors find that successful projects in the new paradigm have six important differentiating characteristics:
- Built-in instability,
- Self-organizing project teams,
- Overlapping development phases,
- Multi-learning,
- Subtle control, and
- Organizational transfer of learning.
Built-in instability means that an ambitious but loosely-defined goal is handed down from above. They cite an example from Fuji-Xerox, where management challenged a team to produce a new photocopier within two years that matched the performance of Xerox’s premium copier at half the cost. Once that challenge is set, the team is given great flexibility in how to achieve it.
Overlapping development phases are what they sound like, but taken to a higher degree than we’ve talked about so far. Winston Royce talked about feedback between adjacent phases of the process, like requirements-gathering and initial design. Takeuchi and Nonaka write that successful teams accept interaction between phases several steps apart. Issues encountered during the first forays into later steps should rapidly be integrated into ongoing work in the notionally “earlier” phases.
Multi-learning refers to both “multi-level learning” and “multi-functional learning.” Multi-level learning means that the organization encourages ongoing learning in its workers. American conglomerate 3M encouraged engineers to use 15% of their company time to pursue their dreams, two decades before Google’s 20% time became a famous headhunting meme. Hewlett-Packard brought in outside marketing experts to teach courses to the marketing department. Honda sent engineers to observe the European auto market for three weeks when their new car project hit a dead-end.
Multi-functional learning refers to having workers learn about disciplines outside their usual function. NEC, for example, apparently had engineers work at a customer service center.
Subtle control is about company management using checkpoints, evaluations, and other nudges to keep projects away from chaos or dysfunction, while tolerating mistakes and smoothing interactions with suppliers and other departments.
Organizational transfer of learning is about ensuring the broader company benefits from the success of a project. Most often, this is achieved by rotating workers through different project teams, cross-pollinating each team with lessons learned elsewhere. At a higher level of organizational strategy, HP, previously a high-end brand, learned from its success with a budget-priced computer, which inspired the company to move into budget-friendly printers.
Self-organizing project teams are the most nuanced of these best practices. At the surface level, this requires management to grant the team a high degree of autonomy to choose its own strategy. For example, Honda staffed its new car team with young engineers (the average age was 27) and challenged them to build a car young people would like to drive. It also involves being multi-disciplinary: the product team should be staffed with people from a variety of job functions, and they should be physically placed in the same workspace so that they engage with one another and constantly cross-pollinate.
But the team also needs to “self-transcend,” which is a lofty way to say they need to pursue contradictory goals. For example, a new camera must offer new automation features, while simultaneously being lighter and 30% cheaper than existing models.
Nestled under the concept of being self-organized is also the idea that the team should be out of its element enough that it can be free of conventional knowledge. In many cases, the teams were composed of young people, or people whose backgrounds were in other product lines. Thus they were able, for example, to conceptualize a car that had opposite proportions to everything else on the market.
Closing out NNPDG
Before we move on from The New New Product Development Game, it’s worth noting that two of the six hallmarks of the new game were about learning. Keep an eye on that. While no one has been anti-learning in our discussion so far, the idea of the firm as a learning entity is about to take root in a big way.
Takeuchi and Nonaka rely on a sports metaphor to label this new new product development methodology, referring to it as the “rugby approach” numerous times. This is inspired by a tightly-packed formation of rugby players, each with different jobs, pushing together to move the ball down the pitch. They actually only use the name for this formation once in the paper, but the term has become central to much of modern software development. That formation is, of course, the scrum.
Scrum
While they weren’t writing about software, Takeuchi and Nonaka’s ideas naturally resonated with incremental and iterative development practitioners and others who were burned out on Waterfall.
DeGrace
The first person to apply the scrum metaphor to software was programmer Peter DeGrace, whose 1990 book, Wicked Problems, Righteous Solutions, summarized the history and ongoing problems of software development. Among many other methodologies discussed, DeGrace suggested a “scrum” methodology. He didn’t go into great depth; he just walked through a thought experiment about the seemingly wacky idea of applying the rugby approach described by Takeuchi and Nonaka to software.
For our purposes, what makes him more than a passing reference is the broader intellectual move that he makes in thinking about why Waterfall is struggling. DeGrace borrowed the notion of “wicked problems” from public policy and operations research and brought it to software.
A wicked problem is a problem that has no stable optimal solution, often because the parameters of the problem are continuously changing, and because potential solutions involve expensive trade-offs between desirable goods. These are problems that cannot be solved from the outset, because the problem space is not fully knowable. Instead, they have to be addressed with provisional partial fixes, which over time may evolve into comprehensive solutions.
The opposite of a wicked problem is a righteous problem – that is, a solvable problem with stable and knowable parameters. Righteous problems can still be very hard, but they are fundamentally solvable. Chess, for example, is a hard but solvable game, and thus a righteous problem.
DeGrace proposes that many problems in software development are wicked problems due to volatile requirements, technical change, and customers not knowing what they want until they see a partial solution. Unfortunately, Waterfall is fundamentally a righteous solution – that is, Waterfall is designed as a rationalist process that assumes problems can be well-understood from the outset, even if they are hard.
I find this characterization very compelling. While writing software in the 1960s to navigate a rocket to the moon was a very hard problem, it was a knowable problem: there are a bunch of established scientific constants to use as inputs, observable performance characteristics of the spacecraft, estimable error rates and margins of safety that can be included, and so on. The problem is hard, but the parameters are in some sense knowable. In contrast, building a word processor program for consumers is just a fundamentally different class of problem, with important unknowns around the UI patterns the user will understand, the features the user will really engage with the most, and more.
There were many, many attempts to update, salvage, or replace Waterfall in the late 1980s and early 1990s, and in this episode, I’m skipping over a bunch that had limited lasting impact. If you’re a die-hard Barry Boehm Spiral Model fanboy, or if the Unified Software Development Process is what gets you up in the morning, I sincerely apologize for leaving you hanging.
But I posit that the key differentiator between the methods that endured and the methods that quickly faded was that the enduring ones accepted the wicked problem premise. They smoothly integrated the fact that many of the things we want to achieve in software cannot be accomplished with strict, optimized, universally applicable process. The dream of Harlan Mills and others at that NATO Conference way back in Episode 1, that we would someday discover a rigorous optimal method for building software, was doomed to failure.
In the end, DeGrace didn’t propose much of a specific methodology in his book. He liked prototypes and overlapping development phases, but that’s about as prescriptive as he got.
Schwaber and Sutherland
But in the early 1990s, software engineer Ken Schwaber and project manager Jeff Sutherland would pick up on DeGrace’s “scrum for software” thought experiment and turn it into a fully fledged development methodology. By 1995, when they presented their new process at a software engineering conference in Austin, Texas, they were calling it “Scrum.” Schwaber and Sutherland acted as consultants, seeding the Scrum methodology at small- and medium-sized businesses throughout the late 90s and early 2000s.
Perhaps Scrum’s biggest addition to the IID models we’ve discussed before is the introduction of two specialist roles: the Product Owner and the Scrum Master.
The Product Owner is meant to represent the interests of the primary stakeholder for the project. This is the external customer, or the internal team that will use the software. The Product Owner deeply understands the customer’s needs, and their primary function is to act as an empathetic communication channel between the customer and the development team.
The Scrum Master is a recasting of the project manager role as an owner of the Scrum process. Their job is to make sure the team follows the rules of Scrum properly, holding the right meetings in the right way, creating the right sort of backlog, ensuring that team members know how to operate under the Scrum system, and so on.
Now, I would never be this cynical, but a more cynical person than me might look at the role of the Scrum Master (someone who has to make sure the team is following the official processes described in the official texts, and who should ideally hold an official paid certification as a Scrum Master) and see a way for consultants to add a cash-grab component to their new methodology. But like I said, I would never be that cynical.
Beyond adding the explicit roles of Product Owner and Scrum Master, the original Scrum approach was really just a specific formalization of the general class of IID techniques we discussed back in Episode 2.
Work is divided into iterations called “Sprints,” each of which lasts for 30 days. Each iteration is planned around delivering some distinct value for the customer, some new capability that can and should be shown to stakeholders at the Sprint review at the end of each sprint.
And closeness to the customer is critical. Schwaber portrays the history of software development as having needlessly separated the developer from the customer through layers of bureaucracy and formalism. Reflecting on how software had changed since the 1960s, he writes,
“As the applications and technology became more complex and the number of stakeholders in a project increased, practices were inserted to coordinate the communication among the increased number of participants… Each step drove a wedge between the stakeholders and the developers. We went from face-to-face communication to documentation. We went from quick turnaround to lengthy requirements-gathering phases. We went from simple language to artifacts that were arcane and highly specialized. In retrospect, the more we improved the practice of software engineering, the further we widened the gap between stakeholders and developers. The last step in the estrangement was the introduction of waterfall methodology...” (p. 54)
Planning in a Scrum project was meant to be markedly different from planning under the Waterfall methodology. Rather than exhaustive requirements, the work is organized around a product vision (describing the key features and benefits of the completed project) and a product backlog of the requirements to be satisfied along the way. The vision should be relatively stable, while the product backlog is free to change at any time. In each sprint, a set of requirements is pulled into the sprint backlog, and once the sprint starts, the sprint backlog cannot be changed. Beyond the current sprint, only the next sprint needs to be well-defined; everything farther out into the future remains flexible and subject to change.
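To make those moving parts a bit more concrete, here's a minimal sketch in Python. The names and the capacity-based selection rule are my own inventions for illustration; Scrum prescribes no data model, only the rule that the product backlog can change at any time while the sprint backlog is frozen once the sprint begins.

```python
from dataclasses import dataclass, field

# Hypothetical names for illustration only; Scrum itself prescribes no data model.

@dataclass
class BacklogItem:
    title: str
    estimate_days: int

@dataclass
class ProductBacklog:
    # Assumed to be kept in priority order by the Product Owner.
    items: list[BacklogItem] = field(default_factory=list)

    def add(self, item: BacklogItem) -> None:
        # The product backlog may change at any time.
        self.items.append(item)

@dataclass(frozen=True)
class SprintBacklog:
    # Frozen: once the sprint starts, its scope does not change.
    items: tuple[BacklogItem, ...]

def plan_sprint(product_backlog: ProductBacklog, capacity_days: int) -> SprintBacklog:
    """Pull top-priority items into the sprint until the team's capacity is reached."""
    selected, used = [], 0
    for item in product_backlog.items:
        if used + item.estimate_days <= capacity_days:
            selected.append(item)
            used += item.estimate_days
    return SprintBacklog(items=tuple(selected))
```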
Thinking back to the client conservatism that made Waterfall desirable, it’s natural to wonder why a customer would accept or even prefer to work with a team following Scrum rather than Waterfall.
One of the key concepts of Scrum is that by delivering concrete progress toward the vision in every sprint, the team is constantly re-earning the trust of the customer. The customer doesn’t need to sign a check and wait for a year to see progress. The leap of faith is much smaller.
The Scrum model itself was the subject of iteration. While initially it was characterized by just the 30-day Sprint and the daily standup, over time the Scrum process was updated to include the other rituals we think of today, including backlog grooming meetings and retrospectives after each sprint, as well as flexibility about the length of sprints.
Schwaber and Sutherland had their own version of the “wicked problem” theme that DeGrace introduced. The Scrum founders use the concept of “chaos.” Chaos, here meaning unpredictability and complexity, is a vital component of any significant project. Where Waterfall projects attempt to control chaos out of the process through heavy documentation, tightly-defined requirements and deadlines, and rigid project structure, Scrum sees chaos as inevitable. Requirements will inevitably change; initial approaches will inevitably fail; schedules will inevitably shift. As Jeff Sutherland describes it in a 1996 conference paper on the Scrum Development Process, Scrum’s designers aim to achieve maximum productivity by operating close to the edge of chaos, being maximally responsive to changes and challenges, without going over.
Scrum terminology and practices are common today, but they were not immediately ubiquitous. Early projects were small, generally shepherded by Scrum’s original authors. It wasn’t until companies like Fidelity Investments and Siemens began experimenting with Scrum in the late 1990s and early 2000s that it really gained wider recognition.
Extreme Programming
While Scrum has strong opinions about the right way to organize a team, plan a project, and structure the rhythm of work, it remains agnostic about how the actual work of coding is conducted.
The most prominent model for that part of work came from software engineers Kent Beck and Ron Jeffries.
In 1996, the car manufacturer Chrysler hired Beck to turn around a floundering project to build a new payroll system for the company. Beck hired Ron Jeffries, and together, they drove the Chrysler payroll project to an initial launch in a bit more than a year.
While the payroll system project was ultimately not very successful at getting support from Chrysler’s leadership and was eventually shut down, it did serve as a testing ground for a set of practices that became known as Extreme Programming, or XP. XP’s rules and values were articulated in Kent Beck’s 1999 book, Extreme Programming Explained. XP’s name comes partly from the fact that it was the 1990s, and what could be more 1990s than calling a practice Extreme Programming? But it also reflects the notion that XP takes a number of existing software best practices to their logical extreme.
For example, code reviews are a good practice – so in Extreme Programming, developers work in pairs, rather than solo, meaning that by definition, every line of code is reviewed by another developer. Having unit tests to make sure each function works as intended is good – so in XP, automated unit tests are written first, and then code is written to satisfy them.
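To illustrate the test-first half of that, here's a minimal sketch in Python. It's a hypothetical payroll-flavored example, not code from the Chrysler project: the test is written before the function it exercises, and then just enough code is written to make it pass.

```python
import unittest

# Test written first: it describes the behavior we want before any
# implementation exists. (Hypothetical example, not from the C3 project.)
class TestOvertimePay(unittest.TestCase):
    def test_hours_over_forty_earn_time_and_a_half(self):
        # 40 hours at $10 plus 5 overtime hours at $15 = $475
        self.assertEqual(compute_weekly_pay(hours=45, rate=10.0), 475.0)

# Only now do we write just enough code to make the test pass.
def compute_weekly_pay(hours: float, rate: float) -> float:
    regular = min(hours, 40) * rate
    overtime = max(hours - 40, 0) * rate * 1.5
    return regular + overtime

if __name__ == "__main__":
    unittest.main()
```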
Even more so than Scrum, Extreme Programming is opinionated about the right way for teams to work, in ways that go well beyond the steps of the process. Overtime is forbidden by Extreme Programming’s rules, and all developers are considered owners of the entire codebase, meaning any developer on the team is expected to understand and be able to contribute to any part of the project.
Unlike Scrum, Extreme Programming doesn’t formalize responsibilities for anyone besides the software engineers. There’s little for a project manager to do, except to serve as a stand-in for the customer: someone who can express requirements, define acceptance criteria, and do testing. But proper Extreme Programming practice would have an actual customer in the room for as many of those things as possible.
Otherwise, the rhythm of an XP project is similar to that of Scrum, though less formalized. There should be frequent releases – at least every two weeks, though perhaps as often as every day – and the customer should have a decisive voice in deciding which features get built in each release. Management should accept an overall vision and long-term plan, but without too many specifics defined up-front.
Similar to Scrum, iterations are planned by representatives of Business and Development working together. The business side provides the list of major features and capabilities, called “Stories,” and prioritizes them. Development estimates how hard each story is to complete. Development forms a plan to generate the maximum business value as quickly as possible.
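As a rough sketch of that planning logic, with invented stories and numbers rather than anything from Beck's book: one simple way to "generate the maximum business value as quickly as possible" is to order stories by business value per unit of estimated effort.

```python
# Hypothetical stories: (name, business_value, estimate_in_ideal_days)
stories = [
    ("Print payslips", 8, 5),
    ("Handle overtime", 5, 2),
    ("Export to ledger", 3, 3),
]

# One simple heuristic: schedule the stories with the highest
# value per day of estimated effort first.
plan = sorted(stories, key=lambda s: s[1] / s[2], reverse=True)

for name, value, days in plan:
    print(f"{name}: value {value}, estimate {days} days")
```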
Like Scrum, Extreme Programming is built around small teams, but it’s less clear how multiple teams following Extreme Programming could collaborate. Plus, with its emphasis on every developer feeling ownership for the entire system, there are naturally tighter limits to how large a codebase an Extreme Programming team can cover compared to other approaches.
From a modern perspective, many of these process choices may sound a bit obvious. Of course features should be prioritized and addressed in some sort of order. The idea of prioritization would not have shocked any of the earlier Waterfall advocates, either – they were very used to cutting scope as deadlines approached in order to salvage partial success.
The innovation is the structure of the deal between the business and the developers. The business agrees to specify less up-front, and to take an on-going role, and in return the development team promises to deliver initial value faster, and to keep delivering more value to meet the business’s emergent preferences as the project continues. That deal structure, predicated on accepting that software is a wicked problem, or that chaos is inevitable, is what’s really doing the magic.
To wax anthropological for a moment, I think Extreme Programming is also part and parcel of the “class consciousness” of software engineers in the early 21st century. The notion, sometimes earnestly-stated and sometimes lampooned, that software engineers are artisanal craftsmen, the appropriation of concepts like “guilds” for software, is definitely intimated in Extreme Programming. XP is decidedly not for suit-wearing stiffs or generic service economy knowledge workers.
Here’s Kent Beck on XP’s core values:
“We will be successful when we have a style that celebrates a consistent set of values that serve both human and commercial needs: communication, simplicity, feedback, and courage.”
Later, Beck would add “respect” to the list of values. These are all good things, and Beck does explicitly relate them to positive project and business outcomes. But their inclusion here is a sign of the changing times – the early computer science giants of episode 1 and the authors of the Defense Department standards of last episode were not the types to include “courage” and “respect” as essential to the success of software projects.
This change in developers’ self-image would continue to grow, perhaps reaching its purest expression in the Agile Manifesto, which we will cover soon.
OOP
Before we get to the beginning of Agile, though, it’s worth touching briefly on the technical evolution that occurred alongside the rise of these anti-Waterfall methods. Both Scrum and Extreme Programming began life tightly associated with Object-Oriented Programming, or OOP. Indeed, the first software project to use the Scrum methodology was the creation of a developer environment, called ObjectStudio, for building applications with an OOP language. Both methodologies took for granted that software would be built using OOP.
Now, this is where I should acknowledge that I’m a product manager by trade, and a pretty poor software engineer, so I’m going to skirt over some of this at a pretty high level. Engineering friends, please forgive me.
Object-Oriented Programming is a paradigm in which the basic building blocks of a system are objects. These objects are defined units that have a collection of attributes, plus operations that can be performed on them. For example, an ecommerce system has classes of objects like Users, Items, and Orders. There are many instances of the User class – my account is one instance of a user, your account is another instance of a user, and so on.
Instances of these classes can interact and have relationships with each other, such as a User favoriting an Item, or an Order belonging to a User. But each class is logically separated from the rest of the project. Somewhere, there’s a bunch of code that defines Users and the things they can do; somewhere else, there’s a bunch of code defining Items and the things that can happen to Items, and so on.
Object-oriented programming also emphasizes the reusability of code, making it easy to inherit or import functionality from one class into another.
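For those who haven't written object-oriented code, here's a minimal, made-up sketch in Python of what that looks like in practice: classes bundle attributes with the operations on them, instances hold relationships to one another, and reuse comes from inheritance. None of this is tied to any of the projects discussed.

```python
class User:
    def __init__(self, name: str):
        self.name = name          # attribute
        self.favorites = []       # relationship to Items

    def favorite(self, item: "Item") -> None:
        # An operation defined on the object itself.
        self.favorites.append(item)

class Item:
    def __init__(self, title: str, price: float):
        self.title = title
        self.price = price

class DiscountedItem(Item):
    """Reuse via inheritance: everything an Item does, plus a discount."""
    def __init__(self, title: str, price: float, discount: float):
        super().__init__(title, price)
        self.price = price * (1 - discount)

# Each instance of a class is a distinct object.
alice = User("Alice")
alice.favorite(DiscountedItem("Keyboard", 50.0, discount=0.2))
```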
OOP stands in contrast to the procedural programming paradigm that preceded it, which is built around procedures that take a complex process, break it into smaller steps, and execute those steps to manipulate data. Imagine, for example, calculating a trajectory for a rocket to reach the moon, or a missile to hit its target. For this kind of programming, there aren’t many recurring objects to act upon – there’s just one Earth, one Moon, one rocket – but there is a lot of mathematical procedure to be done. So, those kinds of tasks were what early languages were optimized to do well.
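For contrast, here's the same kind of minimal sketch in a procedural style: a couple of functions that take plain numbers, run through a sequence of steps, and return plain numbers, with no objects in sight. (The physics here is the textbook no-air-resistance projectile model, chosen purely for illustration.)

```python
import math

# Procedural style: a sequence of steps manipulating plain data.
def flight_time(velocity: float, angle_degrees: float, gravity: float = 9.81) -> float:
    """Time of flight for simple projectile motion (no air resistance)."""
    angle = math.radians(angle_degrees)
    return 2 * velocity * math.sin(angle) / gravity

def horizontal_range(velocity: float, angle_degrees: float, gravity: float = 9.81) -> float:
    """Horizontal distance covered before landing."""
    angle = math.radians(angle_degrees)
    return velocity ** 2 * math.sin(2 * angle) / gravity

print(flight_time(100.0, 45.0))        # seconds
print(horizontal_range(100.0, 45.0))   # meters
```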
These language paradigms do not have entirely sharp lines between them, and many modern languages can be used to write software in any of multiple paradigms. Nor was OOP new in the 1990s – there had been OOP languages since the 1960s. But Object-oriented programming was well-suited to the software environment of the 1990s, as graphical user interfaces and mass-adoption of software by regular consumers fit the intuitive nature and representational strengths of OOP.
Critics of Waterfall gravitated to Object-oriented programming because it was a good fit for the emerging mass market for user-facing software, and also because it was very accommodating of the iterative approaches they tended to prefer. A team might focus on creating one new object class and adding basic functionality to it during one sprint, then do the same for another object in the next sprint. Each object could have its own tests, and could be iterated on relatively independently from other parts of the code. This modularity, combined with the OOP mechanisms that simplify code reuse, makes OOP a good fit for fast-moving iterative projects.
The structure of a program built with OOP also tends to map better to human intuitions than other paradigms. For example, in the Chrysler payroll project that spawned Extreme Programming, there were – presumably – objects for employees, for individual monthly payslips, for credits and deductions, and so on. These objects create a mental model that is easy to understand for non-programmers, thus enabling more communication between the project team and the customers.
The Internet and Consumer Software
One final technical force pushed the world away from Waterfall and toward incremental approaches.
The increasing ubiquity of the personal computer in the 1990s, both at home and at work, changed basic market dynamics. Moving fast became far more important, as software could now be sold at scale to a mass audience; unlike governments and major corporations, individual customers would not sign a contract before the software had been built. Companies were compelled to ship their software to a mass audience before competitors could come in and eat their lunch.
The rise of the Internet, nascent as it was in the 1990s, also hastened the advent of widespread standards in software. In 1994, just 2% of American households had Internet access; by 1998, this was 26%. By 2000, 50% of American adults said they “use the Internet.”
Standard operating systems, networked over a standard protocol, using more-or-less standard languages, meant there was simply less foundational work to be done on a software project. Many layers of the stack had been addressed, and had standard interfaces to abstract away parts of the problem. Plus, the Internet increasingly provided a low-cost distribution channel for software, further incentivizing development speed. As basic Internet access – and later broadband – spread, it became increasingly possible to release an early version of software, then improve it with regular updates over time.
This strategy of over-the-Internet updates became the default with the software for early Internet browsers themselves, such as Netscape Navigator and Internet Explorer. It was then embraced by early online video game companies as well, which used online updates to address bugs and add content to keep players engaged. Incremental development leading to frequent releases went from difficult to possible to being basic customer expectation in a few short years.
While Waterfall projects could certainly take advantage of standardization and leverage the Internet as a distribution channel, these changes disproportionately benefited projects using iterative approaches, because they had more opportunities to launch and deliver value to customers faster. The balance of power between methodologies was rapidly shifting.
Wrapping up
At the end of the 1990s, Waterfall remained a major player, entrenched at many organizations, but increasingly well-articulated alternatives like Scrum and Extreme Programming were demonstrating that they could compete, and technological tailwinds were accelerating the adoption of new approaches.
That’s all for this time. In our next episode, we’ll boldly cross Y2K, and if our computers are still working, then we’ll be ready for the arrival of two words that have become emblematic of the entire industry: Agile and Lean.
As always, your comments and feedback on this episode are very welcome. If you’d like to tell me that I got your preferred methodology all wrong, or if you’d like to find a transcript and links to sources, check out the show website at profound.com.
And if you like this series, and you want to hear more, do me a favor and share it with someone you think would enjoy it too.
Thank you very much for listening.