
Book Review: “Accelerate” by Nicole Forsgren et al.

The book Accelerate – Building and Scaling High Performing Technology Organizations by Nicole Forsgren et al. is an eye-opener and a game-changer for everyone involved in software development. Nicole provides empirical evidence that teams applying best practices like test and deployment automation, continuous integration, loosely coupled architectures and team empowerment far outperform teams that don’t. Following the practices of the Agile Manifesto, eXtreme Programming and Scrum enables teams to deliver software both faster and with higher quality than teams ignoring these practices. Trade-offs between speed and quality are debunked as lame excuses.

Book Structure

The book Accelerate: The Science of Lean Software and DevOps – Building and Scaling High Performing Technology Organizations makes Nicole Forsgren’s research work accessible to a broader audience. Nicole is the lead author of this book and of the State of DevOps Reports (see here for the 2019 version). She is now the VP of Research & Strategy at GitHub. Her co-authors Jez Humble and Gene Kim are renowned DevOps experts and co-authors of several other books like The DevOps Handbook and Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation.

In my review, I focus on Chapters 1, 2, 4 and 5 of Part I: What We Found. Chapter 1 gives the context of the study. Chapter 2 explains how to measure the performance of software development teams. Chapter 4 provides the main results of the study (see Figure 4.2). Chapter 5 shows the importance of a loosely coupled architecture, which in turn enables loosely coupled teams.

The remaining chapters of Part I delve into the capabilities that Chapter 4 shows to have a strong positive influence on team and organisational performance. These capabilities include architecture, integration of infosec, lean management and a sustainable work pace.

In Part II: The Research, the authors explain the methodology of their study: the science behind the book. Part III: Transformation shows how to use the findings from Part I to transform an organisation. The single chapter of Part III, High Performance Leadership and Management, is written by guest authors Steve Bell and Karen Whitley Bell, two pioneers of Lean IT.

Accelerate (Chapter 1)

In 2001, the Agile Manifesto laid out four values and twelve principles for developing software in better ways: faster, with higher quality and with more respect for people. The signatories of the Agile Manifesto are practitioners who deduced these values and principles from their daily work on software projects. Many of us know from experience that teams following these principles perform better than teams ignoring them.

Nevertheless, we often end up in organisations that ignore the Agile values and principles – mostly with bad consequences for businesses and people.
So far, we had only anecdotal evidence. The book Accelerate provides empirical evidence of which practices lead to high-performing teams and organisations. The authors collected over 23,000 survey responses from over 2,000 organisations of all sizes and industries. They explain their approach in Part II: The Research – for everyone to refute or to confirm. With their statistical models, the authors can predict the performance of a team by looking at how well it implements practices like deployment and test automation, continuous integration, a loosely coupled architecture and team empowerment.

Measuring Software (Chapter 2)

Common performance measures (e.g., lines of code, hours worked, Scrum velocity) suffer from two flaws: “First they focus on outputs rather than outcomes. Second, they focus on individual or local measures rather than team or global ones.”

A short aside: As a solo consultant, I find the first sentence very interesting. It distinguishes hourly billing (output-focused) from value-based billing (outcome-focused). I can use the performance measures and the best practices that improve performance to write value-based proposals. They help me to estimate the ROI or value of a project in a proposal and to measure the value during the project.

Back to defining a performance measure: The authors suggest four criteria for software delivery performance (a small code sketch of computing them follows the list).

  • Product delivery lead time is the time “it takes to go from code committed to code successfully running in production”. The survey participants could choose among seven durations ranging from less than one hour to more than six months.
  • Deployment frequency denotes how often the software is deployed into production. Again the participants had seven options, ranging from several times per day, through once per month, to fewer than once every six months.
  • Mean time to restore is the time the team needs to fix a bug in a product and deploy the fix. The options are the same as for the lead time.
  • Change fail percentage gives the percentage of changes (e.g., implemented stories) that later require a fix.
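
The book measures these four criteria with survey buckets, but a team that logs its deployments can compute them directly. Below is a minimal sketch; the Deployment record and its field names are my own assumptions, not taken from the book:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

@dataclass
class Deployment:
    """One production deployment (hypothetical record, not from the book)."""
    committed_at: datetime                 # when the change was committed
    deployed_at: datetime                  # when it ran successfully in production
    failed: bool = False                   # did the change degrade service?
    restored_at: datetime | None = None    # when service was restored, if it failed

def delivery_metrics(deployments: list[Deployment], window_days: int) -> dict:
    """Compute the four software delivery measures over an observation window."""
    lead_times = [d.deployed_at - d.committed_at for d in deployments]
    failures = [d for d in deployments if d.failed]
    restores = [d.restored_at - d.deployed_at for d in failures if d.restored_at]
    return {
        # Product delivery lead time: code committed to code running in production.
        "median_lead_time": median(lead_times),
        # Deployment frequency: deployments per day in the window.
        "deployments_per_day": len(deployments) / window_days,
        # Mean time to restore after a failed change (None if nothing failed).
        "mean_time_to_restore": (sum(restores, timedelta()) / len(restores)
                                 if restores else None),
        # Change fail percentage: share of deployments needing remediation.
        "change_fail_percentage": 100 * len(failures) / len(deployments),
    }
```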

The study shows that teams with a shorter lead time, a shorter mean time to restore, a higher deployment frequency and a lower change failure rate perform better (see Tables 2.2 and 2.3).

The study also disproves the widespread dogma in the software industry that teams can only go faster if they compromise on quality. In the words of the authors: “Astonishingly, these results demonstrate that there is no trade-off between improving performance and achieving higher levels of stability and quality. Rather, high performers do better at all of these measures.”

This finding should also put an end to the tedious discussion about technical debt. We must not take on any technical debt, that is, sacrifice quality for speed. We can have both speed and quality at the same time.

Higher software delivery performance also leads to higher organisational performance (see Figure 2.4). Higher organisational performance in turn leads to a higher return on investment (ROI) and to greater resilience against economic downturns.

Technical Practices (Chapter 4)

Continuous Delivery (CD) enables teams “to get changes of all kinds […] into production or into the hands of users safely, quickly, and sustainably.” CD is based on five principles: 

  • Build quality in.
  • Work in small batches.
  • Computers perform repetitive tasks; people solve problems.
  • Relentlessly pursue continuous improvement.
  • Everyone is responsible.

The CD principles suggest that we take many small steps, inspect the outcome of each step and adapt the next step when the outcome doesn’t match our expectations. Small steps minimise the costs of missteps. Like Scrum and XP, continuous delivery introduces multiple inspect-and-adapt cycles or feedback loops such that we can deliver “high-quality software […] more frequently and more reliably”.

The authors identify nearly a dozen capabilities that have a strong influence on continuous delivery (see Figure 4.1). These capabilities include automated deployment, automated testing, continuous integration, trunk-based development, a loosely coupled architecture and empowered teams. These capabilities match pretty well with the capabilities in The Joel Test or The Matthew Test. The more CD capabilities a team has, the better it does on Continuous Delivery.

The results of the book can be summarised in the following statements (see Figure 4.2).

  • The better a team does on Continuous Delivery,
    • the better it does on Software Delivery Performance.
    • the less rework it must do.
    • the better the organisational culture is (e.g., less pain, less burnout).
    • the stronger the identification with the organisation is.
  • The better teams do on Software Delivery Performance, the better the whole organisation performs.

One question must not remain unanswered: How much influence does Continuous Delivery have on software quality?

Measuring software quality is as tricky as measuring team performance. Test coverage, for example, doesn’t tell you much about software quality. The authors settled on measuring quality by the split of time spent on new work, on unplanned work or rework, and on other work (e.g., meetings). The rationale is that teams spend more time on unplanned work or rework when the change failure rate is higher and when the mean time to restore is longer.

The results are revealing (see Figure 4.4). High-performing teams spend 30% of their time on other work, 21% on unplanned work or rework and 49% on new work. For low-performing teams, the split is 35%, 27% and 38%, respectively. All in all, high-performing teams spend 11 percentage points more of their time on new work than low-performing teams. That is an additional half day per week for new work!
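
The “half day per week” follows directly from these figures; a quick back-of-the-envelope check:

```python
# Time split from Figure 4.4, in percent of total work time.
high = {"new": 49, "unplanned_or_rework": 21, "other": 30}
low = {"new": 38, "unplanned_or_rework": 27, "other": 35}

# High performers spend 11 percentage points more of their time on new work...
delta = high["new"] - low["new"]       # 49 - 38 = 11

# ...which, in a five-day work week, amounts to roughly half a day.
extra_days_per_week = 5 * delta / 100  # 0.55 days
print(delta, extra_days_per_week)      # -> 11 0.55
```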

Architecture (Chapter 5)

The authors summarise their findings about architecture spot-on: “We found that high performance is possible with all kinds of systems, provided that systems – and the teams that build and maintain them – are loosely coupled.” If the system architecture is loosely coupled, it is much more likely that teams can work with a low communication bandwidth. In contrast, a “spaghetti” or tightly coupled architecture requires a high communication bandwidth.

As we know from the previous chapters, testability and deployability have a strong influence on software delivery performance. As most of us know from experience, “spaghetti” systems are very hard to test and deploy. A loosely coupled and well-encapsulated architecture makes testing and deployment so much easier. Unsurprisingly, it is “the biggest contributor to continuous delivery […] – larger even than test and deployment automation […]”.

A loosely coupled architecture enables teams

  • to make large-scale changes with little or no communication and coordination with people outside the team,
  • to release their work with few or no dependencies on work by other people, and
  • to test their work mostly in isolation (sketched in code after this list).
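
A minimal sketch of what testing in isolation can look like: the team depends on a narrow contract to another team’s system and substitutes a stub for it in tests. The service and method names are hypothetical, not from the book:

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """A narrow, stable contract to another team's service (illustrative)."""
    def charge(self, order_id: str, amount_cents: int) -> bool: ...

class CheckoutService:
    """Owned end to end by one team; depends only on the contract above,
    never on the other team's concrete implementation."""
    def __init__(self, payments: PaymentGateway) -> None:
        self._payments = payments

    def place_order(self, order_id: str, amount_cents: int) -> str:
        ok = self._payments.charge(order_id, amount_cents)
        return "confirmed" if ok else "rejected"

class StubGateway:
    """Stands in for the real payment service during tests."""
    def charge(self, order_id: str, amount_cents: int) -> bool:
        return True

# The team can test and release without coordinating with the payments team.
assert CheckoutService(StubGateway()).place_order("o-42", 999) == "confirmed"
```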

“To enable [loosely coupled architectures and teams], we must also ensure delivery teams are cross-functional, with all the skills necessary to design, develop, test, deploy, and operate the system on the same team.”

If organisations consist of many small teams working mostly independently, they can scale the number of developers and accelerate the delivery of software – at the same time (see Figure 5.1). High performers with 1000 developers deliver their software three times more often than medium performers with 1000 developers or high performers with 100 developers.

When the number of developers increases, medium performers keep their delivery rate constant, whereas low performers see their rate drop. Interestingly, Figure 5.1 does not show any low performers with more than 100 developers. I’d reckon that such organisations would go out of business.

We have all probably suffered from procurement departments that dictate which tools we must use for software development. The argument is that tool costs are lower when bought in high volume; the hours wasted with inadequate tools are forgotten. The authors do away with this bogus argument: they show that teams achieve higher software delivery performance if they can select their own tools.
