Monday, April 21, 2014

Latest News on DOD Goes Agile

Since I briefed the Coordinating Task Force at the Pentagon on how to report back to Congress on progress, we've talked a lot about how the US Department of Defense (DoD) is going Agile. We've also shared some ideas from the SEI on how to do it. Now there is some real, concrete guidance on how to do it from the DoD itself.

We've also been corresponding with four-star General Barry McCaffrey, my former classmate at West Point. He was very clear in his comments on our new book, Scrum: The Art of Doing Twice the Work in Half the Time.

“Scrum is mandatory reading for any leader, whether they’re leading troops on the battlefield or in the marketplace. The challenges of today’s world don’t permit the luxury of slow, inefficient work. Success requires tremendous speed, enormous productivity, and an unwavering commitment to achieving results. In other words success requires Scrum.”

In the process of writing our online course Agile Defense, we read with great interest the Interim DoD Instruction 5000.02. This document shows at least one way the DoD is thinking about doing Agile software development.

Here are their two models of software acquisition. The first reflects a certain amount of waterfall thinking:

"Figure 4 is a model of a program that is dominated by the need to develop a complex, usually defense unique, software program that will not be deployed until several software builds have been completed. The central feature of this model is the planned software builds – a series of testable, integrated subsets of the overall capability – which together with clearly defined decision criteria, ensure adequate progress is being made before fully committing to subsequent builds."

Here's a new, more Agile model of deployment:

"This model is distinguished from the previous model by the rapid delivery of capability through several limited fieldings in lieu of single Milestones B and C and a single full deployment. Each limited fielding results from a specific build, and provides the user with mature and tested sub-elements of the overall capability. Several builds and fieldings will typically be necessary to satisfy approved requirements for an increment of capability. The identification and development of technical solutions necessary for follow-on capabilities have some degree of concurrency, allowing subsequent increments to be initiated and executed more rapidly."

The new model moves the DoD closer to the second value in the Agile Manifesto: working software over comprehensive documentation. It may not meet the Agile Manifesto principle of delivering product increments every few weeks to few months, but it is a definite step forward.

At the Pentagon I emphasized that DoD purchasing should require "Change for Free" in all their contracts, as it could save them hundreds of billions of dollars a year. The F-35 program alone is $163 billion over budget, mainly due to software testing!

Sign up for our online course next Wednesday from 11-12 for more on how Scrum, Agile, and the military work together.

Thursday, April 17, 2014

Leadership Dashboards: Post 2

This is the second post in a series about creating effective leadership dashboards.  You can catch the first post in the series on the goals of an effective agile dashboard here.

This post focuses on how to organize the creation of a great dashboard.  How the dashboard is created is often overlooked in favor of diving directly into what metrics to include, and the look and feel of a complete dashboard; but being deliberate in the process can dramatically improve the finished product and prevent the alarmingly frequent incidents of total failure.  For example, a client I am currently supporting in dashboard creation had previously worked with a large accounting firm to create an executive dashboard for their development group…six months and hundreds of thousands of dollars later, the project was cancelled without the leaders ever having seen a single working metric!

There are three main pillars for how to create a successful leadership dashboard:
  •      Use Scrum to develop the dashboard
  •      Develop a prioritized backlog of metrics and deliver incrementally
  •      Leverage an “interactive” interface and drill down hierarchy

Use Scrum
It should come as no surprise that I recommend using a Scrum team to develop and deploy the leadership dashboard. (After all, we use it for everything here at Scrum Inc.)  But interestingly enough, when most companies embark on developing a dashboard, they tend to treat it like an internal side project and not give it the full focus, attention, and agile discipline that they apply to their customer-facing work.  With unclear roles and objectives, and competing with a dozen other projects for employee attention, these informal efforts often go nowhere. 

If you are serious about developing a useful dashboard, designate a dedicated and cross-functional team to build it.  Maintain the discipline of working off a backlog of metrics to deploy and identify a product owner to incorporate user feedback. Leverage the structure provided by a regular sprint cadence with demo and retrospective meetings.  It may feel like a lot of overhead, but you will be glad you made the investment when enjoying your completed dashboard.

Deliver Incrementally
What caused the failed dashboard effort mentioned above, and countless others before it, was the attempt to deliver a working dashboard in a single “big bang.”  The hallmarks of this approach are to “mock up” the look and feel of all of the metrics in a dashboard, get buy in from leadership on that design, then build the dashboard.  Unfortunately, this waterfall approach is no more effective for dashboard creation than it is for software development, and for exactly the same reasons: 1) leadership doesn’t really know what they need to see to make effective decisions until they see it, leading to a “moving target” that can consume years of back and forth mock-up revision; 2) too often, teams agree on a mock-up only to discover that it is impossible to pull the data required to make it a reality; and 3) company data and systems are complex and interdependent…small changes to one element of the dashboard have ripple effects throughout the tool, making after the fact changes a nightmare.

Fortunately, dashboards lend themselves extremely well to iterative and incremental delivery. Each combination of metric and business unit within the company is a useful feature that can be delivered and refined independent of the other metrics.  Integration, summary views and second-order metrics can then be layered on.  A savvy product owner in dialogue with the dashboard’s end users can easily prioritize which metrics and which business units are of greatest concern and prioritize the backlog of metrics appropriately to deliver the greatest value as early as possible.  Ensuring that there is an easy way for users to provide feedback within the first iteration of the dashboard makes this process even more straightforward.
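The prioritization described above can be sketched in a few lines. This is a hypothetical illustration only: the metric names and the value and effort scores are invented, and a real product owner would weigh stakeholder feedback, not just a ratio.

```python
# Hypothetical sketch: ordering a backlog of dashboard metrics by a simple
# value-per-effort ratio, so the highest-leverage metrics ship first.
metric_backlog = [
    {"metric": "team velocity", "value": 8, "effort": 2},
    {"metric": "defect trend", "value": 5, "effort": 5},
    {"metric": "revenue per sprint", "value": 9, "effort": 3},
]

# Highest value for the least effort goes to the top of the backlog.
prioritized = sorted(metric_backlog,
                     key=lambda m: m["value"] / m["effort"],
                     reverse=True)
print([m["metric"] for m in prioritized])
```

Delivering in this order means leadership sees the most useful metric in the very first increment, which is what earns the feedback the team needs.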

The biggest challenge here is to set appropriate expectations with leaders in advance about what incremental development looks like and get a commitment to providing feedback.  They might be resistant at first to a partially completed product or the effort of giving feedback, but actionable guidance from them is imperative for producing an effective dashboard.  "It isn't good enough yet" is a common response, but it is not engaged or actionable enough to help the team. 

In the past, I have positioned the value proposition for incremental delivery as, “The initial version will be rough, and I can almost guarantee that you won’t like it; however: you will have something in your hands within x days, it will be REAL data that will improve your visibility over what you have now, and we will implement any actionable feedback you have over the subsequent iterations to make the dashboard exactly what you want it to be.”  This usually works, though it is worth getting agreement in writing for the inevitable retrenchment later.

Leverage an Interactive Tool
The default expectation in the business world is that a dashboard is a single page or PowerPoint file that contains summary metrics and can be transported around for easy reference.  While this is fine in practice, the advent of cloud computing and ubiquity of mobile devices have opened up significant new possibilities for interactive metrics reporting.  Here again, a little up front expectation setting is essential to leveraging this potential.

A single flat dashboard can only show top-level metrics and tends to get very cluttered as more information is added.  By contrast, an interactive data visualization tool like Tableau or Domo can have a clean and simple top-level design, but allow the decision-maker to click-down into disaggregated views as needed (stay tuned for more detail on tools in a future post).  It provides more data, but gives the user control over how much detail to show.  The data can be updated at different cadences or even in real time, rather than a single monthly update to a PowerPoint slide. Finally, with dashboards fed to mobile apps, the information is even more readily available than the printed dashboard page.

That said, remember that the dashboard’s ultimate purpose is to help leaders make decisions, so particularly in the early iterations it may be helpful to build a dashboard that looks familiar and comfortable to leadership.  Having built credibility in the process and team, you can then flex your interactive muscles and deliver value-adding capabilities.

In the next post, we will cover what information leaders typically want updated on a regular basis, and how to achieve those objectives with compelling agile metrics. If you want a sneak peek at some of these metrics, they are covered in our online course on The Scrum Leader’s Dashboard.

Alex Brown is Scrum Inc’s Chief Product Owner and Chief Operating Officer.  He set up the company’s internal metrics dashboard to automatically consolidate and share agile metrics and support better decision-making.  He also trains senior leaders and consults to companies on how to succeed strategically in an agile business environment.

Thursday, April 10, 2014

Yet Another $163B Waterfall Disaster

The F-35 Is Worse Than

The $400 billion jet project is the most expensive weapon the Pentagon has ever purchased. It's also seven years behind schedule and $163 billion over budget ...

And here’s the kicker: According to a 41-page Government Accountability Office (GAO) report released yesterday, the F-35, which has yet to fly a single official mission, will stay grounded for at least another 13 months because of “problems completing software testing.”

What GAO Found 
Delays in developmental flight testing of the F-35’s critical software may hinder delivery of the warfighting capabilities the military services expect. F-35 developmental flight testing comprises two key areas: mission systems and flight sciences. Mission systems testing verifies that the software-intensive systems that provide critical warfighting capabilities function properly and meet requirements, while flight sciences testing verifies the aircraft’s basic flying capabilities. Challenges in development and testing of mission systems software continued through 2013, due largely to delays in software delivery, limited capability in the software when delivered, and the need to fix problems and retest multiple software versions. The Director of Operational Test and Evaluation (DOT&E) predicts delivery of warfighting capabilities could be delayed by as much as 13 months.

Tuesday, April 08, 2014

Happiness Metric - The Wave of the Future

The Happiness Online Course archive is available.

... any investor should be able to measure its return, and now a group of U.K. researchers say they've provided the first scientifically controlled evidence of the link between human happiness and productivity: Happier people are about 12% more productive, the study found.

The results, to be published in the Journal of Labor Economics, are based on four different experiments that employed a variety of tactics on a total of 713 subjects at an elite British university. See article ...
Scrum Inc. used the happiness metric to help increase velocity over 500% in 2011; net revenue doubled. The way to do this is now part of a formal pattern called "Scrumming the Scrum." Traveling around the world, I find the happiness metric keeps bubbling up as a topic of interest. 

Books by business leaders (the Zappos CEO, the Joie de Vivre CEO) and psychologists are starting to hit the charts at Amazon. Managers and consultants are telling me that people are getting fed up with being unhappy at work. Younger people in particular are refusing to work in command-and-control environments based on punishment and blame. Major change is emerging (see The Leader's Guide to Radical Management by Stephen Denning). The Harvard Business Review devoted a recent issue to Happiness because happy employees lead to happy customers and better business. However, never underestimate the human capacity for screwing things up. See the HBR blog on "Happiness is Overrated." You might need to "Pop the Happy Bubble," a pattern designed to straighten things out when your team is oblivious to impediments.

The Scrum Papers document some of the early influences on Scrum. Nobel Laureate Professor Yunus at the Grameen Bank in Bangladesh provided key insights on how to bootstrap teams into a better life. Practical work on these issues on the President's Council at Accion helped me put those insights into practice just prior to the creation of Scrum in 1993. I saw how to bootstrap developers out of an environment where they were always late and under pressure into a team experience that could change their lives.

One of the most innovative companies in the world of Scrum is a consultancy in Stockholm called Crisp. Henrik Kniberg is the founder and we have worked together on Scrum and Lean for many years. He recently introduced the "happiness index" as the primary metric to drive his company and found it works better than any other metric as a forward indicator of revenue.

Henrik outlines on his blog how he used the A3 process to set the direction for his company and how that led to measuring company performance by the "Happy Crisper Index."

Nowadays one of our primary metrics is "Nöjd Crispare Index" (in English: "Happy Crisper Index" or "Crisp happiness index"). The scale is 1-5. We measure this continuously through a live Google Spreadsheet; people update it approximately once per month.

Here are the columns:

  • Name
  • How happy are you with Crisp? (scale 1-5)
  • Last update of this row (timestamp)
  • What feels best right now?
  • What feels worst right now?
  • What would increase your happiness index?
  • Other comments
We chart the history and how it correlates to specific events and bring this data to our conference.

Whenever the average changes significantly we talk about why, and what we can do to make everybody happier. If we see a 1 or 2 on any row, that acts as an effective call for help. People go out of their way to find out how they can help that person, which often results in some kind of process improvement in the company. This happened last week: one person dropped to a 1 due to confusion and frustration with our internal invoicing routines. Within a week we did a workshop and figured out a better process. The company improved and the Crisp Happiness Index increased.

Crisp Happiness Index is more important than any financial metric, not only because it visualizes the aspect that matters most to us, but also because it is a leading indicator, which makes us agile. Most financial metrics are trailing indicators, making it hard to react to change in time.
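The "call for help" behavior Henrik describes is simple enough to sketch. The column names below mirror the spreadsheet he outlines, but the data and the 2-or-below threshold are illustrative assumptions, not Crisp's actual implementation.

```python
# Hypothetical sketch of the happiness-index check described above:
# compute the average and flag any row at 2 or below as a call for help.
rows = [
    {"name": "Anna", "happiness": 4, "worst": "nothing major"},
    {"name": "Bjorn", "happiness": 1, "worst": "invoicing routines"},
    {"name": "Carin", "happiness": 5, "worst": ""},
]

average = sum(r["happiness"] for r in rows) / len(rows)
calls_for_help = [r for r in rows if r["happiness"] <= 2]

print(f"Happy Crisper Index: {average:.1f}")
for r in calls_for_help:
    # "What feels worst right now?" tells the team where to start helping.
    print(f"Check in with {r['name']}: {r['worst']}")
```

The low score, not the average, is what triggers action: one flagged row is the signal to run a workshop and remove the impediment, exactly as in the invoicing example above.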


As Dan Pink points out in his RSA talk, people are motivated by autonomy, purpose, and mastery. Takeuchi and Nonaka observed in the paper that launched Scrum that great teams exhibit autonomy, transcendence, and cross-fertilization. The "happiness metric" along with some A3 thinking helped flush out these issues at Crisp and it can work for your company.

At the core of the creation of Scrum was a daily meditation practice, built over 30 years beginning as a fighter pilot during the Vietnam War. It is a good practice for a warrior, and for Scrum, as changing the way of working in companies all over the world is a mighty struggle. May all your projects be early, may all your customers be happy, and may all your teams be free of impediments!

"May all beings be well, may all beings be happy, may all beings be free from suffering."
- Compassion Meditation for a Time of War

Six Signs your Team’s Acceleration is Too Good to be True

Let me say right from the beginning that I am a huge fan of tracking velocity in Scrum…it is an amazingly powerful concept. 
  • The ability to measure team output from sprint to sprint allows a team to systematically experiment with different process improvements and consistently get better over time. 
  • A clear sense of how much output the team actually produces within a sprint also drives better decision-making about when to expect project completion without slave-driving the team developing it. 
  • As an agile leader, I like to know whether my teams are accelerating, decelerating or staying stable over time. If they are accelerating, it gives me confidence that our projected completion dates are relatively safe; whereas if they are stable or decelerating, it implies greater risk in current projections. 
But metrics are only as good as the integrity of the data that feed them. The traits that make estimating velocity in points so powerful (speed, ease for the teams, intuitive accuracy of estimation) also mean that, in the wrong conditions, velocity can be manipulated to produce misleading conclusions.  To be clear, this is not the terrible scourge that proponents of measuring in hours often claim it is…a few simple guidelines, such as "NEVER tie incentives to velocity" and "don't use declining velocity as a reason to beat up your teams," generally suffice to eliminate any deliberate gaming of the system. 

However, seeing velocity increase over time just feels good and teams often thrive on the sense of accomplishment that comes from getting more done in less time.  Even without overt pressure to increase velocity, the collective will to go faster can create upward velocity drift that isn’t necessarily driven by increases in underlying output.  Since only the team suffering from velocity drift is impacted, this need not be a major problem. But for Scrum Masters, Product Owners and teams concerned with maintaining the integrity of their velocity metric, I humbly offer…

The 6 signs that your beautiful velocity growth trajectory may just be bad data:
  1. Velocity always increases – Even the highest-performing Scrum teams suffer setbacks or encounter new impediments.  In general, Scrum teams that set challenging goals should expect to fail about 20% of their sprints, meaning that velocity should decline from the previous sprint about the same percentage of the time (if you are using “yesterday’s weather” to pull stories into the sprint). If your team has been through a dozen sprints without a single backslide in velocity, that stately upward trend is probably being managed more deliberately than it should be.
  2. Inexplicable Acceleration – Velocity can go up in short bursts for no particular reason, but it is difficult to sustain structural velocity improvement without systematically removing team impediments.  So if a team’s velocity has been increasing consistently but they can’t point to specific impediments that they have removed, that is a red flag that the acceleration may not be real.  At best, the team is not conducting healthy process experiments to deliver repeatable and sustainable acceleration.  At worst, they may be undermining the meaningfulness of their velocity.
  3. The same story now receives a higher point estimation than it used to – This is the definition of “point inflation” that opponents of measuring velocity in points are always pointing to.  In practice, we rarely see egregious cases of point inflation where the exact same story that was 3 points in a previous sprint is now 5 points in the current one.  Instead, we typically encounter more nuanced forms of inflation, such as when an additional quality check is added to correct for past issues and points are added to complete this additional work.  The amount of effort needed to complete the work may have increased, but the amount of output has not, so these added points represent a subtle form of point inflation.
  4. Backlog stuffing with “filler stories” – Teams that are striving to increase velocity often become obsessed with ensuring that everything they do is reflected in the backlog.  In general, you don’t want to do tons of off-backlog work and do want to stay focused on completing the goals of the sprint. However, some level of housekeeping and team hygiene work is a natural part of the group process.  If including these items in the backlog was always a team norm…that is fine. If that wasn’t always the norm, then including these filler stories with associated points gives the false impression that the team is accelerating when it is not actually producing more output.
  5. Lots of separate minimum-sized stories – We often say that smaller user stories are better, and that ultimately teams should strive to work with uniformly small stories in the sprint.  This is a great goal, but it can be taken too far.  If work is broken into many stories that are smaller than the smallest sizing increment used by the team (“xs”, “1-point”, “Chihuahua”, etc.) then the rounding error of adding all these fractional stories together starts to exert a strong upward influence on velocity.  If these precisely divided stories are still good user stories reflecting incremental functionality, then it is time to reset your reference story to accommodate smaller divisions.  More often than not, however, these tiny stories are really tasks and are hurting the team’s ability to work together to produce quality product.
  6. Excessive “normalization” of velocity – Tracking team strength in each sprint is helpful for knowing the context the team is operating within. There are a number of compelling reasons to apply a lightweight level of normalization to a team’s raw velocity number to get a better predictor of actual team output: it provides a more stable measure of output in the face of major illness, vacations, family leave and other significant shifts in team capacity.  However, it also introduces one more lever that can artificially increase apparent velocity, so teams need to be careful to reserve normalization adjustments for major capacity impacts only, and not try to adjust for every perceived shift in team strength.  If you notice that the team has not been at full capacity for a long time, it is time to question whether over-normalization may be occurring.
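Sign #1 is easy to check mechanically. The sketch below uses invented velocity data; the ~20% expected decline rate comes from the "yesterday's weather" reasoning above, and the 10% warning threshold is an arbitrary choice for illustration.

```python
# Hypothetical sketch of sign #1: with "yesterday's weather", roughly 20% of
# sprints should show a velocity decline. A long run with none is suspect.
velocities = [21, 23, 24, 26, 27, 29, 30, 32, 33, 35, 36, 38]  # invented data

# Count sprint-over-sprint declines.
declines = sum(1 for prev, cur in zip(velocities, velocities[1:]) if cur < prev)
decline_rate = declines / (len(velocities) - 1)

if decline_rate < 0.1:
    print(f"Only {decline_rate:.0%} of sprints declined -- "
          "velocity may be drifting rather than genuinely accelerating")
```

A dozen sprints of monotonic increase, as in this data, is exactly the pattern the post warns about: the trend is too smooth to be real.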
Feel free to share your own experiences with velocity in the comments section below, or check out other musings on the value of good metrics in Scrum, including a thread on leadership dashboards here.


Alex Brown is Scrum Inc’s Chief Product Owner and Chief Operating Officer.  He set up the company’s internal metrics dashboard to automatically consolidate & share agile metrics and support better decision-making.  He also trains senior leaders and consults to companies on how to succeed strategically in an agile business environment.

Sunday, April 06, 2014

Agile Programming for Families

New York Times columnist and author Bruce Feiler has just published a new book titled The Secrets of Happy Families: Improve Your Mornings, Rethink Family Dinner, Fight Smarter, Go Out and Play, and Much More.  The secret, it turns out, is applying agile development to your household. 

Every week starts with a family meeting. Kids and parents self-organize and self-manage (kids even help decide on their own incentives and punishments), and every week they have a retrospective to determine what they can do better next Sprint. It turns out that kids think their parents’ stress levels could improve.

Scrum: savior of the modern American family? Watch and decide for yourself. 

You can listen to an NPR review and an interview with Bruce Feiler here and here.

Wanna learn more about Agile and childhood?  Click here for Scrum in schools.

-- Joel Riddle

Wednesday, March 26, 2014

#ScalingScrum: How Do You Manage a Backlog Across Multiple Teams?

Scrum is being used for everything from massive software implementations to cars, rocket ships, and industrial machinery. Scrum Inc.'s Chief Product Owner, Alex Brown, is in the midst of putting together a presentation on a modular way to scale Scrum. 

He has noticed that when projects scale up, one of the first issues organizations must confront is how to manage a Product Backlog across multiple teams. 

Some organizations work from one master backlog managed by a Chief Product Owner or a Product Owner Team. Multiple teams then pull stories from that backlog.

Other organizations have teams with individual product owners who create their own backlogs and release their own modules into a loosely coupled framework. Spotify has set up their entire organization to enable this. (They also carefully manage dependencies across teams.) 

There is a whole spectrum of options between these two examples. The right answer for any company lies in its own context. If you're building something where all the modules are intimately integrated, a single, tightly managed master backlog may work well. In a different environment, it might be faster for individual teams to continuously release improvements to their own modules. There is coordination at the epic level, but Sprint to Sprint, their backlogs are independent of each other.
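The first model, a single master backlog, can be sketched in a few lines. Everything here is illustrative: the story names, team names, and per-team capacities are invented, and a real Chief Product Owner would also account for dependencies and team specialization.

```python
# Illustrative-only sketch of the "single master backlog" model described
# above: a Chief Product Owner orders one backlog, and several teams pull
# stories from the top at sprint planning.
from collections import deque

master_backlog = deque(
    ["story-A", "story-B", "story-C", "story-D", "story-E", "story-F"]
)
teams = {"team-1": [], "team-2": [], "team-3": []}
capacity = {"team-1": 2, "team-2": 2, "team-3": 1}  # stories per sprint

# Each team pulls its next stories in strict priority order.
for team, n in capacity.items():
    for _ in range(n):
        if master_backlog:
            teams[team].append(master_backlog.popleft())

print(teams)
```

The second model simply gives each team its own deque with its own product owner ordering it; the trade-off is exactly the one the post describes between tight integration and independent release cadence.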

These models work for different Scrum implementations and we know there are even more ways of doing it. We would love to hear your story so we are extending an open invitation to the Agile community:

How do you manage your backlog across teams?

We want to learn how your context shapes your practice. Why do you do it that way? What kind of product are you building? How many teams do you have? And how is your method working for you?

Please post your answers in the comment section or on Jeff's Facebook page, or on Twitter if you are that concise (@jeffsutherland #ScalingScrum).

As the conversation winds down, we'll write a blog and compile the most interesting and effective techniques so we can learn from each other. 

In the coming months, look forward to a Scrum Inc. online course in which Alex and Jeff present a framework for scaling Scrum. They will also share this framework at Agile 2014 in Orlando.

--Joel Riddle

Thursday, March 20, 2014

Agile Leadership Dashboards: Post 1

I am having lots of conversations these days about executive dashboards for Scrum…what does a leadership team really need to know in order to do their job well, and how can teams provide that information without wasting valuable time preparing reports?  Within these discussions, there are also nuances such as: what agile metrics might actually be dangerous to share with management because they can drive unintended consequences?  And what metrics should or shouldn’t be linked to incentives?

With these debates as a background, I am embarking on a regular series of posts to explore the subject of agile metrics and leadership dashboards further.  For this, I will be drawing specifically on experiences setting up our own dashboard at Scrum Inc. and working with a large tech client (who shall remain anonymous) to set up an agile executive reporting tool for their C-suite leadership.  I welcome everyone to join in the conversation and share your own experiences, both positive and negative, in the blog comments.  I will try to weave these into future posts.

To start, what are the goals of a leadership dashboard?  Like a good user story, it is important to have a clear vision of the desired outcome, and a set of acceptance criteria so that you recognize success when you see it.  Particularly if you are using agile to develop the dashboard incrementally, you need to know when your latest increment is “good enough.”

At the most basic level, leaders need to accomplish three objectives:

  • They need to establish and maintain a compelling vision that aligns the organization around a shared sense of purpose.
  • They need to maintain visibility of how the organization is progressing toward the realization of that vision, and make course adjustments as needed to ensure progress.
  • They need to support motivation and accountability within the organization.

An effective leadership dashboard directly supports the second objective, but it can also help deliver the third, and should be informed by the first.

To my mind, a successful dashboard should provide leaders with the relevant context and metrics they need to make informed decisions.  It should be updated on a frequency that meets or exceeds the required decision-making cadence.  Finally, to the extent possible, the dashboard should be assembled from data that teams are already collecting to inform their own process, and pulled automatically to minimize distraction from producing valuable new product.

Relevant Content
I will definitely revisit the topic of potential metrics to include in a dashboard in much more detail in future posts.  For this discussion, suffice it to say that top-level metrics should answer the key questions of:

1) Are we producing the right product(s) for the customers we are trying to serve;
2) Are we prioritizing our efforts and applying our resources correctly given what we think we know about the market and our competitors;
3) Are we making consistent progress towards our strategic goals; and
4) Are we doing all of the above in a way that we can sustain for the long run? 

Metrics that answer or inform these questions help leaders make better strategic decisions.  Extraneous metrics are at best a distraction, and at worst cause leaders to make bad decisions.

Update Cadence
Some decisions only need to be made once a year, such as “should we renew our annual contract with…?”  Others need to be made monthly, daily, or in response to real-time events, such as “how do we restore service given an outage that just occurred?”  A truly great dashboard provides the current snapshot of key metrics with the most relevant data being shown.  Real-time decisions need data from a few moments ago, whereas monthly financial decisions are better made with the recent month’s complete data rather than a partial view of the current month’s results.  Deliberately matching update frequency to decision cadence brings the most powerful data to the right place with the least amount of effort.

We speak with far too many teams that complain about the onerous reports they are asked to produce for senior leadership.  The typical complaint is that leadership wants updates on a number of metrics the team never uses for its own purposes, so this data must be gathered, calculated, and presented manually.  The team sees this as a huge waste of time to produce metrics that don’t even reflect reality on the ground.

The Scrum process throws off enormous amounts of high quality data, such as velocity, team happiness, planned backlog, defect and impediment lists, and business value.  Most teams already collect this data in software tools with API interfaces.  Other than the first few iterations of a new dashboard where the metrics and presentation are still being refined in coordination with leadership stakeholders, there is no good reason dashboard data can’t be pulled, calculated, and presented automatically.
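The automatic pull described above can be sketched as follows. Since the post does not name a specific tracking tool, the API call is stubbed out with a local function and the sprint data is invented; in practice `fetch_sprints` would be a REST call to whatever tool the teams already use.

```python
# Sketch only: the tracker's API is stubbed with a local function and fake
# data. The point is that once sprint data lives in a tool, dashboard rows
# can be computed automatically, with no manual reporting by the team.
def fetch_sprints(team):
    # Stand-in for a real API call to the team's tracking tool.
    return [{"sprint": 1, "points_done": 18},
            {"sprint": 2, "points_done": 21}]

def dashboard_row(team):
    sprints = fetch_sprints(team)
    velocities = [s["points_done"] for s in sprints]
    return {
        "team": team,
        "last_velocity": velocities[-1],
        "avg_velocity": sum(velocities) / len(velocities),
    }

print(dashboard_row("payments-team"))
```

Because the team already records points done for its own sprint planning, this metric costs them nothing extra to report, which is exactly the property a leadership dashboard should aim for.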

As I mentioned, this is just the first of many posts on this topic.  If you want to dig deeper, feel free to check out our past online course on "The Agile Leader's Dashboard".

-Alex Brown