Thursday, April 10, 2014

Yet Another $163B Waterfall Disaster

The F-35 Is Worse Than

The $400 billion jet project is the most expensive weapon the Pentagon has ever purchased. It's also seven years behind schedule and $163 billion over budget ...

And here’s the kicker: According to a 41-page Government Accountability Office (GAO) report released yesterday, the F-35, which has yet to fly a single official mission, will stay grounded for at least another 13 months because of “problems completing software testing.”

What GAO Found 
Delays in developmental flight testing of the F-35’s critical software may hinder delivery of the warfighting capabilities the military services expect. F-35 developmental flight testing comprises two key areas: mission systems and flight sciences. Mission systems testing verifies that the software-intensive systems that provide critical warfighting capabilities function properly and meet requirements, while flight sciences testing verifies the aircraft’s basic flying capabilities. Challenges in development and testing of mission systems software continued through 2013, due largely to delays in software delivery, limited capability in the software when delivered, and the need to fix problems and retest multiple software versions. The Director of Operational Test and Evaluation (DOT&E) predicts delivery of warfighting capabilities could be delayed by as much as 13 months.

Tuesday, April 08, 2014

Happiness Metric - The Wave of the Future

The Happiness Online Course archive is available.

... any investor should be able to measure its return, and now a group of U.K. researchers say they've provided the first scientifically-controlled evidence of the link between human happiness and productivity: Happier people are about 12% more productive, the study found.

The results, to be published in the Journal of Labor Economics, are based on four different experiments that employed a variety of tactics on a total of 713 subjects at an elite British university. See article ...
Scrum Inc. used the happiness metric to help increase velocity over 500% in 2011, and net revenue doubled. The way to do this is now part of a formal pattern called "Scrumming the Scrum." As I travel around the world, the happiness metric keeps bubbling up as a topic of interest. 

Books by business leaders (the Zappos CEO, the Joie de Vivre CEO) and psychologists are starting to hit the charts at Amazon. Managers and consultants are telling me that people are getting fed up with being unhappy at work. Younger people in particular are refusing to work in command-and-control environments based on punishment and blame. Major change is emerging (see The Leader's Guide to Radical Management by Stephen Denning). The Harvard Business Review devoted a recent issue to happiness because happy employees lead to happy customers and better business. However, never underestimate the human capacity for screwing things up. See the HBR blog post "Happiness is Overrated." You might need to "Pop the Happy Bubble," a pattern designed to straighten things out when your team is oblivious to impediments.

The Scrum Papers documents some of the early influences on Scrum. Nobel Laureate Professor Yunus at the Grameen Bank in Bangladesh provided key insights on how to bootstrap teams into a better life. Practical work on these issues on the President's Council at Accion helped me put these insights into practice just prior to the creation of Scrum in 1993. I saw how to bootstrap developers out of an environment where they were always late and under pressure into a team experience that could change their lives.

One of the most innovative companies in the world of Scrum is a consultancy in Stockholm called Crisp. Henrik Kniberg is the founder and we have worked together on Scrum and Lean for many years. He recently introduced the "happiness index" as the primary metric to drive his company and found it works better than any other metric as a forward indicator of revenue.

Henrik outlines on his blog how he used the A3 process to set the direction for his company and how that led to measuring company performance by the "Happy Crisper Index."

Nowadays one of our primary metrics is the "Nöjd Crispare Index" (in English: "Happy Crisper Index" or "Crisp happiness index"). The scale is 1-5. We measure this continuously through a live Google Spreadsheet. People update it approximately once per month.

Nöjd Crispare Index

 Here are the columns:

  • Name
  • How happy are you with Crisp? (scale 1-5)
  • Last update of this row (timestamp)
  • What feels best right now?
  • What feels worst right now?
  • What would increase your happiness index?
  • Other comments
We chart the history and how it correlates to specific events and bring this data to our conference.
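As a sketch of how such a spreadsheet could be tallied, here is a minimal Python version. The column names and the 1-5 scale follow the description above; the people and entries are purely illustrative:

```python
from datetime import date

# Each row mirrors the spreadsheet columns listed above.
# Scores use Crisp's 1-5 scale; names and entries are made up.
rows = [
    {"name": "Anna", "score": 4, "updated": date(2014, 3, 28),
     "best": "Great client project", "worst": "Travel load",
     "would_help": "More pairing time", "comments": ""},
    {"name": "Bjorn", "score": 1, "updated": date(2014, 4, 1),
     "best": "", "worst": "Confusing invoicing routine",
     "would_help": "Simpler internal process", "comments": ""},
    {"name": "Carla", "score": 5, "updated": date(2014, 4, 2),
     "best": "New course launch", "worst": "", "would_help": "",
     "comments": ""},
]

def happiness_index(rows):
    """Average score across everyone -- the 'Happy Crisper Index'."""
    return sum(r["score"] for r in rows) / len(rows)

def calls_for_help(rows, threshold=2):
    """Rows at 1 or 2 act as an effective call for help."""
    return [r for r in rows if r["score"] <= threshold]

print(round(happiness_index(rows), 2))            # 3.33
print([r["name"] for r in calls_for_help(rows)])  # ['Bjorn']
```

Charting `happiness_index` over time against notable company events gives exactly the kind of history Crisp reviews, and surfacing the low rows matches the practice of treating a 1 or 2 as a call for help.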

Nöjd Crispare Historik

Whenever the average changes significantly we talk about why, and what we can do to make everybody happier. If we see a 1 or 2 on any row, that acts as an effective call for help. People go out of their way to find out how they can help that person, which often results in some kind of process improvement in the company. This happened last week: one person dropped to a 1 due to confusion and frustration with our internal invoicing routines. Within a week we held a workshop and figured out a better process. The company improved and the Crisp Happiness Index increased.

Crisp Happiness Index is more important than any financial metric, not only because it visualizes the aspect that matters most to us, but also because it is a leading indicator, which makes us agile. Most financial metrics are trailing indicators, making it hard to react to change in time.


As Dan Pink points out in his RSA talk, people are motivated by autonomy, purpose, and mastery. Takeuchi and Nonaka observed in the paper that launched Scrum that great teams exhibit autonomy, transcendence, and cross-fertilization. The "happiness metric" along with some A3 thinking helped flush out these issues at Crisp and it can work for your company.

At the core of the creation of Scrum was a daily meditation based on 30 years of practice, beginning when I was a fighter pilot during the Vietnam War. It is a good practice for a warrior, and for Scrum, as changing the way of working in companies all over the world is a mighty struggle. May all your projects be early, may all your customers be happy, and may all your teams be free of impediments!

"May all beings be well, may all beings be happy, may all beings be free from suffering."
- Compassion Meditation for a Time of War

Six Signs your Team’s Acceleration is Too Good to be True

Let me say right from the beginning that I am a huge fan of tracking velocity in Scrum…it is an amazingly powerful concept. 
  • The ability to measure team output from sprint to sprint allows a team to systematically experiment with different process improvements and consistently get better over time. 
  • A clear sense of how much output the team actually produces within a sprint also drives better decision-making about when to expect project completion without slave-driving the team developing it. 
  • As an agile leader, I like to know whether my teams are accelerating, decelerating or staying stable over time. If they are accelerating, it gives me confidence that our projected completion dates are relatively safe; whereas if they are stable or decelerating, it implies greater risk in current projections. 
But metrics are only as good as the integrity of the data that feed them. The traits that make estimating velocity in points so powerful (speed, ease for the teams, intuitive accuracy of estimation) also mean that, in the wrong conditions, velocity can be manipulated to produce misleading conclusions.  To be clear, this is not the terrible scourge that proponents of measuring in hours often claim it is…a few simple guidelines such as “NEVER tie incentives to velocity” and “don’t use declining velocity as a reason to beat up your teams” generally suffice to eliminate any deliberate gaming of the system. 

However, seeing velocity increase over time just feels good and teams often thrive on the sense of accomplishment that comes from getting more done in less time.  Even without overt pressure to increase velocity, the collective will to go faster can create upward velocity drift that isn’t necessarily driven by increases in underlying output.  Since only the team suffering from velocity drift is impacted, this need not be a major problem. But for Scrum Masters, Product Owners and teams concerned with maintaining the integrity of their velocity metric, I humbly offer…

The 6 signs that your beautiful velocity growth trajectory may just be bad data:
  1. Velocity always increases – Even the highest-performing Scrum teams suffer setbacks or encounter new impediments.  In general, Scrum teams that set challenging goals should expect to fail about 20% of their sprints, meaning that velocity should decline from the previous sprint about the same percentage of the time (if you are using “yesterday’s weather” to pull stories into the sprint). If your team has been through a dozen sprints without a single backslide in velocity, it suggests that the stately upward trend is being managed more deliberately than it should be.
  2. Inexplicable Acceleration – Velocity can go up in short bursts for no particular reason, but it is difficult to sustain structural velocity improvement without systematically removing team impediments.  So if a team’s velocity has been increasing consistently but they can’t point to specific impediments that they have removed, that is a red flag that the acceleration may not be real.  At best, the team is not conducting healthy process experiments to deliver repeatable and sustainable acceleration.  At worst, they may be undermining the meaningfulness of their velocity.
  3. The same story now receives a higher point estimation than it used to – This is the definition of “point inflation” that opponents of measuring velocity in points are always pointing to.  In practice, we rarely see egregious cases of point inflation where the exact same story that was 3 points in a previous sprint is now 5 points in the current one.  Instead, we typically encounter more nuanced forms of inflation, such as when an additional quality check is added to correct for past issues and points are added to complete this additional work.  The amount of effort needed to complete the work may have increased, but the amount of output has not, so these added points represent a subtle form of point inflation.
  4. Backlog stuffing with “filler stories” – Teams that are striving to increase velocity often become obsessed with ensuring that everything they do is reflected in the backlog.  In general, you don’t want to do tons of off-backlog work and do want to stay focused on completing the goals of the sprint. However, some level of housekeeping and team hygiene work is a natural part of the group process.  If including these items in the backlog was always a team norm…that is fine. If that wasn’t always the norm, then including these filler stories with associated points gives the false impression that the team is accelerating when it is not actually producing more output.
  5. Lots of separate minimum-sized stories – We often say that smaller user stories are better, and that ultimately teams should strive to work with uniformly small stories in the sprint.  This is a great goal, but it can be taken too far.  If work is broken into many stories that are smaller than the smallest sizing increment used by the team (“xs”, “1-point”, “Chihuahua”, etc.) then the rounding error of adding all these fractional stories together starts to exert a strong upward influence on velocity.  If these precisely divided stories are still good user stories reflecting incremental functionality, then it is time to reset your reference story to accommodate smaller divisions.  More often than not, however, these tiny stories are really tasks and are hurting the team’s ability to work together to produce quality product.
  6. Excessive “normalization” of velocity – Tracking team strength in each sprint is helpful for knowing the context the team is operating within. There are a number of compelling reasons to apply a lightweight level of normalization to a team’s raw velocity number to get a better predictor of actual team output: it provides a more stable measure of output in the face of major illness, vacations, family leave and other significant shifts in team capacity.  However, it also introduces one more lever that can artificially increase apparent velocity, so teams need to be careful to reserve normalization adjustments only for major capacity impacts, and not try to adjust for every perceived shift in team strength.  If you notice that the team has not been at full capacity for a long time, it is time to question whether over-normalization may be occurring.
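Some of these signs are mechanical enough to check automatically. Here is a hedged sketch in Python covering signs 1 and 6; the sprint velocities, capacity figures, and thresholds are illustrative assumptions, not prescriptions:

```python
# A dozen sprints of raw velocity, in points (illustrative data).
velocities = [20, 21, 23, 24, 26, 27, 29, 30, 32, 33, 35, 36]

# Sign 1: a healthy team using "yesterday's weather" should see velocity
# dip below the previous sprint roughly 20% of the time.
declines = sum(1 for prev, cur in zip(velocities, velocities[1:]) if cur < prev)
decline_rate = declines / (len(velocities) - 1)
if declines == 0 and len(velocities) >= 12:
    print("Warning: a dozen sprints with no backslide -- trend may be managed.")

# Sign 6: normalization should be reserved for major capacity impacts.
# capacity = fraction of full team strength each sprint (illustrative).
capacity = [1.0, 0.9, 1.0, 0.8, 0.9, 0.85, 0.9, 0.8, 0.9, 0.85, 0.9, 0.8]
below_full = sum(1 for c in capacity if c < 1.0) / len(capacity)
if below_full > 0.5:
    print("Warning: team rarely at full capacity -- check for over-normalization.")
```

With this sample data both warnings fire, which is the point: a perfectly smooth climb combined with chronic sub-full capacity is exactly the pattern worth questioning.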
Feel free to share your own experiences with velocity in the comments section below, or check out other musings on the value of good metrics in Scrum, including a thread on leadership dashboards here.


Alex Brown is Scrum Inc’s Chief Product Owner and Chief Operating Officer.  He set up the company’s internal metrics dashboard to automatically consolidate & share agile metrics and support better decision-making.  He also trains senior leaders and consults to companies on how to succeed strategically in an agile business environment.

Sunday, April 06, 2014

Agile Programming for Families

New York Times columnist and author Bruce Feiler has just published a new book titled The Secrets of Happy Families: Improve Your Mornings, Rethink Family Dinner, Fight Smarter, Go Out and Play, and Much More.  The secret, it turns out, is applying agile development to your household. 

Every week starts with a family meeting. Kids and parents self-organize and self-manage (kids even help decide on their own incentives and punishments) and every week they have a retrospective to determine what they can do better next Sprint. Turns out that kids think their parents’ stress levels can improve.

Scrum: savior of the modern American family? Watch and decide for yourself. 

You can listen to an NPR review and an interview with Bruce Feiler here and here.

Wanna learn more about Agile and childhood?  Click here for Scrum in schools.

-- Joel Riddle

Wednesday, March 26, 2014

#ScalingScrum: How Do You Manage a Backlog Across Multiple Teams?

Scrum is being used for everything from massive software implementations to cars, rocket ships, and industrial machinery. Scrum Inc.'s Chief Product Owner, Alex Brown, is in the midst of putting together a presentation on a modular way to scale Scrum. 

He has noticed that when projects scale up, one of the first issues organizations must confront is how to manage a Product Backlog across multiple teams. 

Some organizations work from one master backlog managed by a Chief Product Owner or a Product Owner Team. Multiple teams then pull stories from that backlog.

Other organizations have teams with individual product owners who create their own backlogs and release their own modules into a loosely coupled framework. Spotify has set up their entire organization to enable this. (They also carefully manage dependencies across teams.) 

There is a whole spectrum of options between these two examples. The right answer for any company lies in their own context. If you're building something where all the modules are intimately integrated, a single, tightly managed master backlog may work well. In a different environment, it might be faster for individual teams to continuously release improvements on their own module. There is coordination at the epic level, but Sprint-to-Sprint, their backlogs are independent from each other.

These models work for different Scrum implementations and we know there are even more ways of doing it. We would love to hear your story so we are extending an open invitation to the Agile community:

How do you manage your backlog across teams?

We want to learn how your context shapes your practice. Why do you do it that way? What kind of product are you building? How many teams do you have? And how is your method working for you?

Please post your answers in the comment section or on Jeff's Facebook page, or on Twitter if you are that concise (@jeffsutherland #ScalingScrum).

As the conversation winds down, we'll write a blog and compile the most interesting and effective techniques so we can learn from each other. 

In the coming months, look forward to a Scrum Inc. online course in which Alex and Jeff present a framework for scaling Scrum. They will also share this framework at Agile 2014 in Orlando.

--Joel Riddle

Thursday, March 20, 2014

Agile Leadership Dashboards: Post 1

I am having lots of conversations these days about executive dashboards for Scrum…what does a leadership team really need to know in order to do their job well, and how can teams provide that information without wasting valuable time preparing reports?  Within these discussions, there are also nuances such as: what agile metrics might actually be dangerous to share with management because they can drive unintended consequences?  And what metrics should or shouldn’t be linked to incentives?

With these debates as a background, I am embarking on a regular series of posts to explore the subject of agile metrics and leadership dashboards further.  For this, I will be drawing specifically on experiences setting up our own dashboard at Scrum Inc. and working with a large tech client (who shall remain anonymous) to set up an agile executive reporting tool for their C-suite leadership.  I welcome everyone to join in the conversation and share your own experiences, both positive and negative, in the blog comments.  I will try to weave these into future posts.

To start, what are the goals of a leadership dashboard?  Like a good user story, it is important to have a clear vision of the desired outcome, and a set of acceptance criteria so that you recognize success when you see it.  Particularly if you are using agile to develop the dashboard incrementally, you need to know when your latest increment is “good enough.”

At the most basic level, leaders need to accomplish three objectives:

  • They need to establish and maintain a compelling vision that aligns the organization around a shared sense of purpose.
  • They need to maintain visibility of how the organization is progressing toward the realization of that vision, and make course adjustments as needed to ensure progress.
  • They need to support motivation and accountability within the organization.

An effective leadership dashboard directly supports the second objective, but it can also help deliver the third objective, and should be informed by the first.

To my mind, a successful dashboard should provide leaders with the relevant context and metrics they need to make informed decisions.  It should be updated on a frequency that meets or exceeds the required decision-making cadence.  Finally, to the extent possible, the dashboard should be assembled from data that teams are already collecting to inform their own process, and pulled automatically to minimize distraction from producing valuable new product.

Relevant Content
I will definitely revisit the topic of potential metrics to include in a dashboard in much more detail in future posts.  For this discussion, suffice it to say that top-level metrics should answer the key questions of:

1) Are we producing the right product(s) for the customers we are trying to serve;
2) Are we prioritizing our efforts and applying our resources correctly given what we think we know about the market and our competitors;
3) Are we making consistent progress towards our strategic goals; and
4) Are we doing all of the above in a way that we can sustain for the long run? 

Metrics that answer or inform these questions help leaders make better strategic decisions.  Extraneous metrics are at best a distraction, and at worst cause leaders to make bad decisions.

Update Cadence
Some decisions only need to be made once a year, such as “should we renew our annual contract with…?”  Others need to be made monthly, daily, or in response to real-time events, such as “how do we restore service given an outage that just occurred?”  A truly great dashboard provides the current snapshot of key metrics with the most relevant data being shown.  Real-time decisions need data from a few moments ago, whereas monthly financial decisions are better made with the recent month’s complete data rather than a partial view of the current month’s results.  Deliberately matching update frequency to decision cadence brings the most powerful data to the right place with the least amount of effort.

We speak with far too many teams that complain about the onerous reports they are asked to produce for senior leadership.  The typical complaint is that leadership wants updates on a number of metrics the team never uses for its own purposes, so this data must be gathered, calculated, and presented manually.  The team sees this as a huge waste of time to produce metrics that don’t even reflect reality on the ground.

The Scrum process throws off enormous amounts of high quality data, such as velocity, team happiness, planned backlog, defect and impediment lists, and business value.  Most teams already collect this data in software tools with API interfaces.  Other than the first few iterations of a new dashboard where the metrics and presentation are still being refined in coordination with leadership stakeholders, there is no good reason dashboard data can’t be pulled, calculated, and presented automatically.
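As an illustration of that kind of automatic pull, here is a minimal sketch. The endpoint URL and field names are hypothetical stand-ins for whatever API your tracking tool actually exposes:

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint; real tools expose similar sprint data through
# their own APIs with different URLs, auth, and field names.
SPRINTS_URL = "https://tracker.example.com/api/team/42/sprints"

def fetch_sprints(url=SPRINTS_URL):
    """Pull completed-sprint records as a list of dicts over HTTP."""
    with urlopen(url) as resp:
        return json.load(resp)

def dashboard_snapshot(sprints):
    """Roll raw sprint data up into top-level dashboard metrics."""
    recent = sprints[-6:]  # last six sprints
    velocities = [s["velocity"] for s in recent]
    return {
        "current_velocity": velocities[-1],
        "avg_velocity": sum(velocities) / len(velocities),
        "trend": velocities[-1] - velocities[0],  # + accelerating, - decelerating
        "avg_happiness": sum(s["happiness"] for s in recent) / len(recent),
        "open_impediments": recent[-1]["open_impediments"],
    }

# Example with canned data instead of a live call:
sample = [{"velocity": v, "happiness": h, "open_impediments": i}
          for v, h, i in [(20, 3.5, 4), (22, 3.6, 3), (21, 3.4, 5),
                          (25, 3.9, 2), (27, 4.1, 2), (30, 4.3, 1)]]
print(dashboard_snapshot(sample))
```

The same rollup works whether the data arrives from a live API call or a nightly export, which is what lets the update frequency be matched deliberately to the decision cadence rather than to anyone's reporting chores.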

As I mentioned, this is just the first of many posts on this topic.  If you want to dig deeper, feel free to check out our past online course on "The Agile Leader's Dashboard".

-Alex Brown

Friday, March 14, 2014

How Waterfall Led Off a Cliff

....And no one noticed until it was too late. Steven Brill's recent cover story in Time about how a handful of Silicon Valley engineers and experts resurrected HealthCare.gov from technical and political disaster should be a warning to politicians and policy experts everywhere: no longer can the government continue to use traditional development and contracting methods without looking incompetent. Citizens use software every day that delights them. They now expect websites to be intuitive, well-designed and reliable. 

Here are three takeaways from Time:

 Make work visible!
One of the things that shocked [the rescue] team most–”among many jaw-dropping aspects of what we found,” as one put it–was that the people running HealthCare.gov had no “dashboard,” no quick way for engineers to measure what was going on at the website, such as how many people were using it, what the response times were for various click-throughs and where traffic was getting tied up. So late into the night of Oct. 18, [the team] spent about five hours coding and putting up a dashboard.
Stand-ups Work
 It was in . . . a nondescript office park in Columbia, Md.–lined with giant Samsung TV monitors showing the various dashboard readings and graphs–that Barack Obama’s health care website was saved. What saved it were stand-ups. . . . The stand-up culture–identify problem, solve problem, try again–was typical of the rescue squad’s ethic. 
Government contracting is broken:
But one lesson of the fall and rise of HealthCare.gov has to be that the practice of awarding high-tech, high-stakes contracts to companies whose primary skill seems to be getting those contracts rather than delivering on them has to change. “It was only when they were desperate that they turned to us,” says [team member] Mikey Dickerson. “I have no history in government contracting and no future in it … I don’t wear a suit and tie … They have no use for someone who looks and dresses like me. Maybe this will be a lesson for them. Maybe that will change.” 
Things are changing at least at the Department of Defense. It is now the law that all DoD software contracts must be Agile. Many in Washington are still trying to figure out what exactly that means but it is a start. Scrum Inc. is also seeing a rush of training requests from big government defense contractors looking to get their teams certified in Scrum. 

In April we're giving an online course that focuses on Scrum, Agile, and the DoD. We'll be talking about the latest on how the Department is changing its rules to take advantage of Scrum's ability to deliver products faster, better, and more responsive to change. That's critical if you're providing tools to someone in combat.

-- Joel Riddle

Sunday, March 02, 2014

Call for Papers - HICSS 48

HICSS is one of the top conferences in paper citation index rankings. This means papers will be seen and used by researchers worldwide more than papers from other conferences. All HICSS papers are published in the IEEE Digital Library and are FREE to download, so accepted papers are accessible to everyone for the rest of time. No other conference can give you the same distribution of your concepts and ideas. Plus it is held on the island of Kauai in January. Write, publish, vacation!

We invite you to submit abstracts and manuscripts for the Agile and Lean Organizations: Management, Metrics and Products mini-track, to be held at HICSS-48 on January 5–8, 2015 at Kauai, Hawaii. Mini-track co-chairs are Jeff Sutherland (Scrum, Inc.) and Dan Greening (Senex Rex). HICSS is an IEEE Computer Society sponsored conference. Abstract and manuscript submissions received before May 1, 2014 will receive early guidance to improve the likelihood of acceptance. Final manuscripts are due June 15, 2014.
The Hawaii International Conference on System Science provides a great mix of academics, industrialists and consultants looking at many system science applications. HICSS attendees gain inspiration from innovators in a variety of fields.
May 15, 2014: Early review deadline
June 15, 2014: Submission deadline


We seek research papers and experience reports that describe how agile development and lean product management interact with organizations, their structures, cultures and products. What evidence-based guidance can we provide to leaders to help motivate, create and sustain agile/lean organizations? How do agile development and lean product management interact and support product groups, departments and companies? How do organizations restructure to support these philosophies and when they do not restructure, what happens? What cultural requirements and/or training are needed for companies to maintain agile behavior? How do organizations implement, monitor and improve coaching, training, mentoring and Scrum Mastering? What are the important metrics, and how do companies measure and improve? How do markets respond to rapid iterations and end-user experimentation?
Submit abstracts or full manuscripts for early review and guidance to improve the likelihood of acceptance. Earlier submissions will gain greater attention.
Submit full manuscripts for review. Review is double-blind; therefore this submission must be without author names. Follow the author instructions on the conference website, select the “Software Technology” track and the “Agile and Lean Organizations” mini-track.
Agile managers structure product development as rhythmic experiments to improve production. Agile is most often applied to software development, and we expect most papers in this mini-track to discuss software development. However, we also welcome papers that describe different types of organizational “production”, such as management initiatives, manufacturing, marketing, sales and finance.
Lean product management continually seeks to reduce waste, including waste due to producing unprofitable products (recently popularized as “Lean Startup” or “Lean Entrepreneurship”). Characteristics include: set-based design, A-B testing, unmoderated user-experience testing, direct market experimentation, customer validation and pivoting. Advocates claim lean product management produces greater market satisfaction and customer engagement, earlier discovery of hidden market opportunities, higher revenues and more efficient use of development staff.
These approaches claim superiority in new product development over traditional approaches (such as “waterfall management”) that make early development and market assumptions in long-range plans and rarely test those assumptions prior to release.
Experimentation lies at the heart of both agile development and lean product management: they identify leading indicators of progress (velocity, reach, engagement, loyalty, revenue, etc.), consider changes to process or product, construct hypotheses, and incorporate feedback loops to confirm or invalidate the hypotheses, perform production or market experiments, and rapidly adapt to discoveries.
Agile and lean approaches challenge organizations large and small. People typically conflate small failures (learning) with large failures (organizational threats), assume that innovation means taking long-range untested risk, and establish and protect budgets with many baked-in production and market assumptions. These cultural realities interfere with agility and real innovation.
As a result, companies invest enormous amounts of money in agile transformations that can succeed, but sometimes fail. What can organizations do to improve agile uptake? How do we know that the organization is improving? How can organizations diagnose problems without motivating gaming? What types of people are more likely to thrive in agile and lean organizations, and what roles should they take? What hiring practices result in better candidates? What training programs produce better results? What coaching structures work? How do we measure these activities?
There are two agile/lean mini-tracks in HICSS-48. This mini-track focuses on organizations and product management. The other mini-track focuses on software development practices. The mini-track chairs may redirect submissions that seem more appropriately hosted in the other mini-track.
We’re looking forward to seeing your submission, and seeing you at HICSS 2015 on Kauai!