Does the Product Owner Have to Accept User Stories?

The concept of user stories (or product backlog items) being accepted by the Product Owner before they can be considered “Done” is deeply ingrained in many teams’ understanding of Scrum. Some assume that it is something you just do. Others include it as an explicit statement in the Definition of Done.

But there is nothing in the Scrum Guide about the Product Owner accepting backlog items. In Scrum, an item is “Done” when it meets the Definition of Done. That’s it.

Obviously, that does not necessarily preclude you from having a “The backlog item was accepted by the Product Owner” statement in the DoD.

However, does this practice actually make sense?

Consider the following: the Product Owner introduces a backlog item that changes the color of the menu bar from blue to green. You finish that backlog item, and then you have to ask the Product Owner: “Hey, have a look. Is this green?”

And yes, that oversimplifies the issue a bit, but essentially that is what you are asking the Product Owner when you ask for acceptance. I am not saying that the Product Owner should not see the result or that they should only see it in the Sprint Review. After all, the Scrum Guide quite clearly states that the Scrum Team (not just the Developers) presents their results to stakeholders. This implies that the Product Owner has seen these results before the Sprint Review. But what was the purpose of showing the Product Owner the finished work?

  • Option 1: The Product Owner gets to check if the Developers implemented the backlog item correctly. But isn’t that the responsibility of the Developers? They do reviews, they run various tests. Do we really have to turn the Product Owner into a test engineer? A lot of Product Owners complain about being too busy. Perhaps one reason is that they do double duty as a member of the development team, because that’s essentially what they do when they act as a test engineer. And frankly, that’s hardly the core competency I expect from a Product Owner.
  • Option 2: The Product Owner clarifies the requirements. They said in the acceptance criteria that the menu bar should be green. So now they say: “But not THAT shade of green.” Well, if the particular shade of green is crucial, why wasn’t it mentioned in the acceptance criteria? And how often does the Product Owner get to clarify the requirements during a sprint? How do you estimate the size of a backlog item if it turns into a moving target of constant requirement clarifications during the sprint? No, that is not how Scrum works. The backlog item was refined, it was considered ready, so now it should not be renegotiated. Many teams wonder: “Should we get acceptance from the Product Owner before or after we run all the tests?” You don’t want to present a potentially buggy item, but you also don’t want to waste time testing an item that the Product Owner will not accept. Both options are wrong, and the dilemma only exists because the team assumes that Product Owner acceptance is necessary.
  • Option 3: The Product Owner learns something new from seeing the finished work item. So the Product Owner sees the green menu bar and realizes that another shade of green would have been better. Or purple. Or blue after all. So we have learned something. That is great, because that is exactly why we work in iterations and increments. And what is the unit of an increment in that context? It is a product backlog item. And what is the iteration? It is a sprint. So if we learn something, we place it into a new backlog item for the next sprint (unless we still have room to get it done in the current sprint). So yes, the Product Owner can feed the new learnings back into the team – by creating a new product backlog item – rather than changing (or “clarifying”) an existing one.

So in my opinion, having the Product Owner accept product backlog items to consider them “Done” is not just unnecessary. It also contradicts the principle of working in increments and iterations, which is a core concept of agile methodologies.

First Sprint Panic

Has anyone else ever had crisis meetings with project managers after a first Sprint? It seems to be a very common thing. After all, first Sprints often go wrong. Of course, even the use of the word “wrong” is debatable, but let’s say they go “wrong” from an outside point of view. The team is not yet settled in, they have not even started their storming phase, they have no idea how much work they can get done in a Sprint, and they are working with new tools and toolchains. So of course, many problems surface during the first Sprint. And that is great. More problems -> more learnings -> more improvements.

So why have I had to sit through so many first-Sprint crisis meetings with project managers? There’s the project manager asking me why the team underestimated the work, why they didn’t see early enough that they would not reach the Sprint Goal, why they did not raise impediments earlier, and so on. And of course, the project manager wants to know why we don’t have a burndown chart or a popcorn board, and why subtasks are not being estimated in person-hours. Because all of that would have ensured a “successful” Sprint. And that is just the tip of the iceberg.

I think one major reason for this crisis mode is that project managers in a classic project world are used to projects that go very smoothly at the beginning. Nothing goes wrong for many, many months. Everything is reported as “on track”. Everything goes according to plan. But then, three months before the fixed delivery date, people start admitting that things are not going so well, and the test teams are finding more bugs than the developers can fix. So experienced project managers go into crisis mode the very moment these indicators start popping up – after all, they only have a few months left. They are trained to panic the moment they see a problem. That is a survival strategy in classic project management.

It is also a part of this project management experience not to trust anyone. After all, everyone has been telling you for many months that everything is fine, while in reality everything was a complete mess. So as a project manager, you don’t trust those people to fix things, but instead you go in and micromanage everything.

How does that translate to an agile project? Like I said, there are many good reasons why a first Sprint may not go so smoothly. And in an agile project, you see this immediately. You get a lot of indicators for things that need to be improved.

Now remember the project manager’s survival strategy: You see problem indicators, you go into crisis mode, stop trusting people and start micromanaging. So there are the Scrum Masters, sitting in a crisis meeting with the project manager after the first Sprint, being told what to do to fix all those terrible problems.

How can this be avoided? I guess one important method for you as Scrum Master or Agile Coach is to manage a project manager’s expectations – to let them know in advance that the first few Sprints will not go so smoothly and to explain to them that this is an important part of the learning and improvement process. Build trust by being honest with the project manager. You have to remember, though, that honesty and transparency are something many project managers are not used to, so you have to understand that this is a learning journey for them.

If you are the project manager in this scenario, I won’t even tell you not to get involved, because I know that it might be your natural survival instinct. Transparency is there for a reason, and it’s completely fine that you ask some questions. But I’d ask you to resist the temptation to micromanage. Don’t tell people what to do. You hired Scrum Masters for a reason. Ask them what the teams have learned, and you’ll probably get plenty of answers. Give yourself the chance to be surprised by what a team can learn within a few Sprints.

Disclaimer: I know, “project managers” are not considered a part of agile projects by many, but most organizations still use project managers, especially when they are at the beginning of an agile transition, so I am speaking from a realistic and pragmatic point of view here. If crossing your arms and saying “This project manager shouldn’t even be here” works for you as a Scrum Master, then that is great of course.

Upgrade from Retrospectives to Prospectives!

Have you ever felt that you are not getting as much out of your retrospectives as you should? Perhaps the problem lies in the term “retrospective” itself. What is our main goal in an inspect & adapt cycle? Is it the “inspect” part: looking back and analyzing problems? Or is our main goal the “adapt” part: generating creative ideas and building an action plan for improvements in the future? I believe it is the latter. So why do we call the event a “retrospective” and not a “prospective”?

Looking back at the problems we have been facing (the “inspect” part) is necessary to build a platform for further discussion. The big question is how deep we want to go in analyzing past problems. A deep analysis is especially useful in a complicated, engineering-focused environment, where a “Good Practice” can be applied to a recurring problem, because we know that a solution that worked in the past will also work in the future. In complex environments, we cannot be so certain. And most of the time, we deal with complex problems in our retrospectives.

Look at the “5 Whys” method for example. That is a great method for quickly finding an underlying cause for a problem. So let’s play through a common example:

Problem: We couldn’t deploy anything in the last Sprint, so nothing got “Done”.

Why?
Cause 1: The toolchain was completely broken. Nothing worked.

Why?
Cause 2: We only have one person working on the toolchain.

Why?
Cause 3: Because we don’t have a budget for a second person to work on the toolchain.

Well, that looks good so far. But from here on out, the Whys become increasingly pointless:

Why?
Cause 4: Because we did not plan for a bigger budget in the initial planning.

Why?
Cause 5: Because the project manager doesn’t have sufficient experience with large-scale projects like ours.

So what have we achieved with the last two “Whys”? We have found someone to blame (the project manager), and we know that if we had a time machine, we could go back in time and plan a bigger budget. That leads nowhere. Have you ever been in a retrospective where you found out who to blame and that having a time machine would be really great? Did you find these insights to be particularly useful?

Well, the exercise was not completely pointless. At least it looks like we have one important insight: The budget is too low. So now the common approach is to climb back up the “chain of causes”: Get a bigger budget -> Hire an additional person to work on the toolchain -> Fix the toolchain -> Get stuff “done”. Simple enough. But what if we find out that we will not get more money?

In my opinion, we could have stopped after the first “Why”. We know why we couldn’t get anything “Done”. Getting more people to work on the toolchain is an obvious fix. Climbing down a ladder of causes is pointless and only limits us to the solution of “We need more money”. What if there are other solutions? Can team members help out with fixing the toolchain for a few days? Can we reuse something from another project instead of maintaining our own toolchain? Is it time to rethink the toolchain and for example throw out steps or quality gates that nobody actually needs?

I am not saying that these other solutions are better. But working in a wider solution space is definitely better. Drilling down into the problem space unnecessarily can keep us from looking at that wider solution space, though. It can lead to discussions like: “Why use up precious developer resources to help with fixing the toolchain if we know that it’s all the project manager’s fault?” Have you ever been in a situation where you got stuck in that kind of thinking? Well, that’s what you get for spending most of your retrospective drilling into the problem space.

Even the most common retrospective structure (from the book “Agile Retrospectives” by Esther Derby and Diana Larsen) leads you down that path. After setting the stage, you gather data, and then you generate insights – and these are not insights about what to do better in the future, but about why things went wrong. The “5 Whys” are actually one of the methods for generating insights proposed in that book. The authors even discourage you from looking at solutions too quickly, preferring analytical thinking about causes. This caters to an engineering mindset, but we are usually not dealing with engineering problems – we are dealing with complex problems in a complex adaptive environment. In one model for a two-hour retrospective in the “Agile Retrospectives” book, 50-80% of the time is assigned to talking about the past in the “Gather data” and “Generate insights” phases, and only 15-20% is assigned to talking about the future in the “Decide what to do” segment.

Doesn’t that strike anyone as completely bizarre?

What kind of creative ideas are we going to come up with in 20 minutes?

I propose to turn this around: We should spend 20% of the time building our platform – figuring out what went wrong and determining the basic root causes. And then we should spend 80% of the time talking about what we can do better in the future: action plans, creative ideas, outside-of-the-box thinking, experiments. We don’t need to spend 80% of the time becoming “problem experts”. We should spend 80% of the time becoming “solution experts”.

This kind of event would not be a forum for crying over spilled milk, for nostalgically dreaming about water that was already under the bridge two weeks ago, for assigning blame, or for discussing how to build a time machine to fix past mistakes. Instead, it would be a generator of ideas, a catalyst for change, a forward-looking creative workshop. The key question of the event would not be “What went wrong?” but “What are we going to do better?”

And this event would no longer be a retrospective but a prospective.

No Agility Without Retrospectives

A popular interview question for Scrum Masters is: “Which is the most important Scrum event?”

Obviously, the purpose of the question is not to actually mark one event as the “most important” event, but to see the candidate’s thought process. There is no right answer, and there is no wrong answer.

I have nevertheless thought about this question quite often. And if I were forced to pick one event, it would be the retrospective.

The reason is that the retrospective is, in my opinion, the event least likely to happen “by accident”, and it is not easily replaced by other tools or methods. It is a human impulse to ignore the need for improvement when the pressure to deliver results is high. A formalized retrospective gives a team the space to think about improvements, even if the customer is asking for everything to be delivered “tomorrow”.

Also, if I had to develop a plan for a staged approach to an agile transition (instead of the “big bang” strategy many organizations seem to prefer), I would probably start by introducing regular retrospectives for all teams. So in this approach, retrospectives would exist even before the introduction of full-fledged Scrum.

I actually believe that the omission of retrospectives in supposedly agile organizations is an underestimated but highly critical antipattern. Often, retrospectives are not the last but the first meetings to be cancelled or postponed when other meetings seem more important. This is an important sign that an organization is missing basic aspects of agility, that the agile mindset is not well developed, and that the organization follows agile methodologies as a cargo cult.

The Transparency Fallacy

Transparency is an important cornerstone of agile methods. However, transparency purely for its own sake – or, even worse, to facilitate control or to alleviate irrational management fears – only creates pointless overhead.

No user story has ever been completed faster by being split into subtasks. No Sprint Goal has ever been reached thanks to a burndown chart. No progress has ever been made by writing a progress report.

The purpose of these tools and of transparency in general is to foster team work and to discover areas of improvement for the team. Improvements in performance are achieved not by putting people under a microscope but by creating an ideal work environment for them and by allowing them to focus on their tasks.

Some Overhead is Good Overhead

A common complaint by software developers about Scrum is that it supposedly creates overhead by forcing them into a sprint structure with many meetings. This is somewhat paradoxical, considering that Scrum is basically an extension of how probably all software developers choose to work when left to their own devices.

When a software developer works on a piece of code, they compile it to see if there are any compilation errors. Then they either run their test suite or, if possible, run their code to see if it works as expected and check debug or log output to determine if anything went wrong. After writing some code, they once again compile the code and check if it works.

Now, how long, on average, is the interval between these compile/test/run/check cycles? Is it months? Weeks? Days?

My assumption is that in most cases, the interval is just a few minutes long and rarely longer. If, for any reason, developers are forced into longer intervals, I believe that they become more worried and impatient, eager to finally see whether their latest additions to the code are correct.

So why do developers do this? After all, every compile/test/run/check cycle causes a lot of overhead. In fact, it might take longer to compile and test the code than it took to write it. Why introduce such an immense overhead? Would it not be much more efficient to just write all the code and then compile and test it? Coding efficiency could easily double, couldn’t it?

Obviously, that suggestion is complete nonsense. No software developer would write 10,000 lines of code without even compiling it once to see if it is correct. So, developers naturally accept a lot of “overhead” in their daily work. And why? Because they intuitively understand that they are working on a complex problem that requires a lot of intermediate checking, fixing of unavoidable bugs or unforeseeable problems, and potentially replanning.

Scrum is only an extension of this natural way of working from a single-developer methodology to a framework that works in the same way for teams of several developers working together on the same product. And the “overhead” seen by many developers is only a reproduction of the intermediate checking, fixing of unavoidable bugs or unforeseeable problems, and potentially replanning. Any Scrum event serves only this purpose.

I guess the usual problem is that Scrum events are often added on top of other meetings that add little or no value for the software developers, so they feel that the Scrum events are adding overhead to their work. But I believe that all software developers can easily understand that an iterative incremental approach is the most natural approach to software development.

Not in the Scrum Guide

I collected some terms that do not appear in the Scrum Guide – and I have a feeling that to some, these might be quite surprising, as many of these terms have become a part of the folklore surrounding Scrum. Note that this is not meant as any kind of judgement on whether or not these terms or the concepts behind them are useful. Most of them are definitely useful. It is mostly meant as a reminder of how basic and lightweight the definition of Scrum actually is and how much flexibility we have in applying it.

These are some terms not included in the Scrum Guide:

  • “User Story” – The Scrum Guide only refers to “Product Backlog Items”.
  • “Definition of Ready” or “DoR” – The Scrum Guide states that “Product Backlog items that can be Done by the Scrum Team within one Sprint are deemed ready for selection in a Sprint Planning event”, but it does not say anything about the need for further rules to determine PBI readiness.
  • “Story Points”
  • “Estimates”
  • “Velocity”
  • “Burndown Charts” – While these are mentioned (along with burnup charts and cumulative flow diagrams), they are only mentioned as examples, not as a prescriptive part of the Scrum framework.
  • “Ceremonies” – The Scrum Guide only refers to “Scrum Events”, not “Scrum Ceremonies”.
  • “Task Board”, “Sprint Board” – Scrum does not prescribe a task board. It is not even mandatory to decompose PBIs into tasks.
  • “Three Questions” (for the Daily) – In previous versions, the three questions were meant as an example for a typical Daily. In the 2020 version, the three questions are gone.
  • “Stand-up”
  • “Set the stage”, “Gather data”, “Generate insights”, “Decide what to do”, “Close the retrospective” – These retrospective phases are actually derived from the book “Agile Retrospectives: Making Good Teams Great” by Esther Derby and Diana Larsen.
  • “Approval”, “Acceptance” – There is no process for the Product Owner to approve or accept PBIs as “Done”. PBIs are “Done” when they meet the Definition of “Done” (which could include an approval by the Product Owner, but that is not mandatory).

Agile Does Not Fix Chaos

It cannot be said often enough: Agile methods were designed for complex problems, not chaotic ones.

Just as a reminder – what is the difference between complicated, complex and chaotic problems?

  • In complicated problems, there is a direct and predictable relationship between cause and effect. It is difficult to predict the effect from the cause, but with a certain skill set, it is nonetheless possible.
  • In complex problems, there is a relationship between cause and effect, but the effect is generally not fully predictable, and the relationship between cause and effect can only be fully understood in hindsight.
  • In chaotic problems, there is no discernible relationship between cause and effect. 

Agile methods like Scrum are most effectively applied to complex problems, where short feedback cycles allow us to iteratively gain a better understanding of the complex cause-and-effect relationship.

It is important to consider not only the problem itself but also the environment: the environment, too, can be complex or chaotic.

In a complex environment, outside influences are not necessarily predictable, but there is still a meaningful cause-and-effect relationship. Stakeholders follow the work progress and derive new ideas and requirements primarily from the results of the work. Agile is well suited for integrating these new requirements into the workflow. The result is a controlled creative process. Looking at the resulting progress, we usually see an “x steps forward and y steps back” pattern, where x is greater than y. This means that in a complex environment, we have to be prepared to rework certain parts of our previous work, but we will see steady progress, and short feedback loops are meant to ensure that x remains not just greater but significantly greater than y.

In a chaotic environment, outside influences are not just completely unpredictable – they are also in no way related to the work results. The stakeholders constantly introduce new requirements that are not related to the previously completed work. New features or whole new projects appear out of nowhere and supplant previously high-priority work items. In this kind of environment, we also take “x steps forward and y steps back”, but here, x might be greater than y, or y might be greater than x, which leads not just to a lack of progress from the stakeholders’ point of view but also creates a lot of frustration for the developers. This is not a creative process. It is a destructive process. Some people seem to believe that creativity arises from chaos. This is not true. Creativity arises from complexity. Chaos is a purely destructive force.
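The difference between the two environments can be illustrated with a toy simulation (the step ranges here are invented for illustration, not taken from any real project): when x reliably exceeds y, net progress accumulates; when forward and backward steps are drawn from the same range, progress may stall or even regress.

```python
import random

def simulate(iterations, forward, back):
    """Accumulate net progress over a number of iterations.

    forward() and back() return the steps taken forward/back
    in one iteration ("x steps forward, y steps back").
    """
    progress = 0
    for _ in range(iterations):
        progress += forward() - back()
    return progress

random.seed(42)  # reproducible illustration

# Complex environment: rework happens (y > 0), but x is reliably greater than y.
complex_progress = simulate(
    iterations=20,
    forward=lambda: random.randint(3, 5),  # x: 3-5 steps forward
    back=lambda: random.randint(0, 2),     # y: 0-2 steps back
)

# Chaotic environment: x and y come from the same range,
# so there is no systematic net progress.
chaotic_progress = simulate(
    iterations=20,
    forward=lambda: random.randint(0, 5),
    back=lambda: random.randint(0, 5),
)

print(complex_progress)  # always positive: x > y in every iteration
print(chaotic_progress)  # may be positive, negative, or near zero
```

The point of the sketch is not the numbers but the structure: no amount of iterating fixes the chaotic case, because the randomness sits in the environment, not in the process.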

When confronted with the loss of efficiency created by this chaos, stakeholders usually reply: “But we are agile!”, often followed by an “Aren’t we?”

The big issue here is that stakeholders often do not realize that they are creating a chaotic environment, or they do not realize that agile methods are not going to magically solve the problems caused by arbitrarily changing requirements. They assume that agile methods will fix these problems, because after all, agile methods are designed to deal with changing requirements. And while this is true, agile methods only work efficiently when we are dealing with complex changes, not chaotic ones. 

Obviously, we can easily keep integrating arbitrarily changing requirements into a product backlog. We can change priorities on a daily basis. But one should not expect any kind of efficient output from this chaotic environment just because “We are agile!” In a chaotic environment, even perfectly built agile frameworks can NOT ensure that x is greater than y in the “x steps forward and y steps back” equation.

It is critically important to understand this, because when a project eventually fails after months or years of chaos, it is easy to blame the agile methods for not having magically turned chaotically changing requirements into a valuable product.