Setting Expectations in Scrum and Monte Carlo Probabilistic Forecasting

--

When we don’t know what we don’t know, it’s kind of silly to come up with a Gantt chart saying we’ll all be done by Christmas, but this is exactly what people do. They predict what they’re going to do in future Sprints if they’re using Scrum, and in future months if they’re using Kanban.

If there’s more unknown than known, you probably need to allow for some uncertainty and you need to embrace that uncertainty.

When you’re trying to set expectations, maybe the only expectation you can set is that we don’t know when we’ll be done. That might not go down well in your organization, but what you could do is ask people to look at the product, look at the service, look at what you’ve done. Look at the outputs of experiments and interviews, and see how people are reacting to what you might have already released to the market.

Maybe reflect on those and then decide what to do next. So instead of building up a big list of things we want to do next, maybe all we need to do is ask: what’s the next thing we need to do? What’s the next right thing? Failing that, maybe what you need to consider is Monte Carlo probabilistic forecasting.

It’s still smoke and mirrors in uncertainty. We still don’t know when we will be done, but I find that if your stakeholders cannot deal with uncertainty, they will put in some undoable date that’s beyond the limits, beyond the capabilities of the team. In that vacuum, they will make something up. This is what I’ve noticed.

In that event, maybe you try to get your stakeholders used to the idea of running forecasts regularly with Monte Carlo probabilistic forecasting. It sounds very fancy, but it’s actually quite simple. You run random number generation over how big your backlog could be: how small could it be, how big could it be?

Aim for a range where there’s a 90% chance you’re right and a 10% chance you’re wrong. Then you look at throughput: how many valuable work items does your team get done? Again, a range with a 90% chance you’re right and a 10% chance you’re wrong. Look at what the worst performance in terms of throughput would be and what the best performance would be, and let Monte Carlo run random number generation between those limits.

How small is the backlog, how big is the backlog? How bad could we be in terms of how much work we get done, and how productive could we be in terms of how many outputs we deliver? Based on that, Monte Carlo will come up with all these different date intervals, and you’d be a fool to pick the date in the middle, because that’s like calling a 50:50 coin toss. Move over to the 85th percentile, or even the 95th percentile if it’s a legal requirement, and say there’s a 15% chance it won’t be done by this date, but I’ll give you a better forecast next week. In just saying “but I’ll give you a better forecast next week”, we’re basically admitting that we don’t know.
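To make the mechanics concrete, here is a minimal sketch of that kind of simulation in Python. Everything in it is an assumption for illustration: the backlog and throughput ranges, the uniform sampling between the low and high limits, and the number of trials. A real forecasting tool might sample from historical throughput data instead of a uniform range.

```python
import random

# Illustrative ranges only (assumptions, not real team data).
# Backlog size: a range you are ~90% confident covers the remaining work.
BACKLOG_LOW, BACKLOG_HIGH = 60, 110        # work items
# Throughput: worst and best plausible items finished per week.
THROUGHPUT_LOW, THROUGHPUT_HIGH = 3, 9     # items per week

TRIALS = 10_000

def simulate_weeks() -> int:
    """One trial: draw a backlog size, then draw a throughput each week
    until the backlog is exhausted; return the number of weeks taken."""
    remaining = random.randint(BACKLOG_LOW, BACKLOG_HIGH)
    weeks = 0
    while remaining > 0:
        remaining -= random.randint(THROUGHPUT_LOW, THROUGHPUT_HIGH)
        weeks += 1
    return weeks

results = sorted(simulate_weeks() for _ in range(TRIALS))

def percentile(p: float) -> int:
    """Weeks by which p% of the simulated trials had finished."""
    return results[min(int(len(results) * p / 100), len(results) - 1)]

# Avoid reporting the 50th percentile (a coin toss); lean on 85% or 95%.
print(f"50% of trials finished within {percentile(50)} weeks (coin toss)")
print(f"85% of trials finished within {percentile(85)} weeks")
print(f"95% of trials finished within {percentile(95)} weeks")
```

Re-running a sketch like this each week with refreshed backlog and throughput ranges is what produces the regularly updated forecast described below.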

A bit of a health warning with this: if your team doesn’t have regular throughput, if your team isn’t delivering work that’s done according to Scrum every single Sprint, if it’s delivering all of the work on the last day of the Sprint, or if throughput is really irregular, the quality of your forecast is going to be worse. But you can still give regular forecasts.

Providing regular Monte Carlo probabilistic forecasts to your stakeholders is a good way of making them realize that, because you’re updating the forecast every week, it’s like the weather: the forecast for your work will change as well.

Thank you. That’s setting expectations. We don’t know.

https://linktr.ee/johncolemanxagility — social and podcast links

https://linkpop.com/orderlydisruption — order training from right here

--

John Coleman, executive guide, product leader

Leadershum Power List of the Top 200 Biggest Voices in Leadership in 2023, agility chef, executive agility guide, product manager, creator of Kanplexity & Xagility