“How long will this take?” is a tricky question. Most of us are terrible at estimating, which is why organizations often make decisions based on either wildly inflated guesses or wishful but unrealistic thinking. Furthermore, estimating itself takes time and binds resources that we could invest in actually doing the work instead.

Intuitive Prediction by Daniel Kahneman and Amos Tversky.

Advocates of the #noEstimates movement argue that it is in fact possible to get the benefits of estimating without paying the (full) price, and the movement has gained a lot of traction lately. An interesting real-world example of how #noEstimates can work comes from a team at Adidas runtastic that dropped estimations while staying in tune with a company-wide OKR system.

So why do we bother with estimations at all? And can we get some of their alleged benefits without the downsides?

Start with why

In my experience as a developer, product owner, and product manager building enterprise software there are three main reasons for estimations: Predictability, decision-making, and building shared understanding.


Team Topologies by Matthew Skelton and Manuel Pais.

Being embedded in a large organization means that others will depend on our deliverables at some point. While in an ideal world our teams would be fully empowered to deliver value autonomously, without hand-offs or synchronization points with other teams, in practice that’s rarely achievable. It’s worth striving for that goal to reduce the need for dependable estimations, but it’s hardly ever possible to eliminate it completely.

Even the most autonomous, stream-aligned team might work with an external marketing agency that needs to know when a big feature is ready so they can organize a public launch event. Or the team may need to commit to a deadline for a critical bugfix so that sales can bring in a deal in the current quarter rather than the next. Sometimes customers have a right to demand that we stick to a hard deadline, threatening to sue if we don’t: When building enterprise software it is not uncommon that upgrading from one version of our product to the next is a major project on the customers’ side, requiring backup strategies, data migrations, end-user training, and much more. Thus customers rightly demand access to roadmaps and plans months in advance.

But apart from that first level of predictability, there’s also a more subtle aspect to estimations that’s primarily driven by the business itself: Making decisions.


In software development, in economics, and in life in general, no decision is without opportunity cost. If you do X, that means you don’t do Y. Or, as economist Russell Roberts puts it:

“Economics teaches you that making a choice means giving up something.”

In a product development organization you can expect a million ideas to be floating around all the time. Everyone, from the CEO to sales, marketing, support, and of course product management themselves, has their pet features that they’d like to see implemented as soon as possible. It’s the task of the product manager (or product owner, depending on your setup) to decide which of those potential features are going to make it into the product, and when. But that decision always requires making trade-offs: If we do X, that means we won’t do Y.

To be able to make that decision the PM needs to know (or assume) a lot about the candidate features. I find the following factors helpful to prioritize, but depending on your industry and context others can make more sense:

  • Customer attraction: Will this feature make the product more appealing to new customers?
  • Customer retention: Will our existing customers get more value out of the product because of this feature?
  • Competition: Do we have to do this to stay on par with (or ahead of) the competition?
  • Architecture: Will implementing this put us in a position to add other features more easily later?
  • Risk: Will doing this endanger quality or stability of other areas of the product?
  • Maintenance: How much recurring work will this feature burden us with in the future?
  • Effort: How long is this going to take to build?

The problem with those qualities is of course that they’re themselves highly uncertain: We simply don’t know the answers to those questions yet, so we need to come up with estimates. To some degree that process may even feel scientific: You put numbers in spreadsheets, arrange items on charts, and plot graphs about future progress. But it’s critical to keep in mind that most of the data we’re working with here are estimates—educated guesses about what we think might happen.
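
To make the trade-off concrete, one way such factors are sometimes combined is a simple weighted score per candidate feature. This is a minimal, illustrative sketch only: the weights, the 1–5 ratings, and the feature names are made-up assumptions, not a method the text prescribes.

```python
# Illustrative only: a naive weighted scoring of candidate features.
# Weights and 1-5 ratings below are invented assumptions, not real data.

FACTORS = {
    "attraction": 3,    # appeal to new customers
    "retention": 3,     # value for existing customers
    "competition": 2,   # keeping up with competitors
    "architecture": 1,  # enabling future features
    "risk": -2,         # negative: higher risk lowers the score
    "maintenance": -1,  # negative: recurring burden lowers the score
    "effort": -2,       # negative: longer build time lowers the score
}

def score(feature: dict) -> int:
    """Weighted sum of the 1-5 ratings for each prioritization factor."""
    return sum(FACTORS[f] * feature.get(f, 0) for f in FACTORS)

candidates = {
    "sso_login": {"attraction": 4, "retention": 2, "competition": 5,
                  "architecture": 1, "risk": 2, "maintenance": 2, "effort": 3},
    "dark_mode": {"attraction": 2, "retention": 3, "competition": 1,
                  "architecture": 1, "risk": 1, "maintenance": 1, "effort": 1},
}

# Rank candidates by score, highest first.
ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
```

Of course, every number fed into such a score is itself an estimate, which is exactly the uncertainty the paragraph above describes.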

But getting to those estimations is often a valuable process in itself: By discussing the questions broadly and honestly, we can build a level of shared understanding that has a lot more value than a number someone pulled from thin air, put into a spreadsheet, and called an estimation.

Shared understanding

One great forum for building shared understanding is the estimation or backlog refinement meeting in the Scrum process. During that meeting, the team would discuss candidate stories with the main goal of coming up with effort estimations which then influence the trade-off decisions introduced before. In practice though, especially when using an estimation approach like planning poker, the estimations themselves are often just by-products of the meeting with the actual outcome being a better understanding of the customer need we’re trying to address.

That effect is also helpful with other qualities of our candidate features: By questioning why sales person A hopes this feature will help us get ten new customers while sales person B thinks it’s only going to be two, we have to think more deeply about what that feature should actually be about. Maybe A had something very different in mind than B, thus over-estimating how useful the feature would be. Or B didn’t consider a major benefit that would make the feature appealing to a different industry or geographic market.

Again, the point is not so much to put a number (10 new customers, 5 days of work, $400k in revenue, …) into a spreadsheet. The point is to facilitate a discussion between stakeholders that leads to new knowledge that wasn’t present in anyone’s mind before.

How we estimate

As the software industry matured, we moved from traditional estimation and planning approaches based on hours, days, or months to more abstract ways of estimating.

Time spent

The intuitive approach to estimating is simply to guess how long something will take using a measure of time. While in theory doing that should make a team extremely predictable and ease the trade-off discussion among stakeholders, in practice time-based estimations face the stigma of being wildly unreliable. Therefore, organizations have learned to compensate by putting buffers on top of those estimates, which can pile up so much that everything looks like it’s going to take forever from the decision maker’s point of view.

Equivalence classes & complexity

It turns out that comparing is a task the human mind is a lot better at than estimating. Thus, most agile development methodologies propose to estimate using an approach based on equivalence classes: Instead of estimating items individually, we compare them by asking “is this more or less complex than that?” Then we actually work on the tasks and take what we learned into account for the next round of estimations. Teams then quickly learn (a) to estimate more accurately based on past experiences and (b) what their limits in terms of capacity are.

The shapes and sizes of the equivalence classes can vary from team to team. While my teams were happy with using the story point approach (assigning each item either a number from the Fibonacci sequence 1, 2, 3, 5, 8, 13, or “too large”) others have stripped down the number of classes to t-shirt sizes such as “small”, “medium”, and “large”.

Regardless of the number of equivalence classes and what names we assign to them, teams usually learn quickly how many items of which complexity they can finish in a certain timeframe. A team following the story point approach might level out at “50 story points per 2-week-iteration”, or a team estimating in t-shirt sizes may realize that “if we have one large item to work on we can’t take on more than one additional medium item in this sprint”. This is a great level of predictability which makes it easier for other teams to build on top of our results, and for the business to make trade-off decisions about future actions.
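
As a sketch of how that predictability translates into forecasts, here is a minimal velocity-based calculation. All numbers (past velocities, backlog size) are invented for illustration; nothing here comes from a specific team.

```python
import math

# Minimal sketch: forecasting sprints from past story-point velocity.
# All figures below are invented for illustration.

past_velocities = [48, 52, 47, 53, 50]  # story points per 2-week sprint

# Use a range rather than a single average to reflect uncertainty.
avg = sum(past_velocities) / len(past_velocities)
low, high = min(past_velocities), max(past_velocities)

backlog_points = 260  # total estimated points for the upcoming release

def sprints_needed(points: int, velocity: int) -> int:
    """Round up: a partially used sprint still has to happen."""
    return math.ceil(points / velocity)

best_case = sprints_needed(backlog_points, high)   # fastest observed pace
worst_case = sprints_needed(backlog_points, low)   # slowest observed pace
```

Reporting the best and worst case as a range, instead of a single number, keeps the forecast honest about the same uncertainty that the estimates themselves carry.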

Furthermore, having informed discussions about perceived complexity is easier in my experience than about estimated effort. The question “Why do you think this will take two weeks?” is a difficult one, but “Why do you think this is more complex than that?” makes it easier to have a discussion that ends up building shared understanding, leaving everyone wiser than they were before.


Allen Holub, a big proponent of #noEstimates, says in his blog post on the topic:

“NoEstimates is not a thing: a process or practice or methodology or framework. It’s a discussion.”

I really like that spirit, because it creates space for approaches beyond estimates to grow and develop. There’s no standard #noEstimates methodology which one can compare to a competing approach and list pros and cons. Rather, proponents of getting rid of estimates argue that we should question whether estimations have enough value in our individual contexts to merit their downsides.

For the team at Adidas runtastic I mentioned in the introduction, the shift to #noEstimates meant moving from assigning arbitrary numbers to large chunks of work in a tedious and time-consuming process to learning from observing their way of working: By identifying that they have a stable throughput of tasks per developer per unit of time anyway, they found that they can replace top-down estimations with bottom-up predictions. By breaking down their work into tasks, they already have a forum for discussions that lead to shared understanding, and their predictable throughput of tasks makes it easy to forecast and plan ahead.
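
The blog post doesn’t publish their exact figures, but the mechanics of such a bottom-up, throughput-based forecast can be sketched like this. The throughput, team size, and task count are made-up assumptions, not runtastic’s data.

```python
import math

# Hypothetical sketch of a throughput-based (#noEstimates-style) forecast.
# All figures below are invented, not runtastic's actual numbers.

tasks_per_dev_per_week = 2.5   # observed, stable historical throughput
developers = 4
remaining_tasks = 60           # broken-down tasks, deliberately unestimated

# Forecast directly from throughput -- no per-task estimation required.
team_throughput = tasks_per_dev_per_week * developers
weeks_to_finish = math.ceil(remaining_tasks / team_throughput)
```

The key point is that the forecast emerges from measured throughput of roughly similar-sized tasks, so the per-task estimation step adds nothing the data doesn’t already provide.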

In that environment an additional level of estimations has little value, which they confirmed by running their experiment. They put it best in their blog post by saying:

“While we cannot say that story points have no merit, we learned that, for our team, in our current situation, it was not the right tool.”


Usually we don’t estimate for the heck of it, but because we need to achieve something: We want our teams to be more predictable, we need input to make trade-off decisions, and we want to build shared understanding throughout the organization. Estimation techniques based on equivalence classes and complexity have proven quite good at that in the past for some teams, in some situations.

But that shouldn’t force us to keep religiously to rituals like estimation meetings and planning poker indefinitely. It’s much more important to focus on the goals we want to achieve and to look for ways of attaining them that actually work for our teams than to follow the rituals blindly. #noEstimates is a great movement in that sense, as it reminds us that we should keep questioning our ways and give alternatives a try without feeling bad for “not doing things the way we’re supposed to.”