#NoEstimates is essentially a Twitter hashtag, as I mentioned in a previous post. In that post from late 2014, I tried to explain my point of view on the #NoEstimates hashtag and the reasons why I think we should, as much as possible, find ways other than estimates to make decisions.
Today, I decided to stop using #NoEstimates as a hashtag, and the purpose of this post is basically to explain why.
The best I could find to summarize my thoughts right now is this tweet of mine:
#NoEstimates has been trolled to death, by both “them and us”, so much that the only visible discussion is still about the #
— Matthias Jouan (@mattjmattj) May 2, 2015
When I first heard of #NoEstimates, I was the leader of a small development team, using Kanban in quite a Lean fashion. The team was not really involved in estimating potential value, cost, or even effort, so the only thing the business asked of us was delivery-date estimates.
As I explained in “Indian Planning Poker”, we first tried to use effort estimates – the Agile way – to deduce delivery dates. That proved completely useless and a big waste of time. Since we were in a Lean state of mind, we decided to drop all that wasteful estimation work and use a simple rule: slice it until it is small enough to be “as usual”. This was sufficient to enable statistical forecasting: not every work item would be the same size, but their sizes would follow a distribution we could use.
In Kanban we want things to be as much “business as usual” as possible. The standard Kanban way to operate is to identify several classes of service, for example based on cost of delay, and make sure we model their cycle time distributions. To be clear: the heuristics we use for determining cost of delay and slicing work items are definitely a kind of effort/risk/complexity/cost estimation.
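To make the idea concrete, here is a minimal sketch of what such statistical forecasting can look like: a Monte Carlo simulation over historical cycle times. The data and function names are made up for illustration (they are not from the post or any specific tool), and the model assumes sequential, single-piece flow; real Kanban forecasting would typically also account for work done in parallel.

```python
import random

# Hypothetical historical cycle times in working days per work item.
# Illustrative data only — in practice you'd pull these from your Kanban tool.
cycle_times = [2, 3, 1, 4, 2, 5, 3, 2, 6, 3, 2, 4]

def forecast_delivery(n_items, trials=10_000, percentile=0.85):
    """Monte Carlo forecast: repeatedly sample historical cycle times to
    estimate how many working days n_items are likely to take, assuming
    items are worked one after another (single-piece flow)."""
    totals = sorted(
        sum(random.choice(cycle_times) for _ in range(n_items))
        for _ in range(trials)
    )
    # Report the requested percentile, e.g. with 0.85:
    # "85% of the simulated runs finished by day X."
    return totals[int(percentile * trials) - 1]
```

A forecast like `forecast_delivery(10)` answers the business question (“when will these ten items be done?”) directly from data, without anyone assigning story points; the slicing rule above is what keeps the historical distribution representative of future work.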
My first blog post on the #NoEstimates topic was a rant against standard, velocity-driven Agile estimation. There is something I just don’t get about this whole story-point-and-velocity thing, really. At the time I thought #NoEstimates meant exactly “no fucking story point estimates, use data and statistics for fuck’s sake”. I was then, and still am, a pretty big fan of people like Pawel Brodzinski and Troy Magennis.
While I was stepping into the #NoEstimates world, I realized that some of the most important thought leaders behind the hashtag shared my views on budget-driven development, what I had called “post-Agile”. In fact I realized later that it was just Agile, not post-Agile. Anyway.
Product development became my main focus. Lean Startup, story maps, impact maps, and “jobs to be done” are the kinds of things I got into. I really believe in product discovery and learning. My main thought here is: do not estimate the outcome; try and learn. Learning might involve estimation, though.
When I started tweeting with the #NoEstimates hashtag, I wanted to say something like “Hello! We don’t use effort estimates here. Come around, I’ll tell you how, if you wish”, but that is not what happened. What happened was that #NoEstimates was actually a minefield. I immediately had to answer things like “Give me a real example of a multi-billion project done without any kind of estimates”. Ouch. “Please reply to the 5 blog posts I wrote against #NoEstimates”. Hum…
I tried. I genuinely tried to answer, given my knowledge and experience. But it seems I failed. I did have some pretty interesting conversations, though.
But #NoEstimates comes with a huge load of emotion. As time passed, I noticed that most of the conversations around #NoEstimates were very superficial. I got involved in a couple of them myself. On the topic of #NoEstimates you can find conversations about linguistics, semantics, how to behave politely, and so on, and all of it comes up again and again without any hope of improvement.
I really got frustrated and wrote an open letter to try to understand why #NoEstimates was so dangerous, why there was so much hatred over such small details. In vain. I saw many people, including myself, try to explain that not all estimates are to be removed, that obviously we know we estimate all the time in our day-to-day lives. In vain, too. Neil Killick, who recently got involved in a purely semantic argument, also tried to sort this all out. In vain, again.
The thing is that this hashtag is bad, maybe too extreme. I’ve seen it described as a tribe, with some revolutionary connotation. The truth is we should have settled the semantic discussions a long time ago. Instead:
@henebb I wrote about that already, in vain. #NoEstimates is an endless, superficial argument between self-made linguists. It’s killing me.
— Matthias Jouan (@mattjmattj) May 2, 2015
If we want to move forward on this topic, I think we need to stop using the #NoEstimates hashtag. When I come to Twitter to talk about estimates, I won’t use #NoEstimates anymore. Estimates are just a tool. There is always a bigger purpose, and we should talk about that instead. Are we talking about #ReleasePlanning? Is it #ProductDevelopment? Estimates are a vehicle, not a target. Let’s move forward and talk about the real things.
16 thoughts on “Why I decided to quit #NoEstimates”
Let’s move to “good release planning,” which means we show up more or less as planned, with more or less what was promised in the release, that more or less works as needed to continue delivering value to the customer base.
Or how about the topic of determining how much efficacy per dollar is needed to produce the planned value (revenue) from our project?
I really liked this post. And I love your attempt to move forward, reaching out a hand!
In my opinion, it’s very important to use words that mark intention and what is meant. So yes, to me that is important. I wonder if it’s possible to accept that this is how I feel? And if it isn’t important to you (or anyone else), then why not use another name (as you suggest)? If you (or anyone else) don’t want to change the name, it makes me wonder… is semantics actually important after all? Or what’s the purpose of not changing the name?
The reason I don’t like the name is that I see it shape people’s thinking when they tweet or blog under it. Binary thinking, etc. And you pronounce it “no estimates”. We don’t need to tell people that; we don’t need a movement for that. We don’t need to tell people to stop doing something they find value in. As you say, it doesn’t mark the real question at hand.
The purpose of an estimate is to inform decisions. And in your post you say that you dropped estimates. I think that’s why: they didn’t inform decisions in any real way, so you could drop them. The purpose isn’t to “estimate outcome”; it’s to weigh the cost and benefit/value of the decision being made, even if you discover and learn, because that has a price as well.
And I think you kind of moved to implicit estimates? That is actually not an idea I’d promote. Read more here: http://kodkreator.blogspot.se/2015/03/not-estimating-can-also-be-problematic.html
The reason implicit estimates aren’t really good advice is that there are always expectations. And it’s much better to get those expectations out for scrutiny, explicitly.
And “use data and statistics for fuck’s sake” is also estimating. Perhaps very different from engineering estimates, but estimates are a broader category than engineering estimates. And even not using “real” data (when doing engineering estimates) is still using data and statistics; it’s called experience. Just because we don’t use actual numbers doesn’t mean we don’t use them implicitly.
To be clear, using data is a great concept (explicit, right?). But it’s not universally applicable, so we don’t need to tell people: Stop!
And again, may I please consider “using data” a form of estimating? (The definition of “estimate” actually says so as well; check Merriam-Webster.) If semantics isn’t important to you (or anyone else), then please use another tag/name. Just because it’s different doesn’t mean it’s “no” of the original definition. Compare an axe and a chainsaw. Quite different! But they share the same purpose/need. Axe = engineering estimates, chainsaw = forecasting using data (or the opposite?)
We don’t need to tell people doing heavy mathematics (statistics), who consider what they do “estimating”, that “no, that is not what you do”. Can we please change the name? Or, if the name is important to you, I wonder what the purpose is?
The premise and original point of #NoEstimates is to make decisions with “no estimates”. In more than 2 years, no one has provided a single example of how to do that. Because estimates inform decisions.
Hence, “no estimates” doesn’t exist. So yes, let’s move on. All the binary thinking, false dichotomies, and bad behavior (mainly from the “advocates”, I have to say; I was just called a troll for asking questions) tweeted and blogged in its name are not needed and do not move us forward as a community; if there’s any good in NoEstimates, it gets lost.
@henebb, You just replied with the same discussion the author said he is quitting. Give the man a rest.
I was taking the opportunity to explain myself, my view, and my thoughts about the very thing this post covers. I actually thought it fit the blog post. I’m sorry if it was interpreted as yet another round of the same discussion. In that case, I regret my comment.
I have stopped tweeting with that tag too. There are people in there who really scare me. There are also people in there I look up to. But the bad won out over the good.
I’ll come clean: I dare not tweet that tag. It’s too painful.
Love this post! Loved your thoughts on the *actual* matter.
> “Give me a real example of a multi-billion project done without any kind of estimates”
Isn’t the obvious counter-question “Give me a real example of a multi-billion project that was even close to the estimate”?
Before suggesting such simplistic comparisons, please determine the root cause of those project overruns. Was it because the estimates suffered from too many of the well-known problems, each of which has directly actionable corrective actions?
Then, when you have the root cause(s) in hand and are “exploring” the corrective action, please explain how NOT ESTIMATING would have prevented those root causes from producing the undesirable outcomes.
No one working in the Software Intensive Systems domain (http://goo.gl/7r2hfd) has any illusion whatsoever that creating credible estimates is not a serious problem. Our research in our domain of SIS has shown 4 top-level sources of program performance issues (http://goo.gl/7r2hfd).
When the No Estimates advocates have identified the root causes and shown how the corrective action of NOT ESTIMATING would have either prevented or corrected those unfavorable outcomes, then there’ll be a common basis for discussion.
Regarding multi-billion projects that come in at or below the planned budget: I’ve worked on two of them, a 52-site rollout of SAP and a $7B software-intensive nuclear safety and safeguards remediation project.
So using your counter-question, in the absence of corrective actions based on NOT ESTIMATING, is a Tu Quoque fallacy.
So until #NE comes up with working examples of mega-projects that did not estimate and were considered successful, it’s going to be difficult to have any means of comparing one approach to the other.
All the respect in the world for the work you do. However, while the counter may be a case of Tu Quoque, I believe the original question is question-begging and is itself a Nirvana fallacy. In many ways, #NE reminds me of Public Choice theory vs. traditional political science. The latter deals with an idealized world view; the former emerged to explain why that’s not what we’re in fact observing.
I don’t know how many multi-billion projects you’ve worked on, but I’m guessing it can’t be too many, so your track record on that is indeed impressive. However, the sample size is also sufficiently small that we can’t really conclude too much. Perhaps you got lucky and now suffer from the Illusion of Control fallacy? 🙂
In the past 20 years I’ve worked on a dozen or so billion-dollar baselined programs. The core issue is that criticizing those programs for overrunning without discovering the root cause is a fool’s errand. Here’s our current vehicle for assessing these root causes: http://goo.gl/cZnlW
My issue is that the NE’ers suggest that not estimating will be the corrective action for the root causes of program overruns. Take a look at some of the RCA there, or Google “nunn mccurdy” “root cause analysis” for our domain. That’s my point.
Without that RCA, no suggested corrective action is going to be credible.
There is a fallacy in the Control Fallacy. So look further into the research showing not only how to overcome that fallacy but also how to avoid it.
The Planning Fallacy and the Control Fallacy, while popular, are fraught with false assumptions about how programs are managed. The Root Cause Analysis efforts we work on have shown the fallacy of those fallacies.
Without removing root causes no method is going to fix the problem.
Another thought: the latter is far from an idealized view of estimating project cost, schedule, and technical performance. Rather, there are well-developed and well-practiced methods. The big problem, and Bent shows this as well, is the misuse of the method.
I have a colleague, a former NASA cost director, who suggested three sources of cost overrun:
1. We couldn’t know: we simply don’t have enough knowledge about the future, fail to acknowledge this, and therefore fail to provide contingency.
2. We didn’t know: we didn’t do what we were obligated to do to discover the actual estimating parameters and the data sources needed to make an informed estimate.
3. We don’t want to know: if we told the truth, the program would never get funded, or it would get canceled.
In NASA, it’s been shown that about 50% of overrun programs result from #3.
So I’m back to the unanswered question: how can NOT ESTIMATING improve the probability of success of any endeavor?
Depends on your definition of success. If success is to deliver a project on time and on budget, probably not a whole lot, for obvious reasons.
But it may help redefine success by shining a light on the complete inadequacy of most project estimates. This can highlight the inherent risk factor and perhaps stop projects before they get started. Can we consider it a success if we don’t even start a project? From a budget perspective it certainly is, if that mitigates losses. From a project perspective, not so much, but that’s a tunnel-vision perspective.
Perhaps the term NoEstimates is too extreme. I don’t know anyone who would not have some idea of time and budget, but I don’t think that’s the point. We obviously don’t start projects having no clue whether they’ll take 2 months or 10 years. So we can get in the ballpark quickly with a napkin calculation. Can we get closer? Perhaps, but at what cost? What are we willing to pay, in money and delay, for a small increase in accuracy? That, to me, is the core of the argument, not a binary notion of estimating or not.
How about these definitions of success (https://goo.gl/kiLmIS), with some more background on project success here: http://goo.gl/v2tqZK
The original post on No Estimates was, and still is since it has not been replaced, “decisions can be made without estimates”.
Until the principles of making decisions in the presence of uncertainty (this is called microeconomics) have been addressed, the moniker of #NE will be forever confusing to those of us who make decisions in the presence of uncertainty by estimating all the parameters that affect those decisions.
So when you say “the complete inadequacy of most project estimates,” do you mean in your experience, in your domain, or in general? Because certainly, in our Software Intensive Systems domain, which includes enterprise IT, estimating is misused, manipulated, and many times downright “fixed”, but the process of estimating and the mechanisms for creating credible estimates are readily available and well tested.
It’s a people problem not an estimating problem.