
I worked on a product that was built around planning and estimation with ranged estimates (2-4h, 1-3d, etc.)

2-12d conveys a very different story than 6-8d. Are the ranges precise? Nope, but they're useful in conveying uncertainty, which is something that gets dropped in any system that collapses estimates to a single point.

That said, people tend to just collapse ranges, so I guess we all lose in the end.

> 2-12d conveys a very different story than 6-8d.

In agile, 6-8d is considered totally reasonable variance, while 2-12d simply isn't permitted. If that's the level of uncertainty -- i.e. people simply can't decide on points -- you break it up into a small investigation story for this sprint, then decide for the next sprint whether it's worth doing once you have a more accurate estimate. You would never just blindly decide to do it or not if you had no idea whether it could be 2 or 12 days. That's a big benefit of the approach: it de-risks that kind of variance up front.


> you break it up into a small investigation story for this sprint, then decide for the next sprint whether it's worth doing

That's just too slow for business in my experience though. Rightly or wrongly, they want it now, not in a couple of sprints.

So what we do is put both the investigation and the implementation in the same sprint, use the top of the range for the implementation, and re-evaluate things mid-sprint once the investigation is done. Of course this messes up predictability, and agile people don't like it, but they don't have better ideas for how to handle it either.

Not sure if we're not agile enough or too agile for scrum.


That's definitely one way of doing it! And totally valid.

I think it often depends a lot on who the stakeholders are and what their priorities are. If the particular feature is urgent then of course what you describe is common. But when the priority is to maximize the number of features you're delivering, I've found that the client often prefers to do the bounded investigation and then work on another feature that is better understood within the same sprint, then revisit the investigation results at the next meeting.

But yes -- nothing prevents you from making mid-sprint reevaluations.


If you measure how long a hundred "3-day tasks" actually take, in practice you'll find a range of about 2-12 days. The variance doesn't end up getting de-risked. And it doesn't mean the 3-day estimate was a bad guess either. The error bars just tend to be about that big.

If a story-sized task takes 4x more effort than expected, something really went wrong. If it's blocked and it gets delayed then fine, but you can work on other stories in the meantime.

I'm not saying it never happens, but the whole reason for the planning poker process is to surface the things that might turn a 3 point story into a 13 point story, with everyone around the table trying to imagine what could go wrong.

You should not be getting 2-12 variance, unless it's a brand-new team working on a brand-new project, learning how to do everything for the first time. I can't count how many sprint meetings I've been in. That level of variance is not normal for the sizes of stories that fit into sprints.


Try systematically collecting some fine-grained data comparing your team's initial time estimates against the actual working time spent on each ticket. See what distribution you end up with.

Make sure you account for how often someone comes back from working on a 3-point story and says "actually, after getting started on this it turned out to be four 3-point tasks rather than one, so I'm creating new tickets." Or "my first crack at solving this didn't work out, so I'm going to try another approach."
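A lightweight way to run that experiment, sketched in Python (the ticket data below is made up, and the sketch assumes you can export each ticket's initial estimate and actual working time, both in hours, from your tracker):

```python
import statistics

# Hypothetical tracker export: (initial_estimate_hours, actual_hours) per ticket.
tickets = [
    (24, 20), (24, 96), (24, 30), (8, 6), (8, 40),
    (16, 14), (16, 16), (24, 18), (8, 9), (16, 60),
]

# Ratio > 1 means the ticket overran its estimate.
ratios = sorted(actual / estimate for estimate, actual in tickets)

def percentile(sorted_values, p):
    """Nearest-rank percentile on a pre-sorted list."""
    k = max(0, min(len(sorted_values) - 1, round(p / 100 * (len(sorted_values) - 1))))
    return sorted_values[k]

print(f"median ratio: {statistics.median(ratios):.2f}")
print(f"10th-90th percentile: {percentile(ratios, 10):.2f}-{percentile(ratios, 90):.2f}")
```

On a real export, a median ratio near 1.0 alongside a wide 10th-90th band would be exactly the pattern described above: the point estimates aren't biased, but the spread around them is large.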


That's literally what retrospectives are for. You do them at the end of every sprint.

Granted, they're point estimates not time estimates, but it's the same idea -- what was our velocity this sprint, what were the tickets that seemed easier than expected, what were the ones that seemed harder, how can we learn from this to be more accurate going forwards, and/or how do we improve our processes?

Your tone suggests you think you've found some flaw. You don't seem to realize this is explicitly part of sprints.

I'm describing my experiences with variances based on many, many, many sprints.


I think it really depends on how teams use their estimates. If you're locking in an estimate and have to stick with it for a week or a month, you're right, that's terrible.

If you don't strictly work on a sprint schedule, then I think it's reasonable to have high-variance estimates, then update the estimate as soon as you learn more.

I've seen lots of different teams do lots of different things. If they work for you and you're shipping with reliable results then that's excellent.



