
Estimating projects sells them short (and that's okay)

Recently, I read a blog post about doing software project estimates. It's a reasonable post with a reasonable method. But it does what all estimates do: it sells projects short.

I don't mean in the sense of underestimating a young promising project's potential, relegating it to an unfulfilling career pushing paperwork. I mean in the sense of selling short on the stock market.

Most of the time when you trade stocks, you buy a stock and you sell it later on. This is how we buy most things in life: we pay money to get them, then later we can use them or get rid of them. This works as we imagine it for stocks.

Let's say you buy a share of a company for $10. Later, you sell it. In the absolute worst case, the stock is worth $0 when you get rid of it, and you're out all the money you put in. You've lost 100% of your investment. But if it's worth $10 when you sell it, you've gotten your money back (less any fees), yay! And maybe it went up to $20, then you made a return of 100%. Or it went to $50, and you made a return of 400%. The amount of money you can make is technically unlimited.
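To make the arithmetic concrete, here's a tiny sketch (just illustrative Python, using the made-up prices above) of the payoff when you buy first and sell later:

```python
def long_position_return(buy_price, sell_price):
    """Profit and percentage return when you buy first and sell later."""
    profit = sell_price - buy_price
    return profit, 100 * profit / buy_price

# Buying at $10: the loss is capped at 100%, but the gain has no ceiling.
for sell_price in (0, 10, 20, 50):
    profit, pct = long_position_return(10, sell_price)
    print(f"sell at ${sell_price:>2}: profit ${profit:>3}, return {pct:+.0f}%")
```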

Then you have what's called a short sale. In these, you sell a share you don't have yet. To do that, you borrow it from some other party and promise to give it back to them eventually (and compensate them for the loan, too). Your hope is that the value of the stock will go down over time, so you can buy back the stock you sold for less than you sold it for.

With short sales, how much you can make has a cap. If you sell short a share worth $10 today, then the most you can make is $10 if the stock goes all the way down to $0. But what happens if it goes up? If it goes up to $40, now you'll lose $30—three times your investment—when you close out your position.
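And the mirror image for the short sale, again as a sketch with the same hypothetical prices:

```python
def short_sale_profit(sale_price, buyback_price):
    """Profit when you sell a borrowed share now and buy it back later."""
    return sale_price - buyback_price

# Shorting at $10: the gain is capped at $10, but the loss is unbounded.
for buyback_price in (0, 10, 40):
    profit = short_sale_profit(10, buyback_price)
    print(f"buy back at ${buyback_price:>2}: profit ${profit:+}")
```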

Short sales are often pretty dangerous! There are ways to reduce the risk, but unlimited downside risk is a big deal. Even with hedges, there's a much bigger potential downside than when you buy a stock the normal way.

Now, what about software project estimates?

Let's pretend that you, dear reader, are an independent software developer. You do software development for clients, and they pay you a project-based fee rather than an hourly rate. A good fee is one that's low enough that it's worth it to your client to accept, but high enough that you'll make a profit on it. To land on that fee, you have to figure out how long the job will take. If your only cost is your own labor, then the minimum charge to make a profit is the hours the project will take multiplied by your hourly rate.

Let's say you estimate a project by breaking down the project into small tasks, then you estimate how long each of those will take (using your expert knowledge), and add it up. How close do you think you'll be for the total project time?
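To make that concrete, here's what the naive bottom-up sum might look like as a sketch; the task names, hours, and hourly rate are all made up:

```python
# Hypothetical bottom-up estimate: made-up tasks and hours.
task_estimates_hours = {
    "auth": 8,
    "data model": 12,
    "admin UI": 20,
    "deployment": 6,
}
hourly_rate = 100  # dollars per hour, assumed

estimated_hours = sum(task_estimates_hours.values())
minimum_fee = estimated_hours * hourly_rate

print(f"estimated total: {estimated_hours} hours")
print(f"minimum profitable fee: ${minimum_fee}")
```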

My guess is you'll be pretty far off, and that you're going to underestimate the total time you need. Part of this is from tasks you'll forget, and the rest is from getting your estimates wrong. That's expected—that's why they're estimates, after all. And the tasks you get done faster than expected won't cancel out the ones that take longer, but we'll get to that.

So how do we handle underestimating completion time?

Some people take a simple approach and pad the entire estimate. This is crude, but it can be quite effective. You take the entire estimate and multiply it by something (three? five?) to guess how long it will really take; the tasks that run slow and the tasks you forgot are handled in that slush.
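In code, the padding approach is barely more than a multiplication; the numbers here are the hypothetical ones from the sketch above:

```python
# Crude but common: multiply the whole bottom-up estimate by a fudge factor.
estimated_hours = 46   # the made-up bottom-up sum from earlier
padding_factor = 3     # three? five? whatever your history suggests

print(f"padded estimate: {estimated_hours * padding_factor} hours")
```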

If it works for you, great; a lot of people use it. I don't love this approach, though. It feels a little bit like bending the truth, unless you're transparent about your padding. It would be nice if we had a way to figure out the real completion time from this estimate. And it would be even better if we could explain that to our clients.

When you estimate a task, you're saying it will take some amount of time to complete. Let's say you estimate that a certain task can be done in 10 hours. If you're right about that, great! If you're wrong, and it's faster, then at most you're saving 10 hours from your total project timeline.

But if you're wrong and it takes longer... then it could take 20 hours, or 30, or 40. Sometimes you can cut that work out of scope, but sometimes it happens on a critical feature you didn't expect (and sometimes scope gets added).

Just as with short selling, the maximum you can be wrong by is unlimited.
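Here's that asymmetry for a single task, as a quick sketch with made-up actuals:

```python
# One task estimated at 10 hours: how far off can the estimate be?
estimate_hours = 10

for actual_hours in (2, 10, 20, 30, 40):
    error = actual_hours - estimate_hours
    print(f"actual {actual_hours:>2}h -> off by {error:+}h")

# The upside is capped: even if the task takes no time at all, you save
# at most 10 hours. The downside has no such cap.
```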

I worked on a project once where we got (to my recollection) within a few weeks of a correct estimate. The project was on track for about four months, with about six engineers on it. We eventually deviated from the plan because we got new information, so we changed course. But that was quite a feat, being within weeks on a four-month chunk of the project.

We did that by taking the uncertainty of each estimate into account. For some tasks we were quite certain, so we'd put down 70-90%, depending on our confidence. For others we were very unsure of how long things would take, and we could go as low as 10%. Then the question was: how do we use this uncertainty to get our aggregate estimate?

It depends on the distribution of values. The normal distribution, our familiar bell curve, is quite common, but it isn't the right one here: it's symmetric and extends infinitely far in both directions, while a task can only come in so far under its estimate but can run over by any amount. If I recall correctly, we used a Poisson distribution, and it looked something like this.

[Figure: an illustration of something close to a Poisson distribution]

The upshot is that this sort of distribution captures the property we care about: limited upside, unlimited potential downside. And that really is the case with estimates!
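To make this concrete, here's one way you could combine per-task estimates and confidence levels (a sketch, not the exact model we used): sample each task's duration from a right-skewed, Poisson-shaped distribution and look at percentiles of the simulated totals. The task list, the confidence numbers, and the confidence-to-spread mapping are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical tasks: (name, estimated hours, confidence in the estimate).
tasks = [
    ("auth", 8, 0.9),
    ("data model", 12, 0.8),
    ("admin UI", 20, 0.7),
    ("integration", 16, 0.1),  # the one we're very unsure about
]

def sample_durations(estimate, confidence, n):
    # Assumed mapping: lower confidence stretches the mean of a
    # right-skewed (Poisson-shaped) distribution past the estimate.
    mean = estimate / confidence ** 0.5
    return rng.poisson(mean, size=n)

n_sims = 10_000
totals = sum(sample_durations(est, conf, n_sims) for _, est, conf in tasks)

naive_sum = sum(est for _, est, _ in tasks)
print(f"naive sum of estimates: {naive_sum} hours")
print(f"median simulated total: {np.median(totals):.0f} hours")
print(f"80th percentile total:  {np.percentile(totals, 80):.0f} hours")
print(f"95th percentile total:  {np.percentile(totals, 95):.0f} hours")
```

Even in this toy version, the single low-confidence task is what drags the simulated totals well past the naive sum, which is exactly the short-selling dynamic.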

So, knowing that we are short selling our projects, what do we do?

Honestly, probably, we just continue doing it—but with our eyes wide open.

If you do project-based pricing, you can use this knowledge to reduce, or at least acknowledge, the risk you're taking. And maybe you can use it to lock in better estimates, using historical data to figure out the actual distribution of task durations so you can build a nice statistical model.
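For example, if you keep records of estimated versus actual hours, one simple sketch (with an entirely made-up history) is to look at the distribution of overrun ratios and scale new estimates by the percentile you're comfortable quoting:

```python
import numpy as np

# Hypothetical history of past tasks: (estimated hours, actual hours).
history = [(8, 9), (12, 30), (20, 22), (6, 5), (10, 18), (16, 16), (4, 11)]

ratios = np.array([actual / estimated for estimated, actual in history])

print(f"median overrun ratio:          {np.median(ratios):.2f}x")
print(f"80th-percentile overrun ratio: {np.percentile(ratios, 80):.2f}x")

# Scale a fresh bottom-up estimate by whichever percentile you want to quote.
new_estimate_hours = 46
quote = new_estimate_hours * np.percentile(ratios, 80)
print(f"80th-percentile quote: {quote:.0f} hours")
```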

On the other hand, if you (like me) are working on a team that's building products and doesn't do client work per se, this is still useful! It can help you lock in more predictable timelines, though it's usually not worth the effort to actually calculate all of this out. The most useful thing I've found from this, though, is just communication: having this understanding of unexpected events is really helpful for explaining potential risks before starting on a project, and for when you do hit those road bumps.

If you do use uncertainty in your estimates, I'd love to hear about how you do it!

