Your software project is doomed. The writing is on the wall.

It’s nothing about your project, exactly. It’s just… statistics.

According to the Standish Group (2004), software projects have only a 34% successful delivery rate, meaning the project is delivered on time, on budget, and with its critical functionality intact.

The remaining two-thirds of projects are considered impaired or failed: they limp over the finish line late, over budget, or missing critical functionality, or they are aborted entirely.

Your project is probably struggling right now: with acknowledging and solving technical difficulties, with team dynamics, or with blocking dependencies. And the conversations about those struggles are hard when they’re driven primarily by fear, instinct, or intuition.

It doesn’t have to be this way.


You can have up-to-date and accurate information about how your project is actually going

When you have a clear and objective map of where your project is headed, conversations & decisions become easier, because they’re grounded in observable truth about the project.

You can confidently and repeatably lead your projects to successful outcomes.

You can get there.

The tools are simple and quick, and you can learn how to use them.


Background

I’m David Owen, and I’ve been developing software for most of my life.  I never cared much for statistics, and actually skipped that course while earning my Bachelor of Science in Computer Science and Mathematics.

In the middle of my career developing software, I got an itch to try something new, and transitioned into project-management.  Doing that taught me two things: I was really good at it, and it didn’t fit my personality at all!  But becoming good at it took a lot of work.

In my new role, and without any related training or certifications, I was faced with several challenges: how to better work with others, how to deal with the flood of information about a project, and how to better coordinate my group’s work with other teams and departments in the company.

I was already familiar with the models used to forecast software development.  I was also familiar with how difficult they were to use, or worse, how bad their results were.

I wanted to understand why the models didn’t work well.  To do that, I hit books like NIST’s “Engineering Statistics Handbook” and Neter, Wasserman, and Kutner’s “Applied Linear Statistical Models,” and started applying what I learned.

It paid off!  Having a single clear goal, I was able to avoid the complexity of most of the models out there.  Knowing the pitfalls of estimating software work, I was able to find and fix the problems with the simpler models.  And I was able to base my model on modern development practices, not on research from large “waterfall” projects of the 1970s.

Best of all, the model was right!  I started with one team that believed software could not be estimated.  After collecting their data for a month, we forecasted about a year’s worth of their work.  At the six-month mark, we were less than a week off, letting us coordinate needs and availability with other groups and cut out a bunch of friction.  The model also worked on large projects spanning several teams and departments.

Why hasn’t anyone else done this?  I’m not entirely sure.  Part of it might be the need to understand statistical methods as well as how software development actually works.  Part of it might be the need to have data to test your ideas on.  I happened to already have data covering several years of development.  Most organizations that have that data have tended to be large, with sophisticated and rigid processes deeply entrenched in their culture.  I was coming from a more nimble culture, and wanted a model adapted to that.

To be sure, the software teams that have tried to guide their projects this way are many and varied.  There are a lot of common mistakes that have to be overcome.


Top project-forecasting mistakes

  • Using estimates as-is. Practically everyone in software knows that this doesn’t work. Some people are almost always too low, others are almost always too high. If you take estimates as-is, they’ll be wrong, plain and simple.
  • Using the wrong model. You have to use a model to convert estimates into useful forecasts, but if you use the wrong model, like “velocity,” each range of forecasts will be wrong in its own unique way. It’s like sailing a ship without understanding that the world is round: you come in several miles off at every port, and you can’t see why. (A brief sketch of what calibrating estimates with a model looks like follows this list.)
  • Spending too much time estimating or time-tracking. There’s no use estimating, forecasting, and tracking a project if you spend 80% of everyone’s time doing so, and the project ends up so late it has no hope of success.
  • Not updating regularly. Forecasts are not a one-and-done thing. Your project changes, the external difficulties and pressures change, so you need the latest information on where you are and where you’re going. To do that, you need a method that works quickly.
  • Not using estimates and forecasts to prevent future problems. This is a major blind-spot for most of the organizations that do use estimates. Your estimates and forecasts can help you find recurring, preventable problems, so your future projects run more smoothly.
  • Not estimating at all. Every journey begins with a first step. If you don’t gather estimates, you can’t get any of their benefits.
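
To make the first two mistakes concrete, here is a minimal sketch, in Python with made-up numbers, of what it means to calibrate each person’s estimates against their own history and turn a work-list into a forecast with a rough uncertainty range, rather than adding up raw estimates. It is only an illustration of the idea; it is not the model taught in this course.

  # Illustration only, not the course's model.  Calibrate each estimator's bias
  # from past (estimate, actual) pairs, then forecast a new work-list with a
  # rough uncertainty range instead of summing raw estimates.
  from statistics import mean, stdev

  # Historical (estimate, actual) pairs in days, per estimator (made-up data).
  history = {
      "alice": [(2, 3.0), (1, 1.5), (5, 8.0), (3, 4.5)],  # tends to estimate low
      "bob":   [(4, 3.0), (2, 1.5), (6, 5.0), (3, 2.5)],  # tends to estimate high
  }

  def calibration(pairs):
      """Per-estimator multiplier (actual / estimate) and how much it varies."""
      ratios = [actual / est for est, actual in pairs]
      return mean(ratios), stdev(ratios)

  # New work-list: (estimator, estimate in days).
  work = [("alice", 3), ("alice", 2), ("bob", 5), ("bob", 1)]

  total, var = 0.0, 0.0
  for who, est in work:
      m, s = calibration(history[who])
      total += est * m         # calibrated forecast for this item
      var += (est * s) ** 2    # accumulate variance, treating items as independent

  spread = 2 * var ** 0.5      # rough give-or-take range, about two standard deviations
  print(f"Forecast: {total:.1f} days, give or take {spread:.1f}")

Even this toy version corrects the consistent biases that make raw estimates useless, and attaches an honest range to the total. The model and spreadsheets in the course go further, handling working hours, weekends, vacations, and proper prediction intervals for you.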

What are the alternatives?

There are other courses out there. You’re probably familiar with some of them. They tend to cover the whole field of estimation practices—estimating by analogy, wide-band Delphi, planning poker, story points, etc. By the end you could write an essay about all the different ways to come up with estimates, but they haven’t given you any guidance on what to use or why.

Then there are the models. Should you use velocity, COCOMO II, SLIM, or SEER-SEM? What about all of the “factors” in most of those models, like “product complexity” or “developer capability?”  Are those just fudge factors?

Instead of giving you a survey of estimation topics, this course gives you specific procedures and how-to’s for the entire project life-cycle, beginning to end. Instead of a complex model with tons of parameters that you have to guess, this course gives you a simple, understandable model that calibrates itself to you and your team with minimal input.

This course is designed top to bottom to be easy to take. It’s organized into one coherent module per week. Invest just a few hours a week, and you could prevent your project from becoming another industry failure.


What you’ll learn in this course

Here are some highlights of what you’ll learn in each of the modules of this course:

  1. The cycle. Learn about the project management & feedback cycle, how most people get it wrong, and how you can get it right. Learn about the key elements involved.
  2. The model. The right model removes biases and quantifies variation. A bad model can require a lot of data to be gathered; a good one minimizes that. This module shows how to handle effective working hours, weekends, sick days, vacations, and so on; it’s actually so simple, you won’t believe it!
  3. Work-lists. Making a good work-list facilitates estimation and project execution. Learn this simple method to make a good work-list without going into too much detail and wasting time on it.
  4. Estimates. With so much variation from one project to another, how do estimates help at all? This module shows exactly that, and then tells you how to get good estimates without spending a lot of time on them.
  5. Forecasts. Use the estimates you got from the developers and the model I teach you to calculate your project’s forecast and prediction intervals. I give you the spreadsheet that does all the hard work for you, and we go through it step-by-step.
  6. Tracking time. How to track time without being a pain to the developers. No, you don’t need to track by the minute! This simple technique makes it easy!
  7. Course-corrections. Now that your project’s off the ground, learn how to handle typical obstacles: work goes long, new work comes along, work changes or gets dropped, etc.
  8. Project wrap-up. How to wrap up a project so that what you learned from it goes forward to benefit your future projects.

Project Forecasting Beta Course—What’s included in each level

This is the first public offering of this course.  As a special offer, when you enroll in this beta offering, you’ll get 60 days of e-mail support directly with me.

You may enroll in the course at one of three levels.  Here’s what you’ll get at each level:

Basic Level

  • All 8 modules, taking you step-by-step through the entire process, so you’re comfortable and confident applying it to your projects.
  • Spreadsheets that handle the model for you, so you don’t have to worry about the math.
  • Lifetime access, including any future updates to the course contents, to protect your investment.
  • Bonus for this beta offering: 60 days of e-mail support.  Get one-on-one help with any difficulty in the course material or its application.

Professional Level

Everything in the Basic Level, including e-mail support, plus the following:

  • Multiple units of estimation, so individuals on your team can each estimate in the unit they are best at.
  • How to use forecasts to preemptively avoid mistakes on future projects, with specific checklists and hooks into different parts of the process.

Enterprise Level

Everything in the Professional Level, including e-mail support, plus the following:

  • An enhanced model allowing you to estimate & forecast in situations with higher-than-normal risk or uncertainty, from individual items to entire projects.
  • Special content on organizing a project to maximize concurrent & efficient development.
  • How to plan with and around other teams, to reduce business friction.

Satisfaction guaranteed

If you aren’t entirely satisfied with the course in the first 60 days, I’ll refund your purchase 100% and let you keep the model spreadsheets.  But you must meet two conditions: you must demonstrate that you used, or attempted to use, the material; and you must give me specific feedback on why you are not satisfied, so that I can improve the course.


Questions?  Please ask!  david@devquant.com