

Glen has a brief post critiquing estimation with PERT, pointing out some of the issues with using a particular mathematical method of schedule estimation. It is interesting that one of the commenters on that post takes Glen to task, claiming:

"... there are some quite famous scientists and mathematicians that created these techniques, but they were also proven by millions of projects executed by hundreds of thousands of project management practitioners."

Unfortunately, that is false. The creators of PERT worked under time pressure and did not themselves believe their method was anything more than "quick and dirty". They also fail to be famous: I can't name them, can you? I suspect the claim of hundreds of thousands of project management practitioners is a bit overstated as well.
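For reference, the "quick and dirty" at issue is the classic PERT three-point estimate, which collapses optimistic (O), most likely (M), and pessimistic (P) durations into a single expected value and spread. A minimal sketch (the input numbers are invented for illustration):

```python
def pert_estimate(o, m, p):
    """Classic PERT beta approximation: weighted mean and standard deviation."""
    expected = (o + 4 * m + p) / 6
    std_dev = (p - o) / 6
    return expected, std_dev

# Illustrative task: optimistic 3 days, most likely 5, pessimistic 10.
e, s = pert_estimate(3, 5, 10)
print(f"expected: {e:.2f} days, std dev: {s:.2f} days")
# expected = (3 + 20 + 10)/6 = 5.5; std dev = (10 - 3)/6 ≈ 1.17
```

The whole method is two lines of arithmetic per task, which is exactly why "quick and dirty" is a fair description, not an insult.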

But what is funny here is that the commenter chose to defend the technique by appealing to authority. In my opinion, that is a last resort. What is wrong with quick and dirty that you can't defend it as a technique in its own right? The point is that there is a middle ground, a spot on the curve where the cost of obtaining the information is much lower than the value of having it. Certainly the technique is better than guessing. Defend it on those grounds rather than attributing it to nameless famous scientists.

Just so I am not misunderstood, I have similar issues with the way it was presented. Monte Carlo schedule simulation is just about as quick (here is a free Monte Carlo simulator, and one for blackjack players too) and much less dirty. The goal should always be to maximize the return on your methods. Once the precision implied by your results exceeds the precision of your inputs, you have gone too far. Focus on the quick, and try to clean up the dirty part. Project managers should strive to be as effective as possible. Sometimes extreme accuracy is part of being effective; other times it is a hindrance. But in no case should you have to apologize for being quick and dirty if that is what does the job.
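To show how little extra machinery "less dirty" requires, here is a sketch of a Monte Carlo schedule estimate using only the standard library. The plan, task names, and three-point durations are all invented for illustration:

```python
import random

# Three-point (optimistic, most likely, pessimistic) estimates in days
# for a toy three-task sequential plan -- purely illustrative numbers.
tasks = {"design": (3, 5, 10), "build": (5, 8, 20), "test": (2, 4, 9)}

def simulate(n=10_000):
    """Sample each task from a triangular distribution and sum the plan."""
    totals = []
    for _ in range(n):
        # random.triangular takes (low, high, mode)
        totals.append(sum(random.triangular(o, p, m) for o, m, p in tasks.values()))
    return sorted(totals)

totals = simulate()
p50 = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
print(f"median: {p50:.1f} days, 90th percentile: {p90:.1f} days")
```

Instead of one point estimate you get a whole distribution, so you can quote "90% confidence" dates without pretending to more precision than the three-point inputs actually carry.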


Comments (1)


Thanks for the post. We use Monte Carlo and some other tools (RACM is one) to answer several questions:

- Where are the hot spots in the schedule?
- What schedule margin is needed when we have a "zero slack" plan?
- What reserve is needed for the alternative branches in the plan?
- What does the CDF look like for a critical milestone - say first article launch?
- What is the sensitivity of each task on or near the critical path? This sensitivity analysis is part of the Monte Carlo tool.
- What is the correlation between cost (labor loaded plan required) and the schedule duration distributions?

The accuracy of the results is not really the issue; what matters is the relative impact of one portion of the plan on another.
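One way to answer the "hot spots" question above is a criticality index: the fraction of simulation runs in which each task lands on the critical path. This sketch is not any particular tool's method, and the two-branch plan and durations are invented:

```python
import random

# Toy plan: two parallel branches, B and C; whichever runs longer
# determines the finish date.  (o, m, p) durations are invented.
n = 10_000
critical = {"B": 0, "C": 0}
for _ in range(n):
    b = random.triangular(4, 12, 6)   # branch B: (low, high, mode) in days
    c = random.triangular(5, 10, 7)   # branch C
    critical["B" if b >= c else "C"] += 1

for task, hits in critical.items():
    print(f"{task}: critical in {hits / n:.0%} of runs")
```

A task that is critical in 95% of runs is a hot spot; one critical in 5% of runs is a cheap place to absorb slippage, which is exactly the relative-impact insight described above.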

The Titan program (which just launched its last vehicle) used Risk+ for weekly EV assessments of the forecast. They never missed a budget or a launch date. Lots of things went right over the decades, but insight into the dynamics of the plan helped.


