SPERT-Beta Confidence Interval v. Monte Carlo Simulation

Today I created ten 3-point estimates with various skews and Most Likely Confidence levels in a SPERT-Beta Excel workbook.  The values are similar to what a project manager might choose when estimating ten tasks on a project.  The tasks were often skewed to the right, meaning an outcome was more likely to exceed the most likely estimate than to fall below it.  I included one triangular distribution where the minimum point-estimate was the same as the most likely estimate (50, 50, 100).

Now, according to the Central Limit Theorem, the sum of many random variables tends toward a bell-shaped (normal) distribution, irrespective of what kind of underlying distribution you choose.  The classical CLT also stipulates that the variables be independent and identically distributed.  Clearly, my ten tasks didn’t neatly fit those stipulations, so relying on the CLT to create a confidence interval for the sum of all ten estimates was a stretch.

And yet… sometimes it’s good enough to be close enough to obtain useful results.  While I used a variety of distributions among my ten 3-point estimates, they trended toward being slightly skewed to the right (but not always).
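One way to see this “close enough” behavior for yourself is with a quick simulation.  The sketch below (Python with NumPy) sums ten mismatched triangular distributions — the specific shapes are illustrative stand-ins, not my actual ten estimates — and checks how much less skewed the sum is than any one task:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Ten mismatched, mostly right-skewed task distributions, mimicking a mix
# of 3-point estimates (these shapes are made up for illustration).
tasks = [
    rng.triangular(50, 50, 100, n),    # min equals most likely, as in the post
    rng.triangular(80, 100, 200, n),   # skewed right
    rng.triangular(10, 90, 100, n),    # skewed left
] + [rng.triangular(100, 120, 100 + 60 * k, n) for k in range(2, 9)]

# Sum each trial's ten task outcomes into a portfolio total.
total = np.sum(tasks, axis=0)

def skewness(x):
    """Sample skewness: mean of cubed standardized deviations."""
    return float(np.mean(((x - x.mean()) / x.std()) ** 3))

# The individual tasks are visibly skewed, but the sum is much closer to
# symmetric -- the CLT at work, even with unequal distributions.
print(skewness(tasks[0]), skewness(total))
```

Even though the ten distributions are different shapes (violating the identically-distributed stipulation), the sum comes out far more symmetric than its parts.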

When I compared the resulting 90% confidence interval using SPERT-Beta with a 90% confidence interval obtained through Monte Carlo simulation (using @Risk’s RiskBetaGeneral function), I found amazingly close results, even though I wasn’t following the CLT stipulations perfectly.

• SPERT-Beta: the 90% confidence interval was 793 – 938
• Monte Carlo simulation: the 90% confidence interval was 796 – 940

Shockingly close!

Have a look at the results (all results were copied from the Excel file I was working in to do the comparison).  If you have access to Monte Carlo simulation software, try comparing your own SPERT-Beta confidence intervals with results from a simulation model.  Try breaking the rules for using the CLT by using different underlying distributions (that is, skewed to the left, skewed to the right, triangular, and with different Most Likely Confidence levels for each 3-point estimate) and see what effect that has on SPERT-Beta confidence intervals compared to simulated results.

Comparison of SPERT-Beta with Monte Carlo Simulation using RiskBetaGeneral

SPERT-Beta Development Release D

This new build of the SPERT-Beta template adds quite a few new features.  I’ve added ratio scales for standard deviation and mean, so the template will calculate an estimate of standard deviation, variance, and mean for each 3-point estimate that’s entered.

Using that information, the template calculates the mean for the entire portfolio being estimated, and the standard deviation for the entire portfolio (by taking the square root of the sum of variances).

And using THAT information, I’ve added the ability to find a confidence interval for the portfolio, which calculates minimum and maximum estimate values for the entire portfolio.
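A minimal sketch of that roll-up, assuming hypothetical per-task means and standard deviations (in the real template these come from the ratio scales applied to each 3-point estimate):

```python
import math

# Hypothetical (mean, standard deviation) pairs for four tasks; the
# numbers are made up for illustration.
tasks = [(367, 70), (533, 140), (1250, 800), (8833, 2000)]

# Portfolio mean: the sum of the task means.
portfolio_mean = sum(mean for mean, sd in tasks)

# Portfolio standard deviation: the square root of the sum of the task
# variances (valid when the task estimates are independent).
portfolio_sd = math.sqrt(sum(sd ** 2 for mean, sd in tasks))

# 90% confidence interval under a normal approximation (z = 1.645).
z = 1.645
low = portfolio_mean - z * portfolio_sd
high = portfolio_mean + z * portfolio_sd
print(f"mean {portfolio_mean:,.0f}, sd {portfolio_sd:,.0f}, "
      f"90% CI {low:,.0f} - {high:,.0f}")
```

Plugging in the worksheet’s own figures from the test below (mean 11,878, standard deviation 2,178) reproduces the 8,294 – 15,461 interval to within rounding, so z = 1.645 appears to be the multiplier in use.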

To test this build, I created four estimates:

1. 100, 400, 500 (Low confidence)
2. 200, 500, 1000 (Very low confidence)
3. 500, 500, 5000 (Medium-low confidence)
4. 1000, 10000, 12000 (Very high confidence)

The result was a portfolio having a SPERT-Beta-estimated mean of 11,878 with a standard deviation of 2,178.  The SPERT-Beta 90% confidence interval was 8,294 – 15,461.

To compare this with a simulation model, I ran 10,000 trials using the same 3-point estimates and the SPERT worksheet’s choices for the shape parameters, alpha and beta.  In the simulation, the standard deviation was extremely close:  2,180.  The 90% confidence interval was a little different:  7,991 – 15,222.  The minimum threshold value in the simulation differed by almost 4% from the SPERT-Beta minimum (the SPERT-Beta worksheet overstated the minimum).  The maximum threshold value in the simulation differed by only 1.6% from the SPERT-Beta maximum (again, the SPERT-Beta worksheet overstated the maximum).
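For readers without @Risk, a rough equivalent of that simulation can be built with NumPy.  The alpha/beta formulas below use the classic PERT weighting (lambda = 4); the actual SPERT-Beta worksheet derives its shape parameters from the Most Likely Confidence level, so treat this only as a sketch, not a reproduction of the worksheet’s numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# The four 3-point estimates (min, most likely, max) from the test above.
estimates = [(100, 400, 500), (200, 500, 1000),
             (500, 500, 5000), (1000, 10000, 12000)]

def beta_general(lo, ml, hi, n, lam=4.0):
    """Sample a BetaGeneral-style distribution rescaled onto [lo, hi].

    Shape parameters follow the classic PERT weighting (lambda = 4);
    SPERT-Beta's own alpha/beta choices will differ with the Most
    Likely Confidence level.
    """
    alpha = 1.0 + lam * (ml - lo) / (hi - lo)
    beta = 1.0 + lam * (hi - ml) / (hi - lo)
    return lo + (hi - lo) * rng.beta(alpha, beta, size=n)

trials = 10_000
portfolio = sum(beta_general(lo, ml, hi, trials) for lo, ml, hi in estimates)

# 90% confidence interval from the 5th and 95th percentiles of the trials.
low, high = np.percentile(portfolio, [5, 95])
print(f"simulated 90% CI: {low:,.0f} - {high:,.0f}")
```

Note that the (500, 500, 5000) estimate, where the minimum equals the most likely value, still works here: it simply yields alpha = 1, a maximally right-skewed shape.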

In looking at the simulation results, I could see that the portfolio of four estimates was bell-shaped but slightly skewed to the left, which explains why the SPERT-Beta confidence interval differed from the simulation model.  Had I used more than just four 3-point estimates, and had the portfolio exhibited a more normal appearance overall, the SPERT-Beta confidence interval for the portfolio would have produced results closer to the simulation model.

Download Development Release D and view standard deviations, variances, means, and find a confidence interval of your choice using lucky Build 13!