
SPERT-Beta Confidence Interval v. Monte Carlo Simulation

Today I created ten 3-point estimates with various skews and Most Likely Confidence levels in a SPERT-Beta Excel workbook.  The values I chose are the sort a project manager might pick when estimating ten tasks on a project.  Most of the tasks were skewed to the right, meaning an outcome was more likely to fall above the most likely outcome than below it.  I included one triangular distribution where the minimum point-estimate was the same as the most likely estimate (50, 50, 100).

Now, according to the Central Limit Theorem, the sum of several underlying distributions tends toward a bell-shaped distribution, regardless of the shape of each underlying distribution.  The classical CLT also stipulates that the variables be independent and that they all share the same distribution.  Clearly, my ten tasks didn't neatly fit the stipulations for relying on the CLT to create a confidence interval across all ten estimates.

And yet… sometimes it's good enough to be close enough that you still obtain useful results.  While I used a variety of distributions among my ten 3-point estimates, they did tend to be a little skewed to the right (but not always).
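If you'd like to see that "close enough" behavior outside of Excel, here is a minimal Python sketch of the idea (my own illustration using numpy and scipy, not part of the SPERT-Beta workbook, and not the actual ten estimates above): sum ten independent, variously skewed beta-distributed tasks and look at the shape of the total.

```python
# A minimal sketch: ten independent tasks, each a beta distribution rescaled
# onto its own 3-point range, with a mix of skews.  The sum comes out nearly
# symmetric and bell-shaped even though the individual pieces are not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2015)
n_trials = 10_000

# (alpha, beta, minimum, maximum) for each task -- illustrative values only
tasks = [
    (2.0, 4.0,  50, 100),   # right-skewed
    (2.0, 3.0,  40,  90),   # right-skewed
    (3.0, 3.0,  60, 120),   # symmetric
    (4.0, 2.0,  30,  80),   # left-skewed
    (1.0, 2.0,  50, 100),   # right triangle: mode sits at the minimum
    (2.0, 5.0,  70, 150),   # right-skewed
    (2.5, 4.0,  20,  60),   # right-skewed
    (2.0, 2.0,  80, 140),   # symmetric
    (3.0, 5.0,  10,  50),   # right-skewed
    (2.0, 6.0,  90, 180),   # right-skewed
]

total = np.zeros(n_trials)
for a, b, lo, hi in tasks:
    total += stats.beta.rvs(a, b, loc=lo, scale=hi - lo,
                            size=n_trials, random_state=rng)

print("skew of the summed portfolio:", stats.skew(total))   # close to zero
print("90% interval:", np.percentile(total, [5, 95]))
```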

When I compared the resulting 90% confidence interval using SPERT-Beta with a 90% confidence interval obtained through Monte Carlo simulation (using @Risk’s RiskBetaGeneral function), I found amazingly close results, even though I wasn’t following the CLT stipulations perfectly.

  • With SPERT-Beta, the 90% confidence interval was 793 – 938
  • With Monte Carlo simulation, the 90% confidence interval was 796 – 940

Shockingly close!

Have a look at the results (all results were copied from the Excel file I was working in to do the comparison).  If you have access to Monte Carlo simulation software, try comparing your own SPERT-Beta confidence intervals with results from a simulation model.  Try breaking the rules for using the CLT by mixing different underlying distributions (skewed to the left, skewed to the right, triangular, and with different Most Likely Confidence levels for each 3-point estimate) and see what effect that has on SPERT-Beta confidence intervals compared to simulated results.

Comparison of SPERT-Beta with Monte Carlo Simulation using RiskBetaGeneral
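If you don't have simulation software like @Risk, you can run a rough version of the same experiment with open-source tools.  The sketch below treats RiskBetaGeneral as a beta distribution rescaled onto the minimum-to-maximum range (which is how I'm modeling it here); the alpha and beta values are placeholders, not the shape parameters the SPERT-Beta worksheet would actually derive from your skew and Most Likely Confidence choices.

```python
# A stand-in for @Risk's RiskBetaGeneral: a beta distribution stretched onto
# the [minimum, maximum] range, sampled with scipy.
import numpy as np
from scipy import stats

def beta_general(alpha, beta, minimum, maximum, size, rng):
    """Draw samples from a beta distribution rescaled to [minimum, maximum]."""
    return stats.beta.rvs(alpha, beta, loc=minimum,
                          scale=maximum - minimum, size=size, random_state=rng)

rng = np.random.default_rng(2015)
n_trials = 10_000

# (alpha, beta, minimum, maximum) per 3-point estimate -- illustrative only
estimates = [(2.0, 4.0, 50, 100), (1.0, 2.0, 50, 100), (3.0, 3.0, 60, 120)]

total = sum(beta_general(a, b, lo, hi, n_trials, rng)
            for a, b, lo, hi in estimates)
low, high = np.percentile(total, [5, 95])
print(f"Simulated 90% confidence interval: {low:,.0f} - {high:,.0f}")
```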

SPERT-Beta Development Release D

This new build of the SPERT-Beta template adds quite a few new features.  I’ve added ratio scales for standard deviation and mean, so the template will calculate an estimate of standard deviation, variance, and mean for each 3-point estimate that’s entered.

Using that information, the template calculates the mean for the entire portfolio being estimated, and the standard deviation for the entire portfolio (by taking the square root of the sum of variances).

And using THAT information, I've added the ability to find a confidence interval for the portfolio, which calculates minimum and maximum estimate values for the entire portfolio.
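In code, that roll-up might look something like the sketch below (my own Python illustration, not the worksheet's formulas): sum the task means, sum the task variances, take the square root for the portfolio standard deviation, and then use normal quantiles for the confidence interval.

```python
# Portfolio roll-up: the mean of a sum is the sum of the means, and (for
# independent tasks) the variance of a sum is the sum of the variances.
import math
from scipy.stats import norm

def portfolio_interval(task_means, task_std_devs, confidence=0.90):
    mean = sum(task_means)
    sd = math.sqrt(sum(s ** 2 for s in task_std_devs))   # sqrt of summed variances
    z = norm.ppf(0.5 + confidence / 2)                    # about 1.645 for 90%
    return mean, sd, (mean - z * sd, mean + z * sd)
```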

To test this build, I created four estimates:

  1. 100, 400, 500 (Low confidence)
  2. 200, 500, 1000 (Very low confidence)
  3. 500, 500, 5000 (Medium-low confidence)
  4. 1000, 10000, 12000 (Very high confidence)

The result was a portfolio having a SPERT-Beta-estimated mean of 11,878 with a standard deviation of 2,178.  The SPERT-Beta 90% confidence interval was 8,294 – 15,461.
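As a quick sanity check, plugging that portfolio mean and standard deviation into a normal approximation reproduces essentially the same interval:

```python
# 90% interval from the worksheet's portfolio mean and standard deviation
from scipy.stats import norm

mean, sd = 11_878, 2_178
z = norm.ppf(0.95)                       # two-sided 90% interval
print(mean - z * sd, mean + z * sd)      # roughly 8,295 and 15,461
```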

To compare this with a simulation model, I ran 10,000 trials using the same 3-point estimates and the same shape parameters, alpha and beta, that the SPERT worksheet chose.  In the simulation, the standard deviation was extremely close:  2,180.  The 90% confidence interval was a little different:  7,991 – 15,222.  The minimum threshold value in the simulation differed by almost 4% from the SPERT-Beta minimum threshold (the SPERT-Beta worksheet overstated the minimum).  The maximum threshold value in the simulation differed by only 1.6% from the SPERT-Beta maximum value (again, the SPERT-Beta worksheet overstated the maximum).

Looking at the simulation results, I could see that the portfolio of four estimates was bell-shaped but slightly skewed to the left, which explains why the SPERT-Beta confidence interval differed from the simulation model.  Had I used more than just four 3-point estimates, and had the portfolio exhibited a more normal appearance overall, the SPERT-Beta confidence interval for the portfolio would have produced results closer to the simulation model.

Download Development Release D to view standard deviations, variances, and means, and to find a confidence interval of your choice, using lucky Build 13!

(Visit the Download page to download the latest version of Statistical PERT – Beta Edition).

SPERT-Beta Development Release C

This version of SPERT-Beta adds a new set of probability curves — only they’re not curves!  Now, if the most likely outcome is equal to, or very close to, either the minimum or maximum point-estimates, the skew analysis will determine that a triangular distribution is the best shaped distribution for the uncertainty (and a right triangle, at that).  This means that for a right-skewed uncertainty where the minimum point-estimate is also the most likely outcome, the implied shape of the probability distribution is a right triangle sloping downward to the right.  Conversely, if the most likely outcome is equal to, or very close to, the maximum point-estimate, the skew analysis determines that the shape is a left-skewed, right triangle sloping downward to the left.

However, because the estimator can express a subjective opinion about how likely the most likely outcome really is, the actual shape of the implied distribution could be something other than a triangle.  For near certainty, for example, the shape is flat along the x-axis and rises very sharply toward either the minimum or the maximum point-estimate (depending on which of the two the most likely outcome is equal to or very near).  For anything less than medium-low confidence, the shape is concave, and at the point of a guesstimate the shape is virtually uniform.

So, if you want a triangular distribution, specify medium-low confidence, which approximates a right triangle.  However, because of the way this template is constructed, it isn’t a perfectly-shaped right triangle — it’s close, but results at the range midpoint appear to differ by about 2%.  For a perfect triangle, the beta value for medium-low confidence would need to be equal to 2, not 1.9.  It’s set by default as 1.9 for medium-low confidence because medium-low confidence puts about 27% of the area under the curve to the small side of the curve at the 3-point range’s midpoint; a triangular distribution would put exactly 25% of the area under the curve on the small side of the range midpoint.  To keep the triangular skew consistent with the meaning of all other skew values (like “near certainty” or “high confidence”), I opted to accept that the SPERT-Beta template won’t actually create probabilities for a perfectly-shaped, right triangle.
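If you want to verify those area figures, a couple of lines of Python will do it (using scipy's beta distribution on a 0-to-1 range with alpha = 1, so the mode sits at the minimum):

```python
# beta.sf gives the area above the range midpoint, i.e. the "small side"
# for a right-skewed uncertainty whose mode sits at the minimum.
from scipy.stats import beta

print(beta.sf(0.5, 1, 2.0))    # 0.25  -> an exact right triangle
print(beta.sf(0.5, 1, 1.9))    # ~0.27 -> the template's medium-low confidence
```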

Try it out!  As complicated as this enhancement sounds, the SPERT-Beta template is still extremely easy to use.  Just enter a 3-point estimate and set the most likely outcome equal to either the minimum point-estimate or the maximum point-estimate.  Once you do that, the skew analysis will say “Triangular” and the alpha shape parameter will be set accordingly.

(Visit the Download page to download the latest version of Statistical PERT – Beta Edition).