Uncertainty Quantification: Don’t obscure possible risk by failing to properly classify, analyze, and report epistemic uncertainty

There is no such thing as an error-free measurement.

No model is perfectly correct (but some are useful).

No model parameter suite is perfectly precise.

No computational method is without numerical error.

Requirements can have uncertainty.

Most engineers are not trained in what to do when there isn’t enough statistical data to credibly support even the simplest statistical model, and many engineering review boards don’t know what questions to ask to probe a statistical result and assess its credibility. Worst case analysis (WCA) is often preferred because it is easy to explain and defend under audit; but problems where WCA doesn’t work demand a different, more methodical approach and a higher level of scrutiny, one that honestly represents the lack of statistical knowledge so that risk can be assessed with confidence.

When most college-educated individuals learn “Probability and Statistics” as part of an engineering or business degree, the majority of the subject matter consists of mathematically convenient statistical forms that adhere to the Law of Large Numbers. Real-world problems often require modeling beyond the common probabilistic models such as the Gaussian (a.k.a. “Normal” or “Bell Curve”), Uniform (a.k.a. “Flat” or “Even”), Triangular, Discrete (e.g. dice and card probabilities), and Poisson (e.g. failure count) distributions. Engineers and scientists make common mistakes when applying statistical methods. For example, many are unaware that separate methods exist for the case where there is no justifiable probability model at all, and all that is known about a parameter is a bounded interval: this is epistemic uncertainty. The usual consequence is that epistemic uncertainties are represented as aleatory ones (in other words, an unjustifiable probabilistic form is applied), thereby obscuring possible relevant outcomes in numerical assessments.
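
To make the distinction concrete, here is a minimal sketch (not drawn from any real project): the response function, the requirement limit, and the interval bounds are all invented for illustration. It contrasts honest interval propagation of an epistemically uncertain parameter with the same problem after an unjustified uniform distribution has been assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response model: peak deceleration as a function of a
# density scale factor known only to lie in [0.8, 1.2].
# The model and the bounds are illustrative assumptions, not real data.
def peak_decel(density_factor):
    return 8.0 + 12.0 * (density_factor - 1.0) ** 2 + 3.0 * (density_factor - 1.0)

LIMIT = 9.0          # notional design requirement (units arbitrary)
LO, HI = 0.8, 1.2    # epistemic interval: no distribution is claimed inside it

# --- Honest epistemic treatment: propagate the interval, report bounds ---
grid = np.linspace(LO, HI, 2001)            # dense sweep of the interval
y = peak_decel(grid)
print(f"Interval result: peak decel lies in [{y.min():.2f}, {y.max():.2f}]")
print(f"Requirement of {LIMIT} can be violated: {y.max() > LIMIT}")

# --- Unjustified aleatory treatment: assume Uniform(LO, HI), run Monte Carlo ---
samples = rng.uniform(LO, HI, 100_000)
exceed_prob = np.mean(peak_decel(samples) > LIMIT)
print(f"'Probability' of exceeding {LIMIT} under an assumed uniform: {exceed_prob:.3f}")
# That small-looking probability is an artifact of the assumed distribution;
# the only defensible statement is that exceedance is possible within the interval.
```

Under the assumed uniform, exceeding the requirement looks like a comfortably small probability; the interval result says only that exceedance is possible, and that its likelihood is unknown, which is the defensible claim.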

Erik has been working with predictive simulations for spacecraft Entry, Descent, and Landing on various planetary bodies since 2001, and cringes when he hears “Monte Carlo” used as a synonym for “thoroughness”, or “sigma” thrown around for distributions whose second moment (variance) may not even be defined. He also firmly believes that “Six Sigma” is an incomplete, misleading, buzzword-riddled methodology that can lead to incorrect technical insight by oversimplifying real-world problems. Erik uses frequentist methods with appropriately classified and distributed inputs to assess uncertainty arising both from known statistical forms (aleatory) and from bounded-interval uncertainty without probabilistic form (epistemic), the latter for parameters where applying statistics is premature or unwarranted. Margins against design requirements and risk assessments can still be made with these hybrid epistemic/aleatory techniques, but such methods are rarely (if ever) taught in college curricula.
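
One common way to propagate mixed uncertainty of this kind is a nested (“double-loop”) analysis: sweep the epistemic parameters over their intervals in an outer loop, run an aleatory Monte Carlo in an inner loop, and report the resulting range of probabilities rather than a single number. The sketch below uses an invented response function, requirement, and parameters purely for illustration; it is not taken from Erik’s flight-project work:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch of a hybrid epistemic/aleatory ("double-loop") propagation.
# The model, bounds, and distribution parameters below are illustrative only.
def velocity_error(wind_bias, gust):
    # Notional response: velocity error driven by a poorly known
    # wind bias (epistemic) plus turbulent gusts (aleatory).
    return 2.0 + 0.8 * wind_bias + gust

LIMIT = 4.0                     # notional requirement on velocity error
BIAS_LO, BIAS_HI = -1.0, 2.0    # epistemic: only an interval is defensible
GUST_SIGMA = 0.5                # aleatory: zero-mean Gaussian gusts (assumed known)

N_EPISTEMIC = 41                # outer sweep over the epistemic interval
N_ALEATORY = 50_000             # inner Monte Carlo over the aleatory variable

probs = []
for bias in np.linspace(BIAS_LO, BIAS_HI, N_EPISTEMIC):   # outer (epistemic) loop
    gusts = rng.normal(0.0, GUST_SIGMA, N_ALEATORY)        # inner (aleatory) loop
    probs.append(np.mean(velocity_error(bias, gusts) > LIMIT))

# Report a *range* of exceedance probabilities, not a single number:
# the spread is the honest statement of what the epistemic interval does to risk.
print(f"P(exceed {LIMIT}) ranges from {min(probs):.4f} to {max(probs):.4f}")
```

The spread between the smallest and largest exceedance probability is the meaningful result; collapsing the epistemic interval into an assumed distribution would replace that spread with a single, falsely precise number.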

Erik can help you and your organization prevent overly simplified (and therefore incorrect) data analysis, and the bad assessments that follow from flawed statistical data and assumptions, by being mathematically honest about what is known and unknown about the forms of uncertainty in numerical analyses. He is available for consulting and training on how to “do the right thing” and get meaningful results, instead of using statistics to obfuscate the problem at hand. Contact him if you are interested in a consultation, scheduling an educational talk, or a corporate class. Email him with any inquiries at <esbailey@alum.mit.edu>.