Appendix 6: RAP Primer
Risk Analysis Process (RAP) involves four steps:
Step 1: Define the Structure and Logic of the Problem
A "structure and logic diagram" depicts the variables and cause-and-effect relationships that underpin the forecasting problem at hand. Although the structure and logic model will eventually be written down mathematically to facilitate analysis, the graphical depictions presented above greatly facilitate stakeholder scrutiny and modification in Step 3 of the process.
Step 2: Assign Central Estimates and Conduct Probability Analysis
Each variable will be assigned a central estimate and a range (a probability distribution) to represent the degree of uncertainty. Wherever possible, historical data will be used to develop these estimates. Special data sheets are used to record the estimates. The first column gives an initial median, while the second and third columns define an uncertainty range representing an 80 percent confidence interval. This is the range within which there is an 80 percent probability of finding the actual outcome. The greater the uncertainty associated with a forecast variable, the wider the range.
Variable | Median | 10% Lower Limit | 10% Upper Limit |
---|---|---|---|
Percentage of Assistive Listening Systems in Courtrooms that will undergo Alterations | 75% | 50% | 95% |
Probability ranges will be established on the basis of both statistical analysis and subjective probability. Probability ranges need not be normal or symmetrical - that is, there is no need to assume the normal bell-shaped probability curve. The bell curve assumes an equal likelihood of being too low and being too high in forecasting a particular value. It might well be, for example, that if a projected growth rate deviates from expectations, circumstances are such that it is more likely to be higher than the median expected outcome than lower.
The risk analysis process outlined in this framework will transform the ranges as depicted above into formal probability distributions (or "probability density functions"). This liberates the non-statistician from the need to appreciate the abstract statistical depiction of probability and thus will enable stakeholders to understand and participate in the process whether or not they possess statistical training.
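As a sketch of how the ranges above can be turned into formal probability density functions, the snippet below fits a split-normal distribution to the median and asymmetric 80 percent interval from the data sheet. The split-normal family, the standard-normal percentile value, and the sampling routine are illustrative assumptions; the framework itself does not prescribe a particular distribution family.

```python
import random

Z90 = 1.2816  # standard-normal 90th percentile (approximate)

def split_normal_sigmas(lo10, median, hi90, z=Z90):
    """Convert an asymmetric 80% confidence interval (10th percentile,
    median, 90th percentile) into the two spreads of a split-normal."""
    return (median - lo10) / z, (hi90 - median) / z

def sample_split_normal(median, sigma_lo, sigma_hi, rng=random):
    """Draw one value: pick a side in proportion to its spread, then
    add a half-normal deviate on that side."""
    if rng.random() < sigma_lo / (sigma_lo + sigma_hi):
        return median - abs(rng.gauss(0.0, sigma_lo))
    return median + abs(rng.gauss(0.0, sigma_hi))

# The courtroom-alterations variable from the table above: 50% / 75% / 95%
s_lo, s_hi = split_normal_sigmas(0.50, 0.75, 0.95)
```

Because the upper tail (75% to 95%) is tighter than the lower tail (50% to 75%) would be under symmetry, the fitted spreads differ, capturing exactly the kind of skewed uncertainty the text describes.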
The central estimates and probability ranges for each assumption in the forecasting structure and logic framework come from two sources. The first is an historical analysis of statistical uncertainty in all variables and an error analysis of the forecasting "coefficients." "Coefficients" are numbers that represent the measured impact of one variable (say, income) on another (such as retail sales). While these coefficients can only be known with uncertainty, statistical methods help uncover the magnitude of such errors (using diagnostic statistics such as "standard deviation," "standard error," "confidence intervals", and so on). The uncertainty analysis outlined above is known in textbooks as "frequentist" probability.
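The "error analysis of forecasting coefficients" described above can be illustrated with a minimal least-squares example: the slope measures the impact of one variable (income) on another (retail sales), and its standard error quantifies the coefficient's frequentist uncertainty. The data points and the 80 percent interval multiplier below are hypothetical.

```python
import math

def slope_and_se(x, y):
    """Least-squares slope of y on x and the slope's standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    resid_ss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(resid_ss / (n - 2) / sxx)  # sqrt of s^2 / Sxx
    return b, se

# Hypothetical data: income (x) vs. retail sales (y)
income = [40, 50, 60, 70, 80]
sales  = [21, 26, 29, 36, 38]
b, se = slope_and_se(income, sales)
lo, hi = b - 1.2816 * se, b + 1.2816 * se  # approximate 80% interval
```

The interval (lo, hi) is the frequentist input that Step 3's expert panel would then widen, narrow, or shift on the basis of subjective belief.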
The second line of uncertainty analysis employed in the risk analysis process is called "subjective probability" (also called "Bayesian" statistics). Whereas a frequentist probability represents the measured frequency with which different outcomes occur (i.e., the number of heads and tails after thousands of tosses) the Bayesian probability of an event occurring is the degree of belief held by an informed person or group that it will occur. Obtaining subjective probabilities is the subject of Step 3.
Step 3: Conduct Expert Evaluation[1]
Step 3 involves the formation of an expert panel and the use of facilitation techniques to elicit from the panel risk and probability beliefs about:
1. The structure of the forecasting framework; and
2. Uncertainty attached to each variable and forecasting coefficient within the framework.[2]
In (1), experts will be invited to add variables and hypothesized causal relationships that may be material, yet missing from the model. In (2), panelists will be engaged in a discursive protocol during which the frequentist-based central estimates and ranges, provided to panelists in advance of the session, will be modified according to subjective expert beliefs. This process will be aided with an interactive "groupware" computer tool that permits the visualization of probability ranges under alternative belief systems.
Step 4: Issue Risk Analysis
The final probability distributions will be formulated to represent a combination of "frequentist" and subjective probability information drawn from Step 3. These will be combined using a simulation technique (Monte Carlo analysis) that allows each variable and forecasting coefficient to vary simultaneously according to its associated probability distribution (see Figure 26).
The end result will be a central forecast, together with estimates of the probability of achieving alternative outcomes given uncertainties in underlying variables and coefficients.
Annual Elemental Capital Costs (In Millions of Dollars) | Probability of Exceeding Value Shown at Left |
---|---|
105.3 | 0.01 |
98.4 | 0.05 |
94.9 | 0.10 |
91.0 | 0.20 |
88.2 | 0.30 |
85.8 | 0.40 |
83.5 | 0.50 |
81.2 | 0.60 |
78.5 | 0.70 |
75.2 | 0.80 |
71.3 | 0.90 |
65.0 | 0.95 |
53.5 | 0.99 |
82.9 | Mean Expected Outcome |
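The mechanics behind an exceedance table like the one above can be sketched with a minimal Monte Carlo simulation. The two input distributions and the cost model below are hypothetical stand-ins for the framework's actual variables and coefficients, but the procedure is the same: draw each uncertain input from its own distribution, combine the draws, and read off the probability of exceeding each cost level.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

def simulate_costs(n=100_000, rng=random):
    """Monte Carlo: draw each uncertain input from its distribution
    and combine them into a total annual cost (illustrative model)."""
    draws = []
    for _ in range(n):
        unit_cost = rng.gauss(1.0, 0.10)          # $M per element
        elements  = rng.triangular(70, 100, 85)   # elements altered
        draws.append(unit_cost * elements)
    return draws

def exceedance(draws, values):
    """Estimate P(cost > v) for each v from the simulated draws."""
    n = len(draws)
    return {v: sum(d > v for d in draws) / n for v in values}

costs = simulate_costs()
table = exceedance(costs, [75, 85, 95])
```

Sorting the simulated draws and reading off the values at fixed exceedance probabilities (0.01, 0.05, ..., 0.99) produces a table in exactly the format shown above.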
Consensus Process
The application of Bayes' Formula extends beyond the laboratory. In the real world, consensus building represents some combination of empirical observation, professional beliefs, and personal values. Tversky and Kahneman (the latter the 2002 Nobel laureate in Economics) are among the pioneers in the quantification of subjective probabilities through a process called "elicitation." Defined broadly, elicitation is a process that helps experts and lay persons construct a set of carefully reasoned and considered judgments. Specifically, elicitation is conducted with a range of available or circumstance-specific "protocols" employed with a view to obtaining people's subjective but accurately specified quantitative expressions of future probability in relation to matters such as:
Economic variables – such as fuel prices and interest rates/discount rates;
Behavioral variables – such as price elasticities and cross-elasticities, quality of service elasticities and cross-elasticities, and income elasticities;
Technology impact variables – such as the impact of adding a new process at border inspection or the rate at which a technology might become obsolete;
Risk variables – such as technological obsolescence, management-labor relations, human factors and politics;
Value parameters – such as the economic value of delay to a commuter at borders;
Domain parameters – such as the delay impact at the borders on the regional economy and competitiveness of the country as a whole;
Model structures – such as the way in which scientific knowledge is employed in making cause-and-effect judgments;
Project and policy design variables – such as solution complexity and involvement of multiple agencies; and
Decision criteria – such as the classification of issues as liberties versus public goods, and welfare criteria such as net present value and rate of return.
The term "accurate" as used here does not contemplate the discovery by analysts of pre-existing subjective probabilities as they exist in the minds of experts and stakeholders. Such constructs rarely exist. Rather, consensus building is intended to enable stakeholders themselves to formulate and articulate their own beliefs about probabilities in light of the issues at hand and in light of pre-existing and relevant knowledge, new evidence, and values - both their own values and those of others. "Accurate" assumes the realization of probability statements that are purged of factual error, freed of scientific myth and misinterpretation, and liberated from reasoning biases.
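A worked instance of Bayes' Formula in this consensus-building sense: a prior degree of belief is revised in light of new evidence. The hypotheses, probabilities, and survey scenario below are purely hypothetical illustrations.

```python
def bayes_update(prior, likelihood):
    """Posterior over hypotheses given a prior and the likelihood of
    the observed evidence under each hypothesis (Bayes' Formula)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Hypothetical: will most courtrooms require alterations? An expert
# starts undecided, then sees a pilot survey whose result is more
# likely if widespread alterations are in fact needed.
prior      = {"high": 0.5, "low": 0.5}
likelihood = {"high": 0.8, "low": 0.3}   # P(survey result | hypothesis)
posterior  = bayes_update(prior, likelihood)
```

The posterior shifts toward "high" because the evidence was more probable under that hypothesis; elicitation protocols aim to make exactly this kind of revision explicit and disciplined rather than informal.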
[1] This type of evaluation will occur on a formal level with architectural experts specializing in ADA compliance (both those affiliated with the Department and those who are not), as well as the Department's lawyers specializing in ADA compliance. The questions asked of each group will differ in many cases. However, less formal consultations with the Department are also ongoing.
[2] Variables that might be reviewed include unit cost of an element, number of elements per facility, and percentage of accessible elements required per facility.