Assessing uncertainty

Short note on the combination of the uncertainty report (Deliverable 7) and the General properties of good risk assessment

4.7.2007 Marko Tainio, Mikko Pohjola and Jouni Tuomisto, National Public Health Institute – KTL

Deliverable 7 (Uncertainty report) presented a two-dimensional concept for identifying and assessing the uncertainties within a risk assessment. The upcoming INTARESE training on uncertainty management, to be held in October 2007 in Copenhagen, stimulated us (KTL) to compare the two-dimensional uncertainty concept with other risk assessment concepts that have been developed in the INTARESE project. The reasoning behind this comparison was to find possible overlapping issues and to see whether the terminology and concepts could be harmonised.

Based on the comparison, we drafted the following proposal for combining the concepts.

Location uncertainty

Deliverable 7 defined the location dimension of uncertainty as follows: “The location dimension refers to where uncertainty manifests itself within the configuration of the system model”. Location uncertainty was further categorised into the following five locations:

  • Context
  • Model structure
  • Inputs
  • Parameters
  • Model outcome (results)

We found it difficult to adopt this list of categories as such. However, the definitions behind the categories were familiar from the other work that we have been drafting – the properties of good risk assessment.

Risk assessment has general properties that can be identified and described. The diagram below illustrates the general properties of a good risk assessment as a tree structure. The goal, good risk assessment, is the node on the left of the diagram, and the properties required to achieve this goal are defined and subdivided moving towards the right of the diagram. It is important to note that the arrows in this diagram describe how particular properties lead to the ultimate objective.

The diagram and more information about the general properties of risk assessment can be found in Purpose and properties of good assessments.

Diagram 1: General properties of good risk assessment.



When we compared Diagram 1 and the definitions of its nodes with Deliverable 7, we noticed several similarities. For example, Context uncertainty, defined as the uncertainty in the choice of the boundaries, means the same as External relevance in Diagram 1. Table 1 summarises our findings from the terminology comparison (between location uncertainty and the properties of good risk assessment).

Table 1: Terminology comparison

  Deliverable 7: location uncertainty | General properties of good RA (Diagram 1)
  Context                             | External relevance
  Model structure                     | Internal relevance
  Inputs                              | Informativeness and calibration
  Parameters                          | Informativeness and calibration
  Model outcome                       | –

Informativeness is the tightness of the spread of a distribution (all result estimates of variables should be considered distribution estimates, not point estimates). The tighter the spread, the smaller the variance and the better the informativeness. Informativeness is a property of each individual variable, but the informativeness of each variable is also affected by the informativeness of the variables upstream in the causal chain.

Calibration means the correctness of the result estimate of a variable, i.e. how close it is to the real value. Evaluating calibration can be complicated in many situations, but it is necessary to recognise it as an important property when evaluating the goodness of the result estimates of an assessment.
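As a minimal illustration of these two properties, the hypothetical Python sketch below treats the result estimate of a variable as a sample from a distribution: informativeness is summarised by the spread of the sample, and calibration by the distance of the sample mean from an assumed true value. The variable, the sample values and the "true" value are invented for illustration only.

  import statistics

  # Hypothetical result estimate of a variable, expressed as a distribution
  # (a sample of possible values) rather than a point estimate.
  emissions_sample = [118.0, 124.0, 121.5, 130.2, 126.8, 119.4]  # invented values

  # Informativeness: the tighter the spread, the smaller the variance and
  # the more informative the result estimate.
  spread = statistics.stdev(emissions_sample)

  # Calibration: how close the estimate is to the real value. The real value
  # is rarely known in practice; an assumed value is used here only to
  # illustrate the idea.
  assumed_true_value = 125.0
  calibration_error = abs(statistics.mean(emissions_sample) - assumed_true_value)

  print(f"spread (informativeness, smaller is better): {spread:.2f}")
  print(f"distance from assumed true value (calibration): {calibration_error:.2f}")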

Based on this comparison, we propose the following three-level hierarchy for location uncertainty:

  • Context – uncertainties related to the boundaries of the risk assessment (in our terminology, External relevance)
  • Model – uncertainties related to the causal description and structure of the risk assessment (in our terminology, Internal relevance)
  • Variable – uncertainties related to individual variables in the risk assessment (in our terminology, Informativeness and Calibration)

With these three categories it should be possible to cover all the uncertainties related to risk assessment.

We decided to drop the last category, model outcome (results), from the list. The reasoning behind this is that the model outcome is just a variable among others and therefore belongs to the “variable” location.
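To make the proposed grouping concrete, the following sketch (in Python, invented for illustration) represents the three location categories as a simple mapping to the corresponding properties of good risk assessment in our terminology.

  # Proposed three location categories and the corresponding properties of good
  # risk assessment (terminology of Diagram 1). Illustrative sketch only.
  LOCATION_CATEGORIES = {
      "Context": ["External relevance"],              # boundaries of the assessment
      "Model": ["Internal relevance"],                # causal description and structure
      "Variable": ["Informativeness", "Calibration"], # individual variables, incl. the model outcome
  }

  for location, properties in LOCATION_CATEGORIES.items():
      print(f"{location}: {', '.join(properties)}")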

Level uncertainty

Deliverable 7 defined level uncertainty as follows: “The level uncertainty refers to the severity of the uncertainty from the point of view of the decision maker”. We found the concept and definition of level uncertainty to be understandable and to fit well with our variable concept. In our risk assessment method, a variable is the basic element for describing reality. One variable could be, for example, primary fine particle emissions in Europe in the year 2000.

Each variable is divided into four different attributes:

  • Name
  • Scope (what is the question)
  • Definition
  • Result (what is the answer)

See also Universal products.

In Deliverable 7, level uncertainty is defined with two terms: outcomes and probabilities. These two correspond to two attributes of the variable: outcomes to scope and probabilities to result. Thus, the description of scenario uncertainty (known outcomes, unknown probabilities) could be expressed as: scope known, result poorly known.

The concept in which uncertainties are categorised into three to four categories representing the severity of the uncertainty is understandable and fits well with our risk assessment concepts. However, we would like to propose more flexibility in the descriptions and possibly a renaming of the categories. For example, it is difficult to understand what Statistical uncertainty stands for in the case of Context uncertainty. Therefore we propose that the titles of these four categories be changed to, for example, A–D types of uncertainty (where A represents Statistical uncertainty).


Table 2: Term comparison between Deliverable 7 and risk assessment concepts

  Deliverable 7: level uncertainty                                 | Attributes (proposed categories A–D)
  Statistical uncertainty – known outcomes, known probabilities    | A – scope known, result known
  Scenario uncertainty – known outcomes, unknown probabilities     | B – scope known, result poorly known
  Recognised ignorance – unknown outcomes, unknown probabilities   | C – scope poorly understood
  Total ignorance – nothing is known!                              | D – item not identified (total ignorance)
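
The mapping in Table 2 can also be expressed as a small rule, as in the hypothetical Python sketch below: the level of uncertainty of a variable is derived from how well its scope and result attributes are known. The three-valued grading of knowledge ("known", "poor", "unknown") is an assumption made only for this illustration.

  def uncertainty_level(scope: str, result: str) -> str:
      """Assign a level category (A-D) from the knowledge of scope and result.

      scope and result take one of: "known", "poor", "unknown" (an
      illustrative grading, not part of Deliverable 7 itself).
      """
      if scope == "unknown":
          return "D - item not identified (total ignorance)"
      if scope == "poor":
          return "C - scope poorly understood (recognised ignorance)"
      if result == "known":
          return "A - scope known, result known (statistical uncertainty)"
      return "B - scope known, result poorly known (scenario uncertainty)"

  # Example: the question is well defined but the answer is only poorly known,
  # i.e. scenario uncertainty.
  print(uncertainty_level(scope="known", result="poor"))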

Summary

In summary, the uncertainty concepts of Deliverable 7 and the risk assessment concepts developed at KTL fit well together. The terminology is different, but the meaning behind the terms and the practical implementation are similar.

Our proposal for the new location and level categories is presented in Table 3. With the proposed three location categories and four level categories, it should be possible to identify and assess all uncertainties related to a risk assessment – either quantitatively or qualitatively.

Table 3: Proposal for the new categories for both location and level uncertainties.

  Location (table rows):
    • Context – External relevance
    • Model – Internal relevance
    • Variable – Informativeness, Calibration

  Level (table columns):
    • A: scope known, result known
    • B: scope known, result poorly known
    • C: scope poorly understood
    • D: item not identified (total ignorance)
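
As a closing illustration, the hypothetical Python sketch below shows how Table 3 could be used in practice: each identified uncertainty is tagged with one of the proposed location categories and one of the level categories. The example uncertainty and its classification are invented.

  LOCATIONS = ("Context", "Model", "Variable")
  LEVELS = ("A", "B", "C", "D")  # level categories as defined in Table 3

  def classify(description: str, location: str, level: str) -> dict:
      """Attach a location and a level category to an identified uncertainty."""
      if location not in LOCATIONS or level not in LEVELS:
          raise ValueError("unknown location or level category")
      return {"uncertainty": description, "location": location, "level": level}

  # Invented example: the study area boundary is clearly defined (scope known),
  # but its appropriateness for the decision is only poorly known.
  example = classify("Choice of the study area boundary", location="Context", level="B")
  print(example)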