Probabilistic scenarios

From Testiwiki
Revision as of 08:28, 15 June 2011 by Aino paakkinen (talk | contribs)


The future can rarely be seen with any degree of certainty. Prognostic assessments are therefore based not on precise specifications of what will happen, but on pictures of what might happen. Despite this, most scenarios are, within themselves, unequivocal: they give fixed projections of the future. Uncertainties are usually recognised only by providing alternative fixed projections (such as so-called best- and worst-case scenarios). Probabilistic scenarios attempt to be more realistic about the uncertainties in our projections of the future by incorporating them into the scenario itself.

They usually do so in the form of distributions: that is, by representing future elements of the system as a range of values, weighted by their likelihood. In an assessment of the impact of future climate change on water quality and health, for example, temperature can be defined not as a predefined average (e.g. present temperature plus 2 degrees) but as a distribution with a mean of, say, 2 degrees above present and a standard deviation of 0.5 degrees around that mean. In this way, a range of futures of varying likelihood can be explored, and from these the statistical distribution of potential health effects can be estimated.
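The temperature example above can be sketched in a few lines. This is a minimal illustration, not part of the original assessment: the distribution parameters (mean +2 °C, standard deviation 0.5 °C) come from the text, but the sample size and random seed are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

# Probabilistic scenario: future temperature rise is not a fixed +2 degrees,
# but a normal distribution with mean 2 and standard deviation 0.5.
temp_rise = rng.normal(loc=2.0, scale=0.5, size=10_000)

# The scenario now spans a range of futures, weighted by likelihood.
low, mid, high = np.percentile(temp_rise, [2.5, 50, 97.5])
print(f"95% of sampled futures lie between +{low:.2f} and +{high:.2f} degrees "
      f"(median +{mid:.2f})")
```

Each sampled value is one plausible future; summarising the samples (here with percentiles) recovers both the central projection and the spread around it.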

This approach is clearly more realistic and informative than approaches based on other types of scenario. It nevertheless comes at a substantial cost in terms of analytical complexity, for when using probabilistic scenarios we have to model a large range of alternatives, each of which may have to be pursued right through the causal chain to the final impacts. This is usually done using some form of Monte Carlo simulation. With large data sets, and a number of complex and interlinked models, the computing requirements rapidly escalate. Inevitably, therefore, this approach tends to be restricted to major assessments carried out by well-resourced research teams, with the support of large government or commercial organisations.
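The idea of pursuing each sampled future through the causal chain can be sketched as a small Monte Carlo simulation. The chain below (temperature rise → pathogen load in water → illness cases) and all of its coefficients are purely hypothetical assumptions invented for illustration; a real assessment would substitute validated exposure and dose-response models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # number of simulated futures

# Step 1: sample the scenario input (temperature rise, degrees above present).
temp_rise = rng.normal(2.0, 0.5, n)

# Step 2: hypothetical water-quality model -- pathogen load assumed to grow
# 10% per degree, with multiplicative lognormal model uncertainty.
pathogen = 100.0 * (1.10 ** temp_rise) * rng.lognormal(0.0, 0.1, n)

# Step 3: hypothetical dose-response -- cases assumed proportional to load.
cases = 0.05 * pathogen

# Each sample is one complete future carried through the whole chain;
# together they give the distribution of potential health effects.
print(f"median cases: {np.median(cases):.1f}, "
      f"95% interval: {np.percentile(cases, 2.5):.1f} "
      f"to {np.percentile(cases, 97.5):.1f}")
```

Note how the cost escalates: every extra model in the chain is evaluated once per sample, so with thousands of samples and several interlinked models the computing burden grows quickly, as the text describes.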
