Open policy practice


Open policy practice is a method to support societal decision making in an open society. One part of open policy practice is open assessment. This page will eventually contain a detailed description of the practice; until then, please refer to the pages listed under See also.

Question

What is open policy practice?

Answer

[Figure: Shared understanding exists when all participants understand what opinions exist, what disagreements exist, and why.]

Previous research has found that a major problem in the science-policy interface lies in the inability of current political processes to utilise scientific knowledge in societal decision making (Mikko Pohjola: Assessments are to change the world – Prerequisites to effective environmental health assessment. Doctoral dissertation. THL, 2013. http://urn.fi/URN:ISBN:978-952-245-883-4). This observation has led to the development of pragmatic guidance for closer collaboration between researchers and societal decision makers. The guidance is called open policy practice, and it was developed by the National Institute for Health and Welfare (THL) and Nordem Ltd in 2013. The main points of the practice are listed below.

Four main parts of work

The guidance focuses on the decision support part, although the whole chain of decision making, from decision identification to decision support, the actual making of the decision, implementation, and finally the outcomes of the decision, is considered throughout the process. The practice identifies four main parts of work:

  • The decision maker publishes the objectives of the decision. These are used to guide all subsequent work.
  • The execution of decision support is mostly about collecting, organising and synthesising scientific knowledge and values in order to help the decision maker reach her objectives.
  • Evaluation and management of the work (of decision support and decision making) continues all the way through the process. The focus is on evaluating whether the work produces the intended knowledge and helps to reach the objectives.
  • Interactional expertise is needed to organise and synthesise the information. This requires specific skills that are typically available neither among experts nor among decision makers. It also involves specific practices and methods that may already be in wide use in some areas, such as the use of probabilities for describing uncertainties, discussion rules, or quantitative modelling.

The execution of decision support may take different forms. Currently, the practices of risk assessment, health impact assessment, cost-benefit assessment, and public hearings all fall under this broad part of work. In general, the execution aims to answer these questions: "What would the outcomes be if decision option X was chosen, and would that be preferable to the outcomes of other options?"
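
As an illustration only, and not a method prescribed by the guidance, the question above can be sketched as a small quantitative model: simulate the outcome of each decision option under uncertainty and compare the options. The options, the outcome model and all numbers below are hypothetical assumptions.

# Illustrative sketch: compare two hypothetical decision options by simulating
# their outcomes under uncertainty. All model choices and numbers are
# assumptions made for this example.
import random
import statistics

random.seed(1)

def simulated_outcomes(exposure_reduction, n=10000):
    """Simulate a hypothetical outcome (e.g. attributable cases per year).

    exposure_reduction: fraction by which the option reduces exposure (0..1).
    The uncertain exposure-response slope is described with a probability
    distribution, in line with using probabilities for uncertainties.
    """
    baseline_exposure = 10.0  # hypothetical exposure level
    outcomes = []
    for _ in range(n):
        slope = random.lognormvariate(0.0, 0.3)  # uncertain slope, median 1
        outcomes.append(slope * baseline_exposure * (1 - exposure_reduction))
    return outcomes

options = {"do nothing": 0.0, "option X": 0.3}  # hypothetical decision options
for name, reduction in options.items():
    sims = simulated_outcomes(reduction)
    mean = statistics.mean(sims)
    p95 = statistics.quantiles(sims, n=20)[18]  # roughly the 95th percentile
    print(f"{name}: mean {mean:.1f}, 95th percentile {p95:.1f}")

Comparing the simulated outcome distributions of the options is one way to judge whether option X would be preferable to the other options.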

Execution

Six principles

[Figure: Open policy practice has four parts: shared understanding as the main target of the work, execution, evaluation and management, and co-creation skills and facilitation. The execution is guided by six principles (see text).]

In open policy practice, the execution strictly follows six principles. Each of them is already implemented in some places today, but so far they have not been implemented systematically together.

  • Intentionality: All that is done aims to give the decision maker a better understanding of the outcomes of the decision.
  • Shared information objects: All information is shared using a systematic structure and a common workspace where all participants can work (see the sketch after this list).
  • Causality: The focus is on understanding the causal relations between the decision options and the intended outcomes.
  • Critique: All information presented can be criticised based on its relevance and its agreement with observations.
  • Reuse: All information is produced in a format that can easily be used for other purposes by other people.
  • Openness: All work and all information are openly available to anyone interested. Participation is free. If there are exceptions, they must be publicly justified.
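
A minimal sketch of what a shared information object could look like is given below. The field names and example content are assumptions for illustration, loosely following the question, answer and rationale structure used on this page; they are not a prescribed format.

# Illustrative sketch of a shared information object. The field names and
# example content are assumptions, not a prescribed Opasnet format.
shared_information_object = {
    "name": "Health impacts of decision option X",  # hypothetical object
    "question": "What would the health impacts be if option X was chosen?",
    "answer": None,            # filled in and updated as the assessment proceeds
    "rationale": {
        "data": [],            # references to openly available data
        "causal_links": [],    # causal relations from options to outcomes
        "discussion": [],      # critique and responses, kept visible
    },
    "license": "CC BY 4.0",    # an open license so the object can be reused
    "workspace": "http://en.opasnet.org",  # the common workspace mentioned in the text
}

Because such an object is structured, openly licensed and kept in a common workspace, it can be criticised, reused and linked to other objects, as the principles of critique, reuse and openness require.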

Evaluation and management

Properties of good decision support

Main article: Properties of good assessment.
Table 2. Properties of good decision support. A slightly modified version of the properties of good assessment framework.
Quality of content
  • Description: Specificity, exactness and correctness of information; correspondence between questions and answers.
  • Guiding questions: How exact and specific are the ideas in the assessment? How completely does the (expected) answer address the assessment question? Are all important aspects addressed? Is there something unnecessary?
  • Suggestion by open policy practice: Work openly and invite criticism (see Table 1).
Applicability: Relevance
  • Description: Correspondence between the output and its intended use.
  • Guiding questions: How well does the assessment address the intended needs of the users? Is the assessment question good in relation to the purpose of the assessment?
  • Suggestion by open policy practice: Characterize the setting (see Table 3).
Applicability: Availability
  • Description: Accessibility of the output to users in terms of e.g. time, location, extent of information and extent of users.
  • Guiding questions: Is the information provided by the assessment available (or would it be) when, where and to whom it is needed?
  • Suggestion by open policy practice: Work online using e.g. Opasnet. For evaluation, see Table 4.
Applicability: Usability
  • Description: Potential of the information in the output to generate understanding among its users about the topic of the assessment.
  • Guiding questions: Would the intended users be able to understand what the assessment is about? Would the assessment be useful for them?
  • Suggestion by open policy practice: Invite participation from the problem owner and user groups early on (see Table 5).
Applicability: Acceptability
  • Description: Potential of the output to be accepted by its users. Fundamentally a matter of how it is made and delivered, not of its information content.
  • Guiding questions: Would the assessment (both its expected results and the way it is planned to be made) be acceptable to the intended users?
  • Suggestion by open policy practice: Use the test of shared understanding (see Table 6).
Efficiency
  • Description: Resource expenditure of producing the assessment output, either in one assessment or in a series of assessments.
  • Guiding questions: How much effort would be needed for making the assessment? Would it be worth spending the effort, considering the expected results and their applicability for the intended users? Would the assessment results also be useful in some other use?
  • Suggestion by open policy practice: Use shared information objects with an open license, e.g. Ovariables.

Settings of assessments

Main article: heande:Assessment of impacts to health, safety, and environment in the context of materials processing and related public policy (unpublished, password-protected).
Table 3. Important settings for environmental health (and other) assessments and related public policy.[1]
Impacts
  Example categories:
  • Environment
  • Health
  • Other (what?)
  Guiding questions:
  • Which impacts are addressed in the assessment?
  • Which impacts are most significant?
  • Which impacts are most relevant for the intended use?
Causes
  Example categories:
  • Production
  • Consumption
  • Transport
  • Heating, power production
  • Everyday life
  Guiding questions:
  • Which causes of impacts are recognized in the assessment?
  • Which causes of impacts are most significant?
  • Which causes of impacts are most relevant for the intended use?
Problem owner
  Example categories:
  • Policy maker
  • Industry, business
  • Expert
  • Consumer
  • Public
  Guiding questions:
  • Who has the interest, responsibility and/or means to assess the issue?
  • Who actually conducts the assessment?
  • Who has the interest, responsibility and/or power to make decisions and take action upon the issue?
  • Who is affected by the impacts?
Target
  Example categories:
  • Policy maker
  • Industry, business
  • Expert
  • Consumer
  • Public
  Guiding questions:
  • Who are the intended users of the assessment results?
  • Who needs the assessment results?
  • Who can make use of the assessment results?
Interaction
  Example categories:
  • Isolated
  • Informing
  • Participatory
  • Joint
  • Shared
  Guiding questions:
  • What is the degree of openness in the assessment (and management)? (See Table 4.)
  • How does the assessment interact with the intended use of its results? (See Table 5.)
  • How does the assessment interact with other actors in its context?

Dimensions of openness

Main article: Openness in participation, assessment, and policy making upon issues of environment and environmental health: a review of literature and recent project results.
Table 4. Dimensions of openness.[2]
Scope of participation: Who are allowed to participate in the process?
Access to information: What information about the issue is made available to participants?
Timing of openness: When are participants invited or allowed to participate?
Scope of contribution: To which aspects of the issue are participants invited or allowed to contribute?
Impact of contribution: How much are participant contributions allowed to influence the outcomes? In other words, how much weight is given to participant contributions?

One obstacle to effectively addressing the issue of effective participation may be the concept of participation itself. As long as the discourse focuses on participation, one is easily misled into considering it as an independent entity with purposes, goals and values in itself, without explicitly relating it to the broader context of the processes whose purposes it is intended to serve. The conceptual framework we call the dimensions of openness attempts to overcome this obstacle by considering the issue of effective participation in terms of openness in the processes of assessment and decision making.

The framework bears resemblance e.g. to the criteria for evaluating the implementation of the Aarhus Convention principles by Hartley and Wood [23], the categories used to distinguish a discrete set of public and stakeholder engagement options by Burgess and Clark [74], and particularly the seven categories of principles of public participation by Webler and Tuler [75]. However, whereas these were constructed for evaluating or describing existing participatory practices or designs, the dimensions of openness framework is explicitly intended to be used as checklist-type guidance to support the design and management of participatory assessment and decision making processes.

The perspective adopted in the framework can be characterized as contentual, because it primarily focuses on the issue under consideration and on describing the prerequisites for influencing it, instead of being confined to techniques and manoeuvres for executing participation events. Thereby it helps participatory assessment and decision making processes to achieve their objectives and, on the other hand, to provide possibilities for meaningful and effective participation. The framework does not, however, tell how participation should be arranged, but rests on the existing and continually developing knowledge base on participatory models and techniques.

While all dimensions contribute to the overall openness, it is the fifth dimension, the impact of contribution, that ultimately determines the effect on the outcome. Accordingly, it is recommended that the aspects of openness in assessment and decision making processes be considered step by step, in the order presented above.
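
For illustration only, the five dimensions can be written down as a simple checklist when planning an assessment or decision making process. The example entries below are hypothetical, not recommendations.

# Illustrative checklist of the five dimensions of openness for a hypothetical
# process. The entries are assumptions made for this example.
openness_profile = {
    "scope of participation": "open to anyone interested",
    "access to information": "all assessment material available online",
    "timing of openness": "participation possible from scoping onwards",
    "scope of contribution": "all aspects of the assessment question",
    "impact of contribution": "contributions weighed on their merits",
}

# Considering the dimensions step by step, in the order above, follows the
# recommendation in the text; the last dimension ultimately determines the
# effect of participation on the outcome.
for dimension, plan in openness_profile.items():
    print(f"{dimension}: {plan}")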

Categories of interaction

Table originally from Decision analysis and risk management 2013/Homework.
Table 5. Categories of interaction within the knowledge-policy interaction framework.
Isolated: Assessment and the use of assessment results are strictly separated. Results are provided for the intended use, but users and stakeholders shall not interfere with the making of the assessment.
Informing: Assessments are designed and conducted according to the specified needs of the intended use. Users and limited groups of stakeholders may have a minor role in providing information to the assessment, but mainly serve as recipients of assessment results.
Participatory: Broader inclusion of participants is emphasized. Participation is, however, treated as an add-on alongside the actual processes of assessment and/or use of assessment results.
Joint: Involvement of, and exchange of summary-level information among, multiple actors in the scoping, management, communication and follow-up of assessment. On the level of assessment practice, actions by different actors in different roles (assessor, manager, stakeholder) remain separate.
Shared: Different actors involved in the assessment retain their roles and responsibilities, but engage in open collaboration in determining the assessment questions to address and finding answers to them, as well as implementing them in practice.

Acceptability

Main article: Shared understanding.

Acceptability can be measured with a test of shared understanding. In a decision situation there is shared understanding when all participants of the decision support or decision making process give positive answers to the following questions.

Table 6. Acceptability according to the test of shared understanding.
The following questions are asked of all participants of the decision support or decision making processes:
  • Is all relevant and important information described?
  • Are all relevant and important value judgements described?
  • Are the decision maker's decision criteria described?
  • Is the decision maker's rationale from the criteria to the decision described?
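
As an illustration only (the data structure and the function below are assumptions, not part of the method), the test can be read as a simple rule: shared understanding exists when every participant answers yes to every question in Table 6.

# Illustrative sketch of the test of shared understanding in Table 6.
# The participants and their answers are hypothetical.
QUESTIONS = [
    "Is all relevant and important information described?",
    "Are all relevant and important value judgements described?",
    "Are the decision maker's decision criteria described?",
    "Is the decision maker's rationale from the criteria to the decision described?",
]

def shared_understanding(answers):
    """answers maps each participant to one True/False answer per question
    in QUESTIONS; shared understanding exists only when every participant
    answers yes (True) to every question."""
    return all(all(row) for row in answers.values())

example_answers = {
    "decision maker": [True, True, True, True],
    "expert":         [True, True, True, False],  # rationale not yet described
    "stakeholder":    [True, True, True, True],
}
print(shared_understanding(example_answers))  # False: something is still missing

A negative answer from any participant points to what still needs to be described before shared understanding is reached.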

Co-creation skills and facilitation

Also known as interactional expertise.


Implementation and critique

At THL, we have developed the Opasnet workspace, which enables the kind of work described above (http://en.opasnet.org). We have also performed environmental health assessments in the workspace using probabilistic models that are open, including their source code, to the last detail. Most of the technical problems have been solved, so it is possible to start and perform new assessments as needed. However, we have also identified urgent development needs.

First, the proposed practice would change many established practices in both decision making and expert work. We have found it very difficult to convince people to try the new approach. It is clearly more time-consuming in the beginning, because there are many new things to learn compared with practices that are already routine. In addition, many people have serious doubts about whether the practice could work in reality. The most common arguments are that open participation would cause chaos; that learning the workspace with shared information objects is not worth the trouble; that the authority of expertise (or experts) would decline; and that new practices are of little interest to experts as long as a decent assessment report is produced.

Second, there are many development needs in interactional expertise. Even though there is theoretical understanding about how assessments should be done using shared information objects, little experience or practical guidance exists about how to do this in practice. The input data can be very diverse: a critical scientific review of the exposure-response function of a pollutant, a population impact estimate from an economic or environmental model, or a discussion in a public hearing. All of these data are expected to be transformed into a shared description that is in accordance with causality, critique, openness, and the other principles listed above. Research, practical exercise, and training are needed to learn interactional expertise.

The framework for knowledge-based policy making can be considered an attempt to solve the problem of managing natural and social complexity in societal decision making. It takes a broad view of decision making, covering the whole chain from obtaining the knowledge base that supports decisions to the societal outcomes of those decisions.

Within the whole, there are two major aspects that the framework emphasises. The first is the process of creating the knowledge that forms the basis of decisions and whose influence then flows downstream towards the outcomes of decisions. In the view of the framework, decision support covers both the technical dimension and the political dimension (Evans?, Collins?). Here the technical dimension refers to the expert knowledge and systematic analyses conducted by experts to provide information on the issues addressed in decision making. The political dimension refers to the discussions in which the needs and views of different societal actors are addressed and where the practical meanings of expert knowledge are interpreted. This approach is in line with the principles of open assessment described above.

The second major aspect is the top-level view of evaluating decisions. The evaluation covers all parts of the overall decision making process: decision support, decisions, implementation of decisions, and outcomes. In line with the principles of REA, the evaluation covers the phases of design, execution and follow-up of each part. When this evaluation is done for each part independently, as well as in relation to the other parts of the chain from knowledge to outcomes, all parts of the chain are evaluated from four perspectives: process, product, use, and interaction (cf. Pohjola 2001?, 2006?, 2007?).

In addition to knowledge production, the framework requires systematic development of decision making practices, so that the produced knowledge is utilized in the actual decision making and the decision making process is evaluated. It is argued that making effective changes in decision making requires more than just producing an openly created knowledge base to support it; the practices of decision making themselves need to be revised. This is another aspect of managing the complexity of issues related to decision making.

Rationale

How to include health aspects in non-health policies?

My experience is that established decision processes work reasonably well for the aspects they are designed for. Performers of environmental impact assessment can organise public hearings and include stakeholder views. A city transport department is capable of designing streets to reduce congestion or of negotiating public transport subsidies with bus companies. This is their job and they know how to do it.

But including health impacts in these processes is another matter. A city transport department has neither the resources nor the capability to assess health impacts. It is not enough that researchers know how to do it. The motivation, expertise, relevant data, and resources must meet in practice before any health assessment is done and included in a decision making process.

This is the critical question: how can all of these be made to meet in practical situations? It should be so easy that it becomes a routine and a default rather than an exception. This will not happen without two parts.

1) There must be tools for making routine health impact assessments in such a way that all generic data and knowledge is already embedded in the tool, and the decision maker or expert only has to add case-specific data to make the assessment model run.

2) The decision making process must be developed in such a way that it supports such assessments and is capable of incorporating their results into the final comparison of decision options.

Of these two, I would say that the first one is easier. We have already done a lot of work in that area, and the current proposal promises to do more. But the second one is critical, because less work has been done there and the need has not even been understood very well. Researchers cannot solve this by themselves. We have to collaborate closely with decision makers within this project as well.

Should this collaboration happen within WP assessment or WP dissemination? In any case, we should have a deliverable in which we recommend specific improvements in decision making processes to achieve these objectives.

See also

References

  1. Mikko V. Pohjola. (2015?) Assessment of impacts to health, safety, and environment in the context of materials processing and related public policy. Comprehensive Materials Processing, Volume 8. Health, Safety and Environmental issues (00814)
  2. Mikko V. Pohjola and Jouni T. Tuomisto: Openness in participation, assessment, and policy making upon issues of environment and environmental health: a review of literature and recent project results. Environmental Health 2011, 10:58 http://www.ehjournal.net/content/10/1/58.