Open policy practice
Revision as of 07:45, 4 March 2015


Open policy practice is a method to support societal decision making in an open society. One part of open policy practice is open assessment. This page should contain a detailed description of the practice, but while it is being written, please refer to pages in See also.

Question

What is open policy practice?

Answer

[Figure: Shared understanding exists when all participants understand what opinions exist, what disagreements exist, and why.]

Previous research has found that a major problem in the science-policy interface lies in the inability of current political processes to utilise scientific knowledge in societal decision making (Mikko Pohjola: Assessments are to change the world – Prerequisites to effective environmental health assessment. Doctoral dissertation. THL, 2013. http://urn.fi/URN:ISBN:978-952-245-883-4). This observation has led to the development of pragmatic guidance for closer collaboration between researchers and societal decision makers. The guidance is called open policy practice, and it was developed by the National Institute for Health and Welfare (THL) and Nordem Ltd in 2013. The main points of the practice are listed below.

Four main parts of work

The guidance focuses on the decision support part, although the whole chain of decision making (from decision identification to decision support, the actual making of the decision, implementation, and finally the outcomes of the decision) is considered throughout the process. The practice identifies four main parts of work:

  • The decision maker publishes the objectives of the decision. This is used to guide all subsequent work.
  • The execution of decision support is mostly about collecting, organising and synthesising scientific knowledge and values in order to help the decision maker reach her objectives.
  • Evaluation and management of the work (of decision support and decision making) continues all the way through the process. The focus is on evaluating whether the work produces the intended knowledge and helps to reach the objectives.
  • Interactional expertise is needed to organise and synthesise the information. This requires specific skills that are typically available neither among experts nor decision makers. It also contains specific practices and methods that may be in wide use in some areas, such as the use of probabilities for describing uncertainties, discussion rules, or quantitative modelling.

The execution of decision support may take different forms. Currently, the practices of risk assessment, health impact assessment, cost-benefit assessment, and public hearings all fall under this broad part of work. In general, the execution aims to answer this question: "What would the outcomes be if decision option X were chosen, and would they be preferable to the outcomes of other options?"

Execution

Six principles

[Figure: Open policy practice has four parts: shared understanding as the main target of the work, execution, evaluation and management, and co-creation skills and facilitation. The execution is guided by six principles (see text).]

In open policy practice, the execution strictly follows six principles. Each of them is sometimes implemented already today, but so far they have not been systematically implemented together.

  • Intentionality: All that is done aims to offer better understanding to the decision maker about outcomes of the decision.
  • Shared information objects: All information is shared using a systematic structure and a common workspace where all participants can work.
  • Causality: The focus is on understanding the causal relations between the decision options and the intended outcomes.
  • Critique: All information presented can be criticised based on relevance and accordance to observations.
  • Reuse: All information is produced in a format that can easily be used for other purposes by other people.
  • Openness: All work and all information are openly available to anyone interested. Participation is free. If there are exceptions, they must be publicly justified.

Evaluation and management

Properties of good decision support

Main article: Properties of good assessment.
Table 2. Properties of good decision support. A slightly modified version of the properties of good assessment framework.
  • Quality of content: Specificity, exactness and correctness of information; correspondence between questions and answers. Guiding questions: How exact and specific are the ideas in the assessment? How completely does the (expected) answer address the assessment question? Are all important aspects addressed? Is there something unnecessary? Suggestion: work openly and invite criticism (see Table 1).
  • Applicability (relevance): Correspondence between the output and its intended use. Guiding questions: How well does the assessment address the intended needs of the users? Is the assessment question good in relation to the purpose of the assessment? Suggestion: characterize the setting (see Table 3).
  • Applicability (availability): Accessibility of the output to users in terms of e.g. time, location, extent of information, and extent of users. Guiding question: Is the information provided by the assessment available when, where, and to whom it is needed? Suggestion: work online using e.g. Opasnet; for evaluation, see Table 4.
  • Applicability (usability): Potential of the information in the output to generate understanding among its users about the topic of the assessment. Guiding questions: Would the intended users be able to understand what the assessment is about? Would the assessment be useful for them? Suggestion: invite participation from the problem owner and user groups early on (see Table 5).
  • Applicability (acceptability): Potential of the output being accepted by its users; fundamentally a matter of its making and delivery, not its information content. Guiding question: Would the assessment (both its expected results and the way it is planned to be made) be acceptable to the intended users? Suggestion: use the test of shared understanding (see Table 6).
  • Efficiency: Resource expenditure of producing the assessment output, either in one assessment or in a series of assessments. Guiding questions: How much effort would be needed for making the assessment? Would it be worth spending the effort, considering the expected results and their applicability for the intended users? Would the assessment results be useful also in some other use? Suggestion: use shared information objects with an open license, e.g. ovariables.

Settings of assessments

Main article: heande:Assessment of impacts to environment and health in influencing manufacturing and public policy (unpublished, password-protected).
Table 3. Important settings for environmental health (and other) assessments and related public policy.[1]
Impacts
  • Example categories: Environment; Health; Other (what?)
  • Guiding questions: Which impacts are addressed in the assessment? Which impacts are most significant? Which impacts are most relevant for the intended use?
Causes
  • Example categories: Production; Consumption; Transport; Heating and power production; Everyday life
  • Guiding questions: Which causes of impacts are recognized in the assessment? Which causes of impacts are most significant? Which causes of impacts are most relevant for the intended use?
Problem owner
  • Example categories: Policy maker; Industry and business; Expert; Consumer; Public
  • Guiding questions: Who has the interest, responsibility and/or means to assess the issue? Who actually conducts the assessment? Who has the interest, responsibility and/or power to make decisions and take actions upon the issue? Who is affected by the impacts?
Target
  • Example categories: Policy maker; Industry and business; Expert; Consumer; Public
  • Guiding questions: Who are the intended users of the assessment results? Who needs the assessment results? Who can make use of the assessment results?
Interaction
  • Example categories: Isolated; Informing; Participatory; Joint; Shared
  • Guiding questions: What is the degree of openness in the assessment (and its management)? (See Table 4.) How does the assessment interact with the intended use of its results? (See Table 5.) How does the assessment interact with other actors in its context?

Dimensions of openness

Main article: Openness in participation, assessment, and policy making upon issues of environment and environmental health: a review of literature and recent project results.
Table 4. Dimensions of openness.[2]
Scope of participation: Who are allowed to participate in the process?
Access to information: What information about the issue is made available to participants?
Timing of openness: When are participants invited or allowed to participate?
Scope of contribution: To which aspects of the issue are participants invited or allowed to contribute?
Impact of contribution: How much influence are participant contributions allowed to have on the outcomes? In other words, how much weight is given to participant contributions?

One obstacle to effectively addressing the issue of effective participation may be the concept of participation itself. As long as the discourse focuses on participation, one is easily misled into considering it an independent entity with purposes, goals and values in itself, without explicitly relating it to the broader context of the processes it is intended to serve. The conceptual framework we call the dimensions of openness attempts to overcome this obstacle by considering the issue of effective participation in terms of openness in the processes of assessment and decision making.

The framework bears resemblance to, for example, the criteria for evaluating implementation of the Aarhus Convention principles by Hartley and Wood [23], the categories distinguishing a discrete set of public and stakeholder engagement options by Burgess and Clark [74], and particularly the seven categories of principles of public participation by Webler and Tuler [75]. However, whereas those were constructed for evaluating or describing existing participatory practices or designs, the dimensions of openness framework is explicitly intended to be used as checklist-type guidance to support the design and management of participatory assessment and decision making processes.

The perspective adopted in the framework can be characterized as contentual, because it primarily focuses on the issue in consideration and on describing the prerequisites to influencing it, instead of being confined to techniques and manoeuvres for executing participation events. Thereby it helps participatory assessment and decision making processes achieve their objectives and, on the other hand, provides possibilities for meaningful and effective participation. The framework does not, however, tell how participation should be arranged; it rests on the existing and continually developing knowledge base on participatory models and techniques.

While all dimensions contribute to the overall openness, it is the fifth dimension, the impact of contribution, which ultimately determines the effect on the outcome. Accordingly, it is recommended that aspects of openness in assessment and decision making processes are considered step-by-step, following the order as presented above.

Categories of interaction

Table originally from Decision analysis and risk management 2013/Homework.
Table 5. Categories of interaction within the knowledge-policy interaction framework.
Isolated: Assessment and the use of assessment results are strictly separated. Results are provided for the intended use, but users and stakeholders shall not interfere with the making of the assessment.
Informing: Assessments are designed and conducted according to the specified needs of the intended use. Users and limited groups of stakeholders may have a minor role in providing information to the assessment, but mainly serve as recipients of assessment results.
Participatory: Broader inclusion of participants is emphasized. Participation is, however, treated as an add-on alongside the actual processes of assessment and/or use of assessment results.
Joint: Involvement of, and exchange of summary-level information among, multiple actors in the scoping, management, communication and follow-up of the assessment. On the level of assessment practice, actions by different actors in different roles (assessor, manager, stakeholder) remain separate.
Shared: Different actors involved in the assessment retain their roles and responsibilities, but engage in open collaboration upon determining the assessment questions to address, finding answers to them, and implementing them in practice.

Acceptability

Main article: Shared understanding.

Acceptability can be measured with the test of shared understanding. In a decision situation, shared understanding exists when all participants of the decision support or decision making process give positive answers to the following questions.

Table 6. Acceptability according to the test of shared understanding. Each question is asked of all participants of the decision support or decision making processes.
  • Is all relevant and important information described?
  • Are all relevant and important value judgements described?
  • Are the decision maker's decision criteria described?
  • Is the decision maker's rationale from the criteria to the decision described?
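As an illustrative sketch only (not an official Opasnet tool, and all names below are hypothetical), the test can be expressed as a simple check that every participant answers every question positively:

```python
# Hypothetical sketch of the test of shared understanding.
# Shared understanding exists only if every participant answers
# "yes" to every question in Table 6.

QUESTIONS = [
    "Is all relevant and important information described?",
    "Are all relevant and important value judgements described?",
    "Are the decision maker's decision criteria described?",
    "Is the decision maker's rationale from the criteria to the decision described?",
]

def shared_understanding(answers: dict) -> bool:
    """answers maps a participant name to a {question: bool} dict.
    A missing answer counts as a negative one."""
    return all(
        participant_answers.get(q, False)
        for participant_answers in answers.values()
        for q in QUESTIONS
    )

answers = {
    "decision maker": {q: True for q in QUESTIONS},
    "expert": {q: True for q in QUESTIONS},
}
print(shared_understanding(answers))   # everyone answers yes -> True
answers["expert"][QUESTIONS[1]] = False  # an undescribed value judgement remains
print(shared_understanding(answers))   # -> False
```

Note that the test does not require agreement on the decision itself, only that all opinions, disagreements, and their reasons are described, which is why a single negative answer fails the test.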

Co-creation skills and facilitation

Also known as interactional expertise.


Implementation and critique

At THL, we have developed the workspace Opasnet, which enables the kind of work described above (http://en.opasnet.org). We have also performed environmental health assessments in the workspace using probabilistic models that are open, down to the source code, to the last detail. Most of the technical problems have been solved, so it is possible to start and perform new assessments as needed. However, we have also identified urgent development needs.

First, the proposed practice would change many established practices in both decision making and expert work. We have found it very difficult to convince people to try the new approach. It is clearly more time consuming in the beginning, because there are many new things to learn compared with routine practices. In addition, many people have serious doubts about whether the practice could work in reality. The most common arguments are that open participation would cause chaos; that learning the workspace with shared information objects is not worth the trouble; that the authority of expertise (or experts) would decline; and that new practices are of little interest to experts as long as a decent assessment report is produced.

Second, there are many development needs in interactional expertise. Even if there is theoretical understanding of how assessments should be done using shared information objects, little experience or practical guidance exists on how to do this in practice. The input data can be very versatile: a critical scientific review of an exposure-response function of a pollutant, a population impact estimate from an economic or environmental model, or a discussion in a public hearing. All of this data is expected to be transformed into a shared description that is in accordance with causality, critique, openness, and the other principles listed above. Research, practical exercise, and training are needed to learn interactional expertise.
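To make the idea of a shared information object concrete, here is a minimal, hypothetical sketch of its structure: a question, a current answer, and a rationale that anyone may criticise. This is an invented illustration of the question-answer-rationale idea, not the actual Opasnet ovariable implementation (which is written in R and is considerably richer):

```python
# Hypothetical sketch of a shared information object. It mirrors the
# question-answer-rationale structure described in the text; it is NOT
# the actual Opasnet ovariable, and the example values are invented.
from dataclasses import dataclass, field

@dataclass
class SharedInformationObject:
    question: str                                   # what is being asked
    answer: str                                     # current best synthesis
    rationale: list = field(default_factory=list)   # data, models, discussions behind the answer
    critiques: list = field(default_factory=list)   # open criticism based on relevance and observations

    def criticise(self, comment: str) -> None:
        """Anyone may attach a critique; openness requires it stays visible."""
        self.critiques.append(comment)

erf = SharedInformationObject(
    question="What is the exposure-response function of pollutant X for mortality?",
    answer="Relative risk 1.06 per 10 ug/m3 (invented value for illustration).",
    rationale=["critical scientific review", "public hearing discussion"],
)
erf.criticise("Does the review cover the most recent cohort studies?")
print(len(erf.critiques))  # 1
```

The point of the structure is reuse: because every object carries its own question, answer, and rationale, another assessment can pick up the answer and its justification without re-reading the original discussion.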

The framework for knowledge-based policy making can be considered an attempt to solve the problem of managing natural and social complexity in societal decision making. It takes a broad view of decision making, covering the whole chain from obtaining the knowledge base that supports decisions to the societal outcomes of those decisions.

Within the whole, the framework emphasises two major aspects. The first is the process of creating the knowledge that forms the basis of decisions and whose influence then flows downstream towards the outcomes of decisions. In the view of the framework, decision support covers both the technical dimension and the political dimension (Evans?, Collins?). Here the technical dimension refers to the expert knowledge and systematic analyses conducted by experts to provide information on the issues addressed in decision making. The political dimension refers to the discussions in which the needs and views of different societal actors are addressed and where the practical meanings of expert knowledge are interpreted. This approach is in line with the principles of open assessment described above.

The second major aspect is the top-level view of evaluating decisions. The evaluation covers all parts of the overall decision making process: decision support, decisions, implementation of decisions, as well as outcomes. In line with the principles of REA, the evaluation covers the phases of design, execution and follow-up of each part. When this evaluation is done for each part independently, as well as in relation to the other parts of the chain from knowledge to outcomes, all parts of the chain become evaluated from four perspectives: process, product, use, and interaction (cf. Pohjola 2001?, 2006?, 2007?).

In addition to knowledge production, the framework requires systematic development of decision making practices, so that the produced knowledge is utilized in the actual decision making and the decision making process is evaluated. Making effective changes in decision making requires more than just producing an openly created knowledge base to support it: the practices of decision making themselves need to be revised. This is another aspect of managing the complexity of issues relating to decision making.

Rationale

How to include health aspects in non-health policies?

My experience is that established decision processes work reasonably well related to aspects they are designed for. Performers of environmental impact assessment can organise public hearings and include stakeholder views. A city transport department is capable of designing streets to reduce congestion or negotiating about subsidies to public transport with bus companies. This is their job and they know how to do it.

But including health impacts in these processes is another matter. A city transport department has neither the resources nor the capability to assess health impacts. It is not enough that researchers know how to do it. The motivation, expertise, relevant data, and resources must meet in practice before any health assessment is done and included in a decision making process.

This is the critical question: how to make all these meet in practical situations? It should even be so easy that it would become a routine and a default rather than an exception. This will not happen without two parts.

1) There must be tools for making routine health impact assessments in such a way that all generic data and knowledge is already embedded in the tool, and the decision maker or expert only has to add case-specific data to make the assessment model run.

2) The decision making process must be developed in such a way that it supports such assessments and is capable of including their results into the final comparison of decision options.
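The first part, a routine assessment tool with generic knowledge embedded, can be sketched as follows. This is an invented illustration, not an actual Opasnet model: the relative risk and all input numbers are hypothetical, and only the formula (the standard population attributable fraction for a fully exposed population) is a known method.

```python
# Illustrative sketch of a routine health impact assessment tool.
# Generic knowledge -- here a hypothetical exposure-response function --
# is embedded in the tool; the user adds only case-specific data.

RR_PER_10_UGM3 = 1.06  # hypothetical relative risk per 10 ug/m3 of a pollutant

def attributable_cases(population: int,
                       background_rate: float,  # baseline cases per person per year
                       exposure_ugm3: float) -> float:
    """Cases per year attributable to the exposure, using the standard
    population attributable fraction PAF = (RR - 1) / RR for a fully
    exposed population."""
    rr = RR_PER_10_UGM3 ** (exposure_ugm3 / 10.0)
    paf = (rr - 1.0) / rr
    return population * background_rate * paf

# Case-specific data a city transport department might supply (invented numbers):
print(round(attributable_cases(population=200_000,
                               background_rate=0.01,
                               exposure_ugm3=8.0), 1))
```

The design point is the division of labour: the exposure-response function and the impact formula live in the tool and are maintained by researchers, while the decision maker or local expert supplies only the population, baseline rate, and exposure for their own case.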

Of these two, I would say that the first one is easier. We have already done a lot of work in that area and the current proposal promises to do more. But the second one is critical, because less work has been done there, and the need has not even been understood very well. Researchers cannot solve this by themselves. We have to collaborate closely with decision makers also within this project.

Should this collaboration happen within WP assessment or WP dissemination? In any case, we should have a deliverable in which we recommend specific improvements in decision making processes to achieve these objectives.

See also

References

  1. Mikko V. Pohjola. (2015?) Assessment of impacts to health, safety, and environment in the context of materials processing and related public policy. Comprehensive Materials Processing, Volume 8: Health, Safety and Environmental Issues (00814).
  2. Mikko V. Pohjola and Jouni T. Tuomisto. Openness in participation, assessment, and policy making upon issues of environment and environmental health: a review of literature and recent project results. Environmental Health 2011, 10:58. http://www.ehjournal.net/content/10/1/58