Open policy practice

From Testiwiki
Revision as of 13:13, 26 January 2015 by Jouni (talk | contribs) (Answer: figures added)


Open policy practice is a method to support societal decision making in an open society. One part of open policy practice is open assessment. This page should contain a detailed description of the practice, but while it is being written, please refer to pages in See also.

Question

What is open policy practice?

Answer

[Figure: Shared understanding exists when all participants understand what opinions exist, what disagreements exist, and why.]

Previous research has found that a major problem at the science-policy interface lies in the inability of current political processes to utilise scientific knowledge in societal decision making (Mikko Pohjola: Assessments are to change the world – Prerequisites to effective environmental health assessment. Doctoral dissertation. THL, 2013. http://urn.fi/URN:ISBN:978-952-245-883-4). This observation has led to the development of pragmatic guidance for closer collaboration between researchers and societal decision makers. The guidance is called open policy practice, and it was developed by the National Institute for Health and Welfare (THL) and Nordem Ltd in 2013. The main points of the practice are listed below.

Four main parts of work

The guidance focuses on the decision support part, although the whole chain of decision making, from decision identification to decision support, the actual making of the decision, implementation, and finally the outcomes of the decision, is considered throughout the process. The practice identifies four main parts of work:

  • The decision maker publishes the objectives of the decision. This is used to guide all subsequent work.
  • The execution of decision support is mostly about collecting, organising, and synthesising scientific knowledge and values in order to help the decision maker reach her objectives.
  • Evaluation and management of the work (of decision support and decision making) continues all the way through the process. The focus is on evaluating whether the work produces the intended knowledge and helps to reach the objectives.
  • Interactional expertise is needed to organise and synthesise the information. This requires specific skills that are typically available neither among experts nor decision makers.

The execution of decision support may take different forms. Currently, the practices of risk assessment, health impact assessment, cost-benefit assessment, and public hearings all fall under this broad part of work. In general, the execution aims to answer the question: "What would the outcomes be if decision option X were chosen, and would they be preferable to the outcomes of the other options?"
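As an illustration, this core question of decision support can be sketched as a comparison of predicted outcomes across options. The option names, the toy outcome model, and every number below are hypothetical placeholders, not results of any actual assessment:

```python
# Hypothetical sketch: compare decision options by their predicted outcomes.
# Option names, the outcome model, and all numbers are invented for illustration.

def predicted_outcomes(option):
    """Toy outcome model: each option maps to a predicted health cost and benefit."""
    models = {
        "business as usual": {"health_cost": 100.0, "benefit": 0.0},
        "option X":          {"health_cost": 60.0,  "benefit": 25.0},
        "option Y":          {"health_cost": 80.0,  "benefit": 40.0},
    }
    return models[option]

def net_outcome(option):
    """Net outcome used to compare options (benefit minus health cost)."""
    o = predicted_outcomes(option)
    return o["benefit"] - o["health_cost"]

# Rank the options by net outcome, best first.
options = ["business as usual", "option X", "option Y"]
ranking = sorted(options, key=net_outcome, reverse=True)
print(ranking)  # best option first
```

In a real assessment the outcome model would of course be an uncertain causal model rather than a lookup table, but the shape of the question is the same: predict outcomes per option, then compare.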

Six principles

[Figure: Open policy practice has four parts: shared understanding as the main target of the work, execution, evaluation and management, and co-creation skills and facilitation. The execution is guided by six principles (see text).]

In open policy practice, the execution strictly follows six principles. Each of them is already occasionally implemented today, but so far they have not been implemented systematically together.

  • Intentionality: All that is done aims to offer better understanding to the decision maker about outcomes of the decision.
  • Shared information objects: All information is shared using a systematic structure and a common workspace where all participants can work.
  • Causality: The focus is on understanding the causal relations between the decision options and the intended outcomes.
  • Critique: All information presented can be criticised based on its relevance and accordance with observations.
  • Reuse: All information is produced in a format that can easily be used for other purposes by other people.
  • Openness: All work and all information is openly available to anyone interested. Participation is free. If there are exceptions, these must be publicly justified.

Implementation and critique

At THL, we have developed the workspace Opasnet, which enables the kind of work described above (http://en.opasnet.org). We have also performed environmental health assessments in the workspace using probabilistic models that are open, down to the source code, to the last detail. Most of the technical problems have been solved, so it is possible to start and perform new assessments as needed. However, we have also identified urgent development needs.

First, the proposed practice would change many established practices in both decision making and expert work. We have found it very difficult to convince people to try the new approach. Compared with routine practices, it is clearly more time-consuming in the beginning because there is a lot to learn. In addition, many people have serious doubts about whether the practice could work in reality. The most common arguments are that open participation would cause chaos; that learning the workspace with shared information objects is not worth the trouble; that the authority of expertise (or experts) would decline; and that new practices are of little interest to experts as long as a decent assessment report is produced.

Second, there are many development needs in interactional expertise. Even if there is theoretical understanding of how assessments should be done using shared information objects, little experience or practical guidance exists on how to do this in practice. The input data can be very diverse: a critical scientific review of an exposure-response function of a pollutant, a population impact estimate from an economic or environmental model, or a discussion in a public hearing. All of these data are expected to be transformed into a shared description that accords with causality, critique, openness, and the other principles listed above. Research, practical exercises, and training are needed to learn interactional expertise.
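One minimal reading of such a shared description can be sketched in code. The class and fields below follow the question-answer-rationale structure of open assessment, but the data model itself is illustrative, not Opasnet's actual implementation:

```python
# Illustrative sketch of a shared information object. The fields follow the
# question-answer-rationale idea of open assessment; the class itself is
# hypothetical and not Opasnet's actual data model.
from dataclasses import dataclass, field

@dataclass
class InformationObject:
    question: str                                  # the research question the object answers
    answer: str                                    # current best answer, open to critique
    rationale: list = field(default_factory=list)  # arguments for and against the answer
    sources: list = field(default_factory=list)    # reviews, models, hearings, data

    def add_critique(self, argument: str):
        """Any participant may attach a critique; it stays with the object."""
        self.rationale.append(argument)

# Very different inputs end up in one shared structure:
obj = InformationObject(
    question="What is the exposure-response function of pollutant P?",
    answer="Roughly linear over the observed range (illustrative claim).",
    sources=["critical scientific review", "public hearing discussion"],
)
obj.add_critique("The linearity assumption is weak at low exposures.")
print(len(obj.rationale))  # 1
```

The point of the structure is that a scientific review and a public hearing contribution land in the same object and face the same critique, rather than living in separate documents.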

The framework for knowledge-based policy making can be seen as an attempt to solve the problem of managing natural and social complexity in societal decision making. It takes a broad view of decision making, covering the whole chain from obtaining the knowledge base that supports decisions to the societal outcomes of those decisions.

Within the whole, the framework emphasises two major aspects. The first is the process of creating the knowledge that forms the basis of decisions and whose influence then flows downstream towards the outcomes of decisions. In the view of the framework, decision support covers both the technical dimension and the political dimension (Evans?, Collins?). Here the technical dimension refers to the expert knowledge and systematic analyses conducted by experts to provide information on the issues addressed in decision making. The political dimension refers to the discussions in which the needs and views of different societal actors are addressed and where the practical meanings of expert knowledge are interpreted. This approach is in line with the principles of open assessment described above.

The second major aspect is the top-level view of evaluating decisions. The evaluation covers all parts of the overall decision-making process: decision support, decisions, implementation of decisions, and outcomes. In line with the principles of REA, the evaluation covers the phases of design, execution, and follow-up of each part. When this evaluation is applied to each part independently as well as in relation to the other parts of the chain from knowledge to outcomes, all parts of the chain are evaluated from four perspectives: process, product, use, and interaction (cf. Pohjola 2001?, 2006?, 2007?).

In addition to knowledge production, the framework requires a systematic development of decision-making practices in which the produced knowledge is utilised in actual decision making and the decision-making process is evaluated. Making effective changes in decision making requires more than producing an openly created knowledge base to support it: the practices of decision making themselves must be revised. This is another aspect of managing the complexity of issues related to decision making.

Rationale

How to include health aspects in non-health policies?

My experience is that established decision processes work reasonably well for the aspects they are designed for. Performers of environmental impact assessment can organise public hearings and include stakeholder views. A city transport department is capable of designing streets to reduce congestion or negotiating subsidies for public transport with bus companies. This is their job, and they know how to do it.

But including health impacts into these processes is another matter. A city transport department has neither resources nor capability to assess health impacts. It is not enough that researchers would know how to do it. The motivation, expertise, relevant data, and resources must meet in practice before any health assessment is done and included in a decision making process.

This is the critical question: how can all of these be made to meet in practical situations? It should be so easy that it becomes a routine and a default rather than an exception. This will not happen without two parts.

1) There must be tools for making routine health impact assessments in such a way that all generic data and knowledge is already embedded in the tool, and the decision maker or expert only has to add case-specific data to make the assessment model run.

2) The decision making process must be developed in such a way that it supports such assessments and is capable of including their results into the final comparison of decision options.
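The first part can be sketched as a small probabilistic model in which the generic knowledge, here an exposure-response function and its uncertainty, is embedded in the tool, and only the case-specific data must be supplied. Every number, distribution, and parameter name below is a hypothetical placeholder:

```python
# Sketch of a routine health impact tool: generic knowledge is embedded,
# the user supplies only case-specific data. All numbers are hypothetical.
import random

# --- Generic, embedded part (would ship with the tool) ---
def sample_exposure_response(rng):
    """Uncertain exposure-response slope: extra cases per person per unit exposure."""
    return rng.gauss(mu=1e-4, sigma=2e-5)  # hypothetical distribution

# --- Case-specific part (supplied by the decision maker or expert) ---
def assess(population, mean_exposure, n_iter=10_000, seed=1):
    """Monte Carlo estimate of attributable cases with a 95% interval."""
    rng = random.Random(seed)
    samples = [
        population * mean_exposure * max(sample_exposure_response(rng), 0.0)
        for _ in range(n_iter)
    ]
    samples.sort()
    return {
        "mean": sum(samples) / n_iter,
        "ci95": (samples[int(0.025 * n_iter)], samples[int(0.975 * n_iter)]),
    }

result = assess(population=200_000, mean_exposure=8.5)
print(result["mean"])  # roughly 170 attributable cases with these made-up inputs
```

The design point is the split: the uncertain slope is maintained once, centrally, while each case only contributes its own population and exposure figures, which is what would make routine use feasible.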

Of these two, I would say that the first one is easier. We have already done a lot of work in that area and the current proposal promises to do more. But the second one is critical, because less work has been done there, and the need has not even been understood very well. Researchers cannot solve this by themselves. We have to collaborate closely with decision makers also within this project.

Should this collaboration happen within WP assessment or WP dissemination? In any case, we should have a deliverable in which we recommend specific improvements in decision-making processes to achieve these objectives.

See also