Open policy practice

From Testiwiki
Revision as of 09:29, 5 March 2015 by Jouni (talk | contribs) (updates here and there)


Open policy practice is a method to support societal decision making in an open society. One part of open policy practice is open assessment. This page should contain a detailed description of the practice, but while it is being written, please refer to pages in See also.

Question

What is the open policy practice method such that it

  • applies open assessment as the main knowledge producing process,
  • gives practical guidance for the whole decision process from initiation to decision support to actual decision making to implementation and finally to outcomes,
  • is applicable to all kinds of societal decision situations in any administrative area or discipline?

Answer

Shared understanding exists when all participants understand what opinions exist, what disagreements exist, and why.

Previous research has found that a major problem in the science-policy interface lies in the inability of current political processes to utilise scientific knowledge in societal decision making (Mikko Pohjola: Assessments are to change the world – Prerequisites to effective environmental health assessment. Doctoral dissertation. THL, 2013. http://urn.fi/URN:ISBN:978-952-245-883-4). This observation has led to the development of pragmatic guidance for closer collaboration between researchers and societal decision making. The guidance is called open policy practice, and it was developed by the National Institute for Health and Welfare (THL) and Nordem Ltd in 2013. The main points of the practice are listed below.

Four main parts of work

The guidance focuses on the decision support part, although the whole chain of decision making, from decision identification to decision support, the actual making of the decision, implementation, and finally the outcomes of the decision, is considered throughout the process. The practice identifies four main parts of work:

  • The objective of open policy practice is to produce shared understanding about the decision at hand, so that all participants can understand what decision options are considered, what outcomes are of interest, what objectives are pursued, what facts, opinions, and disagreements exist, and finally why a particular decision option was selected.
  • The execution of decision support is mostly about collecting, organising, and synthesising scientific knowledge and values in order to inform the decision maker so that she can reach her objectives.
  • Evaluation and management of the work (of decision support and decision making) continues all the way through the process. The focus is on evaluating whether the work produces the intended knowledge and helps to reach the objectives.
  • Co-creation skills and facilitation (also known as interactional expertise) are needed to organise and synthesise the information. This requires specific skills that are typically available neither among experts nor among decision makers. This part of work also contains specific practices and methods that may be in wide use in some areas, such as the use of probabilities for describing uncertainties, discussion rules, or quantitative modelling.

The execution of decision support may take different forms. Currently, the practices of risk assessment, health impact assessment, cost-benefit assessment, or public hearings all fall under this broad part of work. In general, the execution aims to answer these questions: "What would the outcomes be if decision option X was chosen, and would that be preferable to, or clearly worse than, outcomes of other options?"
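The comparison question above can be sketched in code. This is a minimal illustration, not part of open policy practice itself: the option names, outcome values, and weights below are invented for the example.

```python
# A minimal sketch of the core decision-support comparison: for each
# decision option, predict the outcomes of interest and compare them.
# All option names, outcome values, and weights are hypothetical.

def compare_options(options, weights):
    """Score each option as a weighted sum of its predicted outcomes.

    options: {option_name: {outcome_name: predicted_value}}
    weights: {outcome_name: importance weight (negative = harmful)}
    """
    scores = {
        name: sum(weights[o] * v for o, v in outcomes.items())
        for name, outcomes in options.items()
    }
    best = max(scores, key=scores.get)
    return scores, best

# Hypothetical transport-policy example: two options, two outcomes.
options = {
    "bus_subsidy": {"health_gain": 5.0, "cost_meur": 2.0},
    "new_road":    {"health_gain": 1.0, "cost_meur": 4.0},
}
weights = {"health_gain": 1.0, "cost_meur": -0.5}  # cost counts against an option

scores, best = compare_options(options, weights)
print(best)  # the option whose predicted outcomes score highest
```

In a real assessment the predicted values would of course be uncertain quantities and the weights would come from the decision maker's explicated objectives; the point is only that execution turns "what would the outcomes be?" into an explicit comparison.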

Execution

Six principles

In open policy practice, the execution strictly follows six principles: intentionality, shared information objects, causality, critique, openness, and reuse. Each of them is already implemented here and there today, but so far they have not been implemented systematically together. The Opasnet web workspace was developed to support all these principles in practical work.

Table 1. Six principles of execution of open policy practice.

Intentionality: The decision maker explicates her objectives and the decision options under consideration. Everything that is done aims to offer better understanding about the outcomes of the decision in relation to the objectives of the decision maker. Thus, the participation of the decision maker in the policy support process and the explication of objectives are crucial.

Causality: The focus is on understanding and describing the causal relations between the decision options and the intended outcomes. The aim is to predict what outcomes will occur if a particular decision option is chosen.

Critique: All information presented can be criticised based on relevance and accordance with observations. The aim is to reject ideas, hypotheses, and ultimately decision options that do not hold against criticism. Critique has a central role in the scientific method, and here we apply it in practical situations, because rejecting poor statements is much easier and more efficient than trying to prove statements true.

Shared information objects: All information is shared using a systematic structure and a common workspace where all participants can work. The structure is based on substance rather than on persons, organisations, or processes (e.g. people should not use personal folders to store data). Objectives determine the information needs, which are then used to define research questions to be answered in the assessment. The assessment work is collaboration aiming to answer these questions in a way that holds against critique.

Openness: All work and all information are openly available to anyone interested. Participation is free. If there are exceptions, these must be publicly justified. Openness is crucial because a priori it is impossible to know who may have important information or value judgements about the topic.

Reuse: All information is produced in a format that can easily be used for other purposes by other people. Reuse is facilitated by shared information objects and openness, but there are specific requirements for reuse that are not covered by the two previous principles alone. For example, some formats such as PDF files can be open and shared, but effective reuse requires better formats such as Opasnet variable pages.
Open policy practice has four parts: shared understanding as the main target of the work, execution, evaluation and management, and co-creation skills and facilitation. The execution is guided by six principles (see text).

These principles aim to ensure that the decision support is justified, acceptable, and based on the best available information. Open policy practice helps in collecting expert knowledge, local information, and opinions, so that they are combined according to the needs of decision making. Thus, the implementation brings together

  1. information pull from decision making and
  2. information push from decision support (typically in the form of an environmental health assessment).

In other words, open policy practice combines the so-called technical and political dimensions of decision making,[1] which are usually kept separate. During a decision support process, these dimensions have different weights. In the beginning, the objectives and information needs of the decision makers (the political dimension) are in focus. When these have been clarified, the focus moves to expert-driven research and thus to the technical dimension. Finally, the interpretation of results is mostly about opinions and values and thus belongs to the political dimension.

Examples and clarifications

Intentionality

The intentionality is achieved by recognising the objectives of the decision maker and the information needs arising from them. The information needs are formulated as research questions that are then answered during the decision support process. The objectives and questions themselves can also be openly discussed and criticised. In open assessment, this work produces the attributes of the assessment scope, starting with the research question.

The intentionality of collaborative preparation is implemented by identifying, based on the objectives of decision making, the information needs of the decision makers, and by formulating corresponding questions to guide the impact assessments conducted as part of the preparation. In collaborative preparation, the assessment questions and the objectives of decision making are also subject to open critique and discussion. In Opasnet, the objectives of the preparation are described by defining the scope of the impact assessment according to the assessment structure. In impact assessments supporting societal decision making, a sensible framing of the question is one where different decision options are identified and examined in the assessment, and the various impacts related to them are evaluated.
Although identifying and publishing the objectives is in principle quite a simple measure, it is a very important prerequisite for knowledge-based decision making. Clear objective setting can sometimes be difficult and may require answering many awkward why questions. It is primarily a question of the decision makers' attitude towards, and commitment to, transparent practice in societal decision making. At the same time, transparency creates opportunities for experts, stakeholders, and citizens to influence decision making with justified claims.

Causality

The description of causal relations determines how the impact assessments conducted as part of decision preparation are structured. The phenomena of interest in the assessments and the factors affecting them, i.e. the various so-called variables, should be defined and described as networks in which the variables are in cause-effect relationships with each other. In Opasnet, the dependencies of each variable, i.e. the other variables affecting it, are described as part of the variable's information structure (see Variable). In addition, it is often useful to describe the variables included in each assessment on the assessment page itself, e.g. by means of an impact diagram (see e.g. op_fi:Jatropan käyttö bioenergian lähteenä).
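The causality principle can be illustrated with a small causal network in which each variable is computed from its upstream dependencies. This is a hypothetical Python sketch, not Opasnet code; the variable names, formulas, and coefficients are invented.

```python
# Sketch of the causality principle: variables form a causal network,
# and each variable's value is computed from its upstream dependencies.

def evaluate(network, name, cache=None):
    """Evaluate a variable by recursing over its parent variables.

    network: {variable: (list_of_parent_names, function_of_parent_values)}
    """
    if cache is None:
        cache = {}
    if name not in cache:
        parents, func = network[name]
        cache[name] = func(*(evaluate(network, p, cache) for p in parents))
    return cache[name]

# Hypothetical mini-assessment: emissions -> exposure -> health effect.
network = {
    "emissions": ([], lambda: 100.0),                  # tonnes/year
    "exposure":  (["emissions"], lambda e: 0.02 * e),  # ug/m3 per tonne
    "cases":     (["exposure"], lambda x: 1.5 * x),    # attributable cases
}
print(evaluate(network, "cases"))
```

Changing a decision option would mean changing an upstream variable (e.g. emissions) and re-evaluating the downstream outcomes, which is exactly the prediction task the principle describes.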

Critique

In collaborative preparation, the principle of critique is implemented so that all statements are considered true until proven otherwise. Statements and arguments are examined, i.e. either rejected or accepted for the time being, based on two criteria: relevance and correspondence with observations (evidence). Participants in collaborative preparation are thus expected to be willing not only to make their knowledge public, but also to justify and defend their statements. For numerical data, the principle of critique guides towards taking uncertainty into account and presenting numerical estimates as probability distributions. According to the principle, the re-presentation and repetition of statements that have already been shown to be invalid need not be taken into account, so collaborative preparation does not get stuck in discussion-forum-style bickering but can proceed by examining new statements and revising rejected ones.
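The rule that a statement stands until it is defeated by an attack that itself withstands criticism can be sketched as a tiny recursive check. This is an illustrative simplification (it assumes the attack graph is acyclic), and the example statements are invented.

```python
# Sketch of the critique principle: a statement is accepted (stands)
# unless some attacking statement itself stands. Assumes an acyclic
# attack graph; statement texts are hypothetical.

def stands(attacks, statement):
    """A statement stands if none of its attackers stands."""
    return not any(stands(attacks, a) for a in attacks.get(statement, []))

# Hypothetical discussion: claim A is attacked by B; B is attacked by C.
attacks = {
    "A: exposure causes the effect": ["B: the study was confounded"],
    "B: the study was confounded":   ["C: confounders were adjusted for"],
}
print(stands(attacks, "A: exposure causes the effect"))
```

Here the attack on A is itself defeated, so A remains accepted; this mirrors the idea that rejected statements drop out of the discussion instead of being endlessly repeated.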

Shared information objects

To ensure that the work done in collaborative preparation serves its objectives, it is directed to focus on the issue so that the related information is collected and organised in a place shared by all. In this way the shared description of the issue, in Opasnet an openly viewable and editable impact assessment, organises the work of the participants to produce information that is useful for decision making and structured according to its objectives. The information structure of Opasnet, including the wiki, the database, the computational features, and the standardised object types and their information structures, is designed to promote focused collaboration among participants who examine the issue from many different perspectives, such as decision makers, experts, and other interested parties.

Openness

In collaborative preparation there is in principle no need to forbid anyone from sharing their knowledge and opinions with the other participants; instead, all statements, arguments, and positions are welcome. This, however, poses challenges for collecting and examining the participants' contributions. The previous principles and their implementations answer some of these challenges, but means and effort are also needed to obtain the participants' contributions, convey them to the shared description, and organise them as part of it. In Opasnet, participants can directly edit assessments or the variables belonging to them, discuss their content on discussion pages, or present their views using the comment boxes on the pages. In addition, contributions can be other written or oral statements that some participant conveys into the preparation. Whichever way the contributions arrive, it is almost always necessary to do specific synthesis work to collect and organise them and to incorporate them into the shared description. One way to do this synthesis work in Opasnet is to structure the statements, arguments, and opinions obtained from different parties into a discussion according to pragma-dialectical argumentation.

Reuse

For the information produced in impact assessments and other decision preparation to be as reusable as possible, it must be built modularly, i.e. so that it can be split into independent parts that can be recombined with other parts. In addition, the information must of course be available wherever and whenever it happens to be needed. In Opasnet, variables are independent objects described in an open system, so they are easily reusable. Moreover, the question-answer-rationale structure repeated in Opasnet objects makes variables and assessments easier to interpret and produce, thus further promoting reusability.
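The question-answer-rationale structure that Opasnet objects repeat can be sketched as a simple data type. The field names follow the text; the class itself and the example content are hypothetical, not the actual Opasnet data model.

```python
# Sketch of a reusable information object with the question-answer-
# rationale structure described in the text. Hypothetical, illustrative.

from dataclasses import dataclass, field

@dataclass
class Variable:
    name: str
    question: str
    answer: str
    rationale: str
    dependencies: list = field(default_factory=list)  # upstream variable names

pm_exposure = Variable(
    name="pm_exposure",
    question="What is the population exposure to fine particles?",
    answer="Estimated as a probability distribution from monitoring data.",
    rationale="Dispersion modelling combined with population density.",
    dependencies=["pm_emissions"],
)
print(pm_exposure.dependencies)  # reuse happens by linking to variables by name
```

Because each object answers one explicit question and names its dependencies, another assessment can pick it up as a module without importing the whole original assessment.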

Evaluation and management

Properties of good decision support

Main article: Properties of good assessment.
Table 2. Properties of good decision support. A slightly modified version of the properties of good assessment framework.

Quality of content
  Description: Specificity, exactness, and correctness of information; correspondence between questions and answers.
  Guiding questions: How exact and specific are the ideas in the assessment? How completely does the (expected) answer address the assessment question? Are all important aspects addressed? Is there something unnecessary?
  Suggestion by open policy practice: Work openly and invite criticism (see Table 1).

Applicability: Relevance
  Description: Correspondence between the output and its intended use.
  Guiding questions: How well does the assessment address the intended needs of the users? Is the assessment question good in relation to the purpose of the assessment?
  Suggestion by open policy practice: Characterize the setting (see Table 3).

Applicability: Availability
  Description: Accessibility of the output to users in terms of e.g. time, location, extent of information, and extent of users.
  Guiding questions: Is (or would be) the information provided by the assessment available when, where, and to whom it is needed?
  Suggestion by open policy practice: Work online using e.g. Opasnet. For evaluation, see Table 4.

Applicability: Usability
  Description: Potential of the information in the output to generate understanding among its users about the topic of assessment.
  Guiding questions: Would the intended users be able to understand what the assessment is about? Would the assessment be useful for them?
  Suggestion by open policy practice: Invite participation from the problem owner and user groups early on (see Table 5).

Applicability: Acceptability
  Description: Potential of the output being accepted by its users; fundamentally a matter of its making and delivery, not its information content.
  Guiding questions: Would the assessment (both its expected results and the way it is planned to be made) be acceptable to the intended users?
  Suggestion by open policy practice: Use the test of shared understanding (see Table 6).

Efficiency
  Description: Resource expenditure of producing the assessment output, either in one assessment or in a series of assessments.
  Guiding questions: How much effort would be needed for making the assessment? Would it be worth spending the effort, considering the expected results and their applicability for the intended users? Would the assessment results be useful also in some other use?
  Suggestion by open policy practice: Use shared information objects with an open license, e.g. ovariables.

Settings of assessments

Main article: Assessment of impacts to environment and health in influencing manufacturing and public policy.
Table 3. Important settings for environmental health (and other) assessments and related public policy.[2]
Attribute Example categories Guiding questions
Impacts
  • Environment
  • Health
  • Other (what?)
  • Which impacts are addressed in assessment?
  • Which impacts are most significant?
  • Which impacts are most relevant for the intended use?
Causes
  • Production
  • Consumption
  • Transport
  • Heating, Power production
  • Everyday life
  • Which causes of impacts are recognized in assessment?
  • Which causes of impacts are most significant?
  • Which causes of impacts are most relevant for the intended use?
Problem owner
  • Policy maker
  • Industry, Business
  • Expert
  • Consumer
  • Public
  • Who has the interest, responsibility and/or means to assess the issue?
  • Who actually conducts the assessment?
  • Who has the interest, responsibility and/or power to make decisions and take actions upon the issue?
  • Who are affected by the impacts?
Target
  • Policy maker
  • Industry, Business
  • Expert
  • Consumer
  • Public
  • Who are the intended users of assessment results?
  • Who needs the assessment results?
  • Who can make use of the assessment results?
Interaction
  • Isolated
  • Informing
  • Participatory
  • Joint
  • Shared
  • What is the degree of openness in assessment (and management)? (See Table 4.)
  • How does assessment interact with the intended use of its results? (See Table 5.)
  • How does assessment interact with other actors in its context?

Dimensions of openness

Main article: Openness in participation, assessment, and policy making upon issues of environment and environmental health: a review of literature and recent project results.
Table 4. Dimensions of openness.[3]
Scope of participation: Who are allowed to participate in the process?
Access to information: What information about the issue is made available to participants?
Timing of openness: When are participants invited or allowed to participate?
Scope of contribution: To which aspects of the issue are participants invited or allowed to contribute?
Impact of contribution: How much influence are participant contributions allowed to have on the outcomes? In other words, how much weight is given to participant contributions?

One obstacle to effectively addressing the issue of effective participation may be the concept of participation itself. As long as the discourse focuses on participation, one is easily misled into considering it an independent entity with purposes, goals, and values in itself, without explicitly relating it to the broader context of the processes it is intended to serve. The conceptual framework we call the dimensions of openness attempts to overcome this obstacle by considering the issue of effective participation in terms of openness in the processes of assessment and decision making.

The framework bears resemblance to, e.g., the criteria for evaluating implementation of the Aarhus Convention principles by Hartley and Wood [23], the categories for distinguishing a discrete set of public and stakeholder engagement options by Burgess and Clark [74], and particularly the seven categories of principles of public participation by Webler and Tuler [75]. However, whereas those were constructed for evaluating or describing existing participatory practices or designs, the dimensions of openness framework is explicitly intended as checklist-type guidance to support the design and management of participatory assessment and decision making processes.

The perspective adopted in the framework can be characterized as contentual, because it primarily focuses on the issue under consideration and on describing the prerequisites for influencing it, instead of being confined to techniques and manoeuvres for executing participation events. Thereby it helps participatory assessment and decision making processes achieve their objectives, and on the other hand it provides possibilities for meaningful and effective participation. The framework does not, however, tell how participation should be arranged, but rests on the existing and continually developing knowledge base on participatory models and techniques.

While all dimensions contribute to the overall openness, it is the fifth dimension, the impact of contribution, that ultimately determines the effect on the outcome. Accordingly, it is recommended that aspects of openness in assessment and decision making processes be considered step by step, in the order presented above.
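One way to use the five dimensions as the recommended step-by-step checklist is to record a design decision for each dimension, in order, and list the dimensions still undecided. This is a minimal sketch; the example answers are invented.

```python
# The five dimensions of openness used as a design checklist, in the
# recommended order. The recorded answers below are hypothetical.

OPENNESS_DIMENSIONS = [
    "scope of participation",
    "access to information",
    "timing of openness",
    "scope of contribution",
    "impact of contribution",
]

def openness_checklist(answers):
    """Return the dimensions, in order, that still lack a design decision."""
    return [d for d in OPENNESS_DIMENSIONS if not answers.get(d)]

answers = {
    "scope of participation": "anyone interested",
    "access to information": "all material on a public wiki",
}
print(openness_checklist(answers))  # dimensions still to be decided
```

Working through the list in order means the decisive fifth dimension, the impact of contribution, is settled last, with the earlier choices already explicit.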

Categories of interaction

Table originally from Decision analysis and risk management 2013/Homework.
Table 5. Categories of interaction within the knowledge-policy interaction framework.

Isolated: Assessment and use of assessment results are strictly separated. Results are provided to the intended use, but users and stakeholders shall not interfere with the making of the assessment.
Informing: Assessments are designed and conducted according to the specified needs of the intended use. Users and limited groups of stakeholders may have a minor role in providing information to the assessment, but mainly serve as recipients of assessment results.
Participatory: Broader inclusion of participants is emphasized. Participation is, however, treated as an add-on alongside the actual processes of assessment and/or use of assessment results.
Joint: Involvement of and exchange of summary-level information among multiple actors in scoping, management, communication, and follow-up of assessment. On the level of assessment practice, actions by different actors in different roles (assessor, manager, stakeholder) remain separate.
Shared: Different actors involved in assessment retain their roles and responsibilities, but engage in open collaboration upon determining the assessment questions to address and finding answers to them, as well as implementing them in practice.

Acceptability

Main article: Shared understanding.

Acceptability can be measured with a test of shared understanding. In a decision situation, shared understanding exists when all participants of the decision support or decision making process give positive answers to the following questions.

Table 6. Acceptability according to the test of shared understanding. Each question is asked of all participants of the decision support or decision making processes.

  • Is all relevant and important information described?
  • Are all relevant and important value judgements described?
  • Are the decision maker's decision criteria described?
  • Is the decision maker's rationale from the criteria to the decision described?
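The test itself is mechanical: shared understanding exists only if every participant answers yes to all four questions of Table 6. A minimal sketch with invented participants:

```python
# Sketch of the test of shared understanding: all participants must
# answer yes to all four questions. Participants and answers are
# hypothetical.

QUESTIONS = [
    "Is all relevant and important information described?",
    "Are all relevant and important value judgements described?",
    "Are the decision maker's decision criteria described?",
    "Is the decision maker's rationale from the criteria to the decision described?",
]

def shared_understanding(answers):
    """answers: {participant: {question: bool}} -> True if all say yes to all."""
    return all(
        all(person.get(q, False) for q in QUESTIONS)
        for person in answers.values()
    )

answers = {
    "decision maker": {q: True for q in QUESTIONS},
    "stakeholder":    {q: True for q in QUESTIONS},
}
print(shared_understanding(answers))  # True only when no one dissents
```

A single missing or negative answer from any participant fails the test, which is exactly why the test can locate where shared understanding breaks down.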

Co-creation skills and facilitation

Also known as interactional expertise.


Implementation and critique

In THL, we have developed the workspace Opasnet, which enables the kind of work described above (http://en.opasnet.org). We have also performed environmental health assessments in the workspace using probabilistic models that are open to the last detail, including open source code. Most of the technical problems have been solved, so new assessments can be started and performed as needed. However, we have also identified urgent development needs.

First, the proposed practice would change many established practices in both decision making and expert work. We have found it very difficult to convince people to try the new approach. It is clearly more time consuming in the beginning, because there are many new things to learn compared with routine practices. In addition, many people have serious doubts about whether the practice could work in reality. The most common arguments are that open participation would cause chaos; that learning the workspace with shared information objects is not worth the trouble; that the authority of expertise (or experts) would decline; and that new practices are of little interest to experts as long as a decent assessment report is produced.

Second, there are many development needs in interactional expertise. Even if there is theoretical understanding about how assessments should be done using shared information objects, little experience or practical guidance exists about how to do this in practice. The input data can be very diverse: for example, a critical scientific review of an exposure-response function of a pollutant, a population impact estimate from an economic or environmental model, or a discussion in a public hearing. All of this is expected to be transformed into a shared description that is in accordance with causality, critique, openness, and the other principles listed above. Research, practical exercise, and training are needed to learn interactional expertise.

The framework for knowledge-based policy making can be considered an attempt to solve the problem of managing natural and social complexity in societal decision making. It takes a broad view of decision making, covering the whole chain from obtaining the knowledge base that supports decisions to the societal outcomes of the decisions.

Within the whole, the framework emphasises two major aspects. The first is the process of creating the knowledge that forms the basis of decisions and whose influence then flows downstream towards the outcomes of the decisions. In the view of the framework, decision support covers both the technical and the political dimension of decision making (Evans?, Collins?). Here the technical dimension refers to the expert knowledge and systematic analyses conducted by experts to provide information on the issues addressed in decision making. The political dimension refers to the discussions in which the needs and views of different societal actors are addressed and where the practical meanings of expert knowledge are interpreted. This approach is in line with the principles of open assessment described above.

The second major aspect is the top-level view of evaluating decisions. The evaluation covers all parts of the overall decision making process: decision support, decisions, implementation of decisions, as well as outcomes. In line with the principles of REA, the evaluation covers the phases of design, execution, and follow-up of each part. When this evaluation is done for each part independently, as well as in relation to the other parts of the chain from knowledge to outcomes, all parts of the chain become evaluated from four perspectives: process, product, use, and interaction (cf. Pohjola 2001?, 2006?, 2007?).

In addition to knowledge production, the framework requires systematic development of decision making practices, in which the produced knowledge is utilized in actual decision making and the decision making process is evaluated. It is argued that making effective changes in decision making requires more than just producing an openly created knowledge base to support it: the practices of decision making themselves need to be revised. This is another aspect of managing the complexity of issues related to decision making.

Rationale

How to include health aspects in non-health policies?

My experience is that established decision processes work reasonably well for the aspects they are designed for. Performers of environmental impact assessment can organise public hearings and include stakeholder views. A city transport department is capable of designing streets to reduce congestion or negotiating subsidies to public transport with bus companies. This is their job and they know how to do it.

But including health impacts into these processes is another matter. A city transport department has neither resources nor capability to assess health impacts. It is not enough that researchers would know how to do it. The motivation, expertise, relevant data, and resources must meet in practice before any health assessment is done and included in a decision making process.

This is the critical question: how can all of these be made to meet in practical situations? It should be so easy that it becomes a routine and a default rather than an exception. This will not happen without two parts.

1) There must be tools for making routine health impact assessments in such a way that all generic data and knowledge is already embedded in the tool, and the decision maker or expert only has to add case-specific data to make the assessment model run.

2) The decision making process must be developed in such a way that it supports such assessments and is capable of including their results into the final comparison of decision options.
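The first part, a tool with generic knowledge embedded so that only case-specific data must be added, can be sketched as follows. The exposure-response coefficients and all input numbers are invented placeholders, not real epidemiological values.

```python
# Sketch of part 1 above: generic knowledge (hypothetical exposure-
# response coefficients) is embedded in the tool, and the user supplies
# only case-specific data to run the assessment. All numbers are
# illustrative placeholders.

# Generic, embedded knowledge: excess relative risk per unit of exposure.
RISK_PER_UNIT = {"pm2.5": 0.006, "no2": 0.002}  # hypothetical coefficients

def attributable_cases(pollutant, exposure, population, baseline_rate):
    """Estimate yearly attributable cases from one set of case-specific inputs."""
    excess_risk = RISK_PER_UNIT[pollutant] * exposure
    return population * baseline_rate * excess_risk

# Case-specific data supplied by e.g. a city transport department.
cases = attributable_cases("pm2.5", exposure=8.0, population=200_000,
                           baseline_rate=0.01)
print(round(cases, 1))
```

The point of the sketch is the division of labour: the generic table stays inside the tool and is maintained by researchers, while the decision maker or expert only fills in the case-specific exposure, population, and baseline rate.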

Of these two, I would say that the first one is easier. We have already done a lot of work in that area and the current proposal promises to do more. But the second one is critical, because less work has been done there, and the need has not even been understood very well. Researchers cannot solve this by themselves. We have to collaborate closely with decision makers also within this project.

Should this collaboration happen within WP assessment or WP dissemination? In any case, we should have a deliverable where we recommend specific improvements in decision making processes to achieve these objectives.

See also

References

  1. CITATION NEEDED!
  2. Mikko V. Pohjola. (2015?) Assessment of impacts to health, safety, and environment in the context of materials processing and related public policy. Comprehensive Materials Processing, Volume 8. Health, Safety and Environmental issues (00814)
  3. Mikko V. Pohjola and Jouni T. Tuomisto: Openness in participation, assessment, and policy making upon issues of environment and environmental health: a review of literature and recent project results. Environmental Health 2011, 10:58 http://www.ehjournal.net/content/10/1/58.