Difference between revisions of "Pyrkilo guide 3 BENERIS"

From Testiwiki

Revision as of 10:28, 28 July 2009


Introduction

This document aggregates information about the method called 'Pyrkilo', used in Beneris. Nowadays the method has a broader scope and is called Open assessment. The links in the text point to the openly available wiki site; this page is a compilation of the contents of those pages.

Open assessment can also refer to the actual making of such an assessment (precisely: open assessment process), or to the end product of the process (precisely: open assessment product or report). Usually, the use of the term open assessment is clear, but if there is a danger of confusion, the precise term (open assessment method, process, or product) should be used. In practice, the assessment processes are performed using Internet tools (notably Opasnet) alongside traditional tools. Stakeholders and other interested people are able to participate, comment, and edit the contents from an early phase of the process onwards. Open assessment is based on a clear information structure and on the scientific method as the ultimate rule for dealing with disputes.


Dealing with disputes

How to deal with disputes that may arise during an assessment? Let's start with the definition.


A dispute is a difference in opinion about a state of the world, or about the preferability of a state of the world.

Purpose

When a diverse group of contributors participates in making a risk assessment, it is obvious that disputes may arise. One of the most instructive features of risk assessment is to understand both these disputes and the reasons why a particular outcome occurs. The risk assessment method must include guidance to deal with disputes, find resolutions, and document the choices made so that they can be defended afterwards. Argumentation theory offers a basis for these methods.

Structure of the process

Input format

Procedure

Formal argumentation (according to the pragma-dialectical argumentation theory [1]) is used as the primary means to describe and resolve any disputes about scientific or valuation issues within the assessment. In traditional risk assessments, there is guidance to describe major disputes, but there are no structural rules for this. In addition, many disputes are (implicitly) resolved using conventions without challenging the foundations of the convention. The new method attempts to achieve more in dealing with disputes.

Van Eemeren and Grootendorst have operationalised the dispute resolving problem in the following way: "When should I, as a rational critic who judges reasonably, regard an argument as acceptable?" [1] Their answer is, very briefly, that disputes are solved using formal argumentation. The proponent and opponent of a statement can give arguments supporting their own statement (or other arguments) or attacking the other discussant's statement or arguments. There are certain criteria that each argument must fulfil, such as rationality and relevance. The dispute is resolved when one discussant is able to base his/her argumentation on arguments that both discussants agree on.

The structure of a discussion has three parts:
  • Dispute (what are the conflicting statements?)
  • Argumentation (a hierarchical thread of arguments related to the statements)
  • Outcome (the statements that remain valid after the discussion)
Possible arguments include:
  • #1: an attack against another argument (or statement) --Jouni 14:30, 31 August 2007 (EEST)
  • #2: a defence of an argument --Jouni 14:30, 31 August 2007 (EEST)
  • --#3: a comment --Jouni 14:30, 31 August 2007 (EEST)

An argument must always be signed. Otherwise, it is not valid.
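
To make this structure concrete, here is a minimal sketch (in Python; the class and field names are illustrative assumptions, not Opasnet's actual data model) of a discussion with its dispute, argumentation thread, and outcome, mirroring the resolved dispute shown below:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Argument:
    kind: str                  # 'attack' (#1), 'defence' (#2), or 'comment' (--#3)
    text: str
    signature: str             # an unsigned argument is not valid
    children: List["Argument"] = field(default_factory=list)

    def is_valid(self) -> bool:
        return bool(self.signature)

@dataclass
class Discussion:
    dispute: str                                   # the statement under dispute
    argumentation: List[Argument] = field(default_factory=list)
    outcome: Optional[str] = None                  # what remains valid afterwards

d = Discussion(dispute="It is possible to calculate variable results "
                       "in the collaborative workspace.")
attack = Argument("attack", "not part of scoping",
                  "--Anne.knol 16:45, 15 March 2007 (EET)")
attack.children.append(Argument("defence",
                                "KTL are already doing something like this",
                                "--Sjur 12:04, 16 March 2007 (EET)"))
d.argumentation.append(attack)
d.outcome = "Accepted."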

An example of a resolved dispute

Can the collaborative workspace calculate?

How to read discussions

Statement: It is possible to calculate variable results in the collaborative workspace.

Resolution: Accepted.

(A stable resolution, when found, should be updated to the main page.)

Argumentation:

1: not part of scoping (and not very feasible either, I think...) --Anne.knol 16:45, 15 March 2007 (EET)

3: At least some (simple) common calculation methods that nearly everyone uses might be provided. Whether they are provided directly in the scoping diagram (by clicking on the variables) or not may be decided later. --Alexandra Kuhn 10:20, 19 March 2007 (EET)

2: It is not directly a part of the scoping, but it puts demands on the scoping tool if this should be possible to do. As for the feasibility, I don't know, but KTL are already doing something like this with the wikimedia <-> analytica tool --Sjur 12:04, 16 March 2007 (EET)


Management

Output format

Rationale

The theoretical background referred to here is the pragma-dialectical approach to argumentation theory, also known as the Amsterdam school of argumentation, developed by Frans van Eemeren and Rob Grootendorst at the University of Amsterdam. Only the main aspects of the theory within this scope are presented here; a more detailed and thorough presentation of the theory can be found in, e.g., van Eemeren, Grootendorst, Henkemans: Argumentation - Analysis, Evaluation, Presentation. Lawrence Erlbaum Associates Inc., 2002. The view presented here (as well as pragma-dialectics itself) also builds on critical rationalism as its philosophical basis.

Traditionally, the main objective of the pragma-dialectical approach is to resolve a difference of opinion by means of argumentative discourse. Critical rationalism in practice means that there are no absolute truths: everything can be questioned, and standpoints are accepted only temporarily; they can be discarded or changed if better or improved ones are found.

Pragma-dialectical argumentation can also be seen as a means of knowledge production, i.e. of bridging the gap between the current knowledge base and the needed knowledge, e.g. within a group. From the point of view of environmental health risk assessment, this is probably the most useful aspect of using argumentation in risk assessments. Arguing for and against is used as a means to explore the validity, acceptability and correctness of the central standpoints/statements in focus. Accordingly, the standpoints/statements are refined, reformulated, discarded etc. as appears necessary along the argumentative discourse.

The pragma-dialectical argumentation theory presents an ideal case that always differs from real-life argumentation. Nevertheless, the theory can well be used to make argumentation schemes, and especially the strengths and weaknesses of argumentation, explicit. It thus offers a way of improving the analysis and evaluation of real-life argumentation and improves argumentative presentation. It does not, however, guarantee exact, definite results: the analysis is always situation- and context-specific and easily affected by the view taken by the analyst/evaluator/presenter.

Basic building blocks of argumentation

The essential terminology of the theory, as it is used here, is explained below.

  • Protagonist: The party that expresses a standpoint and is ready to defend that standpoint with arguments. The protagonist bears the burden of proof, i.e. is obliged to defend his/her standpoint by argument(s) in order to have his/her standpoint accepted.
  • Antagonist: The party that expresses doubts and/or counterarguments about the standpoint expressed by the protagonist. Note that the antagonist does not need to express an opposing standpoint to question the protagonist's standpoint; expressing doubt towards it is enough.
  • Standpoint: A statement expressed by the protagonist, representing his/her view on some matter. The standpoint is the focal point of an argumentative discussion. Standpoints can be positive or negative, and defending them means justifying or refuting the standpoint, respectively.
  • Argument: A defensive or attacking expression in relation to the standpoint or another argument. Defensive arguments are expressed by the protagonist(s) and attacking arguments are expressed by the antagonist(s).
  • Premise: An assumption presumed true within the argumentative discourse at hand. Premises form the basis and the background of the discourse. They can be made explicit or left implicit, but those premises that are likely to be perceived differently by the protagonist and the antagonist should be made explicit and agreed on before starting an argumentation.

See also

References

  1. van Eemeren, F. H. & Grootendorst, R. 2004. A systematic theory of argumentation. Cambridge: Cambridge University Press.

Some outdated, yet interesting considerations, e.g. on types of arguments, discussion manners, and discussion practices, are archived at [2], [3] and [4].


How to use Open Assessment in research?

Next we introduce how and why to use Open Assessment in research.


Open assessment in research is a lecture about how the open assessment method can be utilised in basic research, even if there is no policy need for the particular piece of information.

Scope

Purpose: To describe how the open assessment method can be utilised in basic research, even if there is no policy need for the particular piece of information. To convince the audience to try open assessment out in their own work.
Intended audience: Researchers (especially at doctoral student level) in any field of science (mainly natural, not social scientists).
Duration: 2.5 h

Definition

The major hindrances to applying open assessment are currently:

  • Resistance to change.
  • The lack of understanding about how mass collaboration improves and facilitates individual researcher's work.
  • The fear that openness destroys possibilities to gain merit from own work.
  • The traditional mindset that research is an effort to publish articles in journals instead of a collaborative effort to understand reality.
  • The lack of understanding of what an individual gets out of this unless everyone participates.

These hindrances should be overcome during the lecture.

In order to fully understand this lecture, it is recommended to acquaint oneself also with the following lectures:

Objectives:

  • Learn how traditional articles have two distinct parts.
  • Learn how these parts can be organised in a better way.
  • Become exposed to the idea of the scientific method.
  • Identify the use of science in policy assessments.
  • Learn to identify, for any piece of information, a good object type for it.
  • Learn to see the world as a collection of information pieces.
  • See your own work/research as a part of a global mass collaboration project.
  • Learn that it is possible to do the whole research process (idea - research plan - execution of a study - writing articles) in Opasnet.
  • Learn that the impact of your research may be higher in an open system.
  • See how Opasnet can be used in a practical case study (assessment, research).

Result

See the presentation file.

Reasons why people hesitate to use Opasnet, with possible solutions:

Problem: You can't make all people participate.
Solutions:
  • Reading is more important than writing.
  • Opasnet produces a marginal net benefit even for a small group.

Problem: Strong stakeholders will hijack the system.
Solutions:
  • It is easier than before to shoot down untrue statements.
  • Quality control systems must be developed further.

Problem: It will be a chaos and a huge pile of junk.
Solutions:
  • A clear information structure with variables.
  • Argumentation organises discussion.
  • Redundant contributions are deleted.

Problem: Information is not found.
Solutions:
  • Linking is easy.
  • Google finds the content.
  • Moderating and cleaning up is necessary.

Problem: I have to give away my data without getting merit.
Solutions:
  • A protected project area can be used.
  • Showing data creates cooperation.
  • A Journal of Open Assessment can be founded.


How do you spend your day as a researcher? What takes your time?

  • Doing actual scientific work on your research topic.
  • Having meetings with your research group.
  • Having meetings with your administrative group (unit, department).
  • Reporting about your work to the research group of the administration.
  • Writing
  • Explaining (and re-explaining) to other people what you have done in the project.


What are the phases of the scientific work?

  • Reading articles and other material about your field.
  • Identifying research questions to be studied.
  • Developing study designs.
  • Organising material, personnel etc. for the execution of the study.
  • Executing the study.
  • Collecting samples from the study.
  • Analysing the samples.
  • Recording analysis results into a data file.
  • Analysing the data file statistically.
  • Making interpretations about the data.
  • Reading articles about issues related to the study.
  • Writing a document about the study and its results.
  • Formatting the document into a manuscript for a particular journal.
  • Submitting the manuscript.
  • Editing the manuscript according to reviewer comments.
  • Getting the manuscript published as an article.


Why is Google so popular?

  • It collects information about individual people's interpretations of what is important.
  • It can automatically develop importance rankings that are probably useful for most people.
  • It brings you to the sources of information, but it does not provide further understanding.


Opasnet attempts to take the difficult step forward.

  • Opasnet organises and interprets information that is useful for the individual users (i.e., you!).
  • If some piece of information is useful for you, it is more likely to be useful for someone else, too. (Compared with the situation where the piece is useless to you.)
  • Therefore, you should write all useful pieces of information directly to Opasnet.
  • Therefore, Opasnet should be easy enough to use so that no additional work is needed compared with the way you usually write your information down.
  • Therefore, we want to develop Opasnet into a system where the duplicate recording is minimised.
  • Therefore, the existing information must be well organised and very easy to find.


Probably a typical problem is that people don't see how the individual pieces of information actually grow into a large, coherent system describing reality.

A critical thing is what you DON'T see: collaboration emerging because people can build on your ideas and work.


An illustrative example of how open assessment can change the working time needed and the merit obtained in a research study.

Phases of a scientific study                                         | Time spent (trad.) | Time saved (open assessm.) | Quality improved (open assessm.) | Merit (trad.) | Merit (OA)
Reading articles and other material about your field, making notes. | **                 | +                          | +                                |               | *
Identifying research questions and study designs.                   | *                  |                            | +                                |               | *
Executing the study.                                                | ***                |                            |                                  |               |
Working with the study data and analyses.                           | **                 |                            | +                                |               | **
Making interpretations about the data.                              | *                  | +                          | +++                              |               | *
Writing a document about the study and its results.                 | **                 | +                          |                                  |               | *
Getting the manuscript published as an article.                     | *                  | +++                        | +                                | ******        |

Why is open assessment useful in scientific research?

Open assessment makes us focus on the only really important thing in our work: the description of some real-world phenomenon that we are studying. Our work is not about meetings, nor memos about the meetings, nor reading scientific articles, nor applying for funding. All this is secondary. The only really important thing is to describe our topic.

If someone has already described the topic, we are wasting our time doing it again. Instead, we should have a centralised place, like Wikipedia, that contains the descriptions of all our research topics. Anyone interested could read a description of our topic, and all the researchers of that field could participate in writing that description.

Descriptions should be about quantifiable properties whenever possible. It clarifies our thinking when we are forced to think in clear terms like concentrations of pollutants, slopes of dose-response functions, or numbers of life-years lost.

Each topic description divides into three main parts: 1) What is the property that we are estimating? 2) What do we know about the property? and 3) What is our current estimate of the property? Most of our work relates to number 2. All the relevant background literature is listed and the main points are described there; very likely someone has done that for you already. Your original research is a small piece of number 2, and that's what you want to describe.

Planning a study is more effective than in the traditional model, because the descriptions of your field are constantly being edited and updated, and they are structured in the natural way, i.e. independent of administrative or geographical boundaries. It is easier to get a good understanding of what the best study to do right now would be. You may see some of the newest ideas on the wiki pages months before you would see the same information published in a peer-reviewed journal.

Executing a field study is pretty much what it is today; open assessment does not much change that.

Data storage can be improved. You have a web-based file management system into which you can store your original results and protect them as you wish. You can access them yourself from anywhere in the world, but you can prevent others from seeing them unless you want to share them. Analysing your results is also very different. It is very likely that a large group of researchers want to do analyses similar to yours, and they have a shared page for describing these methods. You can simply go to this method page in the wiki to find ready-made code for analysing your data with state-of-the-art methods. For example, R is software that is free and widely used, and R code is very easy to share and improve collaboratively. Your own code is available to others, so you can easily ask for help.

When you have the results analysed, you can immediately use them to update the page about your research topic. Your name will show in the history of that page, so you can get the credit for being the first one to provide that knowledge. After this prepublication, you can go on to write a full scientific article for a peer-reviewed journal. This is no joke: this is the way physicists publish today, and the journals in physics have had to accept it. See http://arxiv.org .

Just think how we spend most of our time at work: trying to learn enough of our field to be able to ask relevant research questions; trying to make a new statistical analysis, which we wish we understood better, with a new software package we don't know well; trying to rewrite a manuscript for the third journal so that we finally get (after two years of trying) at least something published.

There is a huge loss of time and resources because a) we don't systematically share our background knowledge with everyone, b) details of our work are not visible to those who could easily help us out of problems that are very difficult for us, and c) we falsely think that peer review must always be done before first publication.

We need methods, tools, and practices to make this happen. All the technology that is needed is already there. The major hindrance is our neglect of the new possibilities. Many of the technical solutions are actually up and running, just waiting for researchers to start using and improving them. Just check the following websites:

  • A statistical analysis performed online with hidden data: [5]
  • A prepublication of a concept (in Finnish): [6]
  • A description of pollutants in salmon: [7]
  • A standardised set of tools for research: [8]
  • A method description for formal argumentation: [9]


Open assessors network

This page is about the mass collaboration project Open Assessors' Network. For a description of the project's website and workspace, see Opasnet.


The Open Assessors' Network is a mass collaboration project for open assessors, that is, people who are willing to promote open assessment practices with the aim of improving societal decision-making. The major part of the collaboration happens on Opasnet, the website and workspace of this network: http://en.opasnet.org. In addition, there is a plan to develop the Open Assessors' Network into a registered society for people interested in open assessment; the current plan is called Avary (description in Finnish). The society could maintain the Opasnet website and publish the Journal of Open Assessment.


Question

How should the Open Assessors' Network function to fulfil its purpose to

  • help interested people in making open assessments,
  • help people in learning skills needed in open assessments,
  • help skilled people in teaching their skills to others,
  • promote the enthusiasm and recruit new members, and
  • build the capacity to mobilise enough people to work on a new case even within hours?

Answer

  • The network is organised virtually using a Facebook group.
    • Members can post questions, tasks, and challenges on the wall.
  • Members can develop new skills for making open assessments. For a description, see Opasnet user skills.
    • Members can keep a record of their skills on a page User:Member/Skill. For a description, see Aikakone (in Finnish).
    • Each skill must be verified and accepted by another member who already has that skill.
    • Some challenges and tasks (such as List of tasks for R experts) are only available to members with specific skills.
  • Tasks are pieces of work for which someone has promised to give compensation, such as money. At the moment, THL is the largest supplier of tasks that are paid for. There are lists of tasks from which anyone with enough skills can take a task and do it (after some administrative business). See List of open tasks, Avoimia tehtäviä (in Finnish), and List of tasks for R experts.
  • Challenges are like tasks, but nobody has promised to pay for doing them. The motivations for doing challenges vary: they may be fun or challenging, or lead to social good. Also, social respect in the form of onors may be given to people who succeed in challenges, although this is difficult for anyone to promise beforehand. In addition, there is a challenge of developing the actual system of distributing respect in the form of onors.


Rationale

The activities in the Open Assessors' Network should be like a game: fun to do, socially attractive, and challenging, with step-by-step challenges that lead to more and more demanding tasks with higher and higher rewards.

Tasks that open assessors should be able to do

Main article: Opasnet user skills.

See also


Opasnet base


This page is about the Opasnet Base database. The previous version is described on page Opasnet Base (2008-2011).

Question

How should the existing Opasnet Base be improved? The following issues of Opasnet Base 1 must be resolved:

  • The Opasnet Base 1 structure makes filtering by location very slow on big data.
  • The Opasnet Base 1 structure is perhaps unnecessarily complex, which makes queries hard to adapt and read.
  • MySQL is not ideal for storing huge amounts of data, or at least so we assume.
  • MySQL tables have fixed column types, which makes it difficult to store data objects with varying types of indices in one data table.
  • Multiple languages (localization) are not supported.

Answer

MySQL is good at relations but weak at dynamic big data. Let's keep the basic "scaffold" in MySQL and store the big data (locations and results) in a NoSQL database. After a few days of research, the best candidate for the NoSQL database seems to be MongoDB. Combining relational MySQL and non-relational MongoDB will be the foundation for the new Opasnet Base 2.
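
As a rough sketch of this division of labour (the connection details, the object identifier, and the helper function are hypothetical, and the mysql-connector-python and pymongo client libraries are assumptions; the actual Opasnet Base interface is a PHP script), object metadata could be read from MySQL and the bulk data from MongoDB as follows:

import mysql.connector            # assumed MySQL driver; any client would do
from pymongo import MongoClient

# Hypothetical connection details, for illustration only.
sql = mysql.connector.connect(host="localhost", user="opasnet",
                              password="secret", database="opasnet_base")
mongo = MongoClient("mongodb://localhost:27017/")["opasnet_base"]

def fetch_object(ident):
    """Look up object metadata in MySQL, then its data rows in MongoDB."""
    cur = sql.cursor(dictionary=True)
    cur.execute("SELECT id, ident, name, type FROM objs WHERE ident = %s",
                (ident,))
    meta = cur.fetchone()
    if meta is None:
        return None, []
    # The big data lives in a per-object MongoDB collection named by objs.ident.
    rows = list(mongo[ident + ".dat"].find())
    return meta, rows

meta, rows = fetch_object("some_variable")   # hypothetical object identifier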

Table structure in the database

All tables

acts
Uploads, updates, and other actions

Field     | Type                     | Null | Extra          | Key
id        | int(10) unsigned         | NO   | auto_increment | PRI
obj_id    | int(10) unsigned         | NO   |                |
series_id | int(10) unsigned         | NO   |                |
unit      | varchar(64)              | YES  |                |
type      | ENUM('replace','append') | NO   |                |
who       | varchar(255)             | NO   |                |
when      | timestamp                | NO   |                |
comments  | varchar(255)             | YES  |                |
lang      | char(3)                  | NO   | ISO 639-2, default: 'eng' |

--1 : I guess this should have a field obj_id to identify which object we are updating. In the previous version, act - obj was a many-to-many relationship, but it can as well be many-to-one, which makes life easier; it is not important to know that two variables were uploaded in the same model run. Also series_id and unit could be in this table. --Jouni 22:56, 22 February 2012 (EET)
2 : I have now altered the table structure to implement Jouni's idea. --Einari 12:31, 19 March 2012 (EET)


objs
Object information (all objects)

Field   | Type                                                                          | Null | Extra          | Key
id      | int(10) unsigned                                                              | NO   | auto_increment | PRI
ident   | varchar(64)                                                                   | NO   |                | UNI
name    | varchar(255)                                                                  | NO   |                |
subset  | varchar(255)                                                                  | NO   |                | UNI (with ident)
type    | ENUM('variable','study','method','assessment','class','nugget','encyclopedia') | NO   |                |
page    | int(10) unsigned                                                              | NO   |                |
wiki_id | tinyint(3) unsigned                                                           | NO   |                |

inds
Indices

Field       | Type                           | Null | Extra          | Key
id          | int(10) unsigned               | NO   | auto_increment | PRI
series_id   | int(10) unsigned               | NO   |                | UNI (with ident)
ident       | varchar(64)                    | NO   |                | UNI (with series_id)
type        | ENUM('entity','number','time') | NO   |                |
name        | varchar(255)                   | NO   |                |
unit        | varchar(64)                    | YES  |                |
page        | int(10) unsigned               | NO   |                |
wiki_id     | tinyint(3) unsigned            | NO   |                |
order_index | int(10) unsigned               | NO   |                |
hidden      | boolean                        | NO   | default: false |

wikis
Wiki information

Field | Type         | Null | Extra | Key
id    | tinyint(3)   | NO   |       | PRI
url   | varchar(255) | NO   |       |
wname | varchar(255) | NO   |       |
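
To make the specifications above concrete, here is a minimal sketch of a CREATE TABLE statement for the acts table, reconstructed from the field list (an approximation, not the actual Opasnet Base DDL), issued from Python with the assumed mysql-connector-python driver and hypothetical connection details. Note that when is a reserved word in MySQL and must be backquoted:

import mysql.connector  # assumed driver; connection details are hypothetical

ddl = """
CREATE TABLE acts (
    id        INT(10) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    obj_id    INT(10) UNSIGNED NOT NULL,
    series_id INT(10) UNSIGNED NOT NULL,
    unit      VARCHAR(64)      NULL,
    type      ENUM('replace','append') NOT NULL,
    who       VARCHAR(255)     NOT NULL,
    `when`    TIMESTAMP        NOT NULL,              -- reserved word, hence backquotes
    comments  VARCHAR(255)     NULL,
    lang      CHAR(3)          NOT NULL DEFAULT 'eng' -- ISO 639-2
)
"""

conn = mysql.connector.connect(host="localhost", user="opasnet",
                               password="secret", database="opasnet_base")
conn.cursor().execute(ddl)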

MongoDB

Tables to store all locations and results

db.<objs.ident>(.<objs.subset>).dat

Column names:
sid (series.id), aid (act.id), <inds.ident>, <inds.ident>, ..., <inds.ident>, res (result)

Data:
<acts.series_id>, <acts.id>, <locs.id | data>, <locs.id | data>, ..., <locs.id | data>, <result>

Index:
db.<objs.ident>(.<objs.subset>).dat.ensureIndex({_sid:1, _aid:1});
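
In pymongo, the same layout could be written as in the following minimal sketch (the object identifier, index names, and values are made-up illustrations; the shell's ensureIndex corresponds to create_index in pymongo):

from pymongo import MongoClient, ASCENDING

db = MongoClient("mongodb://localhost:27017/")["opasnet_base"]  # hypothetical URI
dat = db["some_variable.dat"]       # per-object collection, named by objs.ident

# One document per data row: series id, act id, one field per index, and the result.
dat.insert_many([
    {"sid": 1, "aid": 12, "year": 2009, "area": 3, "res": 8.4},
    {"sid": 1, "aid": 12, "year": 2010, "area": 3, "res": 7.9},
])

dat.create_index([("sid", ASCENDING), ("aid", ASCENDING)])

for row in dat.find({"sid": 1}):    # filter by series
    print(row)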


--# : I guess this is about loc.id's, not about ind.id's. Loc.id's are either identifiers explained below or values of continuous indices. --Jouni 22:56, 22 February 2012 (EET)
# : It was actually about ind.id, but the line described the column names, not the data itself. To avoid future misunderstandings, I added a line describing the data as well. Object data can consist of location identifiers or e.g. real numbers, depending on the type of the column index. --Einari 13:33, 19 March 2012 (EET)

Tables to store real location values for entity type indices

db.<objs.ident>(.<objs.subset>).locs

Column names:
iid (index id), lid (location id), val

Index:
ensureIndex(array("iid" => 1, "lid" => 1));
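
A corresponding pymongo sketch for the locs table, again with made-up identifiers and values:

from pymongo import MongoClient, ASCENDING

db = MongoClient("mongodb://localhost:27017/")["opasnet_base"]  # hypothetical URI
locs = db["some_variable.locs"]

# One document per location of an entity-type index: index id, location id, name.
locs.insert_one({"iid": 5, "lid": 3, "val": "Helsinki"})
locs.create_index([("iid", ASCENDING), ("lid", ASCENDING)])

# Resolve the human-readable location name behind a data row's identifier:
name = locs.find_one({"iid": 5, "lid": 3})["val"]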

# : If the index is entity type, inds.id is the identifier of the index (whose details are in MySQL) and value is the name of the location. If the index is continuous, it does not need rows in this table, as everything necessary is described in the tables inds and db.<object.ident>.dat. --Jouni 22:56, 22 February 2012 (EET)

--# : What is key? --Jouni 22:56, 22 February 2012 (EET)
# : Key was something that I cannot really confirm. I altered the table description so that index id and location id together make the key (indexed in that order). This means that location values must always be accessed through index id. I believe that this is the fastest way in practice. --Einari 14:35, 19 March 2012 (EET)


--Another issue related to OB2: it should be possible to give units to indices, because they are not self-evident (Erkki noticed this). Has the implementation of this been planned? The database needs a place for this, and in addition there should be a parameter, e.g. in t2b, for giving units (or should the use of the current unit parameter be extended?). --Jouni 16:25, 1 January 2013 (EET)

JSON interface

The interface is based on a PHP script that outputs JSON, so data can be transferred using ordinary HTTP requests. One could even read through most of the metadata in a web browser with a plugin that displays JSON nicely.
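
For illustration, any HTTP client can consume such an interface; the sketch below uses Python's requests library, and the endpoint URL and parameter name are hypothetical, since they are not given here:

import requests  # plain HTTP + JSON, so any client library works

resp = requests.get("http://example.org/opasnet_base/json.php",  # hypothetical URL
                    params={"ident": "some_variable"})           # hypothetical parameter
data = resp.json()    # the PHP script outputs JSON
print(data)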

Rationale

See also

Other databases with (or without) R connectivity

Pages related to Opasnet Base

Opasnet Base · Uploading to Opasnet Base · Data structures in Opasnet · Opasnet Base UI · Modelling in Opasnet · Special:Opasnet Base Import · Opasnet Base Connection for R (needs updating) · Converting KOPRA data into Opasnet Base · Poll

Pages related to the 2008-2011 version of Opasnet Base

Opasnet base connection for Analytica · Opasnet base structure · Related Analytica file (old version File:Transferring to result database.ANA) · Analytica Web Player · Removed pages and other links · Standard run · OpasnetBaseUtils

