Variable

A variable is a description of a particular piece of reality. It can describe a physical phenomenon or a value judgement, and the decisions included in an assessment are also described as variables. Variables are continuously existing descriptions of reality that develop over time as knowledge about the topic increases. They are therefore not tied to any single assessment but can be included in other assessments as well. A variable is the basic building block for describing reality.

Question

What should be the structure of a variable such that it

  • is able to systematically handle all kinds of information about the particular piece of reality that the variable describes, and especially
    • is generic enough to be a standard building block in decision support work (including interpretation of scientific information and political discussions),
  • is able to systematically describe causal relationships between phenomena and the variables that describe them,
  • enables both quantitative and qualitative descriptions,
  • is suitable for all kinds of variables, especially physical phenomena, decisions, and value judgements,
  • inherits its main structure from universal objects,
  • complies with the PSSP ontology,
  • can be operationalised in a computational model system,
  • results in variables that are independent of the assessment(s) they belong to,
  • results in variables that pass the clairvoyant test,
  • can be implemented on a website, and
  • is easy enough to be usable and understood by interested non-experts?

Answer

A variable is implemented as a web page in the Opasnet wiki web-workspace. A variable page has the following structure.

The attributes of a variable:

  • Name: An identifier for the variable. Each Opasnet page has two kinds of identifiers: the name of the page (e.g. Variable) and the page identifier (e.g. Op_en2022). The former is used e.g. in links, the latter in R code.

  • Question: Gives the question that is to be answered and defines the scope of the variable. The question should be defined in such a way that it is relevant in many different situations, i.e. it makes the variable re-usable. (Compare to an assessment question, which is more specific to time, place, and user need.)

  • Answer: Presents an understandable and useful answer to the question. Its essence is often a machine-readable and human-readable probability distribution (which can in a special case be a single number), but an answer can also be non-numerical, such as "very valuable", or a descriptive table like the one on this page. The units of interconnected variables need to be coherent with each other, given the functions describing the causal relations; the units can therefore be used to check the coherence of the causal network description. This is a so-called unit test. Typically the answer contains R code that fetches the ovariable created under Rationale/Calculations and evaluates it (see the second code sketch below).

  • Rationale: Contains anything that is necessary to convince a critical reader that the answer is credible and usable. It presents the information required to derive the answer and explains how the answer is formed. Rationale may also contain lengthy discussions about relevant topics. Typically it has the following sub-attributes, but others are also possible:
    • Data: Tells about direct observations (or expert judgements) of the variable itself.
    • Dependencies: Tells what we know about how upstream variables (i.e. causal parents) affect the variable; in other words, we attempt to estimate the answer indirectly based on information about the causal parents. (Sometimes reverse inference based on causal children is also possible.) Dependencies lists the causal parents and expresses their functional relationships (the variable as a function of its parents) or probabilistic relationships (the conditional probability of the variable given its parents).
    • Calculations: An operationalisation of how to calculate or derive the answer, using algebra, computer code, or other explicit methods where possible. Typically it is R code that produces and stores the necessary ovariables to compute the current best answer to the question (see the first code sketch below).
    • Data not used: Data that are relevant to the research question but for some reason were not used in producing the current answer. It may be that the data were found after the synthesis and an update has not yet been done, or that it has been unclear how to merge them with the existing data. In any case, it is important to differentiate and be explicit about whether data are irrelevant (and therefore removed from the page) or relevant but not used (and therefore waiting for further work).
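
To make the Calculations sub-attribute concrete, below is a minimal sketch of the kind of R code it typically contains, assuming the OpasnetUtils package used in Opasnet behaves as on typical Opasnet pages. The variable exposure, its parents concentration and intake, and all data values are hypothetical examples, not content of any actual page.

  library(OpasnetUtils)

  # A hypothetical causal parent with direct observations (the Data sub-attribute):
  # ovariable data is a data.frame with index columns and a Result column.
  concentration <- Ovariable(
    name = "concentration",
    data = data.frame(Area = c("North", "South"), Result = c(2, 5))  # e.g. ug/l
  )

  # Another hypothetical parent, here a single point value.
  intake <- Ovariable(
    name = "intake",
    data = data.frame(Result = 1.5)  # e.g. l/d
  )

  # The variable itself, expressed through Dependencies and Calculations:
  # the formula gives the variable as a function of its causal parents.
  exposure <- Ovariable(
    name = "exposure",
    dependencies = data.frame(Name = c("concentration", "intake")),
    formula = function(...) {
      concentration * intake  # ovariable arithmetic merges outputs over shared indices
    }
  )

  # Store the ovariables so that the Answer section and other assessments can
  # fetch them later (this works within the Opasnet R environment).
  objects.store(concentration, intake, exposure)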
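
Correspondingly, here is a minimal sketch of the kind of R code an Answer typically contains: it fetches the ovariable stored by the Calculations sketch above and evaluates it. The code_name label is again a hypothetical example.

  library(OpasnetUtils)

  # Fetch the ovariable stored under Rationale/Calculations. Op_en2022 is the
  # page identifier mentioned in the Name row; the code_name is hypothetical.
  objects.latest("Op_en2022", code_name = "exposure")

  # Evaluate the ovariable: dependencies are resolved and the output
  # (the current best, possibly probabilistic, answer) is computed.
  exposure <- EvalOutput(exposure)

  # Show the result in human-readable form.
  oprint(summary(exposure))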

In addition, it is practical to have some further subtitles on a variable page. These are not attributes, though.

  • See also
  • Keywords (not always used)
  • References
  • Related files

Rationale

[Figure: Information flow within open policy practice]

The structure is based on extensive discussions between Mikko Pohjola and Jouni Tuomisto in 2006-2008, and on intensive application in Opasnet ever since.

For a more detailed description of variables as information objects, see knowledge crystal.

See also

References


Related files