Peer review

From Testiwiki
 
{{encyclopedia|moderator=Jouni
| reference = {{publication
| authors        = Jouni T. Tuomisto, Mikko Pohjola
| page           = Peer review
| explanation    = 
| publishingyear = 2010
| urn            = 
| elsewhere      = 
}}
}}

This page is about peer review in [[open assessment]]. For other uses, see the [[:en:Peer review|Peer review]] page in Wikipedia.

==Scope==
 
 
 
What method for gaining acceptance of an [[object]] from the scientific community fulfils the following criteria?
 
* It is based on an evaluation of the object by peer researchers.
 
* It is not in conflict with [[open assessment]].
 
* It evaluates the relationship between the [[scope]] and the [[definition]]: is that relationship well-founded according to current scientific information?
 
 
 
==Definition==
 
 
 
===Input===
 
 
 
The input is the object to be evaluated.
 
  
===Output===
<section begin=glossary/>
:'''Peer review''' is a [[method]] for evaluating the scientific quality of a piece of information. In peer review, a number of people who can be considered reasonably acquainted with the topic of the piece of information give their statement on whether or not it is of good enough quality for publication in a scientific journal.
<section end=glossary/>
  
The output is a statement about the relationship between the scope and the definition in the light of current scientific information.
Most often, peer review is considered in the context of publishing scientific articles, which tend to be descriptions of scientific studies and their results. Peer review can also be used as a means of controlling the quality of assessments and their outputs. At heart, peer review is about the acceptability of the process of producing information, and thereby also the acceptability of the outcomes of that process. However, peer review is usually not a systematic method but a practice that builds on the assumption that peers can implicitly distinguish good works from bad ones based on their own expertise. Consequently, peer review often also ends up addressing questions of e.g. usability and relevance, in a relatively random fashion. Despite its shortcomings, peer review does have value in quality control, also in the context of assessment.
  
===Rationale===
Technically, peer review can be arranged so that any piece of information is made available for peers to access and evaluate, and anyone who feels qualified to evaluate a given piece of information can go ahead and give her statement about its quality. The pieces of information can be assessments, individual variables, or studies of any kind. The [[statement]] is basically whether the evaluator thinks that the piece of information is or is not of good enough quality to be published in a scientific journal. The levels of evaluation can then be e.g. 1) not reviewed, 2) reviewed, but not accepted, 3) reviewed and accepted. The number of required acceptance statements can be agreed according to what is seen suitable for the system to be flexible but still credible. Perhaps two or three, as in many scientific journals, is enough.
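To make the bookkeeping concrete, here is a minimal sketch of the three evaluation levels and a configurable acceptance threshold. All names in it (the level constants, <code>review_state</code>, <code>required_acceptances</code>) are made up for this illustration; Opasnet does not expose such an API.

<pre>
# A minimal sketch of the review levels described above.
# All names are illustrative; this is not an Opasnet API.

NOT_REVIEWED = "not reviewed"
REVIEWED_NOT_ACCEPTED = "reviewed, but not accepted"
REVIEWED_ACCEPTED = "reviewed and accepted"

def review_state(statements, required_acceptances=2):
    """Derive the evaluation level of a piece of information.

    statements: one boolean per reviewer (True = good enough
        quality for publication in a scientific journal).
    required_acceptances: how many acceptance statements the
        system demands; perhaps two or three, as in many journals.
    """
    if not statements:
        return NOT_REVIEWED
    if sum(statements) >= required_acceptances:
        return REVIEWED_ACCEPTED
    return REVIEWED_NOT_ACCEPTED

# Two reviewers accept, one rejects -> reviewed and accepted.
print(review_state([True, False, True]))
</pre>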
  
In the case of a variable, the definition (the quality of the data, causalities, and formula attributes) is evaluated against the scope, which is fixed. In the case of a nugget, the scope is evaluated against the definition (i.e. the scientific work performed), which is fixed. Thus, the question is how far it is possible to generalise from the results of a study.
 
 
==Result==
 
 
 
===Procedure===
 
 
 
'''Peer review of the definition'''
 
 
 
'''Peer review''' in [[open assessment]] is a [[method]] for evaluating [[uncertainty|uncertainties]] that are not explicitly captured in the [[definition]] of an object (typically an [[assessment]] or a [[variable]]). Technically, it is a [[discussion]] on the Talk page of the object and has the following [[statement]]:
 
: "The definition of this object is based on the state-of-the-art scientific knowledge and methods. The data used is representative and unbiased. The causalities are described in a well-founded way. The formula correctly describes how the result can be calculated based on the data and causalities. Overall, the information in the definition reflects the current scientific understanding and is unlikely to substantially change because of some existing information that is omitted."
 
 
 
The following classification can be used for each attribute:
 
* The attribute description is according to the state-of-the-art.
 
* The attribute description has minor deficiencies.
 
* The attribute description is unreliable because of its major deficiencies.
 
* Cannot be evaluated.
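As an illustration, such an attribute-by-attribute review could be recorded along the following lines. This is a sketch only: the enum and the attribute keys are made up for the example and are not an Opasnet data structure.

<pre>
from enum import Enum

# The four classification levels listed above.
class AttributeRating(Enum):
    STATE_OF_THE_ART = "according to the state-of-the-art"
    MINOR_DEFICIENCIES = "has minor deficiencies"
    MAJOR_DEFICIENCIES = "unreliable because of major deficiencies"
    CANNOT_BE_EVALUATED = "cannot be evaluated"

# One rating per definition attribute of a variable
# (data, causalities, formula); the values are illustrative.
review = {
    "data": AttributeRating.STATE_OF_THE_ART,
    "causalities": AttributeRating.MINOR_DEFICIENCIES,
    "formula": AttributeRating.STATE_OF_THE_ART,
}

for attribute, rating in review.items():
    print(f"{attribute}: {rating.value}")
</pre>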
 
 
 
 
 
'''Peer review in Opasnet - an example of an open web-based review system'''

'''Who can and should do a peer review?'''

Basically, Opasnet applies an open peer review process in its widest sense: anyone can peer review anything. However, a peer review is worthless unless the readers believe that the reviewer actually is a peer, i.e. a person with enough relevant expertise, usually a fellow researcher. Therefore, the following guidance is advised:

* If you need the information of a page in your assessment or other work and the page has not been reviewed yet, you should consider reviewing the page yourself before using it. Or, if you don't feel qualified, you should put some effort into finding a person who could review the page. This way, you increase the credibility of your own work, and you also help the Open Assessors' Network to evaluate and improve the contents of Opasnet.
* You can peer review a page in Opasnet if you have a credible record of expertise in the area of the page. It is advised that reviewers put enough information about this on their user page (maybe a brief curriculum vitae and a list of publications).
* You should not be a major contributor to the page you review, i.e. you should not be one of those who have brought a substantive amount of scientific material to the page. Technical and linguistic edits can be done without limitation.
** The roles of each contributor are clarified in the [[Acknowledgements]] of the page.
  
====Alternative approaches to peer review====
  
Typically, a peer review means an evaluation of the [[Definition]] of a page. Whether the result is actually truthful is something that a peer cannot usually review. However, in some cases an evaluation is possible, and these are described below. Also, the [[scope]] can be reviewed in some cases. Whether these should be called peer review is an open question.
  
  
'''Peer review of the result based on an external reference'''
  
Peer review can be performed on the result if the peer has an alternative way of deriving it. The alternative derivation can then be used as an external reference for evaluating its discrepancy from the result. Of course, the validity of this review depends entirely on the validity of the external reference. [[Informativeness]] and [[calibration]] can be evaluated against the reference.
  
In addition, a '''discrepancy test''' can be performed. The aim of this test is to evaluate whether it is credible to believe the result of a [[value of information]] (VOI) analysis related to the variable. The VOI analysis gives low values if the need to improve the model is low. A problem with the VOI analysis is that if the model is too bad, it might also give low VOI estimates, thus falsely implying a good model. Fortunately, there are a few indirect ways to evaluate this. One is to do a peer review of the definition. Another is to do a discrepancy test, which measures whether the external reference is essentially included in the current result of the variable. If it is, it is unlikely that the VOI will be underestimated. The [[:en:Kolmogorov–Smirnov test|Kolmogorov–Smirnov test]] is a relevant discrepancy test.
 
 
In the numerical VOI analysis, the result distribution is divided into n equally probable bins. The discrepancy test asks what the probability is that the result is in a higher (lower) bin than the external reference. If both probabilities are fairly high, it is unlikely that the result is falsely too narrow and biased. What counts as "fairly high" remains to be determined.
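The following sketch shows one way to run such a discrepancy test on Monte Carlo samples, assuming both the result and the external reference are available as samples. The function name, the bin count, and the synthetic data are made up for the illustration, and the threshold for "fairly high" is deliberately left to the reader, as in the text above.

<pre>
import numpy as np
from scipy.stats import ks_2samp

def discrepancy_test(result, reference, n_bins=10):
    """Evaluate whether the external reference is essentially
    included in the current result of the variable.

    result, reference: 1-D arrays of Monte Carlo samples.
    Returns P(result in a higher bin than reference),
    P(result in a lower bin), and the Kolmogorov-Smirnov p-value.
    """
    # Edges that split the result distribution into n_bins
    # equally probable bins.
    edges = np.quantile(result, np.linspace(0.0, 1.0, n_bins + 1))
    result_bins = np.digitize(result, edges[1:-1])
    reference_bins = np.digitize(reference, edges[1:-1])
    # Compare every result sample with every reference sample.
    p_higher = np.mean(result_bins[:, None] > reference_bins[None, :])
    p_lower = np.mean(result_bins[:, None] < reference_bins[None, :])
    return p_higher, p_lower, ks_2samp(result, reference).pvalue

# Synthetic example: a result and a slightly shifted reference.
rng = np.random.default_rng(1)
result = rng.normal(0.0, 1.0, 2000)
reference = rng.normal(0.1, 1.0, 2000)
print(discrepancy_test(result, reference))
</pre>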
 
 
 
 
 
'''Peer review of the scope of a study'''
 
 
 
A [[study]] is a special kind of object in the sense that its definition typically describes a particular study that has been performed. Therefore, it is not possible to evaluate (in the sense of attempting to improve) the definition as such, because what was done was done. Instead, the interesting question is the generalisability of the results. If they are not at all generalisable, the study is worthless. Let's take an example: an epidemiological case-control study with a questionnaire from all patients and blood measurements of a pollutant from a subset of patients. If the blood measurements were done with an unreliable method, the results cannot be generalised even to the patient that was studied, i.e. we don't learn anything about the patient's pollutant levels even if we know the blood test result. If the blood test is good, we can believe that it reflects the patient's true pollutant level in blood. The next question is whether the patient is representative of his/her group, i.e. whether the result can be generalised to the whole group of cases or controls. In this case, blood was drawn only from a fraction of those who returned the questionnaire, and there is doubt whether this subgroup was a biased sample of the bigger group. If it is biased, we cannot generalise to the group of cases or controls. There is also the question whether the group in the study actually reflects the population of interest. If the controls are drawn from a different population than the cases, it is doubtful whether they can be used as controls to compute odds ratios of the pollutant causing the disease of concern.
 
 
 
Thus, the peer review of a nugget aims to answer this question: "Which question(s) does the nugget actually answer reliably, based on current scientific understanding?"
 
 
 
===Management===
 
 
 
The [[peer review]] [[discussion]] has the following form:
 
 
 
<big>'''Peer review'''</big>
 
 
 
{{discussion
 
|Dispute= The definition of this object is based on the state-of-the-art scientific knowledge and methods. The data used is representative and unbiased. The causalities are described in a well-founded way. The formula correctly describes how the result can be calculated based on the data and causalities. Overall, the information in the definition reflects the current scientific understanding and is unlikely to substantially change because of some existing information that is omitted.
 
|Outcome=
 
|Argumentation =
 
{{defend|1|The data used is representative and unbiased.|--[[User:Jouni|Jouni]] 11:37, 16 January 2009 (EET)}}
 
 
 
{{defend_invalid|2|The causalities are described in a well-founded way.|--[[User:Jouni|Jouni]] 23:04, 19 January 2009 (EET)}}
 
:{{attack|6|Attack these arguments if necessary.|--[[User:Jouni|Jouni]] 23:04, 19 January 2009 (EET)}}
 
 
 
{{defend|3|The formula correctly describes how the result can be calculated based on the data and causalities.|--[[User:Jouni|Jouni]] 23:04, 19 January 2009 (EET)}}
 
 
 
{{attack|5|The issue described in argument 4 is missing.|--[[User:Jouni|Jouni]] 11:37, 16 January 2009 (EET)}}
 
:{{defend|4|The issue of ...(describe the issue here)... is important and relevant for this object.|--[[User:Jouni|Jouni]] 11:37, 16 January 2009 (EET)}}
 
 
 
}}
 
  
 
==Research: Increasing value, reducing waste==

''Increasing value, reducing waste'' is a special issue in the Lancet focussing on how to improve research and the evaluation processes of scientific work. It was published on January 8, 2014.[http://www.thelancet.com/series/research]

The Lancet presents a Series of five papers about research. In the first report, Iain Chalmers et al discuss how decisions about which research to fund should be based on issues relevant to users of research. Next, John Ioannidis et al consider improvements in the appropriateness of research design, methods, and analysis. Rustam Al-Shahi Salman et al then turn to issues of efficient research regulation and management. Next, An-Wen Chan et al examine the role of fully accessible research information. Finally, Paul Glasziou et al discuss the importance of unbiased and usable research reports. These papers set out some of the most pressing issues, recommend how to increase value and reduce waste in biomedical research, and propose metrics for stakeholders to monitor the implementation of these recommendations.

* Sabine Kleinert, Richard Horton. How should medical science change? [http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(13)62329-6/fulltext]
* Malcolm R Macleod, Susan Michie, Ian Roberts, Ulrich Dirnagl, Iain Chalmers, John P A Ioannidis, Rustam Al-Shahi Salman, An-Wen Chan, Paul Glasziou. Biomedical research: increasing value, reducing waste. The Lancet, Volume 383, Issue 9912, Pages 101-104, 11 January 2014. {{doi|10.1016/S0140-6736(13)62329-6}} [http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(13)62329-6/fulltext]
* Iain Chalmers, Michael B Bracken, Ben Djulbegovic, Silvio Garattini, Jonathan Grant, A Metin Gülmezoglu, David W Howells, John P A Ioannidis, Sandy Oliver. How to increase value and reduce waste when research priorities are set. The Lancet, Volume 383, Issue 9912, Pages 156-165, 11 January 2014. {{doi|10.1016/S0140-6736(13)62229-1}} [http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(13)62229-1/fulltext]
* John P A Ioannidis, Sander Greenland, Mark A Hlatky, Muin J Khoury, Malcolm R Macleod, David Moher, Kenneth F Schulz, Robert Tibshirani. Increasing value and reducing waste in research design, conduct, and analysis. The Lancet, Volume 383, Issue 9912, Pages 166-175, 11 January 2014. {{doi|10.1016/S0140-6736(13)62227-8}} [http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(13)62227-8/fulltext]
* Rustam Al-Shahi Salman, Elaine Beller, Jonathan Kagan, Elina Hemminki, Robert S Phillips, Julian Savulescu, Malcolm Macleod, Janet Wisely, Iain Chalmers. Increasing value and reducing waste in biomedical research regulation and management. The Lancet, Volume 383, Issue 9912, Pages 176-185, 11 January 2014. {{doi|10.1016/S0140-6736(13)62297-7}} [http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(13)62297-7/fulltext]
* An-Wen Chan, Fujian Song, Andrew Vickers, Tom Jefferson, Kay Dickersin, Peter C Gøtzsche, Harlan M Krumholz, Davina Ghersi, H Bart van der Worp. Increasing value and reducing waste: addressing inaccessible research. The Lancet, Volume 383, Issue 9913, Pages 257-266, 18 January 2014. {{doi|10.1016/S0140-6736(13)62296-5}} [http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(13)62296-5/fulltext]
* Paul Glasziou, Douglas G Altman, Patrick Bossuyt, Isabelle Boutron, Mike Clarke, Steven Julious, Susan Michie, David Moher, Elizabeth Wager. Reducing waste from incomplete or unusable reports of biomedical research. The Lancet, Volume 383, Issue 9913, Pages 267-276, 18 January 2014. {{doi|10.1016/S0140-6736(13)62228-X}} [http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(13)62228-X/fulltext]

==See also==

* [http://opeer.org/ o'Peer website]
** [https://www.phas.ubc.ca/users/jesse-brewer Jesse Brewer, the founder of o'Peer]
* [http://blogs.biomedcentral.com/bmcblog/2014/04/11/are-journals-ready-to-abolish-peer-review-2/?utm_campaign=14_05_14_BMCUpdate_Newsletter Are journals ready to abolish peer review?] (a BMC event and blog)
* [http://blogs.biomedcentral.com/bmcblog/2014/04/08/peer-review-chipped-not-broken/ Peer review chipped, not broken]
* [[Peer review method]]
* [[Template:Review]]
* [[Quality assurance and quality control]]
* [[Quality evaluation criteria]]
* [[Template:Quality assessment]]
* [http://openwetware.org/wiki/Peer_Review_Simulation_Project#Project_Details Peer review protocol for OpenWetWare]

==References==

<references/>
[[Category:Open assessment]]
[[Category:Quality control]]
[[Category:Glossary term]]
[[Category:THL publications 2009]]
[[Category:THL publications 2010]]
