Quality assurance and quality control

From Testiwiki

Revision as of 07:23, 24 October 2008

<accesscontrol>members of projects</accesscontrol>

Quality control measures are needed to ensure that risk assessments reflect the current best scientific knowledge and state-of-the-art methods. Quality control affects both the process of making an assessment and the contents of the product. Assessments that fulfil the quality criteria in every part are called full open risk assessments. If an assessment fulfils the criteria only partly, it is called a draft open risk assessment.

Four different quality controls of the content

The content of an assessment is evaluated for each object (assessment, variable) separately.

Systematic literature reviews are performed for

  • the data sub-attribute of each variable
  • the causality sub-attribute of each variable
  • the definition attribute (the variable list) of an assessment [is this possible?]

A critical evaluation using open participatory discussion is performed for

  • the scope attribute of the assessment
  • the analyses sub-attribute of the assessment
  • the conclusion sub-attribute of the assessment
  • the scope attribute of each variable [how is this different from the definition attribute of an assessment?]

A technical quality check is performed for

  • the results sub-attribute of the assessment
  • the unit sub-attribute of each variable
  • the formula sub-attribute of each variable

An uncertainty analysis is performed for

  • the result attribute of each variable

Systematic literature review

A systematic literature review is performed according to the rules of Cochrane reviews.

Open participatory discussion

An open participatory discussion is performed according to pragma-dialectical argumentation theory [1].

A discussion is considered saturated when [possible suggestions for systematic criteria]

  • explicit comments have been received from all identified stakeholders or their representatives
  • more than 300 independent contributions have been received
  • the new contributions do not add substantive information to what already exists
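These criteria could in principle be checked mechanically. The sketch below is a hypothetical illustration in Python: the data structures are assumptions, and since the text does not say how the three criteria combine, each is treated here as sufficient on its own.

```python
def discussion_saturated(stakeholders, commenters,
                         n_contributions, novel_recent_contributions):
    """Check the suggested saturation criteria for an open discussion.

    stakeholders: set of all identified stakeholders (or representatives)
    commenters: set of those who have given explicit comments
    n_contributions: total number of independent contributions
    novel_recent_contributions: count of recent contributions that added
        substantive new information
    """
    all_stakeholders_heard = stakeholders <= commenters   # set inclusion
    enough_contributions = n_contributions > 300
    nothing_new = novel_recent_contributions == 0
    # Combination rule is an assumption: any one criterion suffices here.
    return all_stakeholders_heard or enough_contributions or nothing_new
```

For example, `discussion_saturated({"A", "B"}, {"A"}, 120, 3)` is False: stakeholder B has not commented, fewer than 300 contributions exist, and recent contributions still added new information.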

Technical quality check

A technical quality check is performed as a peer review by two experts experienced in the methodology used.

Uncertainty analysis

For a variable, the data, causality, and formula sub-attributes result in a probability distribution that in theory reflects the current understanding of the uncertainty about the variable. However, this is true only for variables that have gone through the whole quality control process. Uncertainty analyses need to answer the following question about the level of uncertainty:

  • Which of the following is true (about the absolute amount of data used)?
    • The data used is only a placeholder for illustration.
    • The data used is based on some understanding of the issue, and it is a reasonable guess.
    • The data used is based on a reasonable amount of information about the expected value, but inadequate data about uncertainties.
    • The data allows for a reasonable estimate of the full probability distribution.
    • The data is extensive, and it is unlikely that even large amounts of new data would change the current estimate substantially.

In addition, draft risk assessments should answer the following question:

  • Which of the following is true about the data used (in relation to the amount of existing data)?
    • No existing data was used.
    • Only some general knowledge was used.
    • A few easily available information sources were used.
    • A non-systematic literature review or an informal expert elicitation was used.
    • A systematic literature review or a formal expert elicitation was used.
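The idea that the data, causality, and formula sub-attributes of a variable combine into a probability distribution can be sketched with a simple Monte Carlo propagation. This is an illustrative example only: the input variables, their distributions, and the formula below are hypothetical, not part of the framework.

```python
import random
import statistics

def propagate_uncertainty(n_samples=10000, seed=1):
    """Propagate input uncertainty through a toy variable formula.

    Each input is sampled from a distribution expressing its uncertainty;
    the spread of the outputs then expresses the uncertainty of the result.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_samples):
        emission = rng.lognormvariate(0.0, 0.5)      # hypothetical, kg/day
        intake_fraction = rng.uniform(1e-6, 5e-6)    # hypothetical, unitless
        results.append(emission * intake_fraction)   # toy formula
    q = statistics.quantiles(results, n=40)          # 2.5% ... 97.5% cut points
    return statistics.mean(results), q[0], q[-1]     # mean and 95% interval
```

The returned interval summarises the uncertainty of the result; a variable that has passed the full quality control process would in theory yield a distribution reflecting the current best understanding.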

Development, evaluation, and application of environmental models

There is an EPA report on this topic [2]:

"This Guidance recommends best practices to help determine when a model, despite its uncertainties, can be appropriately used to inform a decision. Specifically, it recommends that model developers and users: (a) subject their model to credible, objective peer review; (b) assess the quality of the data they use; (c) corroborate their model by evaluating the degree to which it corresponds to the system being modeled; and (d) perform sensitivity and uncertainty analyses. Sensitivity analysis evaluates the effect of changes in input values or assumptions on a model's results. Uncertainty analysis investigates the effects of lack of knowledge and other potential sources of error in the model (e.g., the “uncertainty” associated with model parameter values) and when conducted in combination with sensitivity analysis allows a model user to be more informed about the confidence that can be placed in model results. A model’s quality to support a decision becomes known when information is available to assess these factors."

See also

  • Measuring Author Contributions to the Wikipedia (http://www.wikisym.org/ws2008/index.php/Measuring_Author_Contributions_to_the_Wikipedia)
  • A Method for Measuring Co-authorship Relationships in MediaWiki (http://www.wikisym.org/ws2008/index.php/A_Method_for_Measuring_Co-authorship_Relationships_in_MediaWiki)

References

  1. van Eemeren, F.H. and Grootendorst, R. (2004). A Systematic Theory of Argumentation: The Pragma-Dialectical Approach. Cambridge University Press.
  2. The Council for Regulatory Environmental Modeling: Draft Guidance on the Development, Evaluation, and Application of Regulatory Environmental Models. Models Guidance Draft - November 2003. http://www.epa.gov/crem/library/CREM%20Guidance%20Draft%2012_03.pdf