Evaluating assessment performance
Revision as of 06:48, 18 February 2009
This page is a lecture.
The page identifier is Op_en2082
Evaluating assessment performance is a lecture about the factors that constitute the performance, the goodness, of assessments and how they can be evaluated within a single general framework.
This page was converted from an encyclopedia article to a lecture as there are other descriptive pages already existing about the same topic. The old content was archived and can be found here.
Scope
Purpose: To summarize what assessments are about overall, which factors constitute the overall performance of an assessment, how these factors are interrelated, and how the performance, the goodness, of an assessment can be evaluated.
Intended audience: Researchers (especially at doctoral student level) in any field of science (mainly natural scientists, not social scientists).
Duration: 1 hour 15 minutes
Definition
To understand this lecture, it is recommended to also acquaint oneself with the following lectures:
- Open assessment in research
- Assessments - science-based decision support
- Variables - evolving interpretations of reality
- Science necessitates collaboration
Result
- Introduction through analogy: what makes a mobile phone good?
  - chain from production to use
  - goodness from whose perspective, producer or user? Can the two be fitted into one framework?
  - phone functionalities (quality of content)
  - user interface, appearance design, use context, packaging, logistics, marketing, sales (applicability)
  - mass production/customization: components, code, assembly (efficiency)
- Assessments
  - serve two masters: truth (science) and practical need (societal decision making, policy)
  - must meet the needs of their use
  - must strive for truth
  - both requirements must be met, which is not easy, but possible
  - a business of creating understanding about reality
    - asking the right questions, providing good answers, and getting the questions and answers where they are needed
    - getting the questions right (according to need) is primary; getting the answers right matters only conditional on that
- Contemporary conventions of addressing performance:
  - quality assurance/control: process approach
  - uncertainty assessment: product approach
- Performance in the context of assessments as science-based decision support: properties of a good assessment
  - takes the production point of view
  - quality of content
    - informativeness, calibration, relevance
  - applicability
    - availability, usability, acceptability
  - efficiency
  - different properties have different points of reference and evaluation criteria
  - a means of managing design and execution, or of evaluating past work
  - not an orthogonal set: applicability is conditional on quality of content, efficiency on both quality of content and applicability
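The quality-of-content criteria above (informativeness and calibration) can be given a simple quantitative reading for probabilistic estimates: calibration measures how often stated uncertainty intervals actually capture the realized values, while informativeness rewards narrow intervals. The two pull in opposite directions, which is the tension the lecture outline points at. The following sketch is illustrative only; the numeric data and the specific width-based informativeness proxy are assumptions, not part of the lecture material.

```python
# Illustrative sketch (assumed data): an assessor states 90% uncertainty
# intervals for several quantities whose true values are later observed.

intervals = [(2.0, 8.0), (10.0, 30.0), (0.5, 1.5), (100.0, 140.0)]  # (low, high)
truths = [5.0, 25.0, 1.9, 120.0]  # realized values

# Calibration: fraction of realized values falling inside the stated intervals.
hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
calibration = hits / len(truths)

# Informativeness (crude proxy): mean relative width of the intervals --
# narrower intervals carry more information but risk poorer calibration.
mean_rel_width = sum((high - low) / t
                     for (low, high), t in zip(intervals, truths)) / len(truths)

print(f"calibration: {calibration:.2f}")        # 3 of 4 intervals contain the truth
print(f"mean relative width: {mean_rel_width:.2f}")
```

A well-performing assessment would keep calibration close to the stated confidence level (here 0.90) while driving the interval widths down; trivially wide intervals are perfectly calibrated but uninformative.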