
dc.contributor.author: Elander, James
dc.contributor.author: Hardman, David
dc.date.accessioned: 2020-09-16T15:25:34Z
dc.date.available: 2020-09-16T15:25:34Z
dc.date.issued: 2002
dc.identifier.citation: Elander, J. & Hardman, D. (2002). 'An application of judgement analysis to examination marking in psychology'. British Journal of Psychology, 93, pp. 303-328.
dc.identifier.issn: 0007-1269
dc.identifier.doi: 10.1348/000712602760146233
dc.identifier.uri: http://hdl.handle.net/10545/625168
dc.description.abstract: Statistical combinations of specific measures have been shown to be superior to expert judgement in several fields. In this study judgement analysis was applied to examination marking to investigate factors that influenced marks awarded and contributed to differences between first and second markers. Seven markers in psychology rated 551 examination answers on seven 'aspects' for which specific assessment criteria had been developed to support good practice in assessment. The aspects were addressing the question, covering the area, understanding, evaluation, development of argument, structure and organisation, and clarity. Principal components analysis indicated one major factor and no more than two minor factors underlying the seven aspects. Aspect ratings were used to predict overall marks, using multiple regression to 'capture' the marking policies of individual markers. These varied from marker to marker in terms of the numbers of aspect ratings that made independent contributions to the prediction of overall marks and the extent to which aspect ratings explained the variance in overall marks. The number of independently predictive aspect ratings, and the amount of variance in overall marks explained by aspect ratings, were consistently higher for first markers (question setters) than for second markers. Co-markers' overall marks were then used as an external criterion to test the extent to which a simple model consisting of the sum of the aspect ratings improved on overall marks in the prediction of co-markers' marks. The model significantly increased the variance in co-markers' marks accounted for, but only for second markers, who had not taught the material and had not set the question. Further research is needed to develop the criteria and especially to establish the reliability and validity of specific aspects of assessment. The present results support the view that, for second markers at least, combined measures of specific aspects of examination answers may help to improve the reliability of marking.
dc.description.sponsorship: N/A
dc.language.iso: en
dc.publisher: Wiley
dc.relation.url: https://onlinelibrary.wiley.com/doi/abs/10.1348/000712602760146233
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://doi.wiley.com/10.1002/tdm_license_1.1
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Examination
dc.subject: Marking
dc.subject: Judgement
dc.title: An application of judgement analysis to examination marking in psychology
dc.type: Article
dc.contributor.department: University of Derby
dc.contributor.department: London Guildhall University
dc.identifier.journal: British Journal of Psychology
dc.source.journaltitle: British Journal of Psychology
dc.source.volume: 93
dc.source.issue: 3
dc.source.beginpage: 303
dc.source.endpage: 328
dcterms.dateAccepted: 2002
refterms.dateFOA: 2020-09-16T15:25:35Z
dc.author.detail: 779740


Files in this item

Name: Elander & Hardman 2002 Judgement ...
Size: 469.5Kb
Format: PDF
Description: Accepted manuscript
