Cogprints

Sources of Measurement Error in an ECG Examination: Implications for Performance-Based Assessments

Solomon, David J. and Ferenchick, Gary (2003) Sources of Measurement Error in an ECG Examination: Implications for Performance-Based Assessments. [Preprint]

Full text available as:

PDF (37Kb)

Abstract

Objective: To assess the sources of measurement error in an electrocardiogram (ECG) interpretation examination given in a third-year internal medicine clerkship. Design: Three successive generalizability studies were conducted: 1) multiple faculty rated student responses to a previously administered exam; 2) the rating criteria were revised and study 1 was repeated; 3) the examination was converted to an extended-matching format that included multiple cases with the same underlying cardiac problem. Results: The discrepancies among raters (main effects and interactions) were dwarfed by the error associated with case specificity. The largest source of differences among raters lay in rating student errors of commission rather than errors of omission. Revising the rating criteria may have increased inter-rater reliability slightly; however, because of case specificity, it had little impact on the overall reliability of the exam. The third study indicated that most of the variability in student performance across cases occurred among cases of the same type of cardiac problem rather than between different types of cardiac problems. Conclusions: Case specificity was the overwhelming source of measurement error. The variation among cases came mainly from discrepancies in performance between examples of the same cardiac problem rather than from differences in performance across different types of cardiac problems. This suggests that a large number of cases must be included even when the goal is to assess performance on only a few types of cardiac problems.
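The generalizability analyses summarized above partition score variance into components for persons, cases, raters, and their interactions. As a rough illustration of the core computation, the sketch below handles only the simplest fully crossed persons × cases design with one score per cell; the function name, the expected-mean-square formulas chosen, and the sample data are illustrative assumptions, not the authors' actual analysis or software.

```python
# Minimal G-study sketch for a fully crossed persons x cases design
# with one score per cell. Illustrative only; the paper's analyses
# also modeled rater facets and used dedicated G-theory software.

def g_study(scores):
    """scores: list of rows, one row per person; columns are cases.
    Returns estimated variance components and the relative
    generalizability coefficient for the observed number of cases."""
    n_p = len(scores)           # number of persons (students)
    n_c = len(scores[0])        # number of cases (ECGs)
    grand = sum(sum(row) for row in scores) / (n_p * n_c)
    p_means = [sum(row) / n_c for row in scores]
    c_means = [sum(row[j] for row in scores) / n_p for j in range(n_c)]

    # Sums of squares for the two-way ANOVA decomposition
    ss_p = n_c * sum((m - grand) ** 2 for m in p_means)
    ss_c = n_p * sum((m - grand) ** 2 for m in c_means)
    ss_tot = sum((scores[i][j] - grand) ** 2
                 for i in range(n_p) for j in range(n_c))
    ss_res = ss_tot - ss_p - ss_c   # person x case interaction + error

    ms_p = ss_p / (n_p - 1)
    ms_c = ss_c / (n_c - 1)
    ms_res = ss_res / ((n_p - 1) * (n_c - 1))

    # Expected-mean-square solutions, floored at zero
    var_res = ms_res
    var_p = max((ms_p - ms_res) / n_c, 0.0)
    var_c = max((ms_c - ms_res) / n_p, 0.0)

    # Relative G coefficient: person variance relative to person
    # variance plus interaction/error averaged over the n_c cases.
    g = var_p / (var_p + var_res / n_c)
    return {"person": var_p, "case": var_c, "residual": var_res, "g": g}

# Hypothetical ratings: 3 students x 4 ECG cases
ratings = [[3, 1, 2, 2],
           [4, 2, 3, 3],
           [2, 1, 1, 2]]
result = g_study(ratings)
```

A large person × case (residual) component relative to the person component, as reported in the paper, drives the G coefficient down and implies that many cases are needed for a reliable score.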

Item Type: Preprint
Keywords: Electrocardiogram, educational, measurement, generalizability, performance-based assessment, reliability
Subjects: Psychology > Behavioral Analysis
ID Code: 3083
Deposited By: Solomon, David
Deposited On: 25 Jul 2003
Last Modified: 11 Mar 2011 08:55

