Objective: The OSCE has been shown to be a reliable method of assessing both clinical competency and higher-order clinical reasoning. The aim of this study was to assess the correlation between examiner and standard-setter marking in the clinical-year OSCE.
Methods: The School examined 110 graduate-entry medical students in Year 3 (2010) using an OSCE conducted across 2 sites. There were 8 stations examining history taking and clinical examination skills on simulated patients, real patients or plastic models, with a total examination time of 100 minutes. The content of each station had been standardised by a multi-disciplinary panel, and a detailed criterion-based marking scheme assessing various clinical competencies was employed. Participating examiners and standard setters were trained before the examination. An examiner and a standard setter were assigned to each station; the two markers did not confer or discuss marks during the process. The Intra-Class Correlation (ICC) coefficient between the examiner and standard setter and the overall reliability score (Cronbach’s alpha) were calculated using the SPSS® statistical package.
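The two reliability statistics named above can be sketched in Python for readers without SPSS. This is an illustrative sketch only: the function names and toy data are invented here, and the two-way absolute-agreement ICC(2,1) form is assumed, as the abstract does not specify which ICC model the study used.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).

    ratings: array of shape (n_subjects, k_raters), e.g. one column per
    marker (examiner, standard setter) for one station.
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    # ANOVA sums of squares for subjects (rows), raters (columns), error.
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.

    scores: array of shape (n_students, n_stations).
    """
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()     # sum of station variances
    total_var = X.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)
```

With perfectly agreeing markers both statistics reach 1.0; in practice an alpha around 0.7, as reported below, is conventionally taken as acceptable reliability for this kind of assessment.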
Results: The ICC correlations between the examiner and standard setter were significant (p<0.001) (Table 1). There was a low correlation at Station 7 at the Melbourne site. The overall Cronbach’s alpha reliability score was 0.7.
Conclusions: The significant correlations between examiners and standard setters suggest that pre-examination training and standardisation reduce inter-rater variability and improve overall reliability. The analysis could also detect potential marking discrepancies (e.g. Station 7) so that appropriate score adjustments could be made. The standard setter also serves as a reserve/backup examiner.
Wan, S. H., & Canalese, R. (2012). Using standard setters to improve reliability for the Objective Structured Clinical Examination (OSCE) in clinical year medical students. Paper presented at the 9th Asia Pacific Medical Education Conference (APMEC). National University of Singapore, 11-15 January, 2012.