Abstract

Background

It is vital that undergraduate medical students demonstrate competency in clinical skills upon graduation. To state confidently that students are competent in these skills, medical schools must assess them using tools that provide an accurate measure of their abilities. Like many medical schools, the School of Medicine at the University of Notre Dame Australia uses the Objective Structured Clinical Examination (OSCE). Research findings strongly support the OSCE as a reliable and valid measure of students’ clinical competence. However, traditional item analyses of OSCE papers were not giving the School enough insight into how individual stations were performing and whether they were successfully measuring stated learning objectives. As such, item response theory, specifically the Rasch measurement model, was adopted; it has provided more in-depth analyses, the findings from which have broad-reaching implications for improving assessment practices within the School.

Objective

This paper discusses the findings from the first phase of a three-part study whose overall objective is to use results from psychometric analyses based on the Rasch measurement model to inform and improve assessment practices. Phase 1 involves running Rasch analyses on an end-of-year OSCE paper to determine, among other psychometric properties, its internal construct validity. Phase 2 involves using the findings from these analyses to inform station design for future papers, along with the professional development of staff and curriculum designers. Phase 3 will involve further testing to determine whether the interventions in assessment practices have been successful, that is, whether the internal construct validity of the OSCE has improved.

Methods

At the end of 2008, students in the final year of the four-year course sat a 10-station OSCE paper. Each station was 15 minutes in duration, and the stations covered a range of clinical skills and disciplines. Using RUMM2020 (Andrich, Sheridan & Luo, 2004), a Rasch analysis was conducted in which each OSCE station was treated as an item. The Rasch unidimensional measurement model was employed for item and test analysis, serving as a quality monitoring and quality assurance procedure. Internal construct validity was assessed in terms of unidimensionality. Model fit was investigated concurrently for the test as a whole and at the individual item level.
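For reference, the core of such an analysis is the dichotomous Rasch model, which expresses the probability that person n succeeds on item i as a function of the difference between the person's ability and the item's difficulty. The notation below is a standard textbook statement of the model rather than a reproduction of the study's exact parameterisation; polytomously scored OSCE stations are handled by RUMM2020's polytomous extension of this form.

\[
P(X_{ni} = 1) = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}}
\]

where \(\beta_n\) is the location (ability) of person \(n\) and \(\delta_i\) is the location (difficulty) of item \(i\), both expressed in logits on a common scale.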

Results

There was strong evidence of internal construct validity in terms of unidimensionality for the OSCE paper. For the OSCE examination as a whole, there was an excellent fit between the empirical data and the Rasch unidimensional measurement model (χ² = 10.36, df = 20, p = 0.96). Each individual station also displayed excellent fit to the model. Task difficulty ranged from −0.716 to +0.876 logits across the stations. The overall person separation index, however, was moderate (PSI = 0.67). Traditional test statistics, in the form of the point-biserial index, provided further evidence of the quality of the OSCE examination.
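To illustrate how a person separation index of this kind is obtained, the sketch below applies the standard PSI formula: the proportion of observed variance in person location estimates that is not attributable to measurement error. The arrays theta and se are hypothetical person estimates and standard errors (for example, as exported from a Rasch analysis package); this is a minimal sketch under those assumptions, not the RUMM2020 implementation.

import numpy as np

def person_separation_index(theta, se):
    """Person Separation Index: the share of observed variance in
    person locations (logits) that is true variance rather than
    measurement error. Values near 1 indicate good separation."""
    theta = np.asarray(theta, dtype=float)
    se = np.asarray(se, dtype=float)
    observed_var = np.var(theta, ddof=1)  # variance of person estimates
    error_var = np.mean(se ** 2)          # mean squared standard error
    return (observed_var - error_var) / observed_var

# Hypothetical inputs: a modest spread of person abilities with
# sizeable standard errors yields a moderate PSI (about 0.66 here),
# comparable in magnitude to the value reported above.
theta = [-1.2, -0.5, 0.0, 0.3, 0.8, 1.1]
se = [0.50, 0.48, 0.47, 0.48, 0.50, 0.52]
print(round(person_separation_index(theta, se), 2))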

Conclusion

The application of the Rasch unidimensional measurement model to the validation of the OSCE assessment has proven successful in this phase of the study. The findings show that Rasch analysis provides in-depth insight into the psychometric properties of an OSCE, which may be of interest to other medical schools. Suggestions and implications for the continued use and refinement of OSCE testing are also discussed.

Keywords

Medical Education, Assessment, OSCE, Psychometrics

Comments

Further information about the 7th Asia Pacific Medical Education Conference may be accessed online.
