Medical education assessment

With a team of colleagues, we run an ongoing, internationally recognised programme of research centred on improving the quality of performance and knowledge assessments. We have published widely in this area, for example on appropriate metrics for measuring 'quality' in performance assessments, and on how to set and maintain standards. We are currently researching examiner effects on standards under borderline regression, and student attitudes to sequential testing.
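For readers unfamiliar with it, the borderline regression method mentioned above sets a station's pass mark by regressing candidates' checklist scores on examiners' global grades and reading off the predicted score at the 'borderline' grade. The sketch below is a minimal illustration of that idea only; the grade scale, data, and function name are invented for the example and are not taken from the group's studies.

```python
# Minimal sketch of borderline regression standard setting for one OSCE
# station. All data and the grade scale here are illustrative.

def borderline_regression_cut_score(global_grades, checklist_scores,
                                    borderline_grade=2):
    """Ordinary least-squares fit of checklist score on global grade;
    the cut score is the predicted score at the borderline grade."""
    n = len(global_grades)
    mean_g = sum(global_grades) / n
    mean_s = sum(checklist_scores) / n
    # Slope and intercept of the simple linear regression s = a + b*g.
    cov = sum((g - mean_g) * (s - mean_s)
              for g, s in zip(global_grades, checklist_scores))
    var = sum((g - mean_g) ** 2 for g in global_grades)
    slope = cov / var
    intercept = mean_s - slope * mean_g
    return intercept + slope * borderline_grade

# Illustrative station: global grades on a 1 (clear fail) to 5 (excellent)
# scale, with 2 = borderline; checklist scores out of 25.
grades = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
scores = [8, 11, 12, 14, 15, 16, 18, 19, 22, 23]
cut = borderline_regression_cut_score(grades, scores, borderline_grade=2)
print(round(cut, 1))  # predicted checklist score at the borderline grade
```

Because the whole cohort contributes to the fit, the method does not depend on identifying a discrete 'borderline group' of candidates, a point the group's publications examine critically.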

Impact

Members and former members of the team hold a wide range of external roles, for example with the General Medical Council. Our approaches to assessment have been influential across the UK and beyond, and we advise a number of medical schools around the world on their assessment practices. We also run regular workshops on assessment at international conferences.

Publications and outputs

  • Fuller, R., Homer, M.S., Pell, G. and Hallam, J. 2017. Managing extremes of assessor judgement within the OSCE. Medical Teacher. 39(1), pp.58–66.
  • Fuller, R., Pell, G. and Homer, M.S. 2013. Longitudinal interrelationships of OSCE station level analyses, quality improvement and overall reliability. Medical Teacher. 35(6), pp.515–517.
  • Homer, M.S., Fuller, R., Hallam, J. and Pell, G. n.d. Setting defensible standards in small cohort OSCEs: Understanding better when borderline regression can ‘work’. Medical Teacher.
  • Homer, M.S. and Darling, J.C. 2016. Setting standards in knowledge assessments: comparing Ebel and Cohen via Rasch. Medical Teacher. 38(12), pp.1267–1277.
  • Homer, M.S., Fuller, R. and Pell, G. 2018. The benefits of sequential testing: Improved diagnostic accuracy and better outcomes for failing students. Medical Teacher. 40(3), pp.275–284.
  • Homer, M.S. and Pell, G. 2009. The impact of the inclusion of simulated patient ratings on the reliability of OSCE assessments under the borderline regression method. Medical Teacher. 31(5), pp.420–425.
  • Homer, M.S., Pell, G. and Fuller, R. 2017. Problematizing the concept of the ‘borderline’ group in performance assessments. Medical Teacher. 39(5), pp.469–475.
  • Homer, M.S., Pell, G., Fuller, R. and Patterson, J. 2016. Quantifying error in OSCE standard setting for varying cohort sizes: a resampling approach to measuring assessment quality. Medical Teacher. 38(2), pp.181–188.
  • Homer, M.S., Setna, Z., Jha, V., Higham, J., Roberts, T.E. and Boursicot, K. 2013. Estimating and comparing the reliability of a suite of workplace-based assessments: an obstetrics and gynaecology setting. Medical Teacher. 35(8), pp.684–691.
  • Pell, G., Fuller, R., Homer, M.S. and Roberts, T. 2010. How to measure the quality of the OSCE: A review of metrics. Medical Teacher. 32(10), pp.802–811.
  • Pell, G., Fuller, R., Homer, M.S. and Roberts, T. 2012. Is short-term remediation after OSCE failure sustained? A retrospective analysis of the longitudinal attainment of underperforming students in OSCE assessments. Medical Teacher. 34(2), pp.146–150.
  • Pell, G., Homer, M.S. and Fuller, R. 2015. Investigating disparity between global grades and checklist scores in OSCEs. Medical Teacher.

Project website

https://medicinehealth.leeds.ac.uk/dir-record/research-groups/924/quality-and-innovation-in-assessment