BRIEF COMMUNICATION
Year: 2007 | Volume: 20 | Issue: 1 | Page: 6

Incorporating Patients' Assessments into Objective Structured Clinical Examinations


S Kilminster, T Roberts, P Morris
University of Leeds, Leeds, United Kingdom

Date of Submission: 24-Feb-2007
Date of Web Publication: 18-Apr-2007

Correspondence Address:
S Kilminster
Medical Education Unit, Level 7 Worsley Building, Faculty of Medicine and Healthcare, University of Leeds, Leeds LS2 9NL
United Kingdom

Source of Support: None. Conflict of Interest: None.


PMID: 17647174

  Abstract 

Introduction: There is a need to improve the validity of performance assessments and to develop better ways of identifying and assessing what students actually do in practice. Incorporating patients' assessments into Objective Structured Clinical Examinations (OSCEs) has the potential both to offer an expert assessment of aspects of the doctor-patient interaction and to improve validity. We therefore trialled simulated patient (SP) assessments in history-taking, explaining and communication skills stations in third-year OSCEs.
Methods: SPs made two separate ratings of each student they saw in the OSCE. Examiners graded students using checklists and an overall 'borderline' grade. SPs' and examiners' marks were subjected to statistical analysis.
Results: The reliability of the SP ratings was 0.77, the reliability of the SP borderline grades was 0.68, and the reliability of the ratings and grades combined was 0.86. SPs reached consensus on the characteristics of high- and low-performing students.
Conclusions: SP assessments are reliable, and statistical analysis demonstrated that SPs and clinicians assess different aspects of students' performance. We conclude that, given our approach to working with SPs, their assessments increased the validity of the examination.

Keywords: patient assessments; OSCE; simulated patients; patient involvement


How to cite this article:
Kilminster S, Roberts T, Morris P. Incorporating Patients' Assessments into Objective Structured Clinical Examinations. Educ Health 2007;20:6


Introduction



Patients are now much more actively involved in both healthcare delivery and education. This is partly a result of changing perceptions and understandings about patient care, perhaps particularly in relation to communication and decision-making, and partly due to professional and government imperatives (GMC, 2002; DoH, 2001). There is a relatively small, albeit growing, body of literature about the different ways in which patients are involved in medical education (Towle & Weston, 2006; Howe & Anderson, 2003; Morris, 2006; Spencer et al., 2000).



Patients can be considered experts in aspects of the doctor-patient interaction, particularly communication and professional manner, in much the same way as clinicians can be considered experts in the medical aspects (Norcini & Boulet, 2003). It follows that patients may well be more accurate than clinicians in assessing those aspects in which they are expert (Thistlethwaite, 2004). There is an imperative to improve the validity of examinations and to develop better ways of identifying and assessing what candidates actually do in practice (van der Vleuten, 2000). Incorporating patients' assessments into Objective Structured Clinical Examinations (OSCEs) has the potential both to offer an expert assessment of aspects of the doctor-patient interaction and to improve validity. Therefore, as part of our efforts to improve the quality of OSCEs (Kilminster & Roberts, 2004), we trialled simulated patient (SP) assessments in history-taking, explaining and communication skills stations in third-year OSCEs.



Patients are involved in our curriculum as simulated patients (SPs), expert patients or 'real' patients. The underlying principle of our work with SPs is that, although they have a standardised 'role description', they do not use a script but respond 'naturally' to the student's approach and questioning. This enhances the validity of the interaction between student and SP, because real-life clinical encounters vary in the same way; for example, the same patient will give different information to different doctors. However, it also means that OSCE encounters are not 'standardised', and so there were some concerns about the reliability of SP assessments.



Methods



SP Training: Thirty of 50 SPs attended a three-hour training session with two expert SP trainers and two teaching staff as facilitators; the remaining 20 SPs were unable to attend because of other commitments. First, SPs worked in small groups, discussing the interpretation and portrayal of the OSCE roles. Next, the assessment process was explained. Finally, SPs identified various aspects of the behaviour and attitudes of high- and low-scoring students.



SP assessments: During the OSCE, all 50 SPs were asked to grade students on two independent statements:

  1. “I felt the student showed respect for me and responded to my concerns and questions in a professional manner.” (SP rating: 1 = low to 5 = high);

  2. “In your opinion was the student’s performance clear fail/borderline/clear pass/very good pass/excellent pass?” (borderline grade).


Examiners used separate, more detailed marking sheets that also included the borderline grade. Examiners and SPs were asked to assess students independently of one another.
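The analyses below treat both the rating and the borderline grade as numbers. As a minimal sketch, one plausible numeric coding might look like the following; the coding is assumed for illustration, since the paper does not state the one it used.

```python
# Hypothetical numeric coding of the five-point borderline grade; the paper
# treats grades as linear scores but does not state the exact coding used.
BORDERLINE_GRADE = {
    "clear fail": 1,
    "borderline": 2,
    "clear pass": 3,
    "very good pass": 4,
    "excellent pass": 5,
}
```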



Analysis: Fifty SPs were involved in the OSCE, and there were approximately 590 SP/student interactions. Statistical analyses were performed in SPSS, on the assumption that SP and assessor grades could be treated as linear (interval) scales. Reliability was calculated using Cronbach's alpha. Examiner and SP grades were compared using paired-sample t-tests to minimise the student effect (each student was graded by two different assessors at each station, giving natural pairs). Linear regression analysis was used to investigate the relationship between SP and assessor grades: if the two groups were assessing the same thing, there would be a close relationship between the two sets of scores.
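The authors report using SPSS; purely as an illustration, the three analyses named above could be reproduced in Python along the following lines. All data, dimensions and variable names here are invented, not taken from the study.

```python
# Rough re-creation of the analyses described in the paper (the authors
# used SPSS). Data below are random toy values for illustration only.
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha; rows are students, columns are stations (items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(42)
n_students, n_stations = 100, 6

# Toy per-station scores, coded 1-5 and treated as linear, as in the paper.
# (Real data would be correlated across stations; random data are not.)
sp = pd.DataFrame(rng.integers(1, 6, size=(n_students, n_stations)))
examiner = pd.DataFrame(rng.integers(1, 6, size=(n_students, n_stations)))

print(f"alpha, SP ratings: {cronbach_alpha(sp):.2f}")

# Paired-sample t-test: the SP and examiner grades for the same
# student at the same station form a pair, minimising the student effect.
t_stat, p_value = stats.ttest_rel(sp[0], examiner[0])
print(f"station 0: t = {t_stat:.2f}, p = {p_value:.3f}")

# Linear regression: if SPs and examiners assessed the same thing, the
# examiner grade would predict the SP grade well (high r-squared).
fit = stats.linregress(examiner[0], sp[0])
print(f"station 0: r-squared = {fit.rvalue ** 2:.3f}")
```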



Results



Reliability: The reliability of the SP ratings was 0.77, and the reliability of the SPs' borderline grades was 0.68. The reliability of the ratings and grades combined was 0.86. A Cronbach's alpha of 0.7 is considered satisfactory for this type of examination.
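For reference, the statistic used here is the standard Cronbach's alpha. For k items (here, stations), with item variances and total-score variance computed over students:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_X^{2}}\right)
\]

where \(\sigma_i^{2}\) is the variance of item \(i\) and \(\sigma_X^{2}\) is the variance of the total score. Alpha rises as items covary more strongly relative to their individual variances, which helps explain why combining the ratings and the grades yields the higher value of 0.86.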



Borderline grades: We compared the borderline grades given at each station by the SPs and the examiners. Linear regression analysis showed that the examiners' grade had little predictive value for the SPs' grade (r-squared values ranged from 0.010 to 0.026; that is, the examiner grade explained less than 3% of the variance in the SP grade). One factor limiting the relationship between SP and examiner grades was the unwillingness of SPs to use the full range of options on the borderline grade scale.



SP assessment training: SPs identified various aspects of the behaviour and attitudes of students who would score high and low marks. These are summarised in Figure 1; each SP had a copy of this table during the OSCEs.







Figure 1: Some indicators for high- and low-scoring students.



There was general agreement that a high scoring student would be one whom the SP would choose to see again, someone they would seek out if they needed another consultation. A low scoring student would be one whom they would not want to see again and a middle scoring student would be one whom they would be prepared to see again.



Discussion



SP assessments showed high reliability, and they offer information additional to the examiners' assessments: if both groups were assessing the same thing, the examiners' grade would have had higher predictive value for the SPs' grade. Consequently, in addition to attaining the pass mark and passing the requisite number of history and examination stations, students should be expected to pass a specified number of SP assessments.



These findings are important because our approach to working with SPs means that their responses are not standardised. We consider that our approach to working with SPs enhances validity, and this study shows that it also yields good reliability. Good assessment practice suggests that only trained SPs should be used in OSCE examinations, and we are conducting further work to ascertain the effects of SP training.



The inclusion of SP markers in examinations complements current thinking about professional re-accreditation and 360-degree feedback (Schuwirth & van der Vleuten, 2006), in which all aspects of a physician's performance are taken into account. SP assessments offer a way to triangulate information about each student's performance and could be used to increase feedback to students. We are conducting further work to quantify the effect of incorporating SP assessments on the overall reliability of OSCE marks. The next step is to investigate whether it is possible to use the assessments of real patients.



References



Department of Health (2001). Involving Patients and the Public in Healthcare: A discussion document. London: Department of Health.



General Medical Council (2002). Tomorrow’s doctors. London: GMC.



Howe, A. & Anderson, J. (2003). Involving patients in medical education. British Medical Journal, 327, 326-328.



Kilminster, S.M. & Roberts, T.E. (2004). Standard setting for OSCEs: trial of borderline approach. Advances in Health Sciences Education, 9(3), 201-209.



Morris, P. (2006). The patient's voice in doctors' learning. In J. Thistlethwaite & P. Morris (Eds.), The Patient Doctor Consultation in Primary Care: Theory and Practice. London: Royal College of General Practitioners.



Norcini, J. & Boulet, J. (2003). Methodological issues in the use of standardised patients for assessment. Teaching and Learning in Medicine, 15, 293-297.



Schuwirth, L.W.T. & van der Vleuten, C.P.M. (2006). A plea for new psychometric models in assessment. Medical Education, 40, 296-300.



Spencer, J., Blackmore, D., Heard, S., McCrorie, P., McHaffie, D., Scherpbier, A., Gupta, T.S., Singh, K. & Southgate, L. (2000). Patient-oriented learning: a review of the role of the patient in the education of medical students. Medical Education, 34, 851-857.



Thistlethwaite, J. (2004). Simulated patient versus clinician marking of doctors’ performance: which is more accurate? Medical Education, 38, 456.



Towle, A. & Weston, W. (2006). Patient's voice in health professional education. Patient Education and Counseling, 63, 1-2.



van der Vleuten, C. (2000). Validity of final examinations in undergraduate medical training. British Medical Journal, 321, 1217-1219.




 
