Education for Health | Year: 2019 | Volume: 32 | Issue: 2 | Page: 95-98
Developing an objective structured clinical examination in comprehensive geriatric assessment – A pilot study
Michael Vassallo1, Joseph Grey2, Anthony Hemsley3, Liliana Chris4, Stuart G Parker5
1 Department of Older Person Medicine, Royal Bournemouth Hospital, Bournemouth, Dorset, UK
2 Department of Geriatric Medicine, Cardiff and Vale UHB, University Hospital Wales, Cardiff, UK
3 Department of Elderly Care, Royal Devon and Exeter NHS Foundation Trust, Exeter, Devon, UK
4 Department of Research, Royal College of Physicians, London, UK
5 Institute for Ageing and Health, Newcastle University, Campus for Ageing and Vitality, Newcastle upon Tyne, UK
Date of Web Publication: 18-Nov-2019
Correspondence Address: Royal Bournemouth Hospital, Castle Lane East, Bournemouth, BH7 7DW, UK
Source of Support: None, Conflict of Interest: None
Background: Acquiring medical competencies alone does not necessarily lead to the delivery of quality clinical care. Many UK training programs will soon be based on curricula of entrustable professional capabilities (EPCs): tasks carried out in practice that require proficiency in several competencies for quality practice. Assessments to evaluate EPCs for independent practice are needed. Comprehensive geriatric assessment (CGA) is an EPC in geriatric medicine. We describe the development of an assessment of CGA as an example of examining EPCs. Methods: A CGA station was introduced in the Diploma in Geriatric Medicine clinical examination. Candidates rotate through four stations: three single competency-based stations (history, communication/ethics, and physical examination) and an EPC-based station in CGA. Results: One hundred and seventy-eight candidates (female: 96 [53.9%]) sat the examination. There was a weak but significantly positive correlation between the score at the CGA station and the total score in the other stations (r = 0.46; P < 0.001). Most candidates passing the station passed the examination. Correlations with the other individual stations were similarly weak but significant (Station 1: r = 0.38; P < 0.001; Station 3: r = 0.28; P < 0.001; Station 4: r = 0.37; P < 0.001). There was 61.4% agreement between examiners on whether a candidate passed or failed (kappa: 0.61; P < 0.001). Agreement was higher for the other stations: Station 1 (kappa: 0.85; P < 0.001), Station 3 (kappa: 0.72; P < 0.001), and Station 4 (kappa: 0.85; P < 0.001). Discussion: Performance on the station correlated positively with overall performance, suggesting that it has discriminatory value in differentiating candidates of varying ability, with the more able candidates passing the examination.
Keywords: Comprehensive geriatric assessment, development, entrustable professional capabilities, objective structured clinical examination, summative assessment
|How to cite this article:|
Vassallo M, Grey J, Hemsley A, Chris L, Parker SG. Developing an objective structured clinical examination in comprehensive geriatric assessment – A pilot study. Educ Health 2019;32:95-8
Background
Acquiring competencies alone does not necessarily lead to the delivery of quality practice. This led to the concept of entrustable professional capabilities (EPCs), aimed at linking competency acquisition to clinical practice. EPCs describe tasks required in the specialty that a practitioner would be expected to carry out once training is completed. They invariably require proficiency in several competencies. Many UK training programs will soon be based on curricula of EPCs, and assessments to evaluate such EPCs are needed.
Comprehensive geriatric assessment (CGA) is an EPC in geriatric medicine. It is a multidimensional, multidisciplinary process that identifies medical, social, and functional needs in order to develop an appropriate integrated, coordinated care plan. It has become an established part of practice as its benefits are increasingly recognized for several health outcomes, such as death, institutionalization, and disability. It is, therefore, important to be able to examine the ability of clinicians in the specialty to lead on CGA. Current assessments are work-based, such as case-based discussions, and often happen on an unstandardized, individual basis with little calibration. The Diploma in Geriatric Medicine (DGM) examination provides a summative assessment of proficiency in geriatric medicine. It comprises a written and a clinical examination, and candidates need to pass both. The clinical part provides the ideal setting to develop a CGA assessment station.
The aim was to examine whether candidates have mastered the translation of competency domains into the ability to conduct a CGA. The examination had four stations, including the CGA station. The stations examining history taking, communication, and clinical examination assess competencies that are fundamental to carrying out CGA but which on their own do not test CGA itself.
Methods
The DGM clinical examination comprises four stations: history taking, CGA, communication/ethics, and clinical examination, which take place simultaneously with four candidates rotating in turn. Each station lasts 14 min, with a 5-min interval between stations [Figure 1]. There are two examiners per station, and each awards between 1 (clear fail) and 4 (clear pass) marks per station or case. After an initial calibration guided by anchor statements, examiners mark independently. The CGA station contributes 8 out of 40 marks. The station is not about making a diagnosis and therefore does not test history, communication, or examination skill competencies. It tests the process of leading a multidisciplinary team to formulate a patient-centered management plan. The assessment, therefore, concentrates on the ability to formulate a problem list and management plan for frail, complex older patients.
Template for comprehensive geriatric assessment scenario
Scenarios are developed using a structured template, built around common presentations such as falls, stroke, deteriorating mobility, confusion, incontinence, multiple comorbidity, polypharmacy, frailty, and others. For realism, writers are encouraged to base these on real-life cases. The template consists of three parts: (1) the medical assessment; (2) CGA components, including activities of daily living (ADLs), mobility, continence, skin integrity, cognition, mental health, nutrition, and others such as speech and language therapy, social circumstances, and environment; and (3) the problem list and action plan. The examiners are given the complete scenario. The candidates are given a shortened scenario with most of the medical information but much-reduced information relating to the other CGA components.
The comprehensive geriatric assessment examination
The comprehensive geriatric assessment station is conducted in three stages:
- Preparation: In the 5-min interval before the station starts, candidates are expected to read the redacted scenario. They can make notes for later reference. Examiners meet before the start for calibration. Guided by the anchor statements, they must agree the areas that need to be covered to constitute a pass or clear pass
- Introductory period (up to 5 min): By now, the candidate should have formulated a summary of the case, with an opinion of what further information is required to develop a holistic management plan. This period is for the candidate to ask for further relevant information in discussion with the examiners. For example, the candidate's information might have included a history, physical examination, comorbidity, and ADLs, and the candidate may decide to request more detail about concordance with medication, nutrition, cognition, and social input. At the end of this introductory period, the examiners proceed to more formal questioning
- Discussion about priorities and management (about 9 min): This second part is about developing a management plan. The candidate must identify, discuss, and demonstrate the understanding of the missing elements of CGA, bring them together, and provide a prioritized summary and ethical management plan. The examiners then score the candidate independently of each other according to criteria agreed at calibration.
Candidates are given a clear pass (4 marks) if they demonstrate understanding and use all the elements of CGA to produce a prioritized problem list and management plan; pass (3 marks) if they understand and use most elements of CGA; fail (2 marks) if they demonstrate limited understanding and use only a few elements of CGA; and clear fail (1 mark) if they do not understand CGA and are unable to produce a problem list and management plan.
Evaluation and statistics
We used Pearson's correlation coefficient to evaluate the relationship between the station score and each of the other stations individually, as well as the total score in the remaining stations excluding the index station. We assessed the level of agreement between examiners on pass/fail decisions using the kappa statistic, and agreement between different parts of the examination using Cronbach's alpha. Analyses were performed with IBM SPSS Statistics version 23 (IBM Corp., Armonk, NY, USA).
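For readers unfamiliar with these three statistics, the following minimal pure-Python sketch shows how each is computed; the toy data are purely illustrative and are not the study's results:

```python
from statistics import mean, pvariance

def pearson_r(x, y):
    # Pearson's correlation: covariance divided by the product of standard deviations.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def cohen_kappa(rater1, rater2):
    # Agreement between two raters beyond what is expected by chance.
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    p_expected = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

def cronbach_alpha(rows):
    # rows: one list of station scores per candidate (columns = stations).
    k = len(rows[0])
    item_var = sum(pvariance(col) for col in zip(*rows))
    total_var = pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative inputs (hypothetical scores, not study data)
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])                 # perfectly correlated -> 1.0
kappa = cohen_kappa(["P", "P", "F", "F"], ["P", "F", "F", "F"])
alpha = cronbach_alpha([[3, 3], [4, 4], [2, 2]])          # identical items -> 1.0
```

In practice SPSS (as used in the study) applies the same formulas; the sketch is only meant to make the quantities reported in the Results concrete.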
Ethics
This study relied on anonymized pooled demographic data collected by the examination administrators for quality control and monitoring reasons, and no ethical approval was required.
Results
Between November 2015 and May 2017, 178 candidates (female: 96 [53.9%]) sat the examination over four sessions. There was a weak but significantly positive correlation between the score at Station 2 (CGA) and the total score in the other stations (r = 0.46; P < 0.001). Most candidates passing the station passed the examination. The same trend was demonstrated for the other stations [Figure 2]: Station 1 (r = 0.51; P < 0.001), Station 3 (r = 0.42; P < 0.001), and Station 4 (r = 0.46; P < 0.001). Correlations between Station 2 and the other individual stations were weaker yet statistically significant (Station 1: r = 0.38; P < 0.001; Station 3: r = 0.28; P < 0.001; Station 4: r = 0.37; P < 0.001). There was 61.4% agreement between examiners on pass or fail (kappa: 0.61; P < 0.001). This was higher for the other stations: Station 1 (kappa: 0.85; P < 0.001), Station 3 (kappa: 0.72; P < 0.001), and Station 4 (kappa: 0.85; P < 0.001). Cronbach's alpha showed good agreement between the different parts of the examination (Station 1 = 0.893; Station 2 = 0.894; Station 3 = 0.863; Station 4 = 0.880; all stations = 0.830).
Figure 2: Correlation of scores at the individual station to the score in the remaining stations (correlation statistics shown adjacent to the relevant panel)
Discussion
We developed a summative assessment to examine a candidate's ability to lead in CGA. We found that performance on the station correlated well with overall examination performance suggesting discriminatory value where the more able candidates go on to pass. The examiners showed an acceptable level of agreement between themselves, considering the complex nature of the station.
The effective use of CGA is central to the practice of geriatric medicine. At present, assessment of CGA is mostly carried out as a work-based assessment on an individual basis by an individual assessor. There is no standardization and often occurs in a formative rather than summative setting. Our new method provides an additional tool for structured and summative assessment.
There are several limitations, including the relatively small number of candidates and limited sampling, as only one CGA scenario is used. Although all stations showed significantly positive correlations with the total score in the other stations, the strength of these correlations varied. In addition, the level of agreement between examiners on pass/fail was the lowest in the examination despite calibration. This may have been due to the relatively small sample size, differences in the psychometric characteristics of the stations, and differing understanding among examiners of what is expected of a candidate passing the CGA station. More examiner training is required to mitigate this.
The station is also not a real-life situation, and there is potential for variability in task difficulty as different scenarios are used over time. Although scenarios are based on real cases, candidates may behave differently than in their workplace, so performance may not be representative of real clinical care. This is true of all examinations; however, further validation work is required by increasing the number of CGA stations with varying levels of difficulty and evaluating the station along the four domains of Kane's validity framework. While we have developed the test in relation to the scoring and generalization domains, more work is required in relation to extrapolation (using the score[s] as a reflection of real-world performance) and implications.
We believe that this station is a modern assessment to evaluate the ability of a professional working in geriatric medicine to perform this task at a level of independent practice. This initiative to develop a CGA station is relevant in the context of current curricular thinking and is an example of developing assessments for EPCs as opposed to an individual competency.
Financial support and sponsorship
The DGM examination is run by the Royal College of Physicians of London in conjunction with the British Geriatrics Society.
Conflicts of interest
All authors are examiners of the DGM examination and claim traveling expenses in accordance with the Expenses Policy of the Royal College of Physicians of London.
References
ten Cate O, Scheele F. Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Acad Med 2007;82:542-7.
ten Cate O, Young JQ. The patient handover as an entrustable professional activity: Adding meaning in teaching and practice. BMJ Qual Saf 2012;21 Suppl 1:i9-12.
Kwan J, Crampton R, Mogensen LL, Weaver R, van der Vleuten CP, Hu WC, et al. Bridging the gap: A five stage approach for developing specialty-specific entrustable professional activities. BMC Med Educ 2016;16:117.
Parker SG, McCue P, Phelps K, McCleod A, Arora S, Nockels K, et al. What is comprehensive geriatric assessment (CGA)? An umbrella review. Age Ageing 2018;47:149-55.
Soobiah C, Daly C, Blondal E, Ewusie J, Ho J, Elliott MJ, et al. An evaluation of the comparative effectiveness of geriatrician-led comprehensive geriatric assessment for improving patient and healthcare system outcomes for older adults: A protocol for a systematic review and network meta-analysis. Syst Rev 2017;6:65.
Pilotto A, Cella A, Pilotto A, Daragjati J, Veronese N, Musacchio C, et al. Three decades of comprehensive geriatric assessment: Evidence coming from different healthcare settings and specific clinical conditions. J Am Med Dir Assoc 2017;18:192.e1-192.e11.
Shields L, Henderson V, Caslake R. Comprehensive geriatric assessment for prevention of delirium after hip fracture: A systematic review of randomized controlled trials. J Am Geriatr Soc 2017;65:1559-65.
Partridge JS, Harari D, Martin FC, Peacock JL, Bell R, Mohammed A, et al. Randomized clinical trial of comprehensive geriatric assessment and optimization in vascular surgery. Br J Surg 2017;104:679-87.
Bureau ML, Liuu E, Christiaens L, Pilotto A, Mergy J, Bellarbre F, et al. Using a multidimensional prognostic index (MPI) based on comprehensive geriatric assessment (CGA) to predict mortality in elderly undergoing transcatheter aortic valve implantation. Int J Cardiol 2017;236:381-6.
Schulkes KJ, Souwer ET, Hamaker ME, Codrington H, van der Sar-van der Brugge S, Lammers JJ, et al. The effect of a geriatric assessment on treatment decisions for patients with lung cancer. Lung 2017;195:225-31.
Fisher JM, Bates C, Banerjee J. The growing challenge of major trauma in older people: A role for comprehensive geriatric assessment? Age Ageing 2017;46:709-12.
IBM Corp. IBM SPSS Statistics for Windows. Ver. 23.0. Armonk, NY: IBM; 2013.
Kane MT. Validating the interpretations and uses of test scores. J Educ Meas 2013;50:1-73.