ORIGINAL RESEARCH ARTICLE
Year : 2018  |  Volume : 31  |  Issue : 2  |  Page : 103-108

Effect of faculty personality, rating styles, and learner traits on student assessment in medical education: A mixed-method study from the Aga Khan University, Karachi


1 Department of Family Medicine, Aga Khan University, Karachi, Pakistan
2 Department of Educational Development, Aga Khan University, Karachi, Pakistan

Date of Web Publication: 30-Nov-2018

Correspondence Address:
Saniya R Sabzwari
Aga Khan University Hospital, Stadium Road, P.O. Box 3500, Karachi 74800
Pakistan

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/efh.EfH_10_17

  Abstract 


Background: Medical colleges invest considerable effort in developing assessment programs to evaluate students effectively across the attributes of knowledge, skills, and behavior. While assessment by direct observation is designed to be objective, "soft" characteristics such as the personality and demeanor of both student and assessor may make assessment more subjective. The effect of such attributes in medical education remains unclear and needs exploration. The objective of this study was to explore noncognitive traits of assessor and learner to understand their roles in student assessment. Methods: A mixed-method study was conducted from March to June 2015. All clinical faculty members at the Aga Khan University were invited to participate. A questionnaire was designed and completed by the study participants. Two focus group discussions (FGDs) with faculty members explored teacher and learner traits influencing student assessment. A documentary analysis of the yearly student feedback report, focusing on the section on assessment, was also performed. Data triangulation was achieved by combining the three sets of data. Results: Fifty-four (28%) clinical faculty members completed the questionnaire and 11 participated in the FGDs. About 68% reported rating students leniently. More than 50% reported their personality as a factor influencing assessment, and 76% reported that student appearance influenced assessment. The documentary analysis identified faculty personality and rating styles as key issues affecting the validity of student assessment during ongoing observation. In the FGDs, traits such as eagerness, intuition in students, and body language were reported to influence faculty members during assessment. Discussion: Softer attributes of trainer and trainee increase the subjectivity of student assessment. Ongoing faculty training and rater feedback are required for a robust and objective assessment.

Keywords: Faculty rating, medical education, student assessment


How to cite this article:
Sabzwari SR, Pinjani S, Nanji K. Effect of faculty personality, rating styles, and learner traits on student assessment in medical education: A mixed-method study from the Aga Khan University, Karachi. Educ Health 2018;31:103-8





Background


Assessment programs in medical education follow rigorous national or international standards to allow for the development of safe and competent physicians.[1] Medical colleges invest considerable time and effort in developing assessment programs to effectively evaluate students across the attributes of knowledge, skills, and behavior.

Students in clinical clerkships are primarily assessed in settings such as ward rounds and outpatient clinics. The most commonly utilized tools are checklists or forms that aim to objectively assess knowledge, clinical reasoning, skills, and behavior. While the instrument used for these attributes may vary between institutions, direct observation is a common method of assessing learners on an ongoing basis and has been deemed the gold standard in medical education.[2] The advantage of direct ongoing observation lies in the evaluation of traits such as critical thinking, clinical reasoning, skill development, and professionalism over a period of time, documenting student progress in real-time settings. Direct observation under supervision, with feedback on performance, has been found to positively impact skill development;[3] however, training of the observer is also important.[4]

While assessment by direct observation is designed to be objective and specific, the influence of softer qualities such as student personality, demeanor, appearance, or sociability is often not taken into account. These attributes are generally harder to assess and are thus termed "soft" skills or attributes. Such noncognitive traits do influence student assessment and academic performance in the preclinical years of medical school and in nursing education.[5],[6] Whether these traits impact medical student assessment in the clinical years remains unclear. Other important domains such as professionalism and communication have no single best method of measurement.[7] It is important to identify how soft attributes contribute to the assessment of these important domains.

Noncognitive traits of the assessor may also influence student assessment. Scheepers et al. report a direct effect of certain teacher personality traits on teaching and learning;[8] whether this influences assessment needs further exploration. Rating styles of faculty may also impact student assessment: some faculty members are stricter in their marking of students and are termed "hawks," while others are more lenient, award marks more generously, and are termed "doves." The influence of these "dove" versus "hawk" rating styles on student assessment in formal examination settings has been cited;[9] whether they impact ongoing assessment during clinical clerkships remains to be seen.

Furthermore, assessment of "noncognitive" student traits such as personality and work habits forms a gray area.[10] Chibnall and Blaskiewicz reported that student characteristics such as agreeableness and warmth were associated with positive evaluations from faculty members.[11] How these affect the validity of assessment remains a question.

Assessment in the clinical years at our institution includes ongoing assessment during clerkships and an end-of-clerkship clinical examination. The ongoing assessment, based on direct observation, is both formative and summative. The attributes each student is assessed on include professionalism, knowledge, physical examination, communication and leadership skills, data interpretation, patient care and safety, task completion, and patient education. All of these attributes are combined on a form, termed the student continuous assessment form, with ratings on a Likert scale. This form was developed in line with the Association of American Medical Colleges student competencies.

Anecdotal evidence from our institution, gathered through various educational forums and curriculum reviews, has identified some issues in assessment via direct observation. Feedback from students in clinical clerkships showed overall satisfaction with assessment in examination settings, but some concern regarding clerkship assessments, which students reported depended on the rating styles of faculty rather than on individual student performance.

Thus, it is important to explore clinical faculty members' perceptions and the role of their rating styles during student assessment to determine how to strengthen assessment. The aim of this study was therefore to explore the influence of noncognitive attributes of the assessor and the learner on student assessment during direct observation.


Methods


A mixed-method study was conducted from March to June 2015 at the Aga Khan University to understand the role of student and faculty personality and rating styles during ongoing assessment through direct observation.

Quantitative

Data collection tools

A questionnaire was designed after a comprehensive literature search and was composed of two sections. The first part collected demographic information about the faculty, such as current position, department, number of years in teaching, and type of postgraduate qualification. The second part used a 5-point Likert scale, with anchors ranging from never (0%) through sometimes (30%) to always (100%), allowing faculty to choose the most relevant descriptor for each question; it explored faculty understanding and style of assessment, difficulties during assessment, student factors impacting assessment, and factors that could improve assessment. A pilot study was conducted with ten faculty members to establish a clear understanding of the domains of the questionnaire; feedback from the pilot was obtained, and minor modifications were made to the form.

Setting and participants

All clinical faculty members (n = 190) who had been involved in undergraduate medical teaching for at least 6 months at the Aga Khan University (AKU), Karachi, Pakistan, were invited to participate in the study. AKU is an international private university whose medical college offers a 5-year undergraduate program leading to the MBBS degree.

Data collection

Clinical faculty members were approached through e-mail and invited to participate in the study. Those who consented were then sent the study questionnaire. Reminder e-mails were sent twice over the next 4 weeks. Faculty members were also approached through their affiliations with various university academic committees to ensure maximum participation. Over a 3-month period, 54 (28.4%) completed forms were received from medical, surgical, and allied specialties.

Data analysis

SPSS version 19 (IBM SPSS Statistics for Windows, Armonk, NY: IBM Corp) was used for data analysis. Frequencies and proportions were calculated for all variables of interest. Pearson's Chi-square test was applied to examine the relation between teaching experience and faculty rating styles. Logistic regression analysis was performed to identify the association of the number of years of teaching with faculty rating style and student factors. A binary dependent variable was created on the basis of years of teaching, that is, junior (<10 years) and senior (>10 years) faculty. Univariate analysis was performed to observe the independent effect of each factor, and results were reported as unadjusted odds ratios with 95% confidence intervals. Throughout the analysis, P < 0.05 was considered statistically significant.
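The authors ran this analysis in SPSS. For readers who want to reproduce the approach, the following is a minimal sketch in Python; the data, column names, and values below are hypothetical stand-ins for the study variables, not the actual dataset.

```python
# Illustrative sketch only: the authors used SPSS. All data and column
# names here are hypothetical stand-ins for the study variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

# Hypothetical survey data: one row per faculty respondent.
df = pd.DataFrame({
    "years_teaching": [4, 12, 7, 15, 2, 20, 9, 11],
    "lenient_rater":  [1, 1, 0, 1, 1, 0, 1, 1],  # 1 = self-reported "dove"
})
# Binary seniority variable: junior (<10 years) vs. senior faculty.
df["senior"] = (df["years_teaching"] >= 10).astype(int)

# Pearson chi-square: seniority vs. rating style.
table = pd.crosstab(df["senior"], df["lenient_rater"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")

# Univariate logistic regression; exponentiating the coefficient and its
# confidence bounds gives the unadjusted odds ratio with its 95% CI.
X = sm.add_constant(df["lenient_rater"])
fit = sm.Logit(df["senior"], X).fit(disp=0)
print(np.exp(fit.params), np.exp(fit.conf_int()))
```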

Qualitative

Methods

A documentary analysis of the annual student feedback report, obtained from trainees in clinical clerkships during years 3, 4, and 5, was performed.

Focus group discussions (FGDs) were conducted with the clinical faculty.

Study participants

Faculty members who had completed the questionnaire were invited to participate in the FGDs, and those willing to participate were recruited. A total of 11 faculty members were selected.

Data collection

Two FGDs, each lasting 60–90 min, were conducted by the researcher. For each group, a mix of junior and senior faculty members was included to ensure adequate representation of views and perspectives based on experience. Participants were briefed about the objectives and process before the start of each FGD. Broad questions covered the importance of assessment, what constitutes effective assessment, self-reflection as assessors, and ways to improve assessment. The FGDs were audio recorded and transcribed.

Data analysis

The student feedback report was examined for content, and all information pertaining to student assessment was selected for documentary analysis. Key themes were generated based on repetition and on the areas of assessment emphasized by students. Based on these themes, data were categorized by type of assessment issue from the student perspective.

FGD transcripts were shared with participants for member checking to validate accuracy. Transcripts were read and reread, and repetitive words and phrases were selected. These keywords were then used to categorize the data inductively: open codes were derived from the content of the transcripts and then grouped together to generate themes.

Data triangulation was then achieved by combining the information from the questionnaire and the FGDs with the documentary analysis of the student feedback report.

Ethical approval

Approval was obtained from the University Ethical Review Committee, AKU, Pakistan. Written informed consent was obtained from all study participants. Codes were assigned to all identifying information for confidentiality. Permission for the primary researchers to use the student feedback report was obtained from the relevant academic committee.


Results


Quantitative

Fifty-four clinical faculty members (28.4%) completed the quantitative part of the study; of these, eleven participated in the FGDs. The internal consistency of the questionnaire was calculated, and Cronbach's alpha was found to be 0.76. Faculty age, gender, department, number of years of teaching, and other demographic information are outlined in [Table 1].
Table 1: Demographic characteristics of study participants (n=54)

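For readers unfamiliar with the statistic, the following is a minimal sketch of how Cronbach's alpha is computed from item-level responses; the Likert scores below are hypothetical, not the study data (the authors computed the value in SPSS).

```python
# Minimal sketch (not the authors' code): Cronbach's alpha for internal
# consistency, computed from hypothetical item-level Likert responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questionnaire-items matrix of scores."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: six respondents, four items.
scores = np.array([[4, 5, 4, 3],
                   [3, 4, 3, 3],
                   [5, 5, 4, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 4],
                   [3, 3, 3, 2]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```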


About 68% of faculty reported rating like doves (lenient assessors). The subanalysis showed that the concept of "dove and hawk" in student assessment was known to two-thirds (66.7%) of the senior faculty members; in contrast, only one-third (33.3%) of the junior faculty members were familiar with it. This difference was statistically significant (P = 0.015).

More than half of the participants reported their personality as a factor influencing student assessment.

Among student factors, almost 76% reported that student appearance influenced assessment. Proficiency in English influenced 37% of the faculty. More than 80% of faculty members reported being influenced by student demeanor (i.e., mannerisms, attitude, and conduct). Student attendance was a factor affecting assessment for 92% of faculty members [Figure 1].
Figure 1: Student factors influencing assessment



In the subanalysis, student demeanor influenced junior faculty more than senior faculty; the difference was statistically significant (P = 0.049) [Table 2]. Proficient use of English was a greater influencing factor (50%) for faculty members with <10 years of teaching; this difference, however, was not statistically significant (P = 0.078).
Table 2: Subanalysis of student factors influencing assessment based on years of teaching



Qualitative

A detailed and open discussion took place in both FGDs. The majority of participants (nine of eleven) were female, and about half were junior-level faculty members, that is, up to the rank of assistant professor.

All agreed on the importance of one-on-one encounters with students over time for a valid assessment. Half reported that traits such as eagerness, intuition in students, and body language during patient interactions influenced their assessment, above and beyond the attributes measured in the assessment tool.

Behaviors such as disinterest and unwillingness were also reported by a few faculty members to influence their overall assessment negatively. One faculty member stated, "I find it hard to rate students who are quiet or disinterested," expressing her discomfort in assessing the performance of less vocal students.

On reflecting as assessors, experience in assessment was considered very important by one-third of the participants. Only one faculty member reported his assessment as unbiased. Half of the participants admitted to having specific rating styles. One participant stated, "I used to be stricter; now I have become more lenient over time." Another respondent reported, "I have become more comfortable and relaxed with assessment through experience," and felt that students also liked working with "more lenient" faculty. One-third of the participants reported reluctance to fail students; as one faculty member stated, "I find it hard to rate students poorly."

Faculty training, both at the time of recruitment and ongoing, was considered one of the key strategies for making assessment more robust. In addition, half of the participants expressed the need to receive yearly reports on the student scores they awarded, to allow self-reflection on their rating styles. As one faculty member said, "I believe comparison with assessment of other faculty members at the end of an academic year would help me to become a better assessor."

One of the key findings from the documentary analysis of the student feedback report was that the majority of students were satisfied with end-of-clerkship examination scoring, but ongoing assessment during clerkships was felt to have a clearly subjective trend that, students felt, undermined the validity of the assessment despite the use of a comprehensive assessment tool. Students reported that assessment was often nondiscriminatory, based on group rather than individual performance; as one student put it, "groups/partners get assessed as units not individually." Students also felt that the individual personalities of faculty members influenced student assessment. Another student said, "Assessment in clerkships is based on the consultant's mood or preference."


Discussion


This study identified certain subtle traits of trainer and trainee that affected student assessment in the clinical years of medical education. While prior studies have looked at faculty rating styles and student attributes separately, this study obtained information from both the learner and the assessor, the key stakeholders in assessment.

The documentary analysis of the student feedback report provided background information on issues related to assessment during ongoing observation in clinical years. Both the qualitative and quantitative sets of data obtained from faculty members corroborated the student feedback report.

That faculty personality influenced student assessment was first identified in the student feedback report and then affirmed by responses from faculty members. The personality traits of physicians have been shown to affect teaching and learning in a previous study;[8] this study showed their influence during assessment as well, although the exact extent of that influence remains unquantified.

The impact of rating styles (dove and hawk) was identified as an issue in the student feedback report. Whereas students alluded to faculty having a more stringent style of marking, the majority of faculty members reported themselves as lenient assessors. Those who identified themselves as "hawks" felt that their stringency in marking had decreased with time; the reason for this increasing leniency remains unclear. A previous study of rating styles reported increasing strictness in marking when larger numbers of candidates were examined; one key difference is that that study measured examiner rating styles during a clinical examination (the MRCP(UK) PACES) rather than ongoing assessment during clerkships.[9]

The influence of personality on rating styles was cited in a more recent study, which reported an association between neurotic traits and examiner strictness in rating.[12] Although overall personality appeared to influence assessment, no particular traits related to temperament were identified in our study.

A general reluctance to fail students was reported by faculty members. The lenient rating style of the majority of participating faculty members may be a contributing factor.

A clear need for faculty members to self-assess and reflect on their own assessment practices was found.

Student factors also played a significant role during direct observation for student assessment. Although students' conscientiousness and dutifulness are important predictors of future performance,[13] this study found that a superficial criterion such as student attendance was serving as a proxy for professionalism. Attributes such as student appearance and proficient use of English were also being used to rate students in the areas of professionalism, knowledge, communication skills, and problem-solving skills. As one FGD respondent said, "Assessment is heavily influenced by communication skills." Whereas communication is an important competence to achieve, using it as a predominant domain may weaken the overall assessment. Another faculty member stated that "appearance gave half the marks."

Reliance on such traits by faculty members is bound to affect the validity of assessments and also undermine student effort and learning.[14]

Student personality affecting clinical performance has been cited in a previous study, in which sociability significantly affected clinical evaluations;[15] this was not identified as a major influence in this study.

Although issues related to student gender were not explored in this study, a previous study reported that faculty inferred the presence of certain attributes in students based on gender, for example, females having more compassion.[16] Addressing such inherent biases is important to bring rigor to assessment.

Another issue that emerged from this study was that assessments of students rotating in groups were similar and unable to distinguish individual performances. This was identified both in the student report and by faculty. Some students in the feedback reported that "groups/partners get assessed as units not individually." One FGD participant reported, "It is difficult to assess students in groups as only a few stand out and those are the ones you remember. The quiet ones are hard to assess." In our study, time constraints during direct observation and assessment, and the degree of student participation (i.e., students making their presence felt through their responses to questions), appeared to affect group assessment the most.

The FGDs also highlighted the need for ongoing training of faculty members involved in assessment through direct observation. Previous studies have found that training improves rater confidence and comfort in the assessment of clinical skills, enhancing the validity of the assessment.[17],[18] A more recent study, which identified examiner leniency (the "dove" effect) as a predictor of passing high-stakes summative assessments, stressed the need for assessor training.[19]

The key strength of this study was the strong overlap of findings across the triangulated data. One limitation was the inability to obtain a larger sample size; time constraints of clinical faculty appeared to be the key reason. Because faculty members were invited to participate in both the questionnaire and the FGDs, self-selection bias was also present. A validated tool was not used for data collection; however, the questionnaire showed good internal reliability.

Strengthening faculty training through periodic workshops aimed at promoting a deeper understanding of issues in assessment was a key recommendation.

Another solution proposed by faculty members was the provision of regular feedback to faculty on their performance as assessors. In addition, sharing comparative assessments of individual faculty versus peer faculty would allow a better understanding of individual rating styles.


Conclusion


Assessment of medical students in clinical clerkships is a complex task that is influenced by subtle attributes of trainer and trainee. Ongoing faculty training, together with regular rater and student feedback and review, is required for a robust assessment that strengthens student learning and performance outcomes.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet 2001;357:945-9.
2. Council on Medical Student Education in Pediatrics. Direct Observation of 3rd Year Pediatric Students by Teaching Faculty; 2011. Available from: http://www.comsep.org/scholarlyactivities/template. [Last updated on 2017 Oct].
3. Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: A systematic review. JAMA 2009;302:1316-26.
4. Hauer KE, Holmboe ES, Kogan JR. Twelve tips for implementing tools for direct observation of medical trainees' clinical skills during patient encounters. Med Teach 2011;33:27-33.
5. Adam J, Bore M, McKendree J, Munro D, Powis D. Can personal qualities of medical students predict in-course examination success and professional behaviour? An exploratory prospective cohort study. BMC Med Educ 2012;12:69.
6. Megginson L. Noncognitive constructs in graduate admissions: An integrative review of available instruments. Nurse Educ 2009;34:254-61.
7. Epstein RM. Assessment in medical education. N Engl J Med 2007;356:387-96.
8. Scheepers RA, Lombarts KM, van Aken MA, Heineman MJ, Arah OA. Personality traits affect teaching performance of attending physicians: Results of a multi-center observational study. PLoS One 2014;9:e98107.
9. McManus IC, Thompson M, Mollon J. Assessment of examiner leniency and stringency ('hawk-dove effect') in the MRCP(UK) clinical examination (PACES) using multi-facet Rasch modelling. BMC Med Educ 2006;6:42.
10. Hojat M, Erdmann JB, Gonnella JS. Personality assessments and outcomes in medical education and the practice of medicine: AMEE Guide No. 79. Med Teach 2013;35:e1267-301.
11. Chibnall JT, Blaskiewicz RJ. Do clinical evaluations in a psychiatry clerkship favor students with positive personality characteristics? Acad Psychiatry 2008;32:199-205.
12. Finn Y, Cantillon P, Flaherty G. Exploration of a possible relationship between examiner stringency and personality factors in clinical assessments: A pilot study. BMC Med Educ 2014;14:1052.
13. Lievens F, Coetsier P, De Fruyt F, De Maeseneer J. Medical students' personality characteristics and academic performance: A five-factor model perspective. Med Educ 2002;36:1050-6.
14. Zahn CM, Nalesnik SW, Armstrong AY, Satin AJ, Haffner WH. Variation in medical student grading criteria: A survey of clerkships in obstetrics and gynecology. Am J Obstet Gynecol 2004;190:1388-93.
15. Davis KR, Banken JA. Personality type and clinical evaluations in an obstetrics/gynecology medical student clerkship. Am J Obstet Gynecol 2005;193:1807-10.
16. Axelson RD, Ferguson KJ. Bias in assessment of noncognitive attributes. Virtual Mentor 2012;14:998-1002.
17. Cook DA, Dupras DM, Beckman TJ, Thomas KG, Pankratz VS. Effect of rater training on reliability and accuracy of mini-CEX scores: A randomized, controlled trial. J Gen Intern Med 2009;24:74-9.
18. Holmboe ES, Hawkins RE, Huot SJ. Effects of training in direct observation of medical residents' clinical competence: A randomized trial. Ann Intern Med 2004;140:874-81.
19. Daly M, Salamonson Y, Glew P, Everett B. Hawks and doves: The influence of nurse assessor stringency and leniency on pass grades in clinical skills assessments. Collegian 2016;24:449-54.

