
March 30, 2021

Should Standardized Patients Score Student Performance?

by Ashley Miller, PharmD, PGY1 Community Pharmacy Resident, University of Mississippi School of Pharmacy

It's the end of the semester, and the last thing standing between you and your summer break is the objective structured clinical examination (OSCE). You know that you'll be entering multiple interactive stations that will assess your ability to perform patient care-related activities. Who do you hope will be grading your performance: a teacher you've had, or a stranger, a standardized patient (SP)? I know which I preferred when I was the one being evaluated, but I was curious to learn what other professional students, faculty, and researchers had to say about who is best suited to evaluate and score a student's performance.

OSCEs date back to the 1960s and were first used as assessments in medical schools. Each OSCE station is intended to represent a realistic clinical scenario during which the student interacts with a "patient."1,2 At many schools, the patient role is played by a trained actor known as a standardized patient. An OSCE allows students to "practice" in an environment that is safe for both them and patients.1 OSCEs are reliable and valid assessment tools and predict students' future success.1,2 Their use has since expanded to other health professional programs, including dentistry, pharmacy, and nursing.1,2 They were designed to comprehensively evaluate clinical, interpersonal, and problem-solving skills and to portray the clinical scenario consistently so that every student has the same experience (and opportunities).1,3 While preparing and delivering an OSCE is very time-consuming, educators and students alike agree that OSCEs are a valuable learning and assessment tool.3

One thing not always agreed upon when considering OSCEs is whether a faculty member or an SP should grade performance. In some instances, an SP interacts with the student while a faculty member observes the encounter, either remotely or in the same room, and grades it. Some argue that faculty graders introduce additional bias and negatively influence students' performance when compared to a more neutral grader.3 Others claim SPs do not have the skillset or training needed to properly assess students.3 Previous studies comparing faculty and SP graders have not provided a clear answer as to who makes the "best" grader.

Several evaluator factors contribute to variability when scoring OSCE encounters, including a lack of defined criteria, a lack of training, and the number of items to be assessed.4 One study examined factors that affected student scores during an OSCE when graded by faculty versus SPs.4 Before grading began, all examiners completed training on the OSCE process and the criteria they were to use when scoring students.4 The researchers found that the scores given by SPs were higher than those given by faculty members, suggesting that the type of grader does influence scores.4 Another interesting finding was that faculty evaluators assessed technical skills more strictly than SP evaluators but were less strict when grading communication skills.4 The technical skills assessed included history-taking, physical examination, and patient education.4 The communication skills graded included the student's attitude, active listening, ability to build rapport, and effective questioning.4 Notably, faculty members scoring items related to their own specialty tended to assign lower grades.4 The authors hypothesized that these differences arise because faculty graders are more familiar with assessing technical skills (particularly within their specialty) and have higher expectations for performance, while SPs are less comfortable giving low scores on technical matters.4

While some faculty members believe their presence does not impact student performance, students often report that knowing teachers are grading OSCEs increases test anxiety.3 The increased stress then impacts performance which, in turn, affects students' grades.3,4 In a questionnaire-based study comparing SP and faculty graders, McLaughlin et al. found that most students felt SPs created a less stressful testing environment, gave feedback as well as faculty graders did, and were adequately equipped to assess their skills.2 These findings suggest that students generally prefer to be graded by an SP and believe that an SP can competently assess their performance.2

So, who should grade a student's performance during an OSCE? It likely depends on whom you ask. Overall, most students seem to feel that SPs are equipped for the task, are fair graders, and help them feel more at ease. Much like how I felt as a student, it seems students prefer an SP in these encounters because it is a more realistic experience, similar to interacting with patients in the "real world."3 However, some may contend that, even if students are less comfortable, having professors perform the assessment is in the student's long-term best interest because faculty can more accurately assess technical skill. One point in favor of this argument is that some studies have shown that grades given by faculty are predictive of future performance.2 Another point made by researchers and those in academia is that faculty are content experts and may be able to identify students who have only surface-level knowledge but appear confident and skillful to a non-expert.2 It is also possible to have SPs interact with students while faculty members observe and grade the encounter, either synchronously or asynchronously, so that performance is scored by both the SP and the faculty member. However, this approach costs more time and money because both SPs and faculty must be trained. Research shows that SPs focus more on communication while faculty focus more on technical skills; thus, the choice of grader may come down to the most important skill being assessed at a particular OSCE station.

References 

  1. Alsaid A, Al-Sheikh M. Student and Faculty Perception of Objective Structured Clinical Examination: A Teaching Hospital Experience. Saudi J Med Med Sci [Internet]. 2017;5(1):49-55.
  2. McLaughlin K, Gregor L, Jones A, et al. Can SPs Replace Physicians as OSCE Examiners? BMC Med Educ [Internet]. 2006;6: Article 12.
  3. Salinitri FD, O'Connell MB, Garwood CL, et al. An Objective Structured Clinical Examination to Assess Problem-Based Learning. Am J Pharm Educ [Internet]. 2012;76(3): Article 44.
  4. Park YS, Chun KH, Lee KS, et al. A Study on Evaluator Factors Affecting Physician-Patient Interaction Scores in Clinical Performance Examinations: A Single Medical School Experience. Yeungnam Univ J Med. 2021;38(2):118-126.