January 16, 2023

Achieving the Promise of Authentic Workplace-Based Assessments

by Sophie Durham, PharmD, PGY1 Community Pharmacy Practice Resident, Mississippi State Department of Health Pharmacy

Workplace-based assessments (WBAs) can be intimidating and burdensome for students and evaluators alike; however, these assessments present an opportunity to use real-time direct observation to provide feedback that supports a learner’s growth and development.1 Unfortunately, students often fail to see the usefulness of feedback in clinical settings or worry that their grades might be negatively affected by observations reported through workplace-based assessments.

Throughout my Advanced Pharmacy Practice Experiences (APPEs), I craved feedback so that I could develop as a clinician and ensure that I was providing optimal patient care. I valued the feedback that I received at the midpoint and final evaluations; however, these evaluations were used to determine my final grade. As a student, I benefited from receiving more frequent, informal feedback that helped me improve my performance in real time. By providing students with more timely formative assessments, preceptors allow students to reflect on their experiences and make necessary corrections to improve their practice without the stress of the feedback counting toward their grades.

WBAs are used to evaluate trainees’ performance in practice and give learners relevant feedback to prompt reflection. WBAs encompass a wide range of assessment strategies that require evaluators to move away from merely assigning numbers toward a more structured format of assessment. WBAs can be used to provide feedback on trainee-patient interactions, procedural skills, and case-based discussions, and to gather multi-source feedback.2

Lauren Phinney and colleagues at the University of California San Francisco used cultural historical activity theory (CHAT) to identify feedback system elements, and the tensions among them, in order to explore how workplace-based assessments were used during medical clerkships. The school introduced a WBA tool in 2019 that includes drop-down items describing the clerkship specialty, skills observed, entrustment ratings adapted from the Ottawa scale, and space for narrative comments to encourage formative feedback. Students are required to gather two WBAs per week. The researchers interviewed first- and second-year medical students participating in core clerkship rotations.1

CHAT allows investigators to examine how tools mediate activities. An activity system is defined as the interaction between learners and tools to achieve an outcome. Tensions among these elements can promote change, create knowledge, and lead to new activity patterns.1 After interviewing students in a series of focus groups, researchers identified five tensions:

  1. Misinterpretation of WBA Feedback as Summative Assessment. Although WBAs were intended to serve formative purposes, first-year students perceived the object, or purpose, of the WBA to be summative. Formative assessments are intended to monitor student learning and provide ongoing feedback that improves teaching and learning. More specifically, formative assessments help students identify strengths and weaknesses, allowing students to target areas for improvement and helping faculty pinpoint where students are struggling so they can provide assistance.3 Summative assessments, on the other hand, evaluate student learning at the end of a rotation and are often high stakes, resulting in the assignment of a grade or score. Even when second-year students correctly identified the purpose of WBAs as low-stakes feedback, they were still concerned that this feedback would inform summative assessments and strategically chose to use WBAs when they anticipated positive feedback rather than opportunities for constructive feedback. Two ways to sharpen the distinction between summative and formative evaluations are to complete WBAs and summative assessments on separate platforms and to allow students to self-complete WBAs.1
  2. Cumbersome Tool Design that Delayed Feedback. WBA requests were sent via computer, so many requests went out hours after the feedback encounter. Students found that the distribution and completion of WBAs were delayed, which resulted in generic or untimely feedback. Using QR codes on smartphones and other technology improvements facilitated supervisor engagement and more rapid feedback.1
  3. Concern About Burdening Supervisors with WBA Tasks. While clerkship leaders encouraged students to seek feedback, students were concerned about interrupting workflow or interfering with patient care. They also found the assessments labor-intensive and redundant. Students employed strategies to streamline the process, such as recording the comments preceptors provided during the encounter and submitting them with the WBA request form, which made it easier for preceptors to complete the assessments.1
  4. WBA Requirement as Checking Boxes vs. Learning Opportunity. The weekly quota of two WBAs overshadowed the purpose of WBAs as a formative feedback mechanism. The authenticity and usefulness of the feedback could be jeopardized when students and supervisors focused on the rule rather than the opportunity to provide feedback. On the other hand, some students reframed the requirement to their benefit. It allowed them to direct their learning toward self-identified goals and to receive timely feedback confirming that they were making progress toward those goals. The rule also prompted consistent feedback discussions with preceptors who did not volunteer feedback.1
  5. WBA Within Clerkship-Specific Learning Culture. Supervisors’ promotion and acceptance of WBAs ultimately set the tone for WBA encounters. Students found that preceptors who actively facilitated WBA encounters provided more useful feedback, while preceptors who pushed back created a barrier. In settings that were not conducive to WBAs, students used more convenient platforms, identified more convenient situations, logged feedback retrospectively, and bypassed discussion of the tool to minimize the burden on team members.1

In competitive cultures like medicine, it can be difficult to facilitate formative assessments. The authors concluded that incorporating learner input to make intentional changes can enhance perceptions and utilization of WBAs.1

The authors provided potential solutions to the perceived problems with WBAs. There is often a disconnect between the intention and the interpretation of workplace-based assessments. Thus, we should shape their format and delivery by gathering student feedback. Through this collaboration with students, we can strive to achieve authentic workplace-based assessments that accurately reflect learners’ progress and are used to improve future performance.

While this study focused on the benefits of WBAs in student-preceptor interactions at one medical school, WBAs can be used in several ways. They can be applied across multiple settings and fall into three broad categories: observation of clinical performance, discussion of clinical cases, and feedback from peers, coworkers, and patients. These assessment tools provide insights for trainees, assessors, and academic institutions alike.2

In addition to gathering student feedback, I believe we need to gather feedback from preceptors to determine their perceptions of WBAs so that WBAs can be further improved to meet the needs of both students and preceptors. To ensure that we are providing useful and timely feedback to learners, it’s important to reduce the barriers to WBA use. By using QR codes, separate platforms that differentiate summative and formative assessments, and smartphone-compatible platforms for when computers are not available, schools can establish user-friendly, time-efficient processes and ensure that WBAs remain valuable without adding a substantial burden that jeopardizes feedback quality.1

References:

  1. Phinney LB, Fluet A, O’Brien BC, Seligman L, Hauer KE. Beyond checking boxes: Exploring tensions with use of a workplace-based assessment tool for formative assessment in clerkships. Acad Med 2022; 97: 1511-1520.
  2. Guraya SY. Workplace-based assessment; Applications and educational impact. Malays J Med Sci 2015; 22: 5-10.
  3. Formative vs. Summative Assessment [Internet]. Pittsburgh: Carnegie Mellon University; [cited 2022 Nov 18].

November 12, 2022

Failure to Fail: Why Teachers Are Reluctant to Fail Learners and What We Can Do About It

by Katelyn Miller, PharmD, PGY1 Pharmacy Practice Resident, St. Dominic Hospital

Failure is success in progress. – Albert Einstein

The word “failure” carries a negative connotation, but failure is a necessary part of learning and growing. Yet when it comes time to confront an underperforming trainee, student, or resident, many educators and preceptors find it hard to address and document the poor performance. Reports in the medical literature across multiple healthcare disciplines have raised concern about this “failure to fail” phenomenon and its prevalence.1 In one survey, 18% of 1,790 nursing mentors admitted to passing an underachieving student who should have failed.2 Another survey, of ten American medical schools, found that 74.5% of clinical preceptors indicated it was difficult to accurately assess poorly performing students because of a reluctance to record negative evaluations.3 As health professionals and educators, we have a responsibility to our patients and our professions to accurately evaluate trainees and ensure they become competent members of healthcare teams. To determine whether a learner is sufficiently prepared, the critical question is: Would I let this person take care of my family member? If the answer is no, why is it so hard to act and deliver an accurate evaluation of the trainee’s performance?

A systematic review published in Medical Teacher examined both qualitative and quantitative studies relating to evaluators’ willingness and perceived ability to report unsatisfactory performance in health professions education.1 The authors identified six barriers that assessors face when addressing an underperforming trainee:

  1. The Burden and Risks of Failing Someone. Assessors reported that the amount of time and paperwork required to fail a trainee is a deterrent. In the health professions, preceptors and educators often juggle multiple responsibilities, and student evaluations are often given lower priority. Assessors also expressed hesitancy to fail underperforming trainees due to fear of litigation or worry that doing so would damage their professional reputation.1
  2. Guilt and Self-Blame. Assessors reported an emotional toll, including feelings of guilt and self-blame, connected to failing a trainee. These feelings were heightened when the assessor had developed a close relationship with the trainee. Assessors often wanted to avoid conflict with the trainee and felt that failing the trainee could be perceived as uncaring, an especially uncomfortable prospect in a profession dedicated to caring for others.1
  3. Trainee Considerations. Assessors were reluctant to fail someone based on the trainee’s stage within the program. For trainees in the earliest stages of the curriculum, assessors were reluctant because they believed the learner could improve with time. Ironically, assessors were equally reluctant to fail trainees who were advanced in their training because the trainees had already invested so much time and money. Assessors also worried about the negative effect that failing would have on the trainee’s emotional stability, career goals, and self-esteem.1
  4. Questionable Assessments. Assessors reported a lack of confidence in their ability to accurately evaluate trainees because they felt unprepared, undertrained, or inexperienced. As a result, they questioned their judgment and were willing to give underperforming trainees “the benefit of the doubt.” Assessors also reported a lack of confidence in the tools they used to assess trainees. They expressed uncertainty about what the expectations should be for trainees at different stages of training and questioned whether the evaluation tools being used were appropriate or objective.1
  5. Lack of Institutional Support. Assessors reported feeling pressured to pass students and feared they would not be supported by the institution if they failed a student. Assessors also considered the financial loss to the institution that would result from failing a student.1
  6. Unsatisfactory Remediation. Assessors were reluctant to fail a trainee if no remediation was available or if they deemed the available remediation unsatisfactory. Assessors also expressed concern about the timeliness of remediation and whether it would be long enough to adequately address the performance problems.1

Conversely, the authors also identified three factors that enabled assessors to fail a failing trainee. These include the assessor’s sense of responsibility and duty to the profession, support from the institution, and the availability of remediation for the trainee.1

While this review of the literature helps us understand the “failure to fail” phenomenon, no quick or easy solution exists. Some experts suggest a narrative-based approach is needed to help assessors overcome barriers to providing corrective feedback and delivering unsatisfactory evaluations.3 Feedback that clearly identifies specific areas for improvement can guide underperforming students to address gaps in skills or knowledge and “shift the focus from evaluating to understanding and teaching” the learner.3 Even with a shift from quantitative to qualitative evaluation methods, however, several barriers will persist.

To ensure patient safety and the quality of care delivered by future health professionals, I believe all schools should institute standardized, formal training of preceptors, educators, and anyone who will be evaluating trainees. Institutions should require new assessors to complete training that teaches them how to accurately use evaluation tools, how to articulate concerns, and how to deliver difficult messages. The training program should make clear the remediation opportunities available to address performance problems and emphasize a competency-based approach to teaching and learning. Institutions should also make explicit what resources are available, including support systems to help assessors cope with the negative emotions and mental toll that come with failing a trainee.

I believe a mental shift in healthcare education is needed. We should acknowledge that competency is the primary goal and that learners progress at different paces. Not everyone will graduate at the same time, and that is okay! It is important for educators to accept their responsibility to future patients and to recognize the potential harm that could result from failing to fail underperforming trainees.

References:

  1. Yepes-Rios M, Dudek N, Duboyce R, Curtis J, Allard RJ, Varpio L. The failure to fail underperforming trainees in health professions education: A BEME systematic review: BEME Guide No. 42. Medical Teacher. 2016;38(11):1092-1099.
  2. Brown L, Douglas V, Garrity J, Shepard CK. What influences mentors to pass or fail students. Nursing Management. 2012;19(5):16-21.
  3. McConnell M, Harms S, Saperson K. Meaningful feedback in medical education: Challenging the “failure to fail” using narrative methodology. Acad Psychiatry. 2016;40(2):377-379.