January 16, 2023

Achieving the Promise of Authentic Workplace-Based Assessments

by Sophie Durham, PharmD, PGY1 Community Pharmacy Practice Resident, Mississippi State Department of Health Pharmacy

Workplace-based assessments (WBAs) can be intimidating and burdensome for students and evaluators alike; however, these assessments present an opportunity to use real-time direct observation to provide feedback that supports a learner’s growth and development.1 Unfortunately, students often fail to see the usefulness of feedback in clinical settings or worry that their grades might be negatively affected by observations reported through WBAs.

Throughout my Advanced Pharmacy Practice Experiences (APPEs), I craved feedback so that I could develop as a clinician and ensure that I was providing optimal patient care. I valued the feedback that I received at the midpoint and final evaluations; however, these evaluations were used to determine my final grade. As a student, I benefitted from receiving more frequent, informal feedback to improve my performance in real time. By providing students with more timely formative assessments, preceptors allow students to reflect on their experiences and make necessary corrections to improve their practices without the stress of contributing to their grades.


WBAs are used to evaluate trainees’ performance in practice, and learners can use them as relevant feedback to engage in reflection. WBAs encompass a wide range of assessment strategies that require evaluators to move away from merely assigning numbers toward a more structured format of assessment. They can be based on trainee-patient interactions, procedural skills, case-based discussions, and multi-source feedback.2

Lauren Phinney and colleagues at the University of California San Francisco used cultural historical activity theory (CHAT) to identify feedback system elements, and the tensions among them, in order to explore workplace-based assessments used during medical clerkships. The school introduced a WBA tool in 2019 that includes drop-down items describing the clerkship specialty, skills observed, entrustment ratings adapted from the Ottawa scale, and space for narrative comments to encourage formative feedback. Students are required to gather two WBAs per week. The researchers interviewed first- and second-year medical students participating in core clerkship rotations.1

CHAT allows investigators to examine how tools mediate activities. An activity system is defined as the interaction between learners and tools to achieve an outcome. Tensions among these elements can promote change, create knowledge, and lead to new activity patterns.1 After interviewing students in a series of focus groups, researchers identified five tensions:

  1. Misinterpretation of WBA Feedback as Summative Assessment. Although WBAs were intended to serve formative purposes, first-year students perceived the object, or purpose, of the WBA to be summative. Formative assessments are intended to monitor student learning and provide ongoing feedback to improve teaching and learning. More specifically, formative assessments help students identify strengths and weaknesses, allowing students to target areas for improvement and helping faculty pinpoint where students are struggling so they can provide assistance.3 Summative assessments, on the other hand, evaluate student learning at the end of a rotation and are often high stakes, resulting in the assignment of a grade or score. Even when second-year students correctly identified the purpose of WBAs as low-stakes feedback, they were still concerned that this feedback would be used to inform summative assessments and strategically chose to use WBAs when they anticipated positive feedback rather than opportunities for constructive feedback. Two ways to sharpen the distinction between summative and formative evaluations are to complete WBAs and summative assessments on separate platforms and to allow students to self-complete WBAs.1
  2. Cumbersome Tool Design that Delayed Feedback. WBA requests were sent via computer, so many were sent hours after the feedback encounter. Students found that the distribution and completion of WBAs were delayed, which resulted in generic or untimely feedback. Using QR codes on smartphones and other technology improvements facilitated supervisor engagement and more rapid feedback.1
  3. Concern About Burdening Supervisors with WBA Tasks. While clerkship leaders encouraged students to seek feedback, students were concerned about interrupting workflow or interfering with patient care. Students found the assessments to be labor-intensive and redundant. Students employed strategies to streamline the process, such as recording and submitting comments that preceptors provided during the encounter with the WBA request form, which made it easier for preceptors to complete the assessments.1
  4. WBA Requirement as Checking Boxes vs. Learning Opportunity. The weekly quota of two WBAs overshadowed the purpose of WBAs as a formative feedback mechanism. The authenticity and usefulness of the feedback can be jeopardized when students and supervisors focus on the rule instead of the opportunity to provide feedback. On the other hand, some students reframed this requirement to benefit them. One benefit was that students could direct their learning to meet self-identified goals and receive timely feedback on their progress toward those goals. Another was that the rule initiated consistent feedback discussions with preceptors who did not volunteer feedback.1
  5. WBA Within Clerkship-Specific Learning Culture. Supervisors’ promotion and acceptance of WBAs ultimately set the tone for WBA encounters. Students found that preceptors who actively facilitated WBA encounters provided more useful feedback, while preceptors who gave pushback created a barrier. In addition to using more convenient platforms to complete WBAs, students identified more convenient situations, logged feedback retrospectively, and bypassed tool discussion to minimize the burden on team members in settings that were not conducive to WBAs.1

In competitive cultures like medicine, it can be difficult to facilitate formative assessments. The authors concluded that incorporating learner input to make intentional changes can enhance perceptions and utilization of WBAs.1

The authors provided potential solutions to the perceived problems with WBAs. There is often a disconnect between the intention and the interpretation of workplace-based assessments. Thus, we should gather student feedback when deciding how to structure their format and delivery. Through this collaboration with students, we can strive to achieve authentic workplace-based assessments that accurately reflect learners’ progress and are used to improve future performance.

While this study focused on the benefits of WBAs in student-preceptor interactions at one medical school, WBAs can be used in several ways. WBAs can be applied across multiple settings and can be separated into three different categories: observation of clinical performance, discussion of clinical cases, and feedback from peers, coworkers, and patients. These assessment tools provide insight to the trainee, assessor, and academics alike.2

In addition to gathering student feedback, I believe we need to gather feedback from preceptors to determine their perceptions of WBAs so that WBAs can be further improved to meet the needs of both groups. To ensure that we are providing useful and timely feedback to learners, it’s important to reduce the barriers to WBA use. By using QR codes, separate platforms for summative and formative assessments, and smartphone-compatible platforms when computers are not available, schools can establish user-friendly, time-efficient processes and ensure that WBAs remain valuable without adding substantial burdens that jeopardize feedback quality.1

References:

  1. Phinney LB, Fluet A, O’Brien BC, Seligman L, Hauer KE. Beyond checking boxes: Exploring tensions with use of a workplace-based assessment tool for formative assessment in clerkships. Acad Med 2022; 97: 1511-1520.
  2. Guraya SY. Workplace-based assessment: Applications and educational impact. Malays J Med Sci 2015; 22: 5-10.
  3. Formative vs. Summative Assessment [Internet]. Pittsburgh: Carnegie Mellon University; [cited 2022 Nov 18].

November 12, 2022

Failure to Fail: Why Teachers Are Reluctant to Fail Learners and What We Can Do About It

by Katelyn Miller, PharmD, PGY1 Pharmacy Practice Resident, St. Dominic Hospital

Failure is success in progress. – Albert Einstein

The word “failure” often carries a negative connotation, but failure is a necessary part of learning and growing. Yet when it comes time to address an underperforming trainee, student, or resident, many educators and preceptors find it hard to document and act on poor performance. Reports in the medical literature across multiple healthcare disciplines have raised concern about this “failure to fail” phenomenon and its prevalence.1 In one survey, 18% of 1,790 nursing mentors admitted to passing an underachieving student who should have failed.2 Another survey of ten American medical schools found that 74.5% of clinical preceptors indicated it was difficult to accurately assess poorly performing students because they were unwilling to record negative evaluations.3 As health professionals and educators, we have a responsibility to our patients and our professions to accurately evaluate trainees and ensure they become competent members of healthcare teams. To determine whether a learner is sufficiently prepared, ask the critical question: Would I let this person take care of my family member? If the answer is no, why is it so hard to act and deliver an accurate evaluation of an underperforming trainee’s performance?



A systematic review recently published in Medical Teacher examined both qualitative and quantitative studies relating to evaluators’ willingness and perceived ability to report unsatisfactory performance in health professions education.1 The authors identified six barriers that assessors face when addressing an underperforming trainee:

  1. The Burden and Risks of Failing Someone. Assessors reported that the amount of time and paperwork required to fail a trainee is a deterrent. In the health professions, preceptors and educators often have multiple responsibilities, and student evaluations are often given lower priority. Assessors also expressed hesitancy to fail underperforming trainees due to fear of litigation or worry that it would negatively affect their professional reputation.1
  2. Guilt and Self-Blame. Assessors reported an emotional toll, including feelings of guilt and self-blame, connected to failing a trainee. These feelings are heightened if the assessor has developed a close relationship with the trainee. Assessors often want to avoid conflict with the trainee and feel that failing the trainee could be perceived as uncaring behavior, which is especially difficult in a profession dedicated to caring for others.1
  3. Trainee Considerations. Assessors were reluctant to fail someone based on the trainee’s stage within the program. With trainees in the earliest stages of the curriculum, assessors indicated they were reluctant because they believed the learner could improve with time. Ironically, assessors were equally reluctant to fail trainees who were advanced in their training because so much time and money had already been invested. Assessors also worried about the negative effect that failing would have on the trainee’s emotional stability, career goals, and self-esteem.1
  4. Questionable Assessments. Assessors reported a lack of confidence in their ability to accurately evaluate trainees due to feeling unprepared, a lack of training, or a lack of experience. As a result, they questioned their judgment and were willing to give underperforming trainees “the benefit of the doubt.” Assessors also reported a lack of confidence in the tools they used to assess trainees. They expressed uncertainty about what the expectations should be for trainees at different stages of training and questioned whether the evaluation tools being used were appropriate or objective.1
  5. Institutional Support. Assessors reported feeling pressured to pass students and feared they would not be supported by the institution if they failed a student. Assessors also considered the loss of financial support for the institution that would result from failing a student.1
  6. Unsatisfactory Remediation. Assessors were reluctant to fail a trainee if there was no remediation available or if they deemed the available remediation unsatisfactory. Assessors also expressed angst about the timeliness of remediation and whether remediation would be long enough to adequately address the performance problems.1

Conversely, the authors also identified three factors that enabled assessors to fail a failing trainee. These include the assessor’s sense of responsibility and duty to the profession, support from the institution, and the availability of remediation for the trainee.1

While this review of literature helps us to understand the “failure to fail” phenomenon, no quick or easy solution exists. Some experts suggest a narrative-based approach is needed in order to help assessors overcome barriers to providing corrective feedback and delivering unsatisfactory evaluations.3 Providing feedback that clearly indicates the specific areas of improvement can help guide underperforming students to address poor skills or knowledge and “shift the focus from evaluating to understanding and teaching” the learner.3 Even with a shift from quantitative to qualitative evaluation methods, several barriers will persist.

To ensure patient safety and the quality of care delivered by future health professionals, I believe all schools should institute standardized, formal training of preceptors, educators, and anyone who will be evaluating trainees. Institutions should require new assessors to complete training that teaches them how to accurately use evaluation tools, how to articulate concerns, and how to deliver difficult messages. The training program should make clear the remediation opportunities available to address performance problems and emphasize a competency-based approach to teaching and learning. Institutions should make it explicitly clear what resources are available, including the support systems available to address the assessor’s negative emotions and the mental toll that comes with failing a trainee.

I believe a mental shift in healthcare education is needed. We should acknowledge that competency is the primary goal and that everyone progresses at different paces. Not everyone will graduate at the same time, and that is okay! It is important for educators to accept their responsibility to future patients and the potential harm that could result from failing to fail underperforming trainees. 

References:

  1. Yepes-Rios M, Dudek N, Duboyce R, Curtis J, Allard RJ, Varpio L. The failure to fail underperforming trainees in health professions education: A BEME systematic review: BEME Guide No. 42. Medical Teacher. 2016;38(11):1092-1099.
  2. Brown L, Douglas V, Garrity J, Shepard CK. What influences mentors to pass or fail students. Nursing Management. 2012;19(5):16-21.
  3. McConnell M, Harms S, Saperson K. Meaningful Feedback in Medical Education: Challenging the “Failure to Fail” Using Narrative Methodology. Acad Psychiatry. 2016;40(2):377-379.

November 7, 2022

Gamification to Motivate Students

by Antoniya R. Holloway, PharmD, PGY1 Community Pharmacy Practice Resident, Mississippi State Department of Health

Ask anyone in my pharmacy school graduating class, and I believe they would tell you that the most anticipated part of a long therapeutics lecture was the sound of the Kahoot! theme song. Despite how glazed-over our eyes became during medicinal chemistry discussions, my classmates and I always seemed to perk up at the mention of a fun, competitive opportunity to demonstrate what we had learned. More educators are using games and other competitive activities to fuel student engagement and motivation during instruction.1 This instructional design method is termed “gamification.”


Gamifying education, aka gamification, is described in one of two ways: (1) the act of rewarding learners with gameplay after a tedious lesson, or (2) the act of infusing game elements into a lesson to make it more enjoyable.2 Although using incentives to motivate learners is not a new concept, gamification of classrooms was ignited in the era of e-Learning.1 The Smithsonian Science Education Center lists five prominent benefits of gamification:2

  1. Increased level of learner engagement in classrooms
  2. Increased accessibility for students diagnosed with autism
  3. Improved cognitive development in adolescents
  4. Improved physical development in adolescents
  5. Increased opportunities for learning outside of classrooms

The question is not whether there are theoretical benefits to gamifying education, but whether there are long-term educational benefits to learners and whether specific methodological approaches to gamifying education can be standardized and implemented in a consistent fashion.

The International Journal of Educational Technology in Higher Education published a systematic review in 2017 examining 63 papers to evaluate research studies and emerging gamification trends and to identify patterns, educational contexts, and game elements.1 The results were stratified into five categories: education level, academic subject, learning activity, game elements, and study outcome.

Education level

Educators must understand that although gamification can be implemented at any grade level, more sophisticated platforms that demand greater technical skill may be too complicated for younger learners to navigate. Most papers included in this review were conducted at the university level (44 papers), while fewer (seven papers) were conducted at the K-12 level. The authors propose that this disproportion exists because college professors have more control over their courses than teachers following state-mandated curricula and because college students have better-developed computer skills.

Academic subjects

The systematic review included gamification studies spanning more than 32 academic subjects in six categories. The largest share of papers targeted computer science/information technology (CS/IT) (39%), followed by multimedia and communication (12%). Although the results are inconclusive, it could be speculated that gamification is more suitable for CS/IT courses than for other subjects.

Types of Learning Activities 

A mix of instructional activities, rather than a single activity, was used in 16 studies. Half of these were online courses, and the other half were hybrid courses with a web-based learning component (i.e., both face-to-face and online instruction). This supports the conclusion that even though some courses are traditional in nature, educators could modernize them by incorporating an online gaming component.

Game Elements

Game design elements described in this systematic review were classified into game dynamics, mechanics, and components. Game “dynamics” emphasize emotions and relationships, while “mechanics” emphasize competition, feedback, and reward. Components are the concrete building blocks of dynamics and mechanics, such as leaderboards, points, and badges. All of the studies used one or more game elements, but no standardized set of game elements, or standardized definitions of them, was used across the studies.

Study Outcomes

Specific learning and behavioral outcomes were also stratified into categories: knowledge acquisition, perception, behavior, engagement, motivation, and social outcomes. Because of the diversity of studies, outcome results were further classified as (A) affective, (B) behavioral, or (C) cognitive. Educators should note that different game elements (or combinations of elements) and individual factors (personal or motivational) influence the outcomes of gamification. Thus, what works for one learner may not work for others.

The authors of the studies included in the systematic review concluded that gamification produced learning gains (performance, motivation, retention, and engagement) and that learners appreciated the gamification features,1 but the validity and reliability of these claims are questionable. For example, twenty studies had either too small a sample size or too short an evaluation period. Using performance as an outcome is also inconclusive because performance can be influenced by non-motivational factors such as mental capability and prior knowledge.

Theoretical Perspective

Several papers conclude that focusing on game elements like points and rewards, rather than on an individual’s desire to play, is not a fail-proof way to change learning outcomes. Given the wide variety of personal factors, a “user-centered” approach may be more productive as educators develop gamified content.3 One study suggested shifting away from introducing game elements into individual course lessons and, instead, developing a “gameful” experience throughout the course.4 The authors of the systematic review conclude that there is insufficient understanding of the motivational mechanisms of gamification. A theoretical framework is needed to standardize how gamification is implemented and to differentiate which mechanisms create successful outcomes.

This systematic review reinforces the observation that learners generally “like” gamified education and that gamification of learning content increases learner motivation. But it does not provide a concrete answer as to whether gamification leads to long-term improvements in outcomes. I believe educators should consider implementing gamification to increase participation and engagement for health professional students, especially during the foundational years of their professional curricula. However, educators must be aware that the lack of a standardized approach to gamification and individual learner preferences will yield variable outcomes.

References

  1. Dichev C, Dicheva D. Gamifying education: What is Known, What is Believed and What Remains Uncertain: A Critical Review. Int J Educ Technol High Educ 2017; 14 (9).
  2. Mandell B, Deese A. STEMvisions Blog. Five Benefits of Gamification. Washington, DC: Smithsonian Science Education Center. 2016 March 10 [cited 2022 Oct 10].
  3. Hansch A, Newman C, Schildhauer T. Fostering Engagement with Gamification: Review of Current Practices on Online Learning Platforms. HIIG Discussion Paper Series [Internet]. HIIG Discussion Paper Series No. 2015-4 [cited 2022 October 10].
  4. Songer RW, Miyata K. A Playful Affordances Model for Gameful Learning [Internet]. TEEM '14: Proceedings of the Second International Conference on Technological Ecosystems for Enhancing Multiculturality; 2014 October [cited 2022 October 10].

October 10, 2022

Cultivating Cultural Humility

by Amy Ly-Ha, PharmD, PGY1 Community Pharmacy Practice Resident, University of Mississippi School of Pharmacy

Growing up in the Vietnamese culture, whenever I had a minor illness, my parents engaged in the practice of cạo gió, also known as coining. The intent of the practice is to dispel negative energy from a sick individual.  Coining involves spreading medicated oil onto the skin and rubbing a coin over this area until a red abrasion mark forms. To those who are unfamiliar with the practice, these marks may look frightening and can be mistaken as abuse. As a child, I did not pay much attention to these marks on my body. Once, I came home from school feeling feverish. My mother performed coining and brought me to the doctor’s office the next day. Upon conducting a physical examination, the physician noticed the red stripes on my back. Rather than making accusations of abuse, the physician skillfully interviewed my mother and listened to her explanation. Looking back, I now recognize the significance of this encounter. Not only did the physician display a willingness to listen to my parents, but she also demonstrated an openness to my family’s cultural traditions. This physician modeled cultural humility, a concept that I believe all healthcare professionals should possess to create an environment conducive to optimal patient care.


The widespread implementation of cultural diversity training across health professions education aligns with the growing diversity of our patient populations. There are many aspects to cultural diversity training. Commonly taught in health professions degree programs today, cultural competency embodies the ability to provide care to people with diverse values, beliefs, and behaviors.1 Cultural competency requires several skills, including recognizing the unique needs of every patient, realizing that culture impacts health beliefs, and respecting cultural differences. A culturally competent healthcare professional is able to negotiate and restructure therapeutic plans in response to a patient’s cultural beliefs and behaviors.2 And while cultural diversity training is clearly important, health professionals must also demonstrate cultural humility.

Cultural humility, a term first coined in 1998, is a lifelong process of ongoing self-reflection and self-critique.3 It emphasizes awareness of one’s possible biases and a willingness to be taught by patients. Unlike cultural competency, the goal of cultural humility involves “relinquish[ing] a provider’s role as a cultural expert and adopt[ing] patient-centered interviewing to create a mutual therapeutic alliance.”2 One barrier to teaching cultural humility is the difficulty of assessing students’ growth in this area. Despite this, I recommend that educators implement the following elements to foster cultural humility in their students.

Element 1: Develop Culturally Relevant Curricula

A culturally relevant curriculum incorporates aspects of culture throughout a curriculum, thus valuing various cultures and encouraging intercultural understanding.4 Introducing students to different cultures throughout their education, in and outside the classroom, enables students to learn how to navigate through diversity. By embedding cultural diversity training at strategic times throughout a curriculum, educators can include reflective exercises intended to build cultural humility. 

When developing and implementing a culturally relevant curriculum, one must be aware of the potential to introduce unconscious bias into lessons and assessments. For example, a recently published study investigated the presence of unconscious bias in student assessments at a Doctor of Pharmacy program.5 The investigators examined 3,621 questions administered to first-, second-, and third-year pharmacy classes during the 2018-2019 academic year. Only a small fraction of these questions referenced race (N=40), and race was relevant to only two of them. The study also found that specific races were more often associated with specific health conditions; for example, within the analyzed set, all questions related to human immunodeficiency virus (HIV) and sexually transmitted infections (STIs) were associated with African-Americans. Thus, as this study documents, the routine use of race as a descriptor in instances where it lacks significance may propagate racial bias.5 Providing culturally relevant curricula therefore requires educators to acknowledge their own biases, mitigate them, and be intentional as they develop and implement instructional materials.

Element 2: Create Opportunities for Cultural Socialization

Cultural socialization is the process by which individuals learn about the customs and values of other cultures. Within the classroom, instructors can create simulations that foster cultural humility. For example, scenarios that prompt students to confront challenging situations and recognize their own biases can help facilitate cultural humility. Furthermore, instructors can create discussion boards to encourage students to share their cultural practices, values, and beliefs.

Immersive experiences outside of the classroom can reinforce direct instruction. These opportunities include community outreach events, introductory and advanced practice-based experiences, and international service trips. Placing students in these environments encourages students to go outside their comfort zone and strengthen their confidence. By creating and introducing experiences for cultural socialization, educators can broaden their students’ perspectives.

Element 3: Promote the Practice of Self-Reflection

The emphasis on introspection sets cultural humility apart from cultural competency. Instructors should encourage students to regularly reflect on and learn from their experiences. Activities that promote reflective practices include journaling and meditation. Online resources like the Implicit Association Tests can also serve as tools to help students recognize their unconscious biases.6 By encouraging reflection and providing opportunities to talk about experiences, educators are developing the habits of mind needed for learners to continue this practice throughout their careers.

Implementing these three elements can promote cultural humility in students. Fostering cultural humility and incorporating cultural competency training in health professions education is critical to achieving accessible and comprehensive healthcare for all.

 

References:

  1. American Hospital Association [Internet]. Becoming a Culturally Competent Health Care Organization. AHA; 2016 Jun [cited 2022 Sep 16].
  2. Rockich-Winston N, Wyatt TR. The Case for Culturally Responsive Teaching in Pharmacy Curricula. Am J Pharm Educ 2019; 83(8): Article 7425.
  3. Tervalon M, Murray-García J. Cultural Humility Versus Cultural Competence: A Critical Distinction in Defining Physician Training Outcomes in Multicultural Education. J Health Care Poor Underserved 1998; 9(2):117-25.
  4. International Bureau of Education [Internet]. Culturally Responsive Curriculum; [cited 2022 Sep 16].
  5. Rizzolo D, Kalabalik-Hoganson J, Sandifer C, Lowy N. Focusing on Cultural Humility in Pharmacy Assessment Tools. Curr Pharm Teach Learn 2022;14(6):747-50.
  6. Project Implicit [Internet]. Select a Test; [cited 2022 Sep 30].

June 25, 2022

Should Feedback be Given Verbally or in Writing?

by Mariam M Philip, PharmD, PGY1 Community Pharmacy Practice Resident, Walgreens Pharmacy

Learners thrive in a safe environment where they can freely express their thoughts and opinions. At the heart of learning is feedback.1 Feedback is critical in the classroom, in the clinic, at work … indeed, anywhere learning occurs. It is crucial to knowledge acquisition, patient care, personal development, and growth. As educators, we must strive to give effective feedback, and many agree that it becomes easier to provide over time. However, the feedback a learner receives is not always expected, positive, effectively delivered, or correctly interpreted. Generally, feedback should be based on direct observations and be understood by the learner. It should be provided in a safe environment where learners can discuss the feedback, express their concerns, and participate in developing an action plan.

Feedback is different from an evaluation, and it should be delivered in a conversational yet descriptive manner. When it’s done effectively and periodically, the formal evaluation (which typically occurs at the end of the course or experience) should not be a surprise.1 Evaluations are more formal and done to determine the learner’s grade (or, in the case of employees, pay raises or promotion decisions).

Feedback can take two forms: verbal or written. Is one delivery method better than the other? The goal of feedback is to influence the learner, either by motivating the continuation of good work or by pointing out what needs improvement, or both. One advantage of verbal feedback is that it can lead to a “real-time” discussion and gives both the educator and the student an opportunity to elaborate with examples. On the other hand, written feedback is often clearer, can be referenced later (e.g., when constructing the final evaluation), and (perhaps) reduces the chance of miscommunication or misinterpretation.

In 2017, a randomized controlled trial that enrolled 44 nursing students assessed the effectiveness of oral and written feedback. The students were divided equally into two groups and completed a questionnaire after receiving feedback to determine their reactions, perceptions, and responses to the different delivery methods. The questionnaire showed no statistically significant differences between the two groups; however, the study may have been underpowered due to the small sample size. Nonetheless, the authors offer some interesting points of view.2

Based on the students’ responses to the questionnaire, 21.3% of the oral feedback group experienced negative reactions; 75.8% of these were classified as mild and 24.2% as severe. Conversely, only 14.4% of the students in the written feedback group had a negative reaction; most (92.3%) were classified as mild and 7.7% as severe. Although the difference was not statistically significant, the oral feedback group had a higher percentage of students who experienced negative responses such as arguing, crying, insulting, denying, and inattention, while the written feedback group had a higher rate of intimidation, undue self-defensiveness, and confrontation. The satisfaction rate was higher (but not significantly so) in the written feedback group (77.1% indicated high satisfaction with the feedback vs. 50% in the oral group). Lastly, when the delivery of the feedback was assessed, the students in the oral feedback group gave it a higher delivery score.2

Additional studies conducted with medical students who received “well done” feedback showed similar satisfaction with both oral and written methods of communication.3 A similar study was conducted with medical residents from two university-based clinics. To diversify the participants and results, the participating residents were assigned to medical clinics of various specialties. Sixty-eight internal medicine residents were randomly assigned to receive either written or “face-to-face” feedback. They were then given a questionnaire to assess their overall clinic experience, in which eight of the 19 questions asked about feedback. Sixty-five residents completed the questionnaire. The results showed no differences in the residents’ perceptions of oral and written feedback.3

Both forms of feedback are acceptable and can be effective when delivered according to best-practice principles. Each method of communication has its advantages. Oral feedback is often less formal and more conversational, which allows the student to feel safer expressing concerns or participating in planning for the future. While less efficient, written feedback often promotes deeper reflection: the student can revisit the feedback and refer to it periodically. Thus, a teacher should focus on the quality and frequency of the feedback rather than the delivery method.4

I believe health professional students benefit from both written and oral feedback in didactic and experiential settings alike. Both delivery methods serve a purpose that is important to students’ growth. Thoughtfully composed written feedback allows students to digest the feedback and reflect on their own time, which increases autonomy and promotes self-assessment and planning. Meanwhile, oral feedback allows students to explain themselves, ask questions, and brainstorm next steps with the preceptor.

A healthy balance should exist between the two forms of communication, and both should be used to help the student grow. I find informal oral feedback more engaging, which helps build the teacher-learner relationship; it can help shift the “formal meeting” nerves toward a mentorship mindset. Periodic written feedback can reinforce verbal discussions and make constructing the end-of-course evaluations easier.

References:

  1. Jug R, Jiang X, Bean SM. Giving and Receiving Effective Feedback: A Review Article and How-To Guide. Arch Pathol Lab Med. 2019;143(2):244-250. https://doi.org/10.5858/arpa.2018-0058-RA
  2. Tayebi V, Armat MR, Ghouchani HT, et al. Oral versus written feedback delivery to nursing students in clinical education: A randomized controlled trial. Electron Physician. 2017;9(8):5008-5014. Published 2017 Aug 25. doi:10.19082/5008
  3. Elnicki DM, Layne RD, Ogden PE, et al. Oral Versus Written Feedback in Medical Clinic. J Gen Intern Med. 1998;13(3):155-158. doi:10.1046/j.1525-1497.1998.00049.x
  4. Dobbie A, Tysinger JW. Evidence-based strategies that help office-based teachers give effective feedback. Fam Med. 2005;37(9):617-619.