January 25, 2023

Professional Identity Formation (PIF) in Health Professions Education: Doing is Different from Being

by Lauren C. McConnell, PharmD, PGY1 Pharmacy Practice Resident, Baptist North Mississippi Hospital

Professional identity formation, or PIF, is the process through which a person becomes a professional — typically from student to practicing professional. The progression of PIF is uniquely individualized and superimposed on each student’s personal identity, values, morals, and beliefs.1 The goal of forming a professional identity is to develop a resilient sense of belonging within a health profession.2 PIF goes beyond students acquiring knowledge (‘thinking’) and demonstrating professionalism (‘acting’) to support one’s perception of self (‘feeling’).

Professionalism, as defined by The White Paper on Pharmacy Student Professionalism, is “the active demonstration of the traits of a professional”.3 Health professions students are intrinsically and extrinsically motivated to join a professional community and are willing to uphold certain professional expectations, such as wearing a white coat, communicating respectfully, and being accountable.4,5 However, acting like a professional and being a professional are two different phenomena.

Interrelationship Between Professional Identity and Professionalism

Professionalism and professional identity are distinct yet related concepts, which makes the fluid relationship between the two challenging to describe (see Figure 1). Professionalism is an outward display of the conduct of a professional, while a professional identity is the internal perception of one’s role as a professional.6 Professional traits and behaviors are crucial for PIF, as ‘acting’ like a professional encourages assimilation to that role.7 Similarly, self-awareness of a professional identity is essential for developing a professional demeanor. Several stepwise models have attempted to describe this relationship. Acts of professionalism are observable signs that indicate the concurrent development of a professional identity.6 Building on this idea, my professors at Auburn University and I recently proposed the Möbius Strip model to illustrate the infinite, undirected interplay between PIF and professionalism.7

Figure 1: Professionalism-Professional Identity Möbius Strip

According to Moseley et al., “as the internalization process of PIF occurs, outward professional behaviors are displayed, and as one chooses to behave as a professional, their sense of identity blossoms”.7 This model aligns with the proposal that the end goal of health professions education should not just focus on ‘doing’ but also on ‘being’.8 As with all educational goals, methods for teaching and evaluating progress are essential. The conundrum is how this fluid process can be measured and supported.

PIF-Friendly Pedagogy

Developing a professional identity is the desired outcome in health professions education, as it is the backbone of all decisions students will make as professionals.8 However, many students (myself included, admittedly) fail to recognize themselves as professionals early in their health education journey. For this reason, PIF has long been an elusive target for health professions educators. Furthermore, PIF is a non-linear process, and each student progresses toward their professional identity at a different pace, which makes progress challenging to foster and evaluate.9 Health professions educators should therefore incorporate PIF-friendly teaching strategies into curricula.

PIF pedagogy is the practice of teaching, facilitating, and coaching students through their PIF journey, using teaching methods that support the development of an identity that aligns with the values of their profession. Educators are a fundamental component of the student’s journey. The formation of a professional identity is influenced by external factors, such as curricula, learning environments, expectations, mentorship, and feedback.5 I distinctly remember key preceptors who created positive learning environments and served as role models who positively shaped how I perceived myself as a future pharmacist. Therefore, it is important for educators to foster relationships and create experiences that are meaningful to students, as PIF is facilitated, not taught.

Self-assessment and self-reflection are two PIF-friendly strategies that educators can use in curricula to help students become more aware of their professional strengths and weaknesses.10 A student’s awareness of their presence and growth within a professional community strengthens PIF and creates a sense of belonging.9 Other meaningful relationships outside the formal education environment (e.g., with preceptors, other health professionals, and patients) play a similar and equally important role. To me, there is no replacing the feeling you get the first time a patient mistakenly refers to you as a pharmacist or when a physician shows appreciation by stating ‘good catch.’ Through these interactions, students gain recognition for their place on the healthcare team. Situated learning theory suggests that “learning should take place in a setting the same as where the knowledge will be used”.11 Therefore, it is no surprise that students report early introduction to their profession, direct interaction with patients, and frequent collaboration with other health professionals as key drivers of identity construction.12

Because PIF is facilitated rather than taught, and because it does not occur at a single point in time, structured evaluations (e.g., exams or performance-based assessments) are not helpful measures of student progression. Experts recommend that assessments of PIF occur longitudinally to ensure that the student’s professional identity is progressively developing over time.13 Unfortunately, there are no standardized methods for measuring PIF, and assessments rely on students’ understanding of who they are within a profession. I remember creating short- and long-term career goals as a first-year student pharmacist, thinking I knew exactly who I was and what pharmacy career path I wanted to pursue. But with each semester, I revisited these goals and was honestly embarrassed by what I thought I knew about who I wanted to be.

In one study, investigators designed a Professional Self Identity Questionnaire (PSIQ) that attempts to measure the degree to which health professions students identify as a member of their profession.14 Building on this notion, faculty at Auburn University Harrison College of Pharmacy recently created a PIF instrument to encourage students to reflect on their professional identity. This instrument asks students to self-assess fourteen qualities/behaviors, such as confidence, knowledge, personality, professionalism, and communication.10 These PIF-friendly exercises, using a combination of self-assessment and self-reflection, attempt to measure what educators cannot see: how students see themselves in relation to their profession.

There are several other activities and instructional strategies that can be used to promote PIF, such as feedback, experiential education, co-curricular activities (e.g., health fairs), mentoring/role modeling, student well-being groups, and white coat ceremonies.7,15 Of course, most professional curricula already incorporate many of these pedagogical methods, but they require active effort by educators to intentionally foster PIF. Reflecting on my time as a student, I now know why I have always appreciated professors who were passionate about what they taught, preceptors who encouraged autonomous work, and mentors who led by example – they intentionally helped shape my professional identity. Educators should continue to purposefully use and prioritize PIF-friendly pedagogical methods, particularly early in curricula, to support the process of professional identity formation amongst their students.

References

  1. Cruess RL, Cruess SR, Steinert Y. Amending Miller's pyramid to include professional identity formation. Acad Med. 2016;91(2):180-5.
  2. Kellar J, Austin Z. The only way round is through: Professional identity in pharmacy education and practice. Can Pharm J (Ott). 2022 Aug 13;155(5):238-240.
  3. Roth MT, Zlatic TD; American College of Clinical Pharmacy. Development of student professionalism. Pharmacotherapy. 2009 Jun;29(6):749-756.
  4. Holden M, Buck E, Clark M, Szauter K, Trumble J. Professional identity formation in medical education: The convergence of multiple domains. HEC Forum. 2012 Dec;24(4):245-255.
  5. Findyartini A, Greviana N, Felaza E, et al. Professional identity formation of medical students: A mixed-methods study in a hierarchical and collectivist culture. BMC Med Educ. 2022 Jun 8;22(1):443.
  6. Forouzadeh M, Kiani M, Bazmi S. Professionalism and its role in the formation of medical professional identity. Med J Islam Repub Iran. 2018;32(1):765-8.
  7. Moseley LE, McConnell L, Garza KB, Ford CR. Exploring the evolution of professional identity formation in health professions education. New Dir Teach Learn. 2021 Dec 6;168:11-27.
  8. Snell R, Fyfe S, Fyfe G, Blackwood D, Itsiopoulos C. Development of professional identity and professional socialisation in allied health students: A scoping review. Focus on Health Prof Educ. 2020 Apr 30;21(1):29-56.
  9. Cruess SR, Cruess RL, Steinert Y. Supporting the development of a professional identity: General principles. Med Teach. 2019 Jun;41(6):641-9.
  10. Ford CR, Astle KN, Kleppinger EL, Sewell J, Hutchison A, Garza KB. Developing a self-assessment instrument to evaluate practice-readiness among student pharmacists. New Dir Teach Learn. 2021 Dec 6;168:133-145.
  11. Li LC, Grimshaw JM, Nielsen C, Judd M, Coyte PC, Graham, ID. Use of communities of practice in business and health care sectors: A systematic review. Implement Sci. 2009 May 17;4:27.
  12. Wald HS. Professional identity (trans)formation in medical education: Reflection, relationship, resilience. Acad Med. 2015 Jun;90(6):701–6.
  13. Garza KB, Moseley LE, Ford CR. Assessment of professional identity formation: Challenges and opportunities. New Dir Teach Learn. 2021 Dec 6;168:147-151.
  14. Crossley J, Vivekananda-Schmidt P. The development and evaluation of a Professional Self Identity Questionnaire to measure evolving professional self-identity in health and social care students. Med Teach. 2009 Dec;31(12):e603-7.
  15. Chandran L, Iuli RJ, Strano-Paul L, Post SG. Developing "a Way of Being": Deliberate approaches to professional identity formation in medical education. Acad Psychiatry. 2019 Oct;43(5):521–7.

January 16, 2023

Achieving the Promise of Authentic Workplace-Based Assessments

by Sophie Durham, PharmD, PGY1 Community Pharmacy Practice Resident, Mississippi State Department of Health Pharmacy

Workplace-based assessments (WBAs) can be intimidating and burdensome for students and evaluators alike; however, these assessments present an opportunity to use real-time direct observation to provide feedback that supports a learner’s growth and development.1 Unfortunately, students often fail to see the usefulness of feedback in clinical settings or worry that their grades might be negatively affected by observations reported through workplace-based assessments.

Throughout my Advanced Pharmacy Practice Experiences (APPEs), I craved feedback so that I could develop as a clinician and ensure that I was providing optimal patient care. I valued the feedback that I received at the midpoint and final evaluations; however, these evaluations were used to determine my final grade. As a student, I benefitted from receiving more frequent, informal feedback to improve my performance in real time. By providing students with more timely formative assessments, preceptors allow students to reflect on their experiences and make necessary corrections to improve their practice without the stress of those observations contributing to their grades.


WBAs are used to evaluate trainees’ performance in practice and can serve as relevant feedback that prompts learners to reflect. WBAs encompass a wide range of assessment strategies that require evaluators to move beyond merely assigning numbers toward a more structured format of assessment. They can be used to assess trainee-patient interactions, procedural skills, and case-based discussions, and to gather multi-source feedback.2

Lauren Phinney and colleagues at the University of California San Francisco used cultural historical activity theory (CHAT) to identify elements of the feedback system, and tensions among those elements, in workplace-based assessments used during medical clerkships. The school introduced a WBA tool in 2019 that includes drop-down items describing the clerkship specialty, skills observed, entrustment ratings adapted from the Ottawa scale, and space for narrative comments to encourage formative feedback. Students are required to gather two WBAs per week. The researchers interviewed first- and second-year medical students participating in core clerkship rotations.1

CHAT allows investigators to examine how tools mediate activities. An activity system is defined as the interaction between learners and tools to achieve an outcome. Tensions among these elements can promote change, create knowledge, and lead to new activity patterns.1 After interviewing students in a series of focus groups, researchers identified five tensions:

  1. Misinterpretation of WBA Feedback as Summative Assessment. Although WBAs were intended to serve formative purposes, first-year students perceived the object, or purpose, of the WBA as summative. Formative assessments are intended to monitor student learning and provide ongoing feedback to improve teaching and learning. More specifically, formative assessments help students identify strengths and weaknesses, allowing them to target areas for improvement and helping faculty pinpoint where students are struggling so they can provide assistance.3 The goal of summative assessments, on the other hand, is to evaluate student learning at the end of a rotation; they are often high stakes, resulting in the assignment of a grade or score. Even when second-year students correctly identified the purpose of WBAs as low-stakes feedback, they remained concerned that this feedback would be used to inform summative assessments and strategically chose to use WBAs when they anticipated positive feedback rather than opportunities for constructive feedback. Two ways to sharpen the distinction between summative and formative evaluations are to use different platforms for WBAs and summative assessments and to allow students to self-complete WBAs.1
  2. Cumbersome Tool Design that Delayed Feedback. WBA requests were sent via computer, so many requests went out hours after the feedback encounter. Students found that the distribution and completion of WBAs were delayed, which resulted in generic or untimely feedback. Using QR codes on smartphones and other improvements in technology facilitated supervisor engagement and rapid feedback.1
  3. Concern About Burdening Supervisors with WBA Tasks. While clerkship leaders encouraged students to seek feedback, students were concerned about interrupting workflow or interfering with patient care. Students found the assessments to be labor-intensive and redundant. Students employed strategies to streamline the process, such as recording the comments preceptors provided during the encounter and submitting them with the WBA request form, which made it easier for preceptors to complete the assessments.1
  4. WBA Requirement as Checking Boxes vs. Learning Opportunity. The weekly quota of two WBAs overshadowed their purpose as a formative feedback mechanism. The authenticity and usefulness of the feedback can be jeopardized when students and supervisors focus on the rule instead of the opportunity to provide feedback. On the other hand, some students reframed the requirement to their benefit. It allowed them to direct their learning toward self-identified goals and to receive timely feedback on their progress toward those goals. The rule also helped initiate consistent feedback discussions with preceptors who did not volunteer feedback.1
  5. WBA Within Clerkship-Specific Learning Culture. Supervisors’ promotion and acceptance of WBAs ultimately set the tone for WBA encounters. Students found that preceptors who actively facilitated WBA encounters provided more useful feedback, while preceptors who pushed back created a barrier. In addition to using more convenient platforms to complete WBAs, students identified more convenient situations, logged feedback retrospectively, and bypassed discussion of the tool to minimize the burden on team members in settings that were not conducive to WBAs.1

In competitive cultures like medicine, it can be difficult to facilitate formative assessments. The authors concluded that by incorporating learner input to make intentional changes, perceptions and utilization of WBAs can be enhanced.1

The authors also offered potential solutions to the perceived problems with WBAs. There is often a disconnect between the intention and the interpretation of workplace-based assessments. Thus, we need to consider how their format and delivery are structured, informed by student feedback. Through this collaboration with students, we can strive to achieve authentic workplace-based assessments that accurately reflect learners’ progress and are used to improve future performance.

While this study focused on the benefits of WBAs in student-preceptor interactions at one medical school, WBAs can be applied across multiple settings and can be separated into three categories: observation of clinical performance, discussion of clinical cases, and feedback from peers, coworkers, and patients. These assessment tools provide insight to trainees, assessors, and academics alike.2

In addition to getting student feedback, I believe we need to gather feedback from preceptors to determine their perceptions of WBAs so that WBAs can be further improved to meet the needs of both students and preceptors. To ensure that we are providing useful and timely feedback to learners, it’s important to reduce the barriers to WBA use. By using QR codes, separate platforms to differentiate summative and formative assessments, and platforms that are compatible with smartphones when computers are not available, schools can establish user-friendly, time-efficient processes and ensure that WBAs are valuable without adding substantial burdens that jeopardize feedback quality.1

References:

  1. Phinney LB, Fluet A, O’Brien BC, Seligman L, Hauer KE. Beyond checking boxes: Exploring tensions with use of a workplace-based assessment tool for formative assessment in clerkships. Acad Med. 2022;97:1511-1520.
  2. Guraya SY. Workplace-based assessment: Applications and educational impact. Malays J Med Sci. 2015;22:5-10.
  3. Formative vs. Summative Assessment [Internet]. Pittsburgh: Carnegie Mellon University; [cited 2022 Nov 18].

November 12, 2022

Failure to Fail: Why Teachers Are Reluctant to Fail Learners and What We Can Do About It

by Katelyn Miller, PharmD, PGY1 Pharmacy Practice Resident, St. Dominic Hospital

Failure is success in progress. – Albert Einstein

The word “failure” often carries a negative connotation, but failure is a necessary part of learning and growing. Yet when it comes time to address an underperforming trainee, student, or resident, many educators and preceptors find it hard to document and act on poor performance. Reports in the medical literature across multiple healthcare disciplines have raised concern about this “failure to fail” phenomenon and its prevalence.1 In one survey, 18% of 1,790 nursing mentors admitted to passing an underachieving student who should have failed.2 Another survey of ten American medical schools found that 74.5% of clinical preceptors indicated it was difficult to accurately assess poorly performing students because evaluators were unwilling to record negative evaluations.3 As health professionals and educators, we have a responsibility to our patients and our professions to accurately evaluate trainees and ensure they become competent members of healthcare teams. To determine whether a learner is sufficiently prepared, here is the critical question: Would I let this person take care of my family member? If the answer is no, why is it so hard to act and deliver an accurate evaluation of an underperforming trainee’s performance?



A systematic review recently published in Medical Teacher examined both qualitative and quantitative studies of evaluators’ willingness and perceived ability to report unsatisfactory performance in health professions education.1 The authors identified six barriers that assessors face when addressing an underperforming trainee:

  1. The Burden and Risks of Failing Someone. Assessors reported that the amount of time and paperwork required to fail a trainee is a deterrent. In the health professions, preceptors and educators often juggle multiple responsibilities, and student evaluations are often given lower priority. Assessors also expressed hesitancy to fail underperforming trainees due to fear of litigation or worries that doing so would negatively affect their professional reputation.1
  2. Guilt and Self-Blame. Assessors reported an emotional toll, including feelings of guilt and self-blame, connected to failing a trainee. These feelings are intensified if the assessor has developed a close relationship with the trainee. Assessors often want to avoid conflict with the trainee and feel that failing the trainee could be perceived as uncaring, which is especially difficult in a caring profession like healthcare.1
  3. Trainee Considerations. Assessors were reluctant to fail someone based on the trainee’s stage within the program. With trainees in the earliest stages of the curriculum, assessors indicated they were reluctant because they believed the learner could improve with time. Ironically, assessors were equally reluctant to fail trainees who were advanced in their training because those trainees had already invested so much time and money. Assessors also worried about the negative effect that failing would have on the trainee’s emotional stability, career goals, and self-esteem.1
  4. Questionable Assessments. Assessors reported a lack of confidence in their ability to accurately evaluate trainees due to feeling unprepared, a lack of training, or a lack of experience. As a result, they questioned their judgment and were willing to give underperforming trainees “the benefit of the doubt.” Assessors also reported a lack of confidence in the tools they used to assess trainees. They expressed uncertainty about what the expectations should be for trainees at different stages of training and questioned whether the evaluation tools being used were appropriate or objective.1
  5. Institutional Support. Assessors reported feeling pressured to pass students and feared they would not be supported by the institution if they failed a student. Assessors also considered the loss of financial support for the institution that would result from failing a student.1
  6. Unsatisfactory Remediation. Assessors were reluctant to fail a trainee if there was no remediation available or if they deemed the available remediation unsatisfactory. Assessors also expressed angst about the timeliness of remediation and whether remediation would be long enough to adequately address the performance problems.1

Conversely, the authors also identified three factors that enabled assessors to fail a failing trainee. These include the assessor’s sense of responsibility and duty to the profession, support from the institution, and the availability of remediation for the trainee.1

While this review of the literature helps us understand the “failure to fail” phenomenon, no quick or easy solution exists. Some experts suggest a narrative-based approach is needed to help assessors overcome barriers to providing corrective feedback and delivering unsatisfactory evaluations.3 Feedback that clearly identifies specific areas for improvement can guide underperforming students to address gaps in skills or knowledge and “shift the focus from evaluating to understanding and teaching” the learner.3 Even with a shift from quantitative to qualitative evaluation methods, however, several barriers will persist.

To ensure patient safety and the quality of care delivered by future health professionals, I believe all schools should institute standardized, formal training of preceptors, educators, and anyone who will be evaluating trainees. Institutions should require new assessors to complete training that teaches them how to accurately use evaluation tools, how to articulate concerns, and how to deliver difficult messages. The training program should make clear the remediation opportunities available to address performance problems and emphasize a competency-based approach to teaching and learning. Institutions should also make explicit what resources are available, including support systems to help assessors cope with the negative emotions and mental toll that come with failing a trainee.

I believe a mental shift in healthcare education is needed. We should acknowledge that competency is the primary goal and that everyone progresses at different paces. Not everyone will graduate at the same time, and that is okay! It is important for educators to accept their responsibility to future patients and the potential harm that could result from failing to fail underperforming trainees. 

References:

  1. Yepes-Rios M, Dudek N, Duboyce R, Curtis J, Allard RJ, Varpio L. The failure to fail underperforming trainees in health professions education: A BEME systematic review: BEME Guide No. 42. Medical Teacher. 2016;38(11):1092-1099.
  2. Brown L, Douglas V, Garrity J, Shepard CK. What influences mentors to pass or fail students. Nursing Management. 2012;19(5):16–21.
  3. McConnell M, Harms S, Saperson K. Meaningful Feedback in Medical Education: Challenging the “Failure to Fail” Using Narrative Methodology. Acad Psychiatry. 2016;40(2):377-379.

November 7, 2022

Gamification to Motivate Students

by Antoniya R. Holloway, PharmD, PGY1 Community Pharmacy Practice Resident, Mississippi State Department of Health

Ask anyone in my pharmacy school graduating class, and I believe they would tell you that the most anticipated part of a long therapeutics lecture was the sound of the Kahoot! theme song. Despite how glazed-over our eyes became during medicinal chemistry discussions, my classmates and I always seemed to perk up at the mention of a fun, competitive opportunity to demonstrate what we had learned. More educators are using games and other competitive activities to fuel student engagement and motivation during instruction.1 This instructional design method is termed “gamification.”


Gamifying education, aka gamification, is described in one of two ways: (1) the act of rewarding learners with gameplay after a tedious lesson, or (2) the act of infusing game elements into a lesson to make it more enjoyable.2 Although using incentives to motivate learners is not a new concept, gamification of classrooms was ignited in the era of e-Learning.1 The Smithsonian Science Education Center lists five prominent benefits of gamification:2

  1. Increased level of learner engagement in classrooms
  2. Increased accessibility for students diagnosed with autism
  3. Improved cognitive development in adolescents
  4. Improved physical development in adolescents
  5. Increased opportunities for learning outside of classrooms

The question is not whether there are theoretical benefits to gamifying education, but whether there are long-term educational benefits for learners, and whether specific methodological approaches to gamifying education can be standardized and implemented consistently.

The International Journal of Educational Technology in Higher Education published a systematic review in 2017 examining 63 papers to evaluate research studies and emerging gamification trends and to identify patterns, educational contexts, and game elements.1 The results were stratified into five categories: educational level, academic subject, learning activity, game elements, and study outcome.

Education level

Educators must understand that although gamification can be implemented at any grade level, more sophisticated platforms that demand greater technical skill may be too complicated for younger learners to navigate. Most papers included in this review were conducted at the university level (44 papers), while far fewer (seven papers) were conducted at the K-12 level. The authors propose that this disproportion exists because college professors have more control over their courses than teachers following state-mandated curricula and because college students have better-developed computer skills.

Academic subjects

The systematic review included gamification studies spanning more than 32 academic subjects in six categories. Many papers targeted computer science/information technology (CS/IT) (39%) and multimedia and communication (12%). Although the results are inconclusive, one could speculate that gamification is more suitable for CS/IT courses than for other subjects.

Types of Learning Activities 

Sixteen studies used a mix of instructional activities rather than a single activity. Half were online courses, and the other half had a web-based learning component (i.e., hybrid courses that included both face-to-face and online instruction). This supports the conclusion that even though some courses are traditional in nature, educators can modernize them by incorporating an online gaming component.

Game Elements

Game design elements described in this systematic review were classified as game dynamics, mechanics, and components. Game “dynamics” prioritize emotions and relationships, while “mechanics” prioritize competition, feedback, and reward. Components are the concrete building blocks of dynamics and mechanics, such as leaderboards, points, and badges. All of the studies used one or more game elements, but no standardized game elements or definitions were used across the studies.

Study Outcomes

Specific learning and behavioral outcomes were also stratified into categories: knowledge acquisition, perception, behavior, engagement, motivation, and social outcomes. Because of the diversity of the studies, outcome results were further classified as (A) affective, (B) behavioral, or (C) cognitive. Educators should note that different game elements (or combinations of elements) and individual factors (personal or motivational) influence the outcomes of gamification. Thus, what works for one learner may not work for others.

The authors of the studies included in the systematic review concluded that gamification produced learning gains (performance, motivation, retention, and engagement) and that learners appreciated the gamification features,1 but the validity and reliability of these claims are questionable. For example, twenty studies had either too small a sample size or too short an evaluation period. Performance is also an inconclusive outcome measure, as it can be influenced by non-motivational factors such as mental capability and prior knowledge.

Theoretical Perspective

Several papers conclude that focusing on game elements like points and rewards, rather than on an individual’s desire to play, is not a fail-proof way to change learning outcomes. Given the wide variety of personal factors, a “user-centered” approach may be more productive as educators develop gamified content.3 One study suggested shifting away from introducing game elements into course lessons and, instead, developing a “gameful” experience throughout the course.4 The authors of the systematic review conclude that there is insufficient understanding of the motivational mechanisms of gamification. A theoretical framework is necessary to standardize how gamification is implemented and to identify which mechanisms create successful outcomes.

This systematic review reinforces the observation that learners generally “like” gamified education and that gamification of learning content increases learner motivation. But it does not provide a concrete answer as to whether gamification leads to long-term improvements in outcomes. I believe educators should consider implementing gamification to increase participation and engagement for health professional students, especially during the foundational years of their professional curricula. However, educators must be aware that the lack of a standardized approach to gamification and individual learner preferences will yield variable outcomes.

References

  1. Dichev C, Dicheva D. Gamifying education: What is Known, What is Believed and What Remains Uncertain: A Critical Review. Int J Educ Technol High Educ 2017; 14 (9).
  2. Mandell B, Deese A. STEMvisions Blog. Five Benefits of Gamification. Washington, DC: Smithsonian Science Education Center. 2016 March 10 [cited 2022 Oct 10].
  3. Hansch A, Newman C, Schildhauer T. Fostering Engagement with Gamification: Review of Current Practices on Online Learning Platforms. HIIG Discussion Paper Series No. 2015-4 [Internet]; [cited 2022 October 10].
  4. Songer RW, Miyata K. A Playful Affordances Model for Gameful Learning [Internet]. TEEM '14: Proceedings of the Second International Conference on Technological Ecosystems for Enhancing Multiculturality; 2014 October [cited 2022 October 10].

October 10, 2022

Cultivating Cultural Humility

by Amy Ly-Ha, PharmD, PGY1 Community Pharmacy Practice Resident, University of Mississippi School of Pharmacy

Growing up in the Vietnamese culture, whenever I had a minor illness, my parents engaged in the practice of cạo gió, also known as coining. The intent of the practice is to dispel negative energy from a sick individual. Coining involves spreading medicated oil onto the skin and rubbing a coin over this area until a red abrasion mark forms. To those who are unfamiliar with the practice, these marks may look frightening and can be mistaken for abuse. As a child, I did not pay much attention to these marks on my body. Once, I came home from school feeling feverish. My mother performed coining and brought me to the doctor’s office the next day. Upon conducting a physical examination, the physician noticed the red stripes on my back. Rather than making accusations of abuse, the physician skillfully interviewed my mother and listened to her explanation. Looking back, I now recognize the significance of this encounter. Not only did the physician display a willingness to listen to my parents, but she also demonstrated an openness to my family’s cultural traditions. This physician modeled cultural humility, a concept that I believe all healthcare professionals should possess to create an environment conducive to optimal patient care.


The widespread implementation of cultural diversity training across health professions education aligns with the growing diversity of our patient populations. There are many aspects to cultural diversity training. Commonly taught in health professions degree programs today, cultural competency embodies the ability to provide care to people with diverse values, beliefs, and behaviors.1 Cultural competency requires several skills, including recognizing the unique needs of every patient, realizing that culture impacts health beliefs, and respecting cultural differences. A culturally competent healthcare professional is able to negotiate and restructure therapeutic plans in response to a patient’s cultural beliefs and behaviors.2 And while cultural diversity training is clearly important, health professionals must also demonstrate cultural humility.

Cultural humility, a term first coined in 1998, is a lifelong process of ongoing self-reflection and self-critique.3 It emphasizes awareness of one’s possible biases and a willingness to be taught by patients. Unlike cultural competency, the goal of cultural humility involves “relinquish[ing] a provider’s role as a cultural expert and adopt[ing] patient-centered interviewing to create a mutual therapeutic alliance.”2 One barrier to teaching cultural humility is the difficulty of assessing students’ growth in this area. Despite this, I recommend that educators implement the following elements to foster cultural humility in their students.

Element 1: Develop Culturally Relevant Curricula

A culturally relevant curriculum incorporates aspects of culture throughout a curriculum, thus valuing various cultures and encouraging intercultural understanding.4 Introducing students to different cultures throughout their education, in and outside the classroom, enables students to learn how to navigate through diversity. By embedding cultural diversity training at strategic times throughout a curriculum, educators can include reflective exercises intended to build cultural humility. 

When developing and implementing a culturally relevant curriculum, one must be aware of the potential to introduce unconscious bias into lessons and assessments. For example, a recently published study investigated the presence of unconscious bias in student assessments at a Doctor of Pharmacy program.5 The investigators examined 3,621 questions administered to first-, second-, and third-year pharmacy classes during the 2018-2019 academic year. Only a small fraction of these questions referenced race (N=40), and race was relevant to only two of them. The study also found that specific races were more often associated with specific health conditions; for example, all questions in the analyzed set related to human immunodeficiency virus (HIV) and sexually transmitted infections (STIs) were associated with African-Americans. Thus, as this study documents, the routine use of race as a descriptor in instances where it lacks significance may propagate racial bias.5 Providing culturally relevant curricula therefore requires educators to acknowledge their own biases, mitigate them, and be intentional as they develop and implement instructional materials.

Element 2: Create Opportunities for Cultural Socialization

Cultural socialization is the process by which individuals learn about the customs and values of other cultures. Within the classroom, instructors can create simulations that foster cultural humility. For example, scenarios that prompt students to confront challenging situations and recognize their own biases can help facilitate cultural humility. Furthermore, instructors can create discussion boards to encourage students to share their cultural practices, values, and beliefs.

Immersive experiences outside of the classroom can reinforce direct instruction. These opportunities include community outreach events, introductory and advanced practice-based experiences, and international service trips. Placing students in these environments encourages students to go outside their comfort zone and strengthen their confidence. By creating and introducing experiences for cultural socialization, educators can broaden their students’ perspectives.

Element 3: Promote the Practice of Self-Reflection

The emphasis on introspection sets cultural humility apart from cultural competency. Instructors should encourage students to regularly reflect on and learn from their experiences. Activities that promote reflective practices include journaling and meditation. Online resources like the Implicit Association Tests can also serve as tools to help students recognize their unconscious biases.6 By encouraging reflection and providing opportunities to talk about experiences, educators are developing the habits of mind needed for learners to continue this practice throughout their careers.

Implementing these three elements can promote cultural humility in students. Fostering cultural humility and incorporating cultural competency training in health professions education is critical to achieving accessible and comprehensive healthcare for all.


Sources:

  1. American Hospital Association [Internet]. Becoming a Culturally Competent Health Care Organization. AHA; 2016 Jun [cited 2022 Sep 16].
  2. Rockich-Winston N, Wyatt TR. The Case for Culturally Responsive Teaching in Pharmacy Curricula. Am J Pharm Educ 2019; 83(8): Article 7425.
  3. Tervalon M, Murray-García J. Cultural Humility Versus Cultural Competence: A Critical Distinction in Defining Physician Training Outcomes in Multicultural Education. J Health Care Poor Underserved 1998; 9(2):117-25.
  4. International Bureau of Education [Internet]. Culturally Responsive Curriculum; [cited 2022 Sep 16].
  5. Rizzolo D, Kalabalik-Hoganson J, Sandifer C, Lowy N. Focusing on Cultural Humility in Pharmacy Assessment Tools. Curr Pharm Teach Learn 2022;14(6):747-50.
  6. Project Implicit [Internet]. Select a Test; [cited 2022 Sep 30].

June 25, 2022

Should Feedback be Given Verbally or in Writing?

by Mariam M Philip, PharmD, PGY1 Community Pharmacy Practice Resident, Walgreens Pharmacy

Learners thrive in a safe environment where they can freely express their thoughts and opinions. At the heart of learning is feedback.1 Feedback is critical in the classroom, in the clinic, at work … indeed, anywhere learning occurs. It is crucial to knowledge acquisition, patient care, personal development, and growth. As educators, we must strive to give effective feedback, and many agree it gets easier to provide over time. The feedback a learner receives is not always expected, positive, effectively delivered, or correctly interpreted. In general, feedback should be based on direct observations and understood by the learner. It should be provided in a safe environment where learners can discuss the feedback, express their concerns, and participate in developing an action plan.

Feedback is different from an evaluation, and it should be delivered in a conversational yet descriptive manner. When it’s done effectively and periodically, the formal evaluation (which typically occurs at the end of the course or experience) should not be a surprise.1 Evaluations are more formal and done to determine the learner’s grade (or, in the case of employees, pay raises or promotion decisions).

Feedback can take two forms: verbal or written. Is one delivery method better than the other? The goal of feedback is to influence the learner and either motivate the continuation of good work or point out what needs improvement, or both. One advantage of verbal feedback is that it can lead to a “real-time” discussion and provides an opportunity for both the educator and the student to elaborate with examples. On the other hand, written feedback is often clearer, can be referenced later (e.g., when constructing the final evaluation), and (perhaps) reduces the chance of miscommunication or misinterpretation.

In 2017, a randomized controlled trial that enrolled 44 nursing students assessed the effectiveness of oral versus written feedback. The students were divided equally into two groups. After receiving feedback, the students completed a questionnaire to determine their reactions, perceptions, and responses to the different forms of feedback. The questionnaire revealed no statistically significant differences between the two groups, although the study may have been underpowered given the small sample size. Nonetheless, the authors offer some interesting points of view.2

Based on the students’ responses to the questionnaire, 21.3% of the oral feedback group experienced negative reactions; 75.8% of these were classified as mild and 24.2% as severe. Conversely, only 14.4% of the students in the written feedback group had a negative reaction; most (92.3%) were classified as mild and 7.7% as severe. Although the difference was not statistically significant, the oral feedback group had a higher percentage of students who experienced negative responses such as arguing, crying, insulting, denying, and inattention, while the written feedback group had a higher rate of intimidation, undue self-defensiveness, and confrontation. The satisfaction rate was higher (but not significantly so) in the written feedback group (77.1% indicated high satisfaction with the feedback vs. 50% in the oral group). Lastly, when the delivery of the feedback was assessed, the students in the oral feedback group gave it a higher delivery score.2

Additional studies conducted with medical students who received “well done” feedback showed similar satisfaction with both oral and written methods of communication.3 A similar study was conducted with medical residents from two university-based clinics. To diversify the participants and results, the participating residents were assigned to medical clinics of various specialties. Sixty-eight internal medicine residents were randomly assigned to receive either written or “face-to-face” feedback. They were then given a questionnaire to assess their overall clinic experience, in which eight of the 19 questions asked about feedback. Sixty-five residents completed the questionnaire. The results showed no differences in the residents’ perceptions of oral and written feedback.3

Both forms of feedback are acceptable and can be effective when delivered according to best-practice principles. Each method of communication has its advantages. Oral feedback is often less formal and more conversational, which allows the student to feel safer expressing concerns or participating in planning for the future. While less efficient, written feedback often promotes deeper reflection: the student can revisit the feedback and refer to it periodically. Thus, a teacher should focus on the quality and frequency of the feedback rather than the delivery method.4

I believe health professional students benefit from both written and oral feedback in the didactic and experiential settings. Both delivery methods serve a purpose that is important to students’ growth. Carefully composed written feedback allows the student to digest the feedback and reflect on their own time, which increases autonomy and promotes self-assessment and planning. Meanwhile, oral feedback allows students to explain themselves, ask questions, and brainstorm next steps with the preceptor.

A healthy balance should exist between verbal and written feedback, and both should be used to help the student grow. I find informal oral feedback more engaging, which helps build the teacher-learner relationship and can shift “formal meeting” nerves toward a mentorship mindset. Periodic written feedback can reinforce verbal discussions and make constructing end-of-course evaluations easier.

References:

  1. Jug R, Jiang X, Bean SM. Giving and Receiving Effective Feedback: A Review Article and How-To Guide. Arch Pathol Lab Med. 2019;143(2):244-250. https://doi.org/10.5858/arpa.2018-0058-RA
  2. Tayebi V, Armat MR, Ghouchani HT, et al. Oral versus written feedback delivery to nursing students in clinical education: A randomized controlled trial. Electron Physician. 2017;9(8):5008-5014. Published 2017 Aug 25. doi:10.19082/5008
  3. Elnicki DM, Layne RD, Ogden PE, et al. Oral Versus Written Feedback in Medical Clinic. J Gen Intern Med. 1998;13(3):155-158. doi:10.1046/j.1525-1497.1998.00049.x
  4. Dobbie A, Tysinger JW. Evidence-based strategies that help office-based teachers give effective feedback. Fam Med. 2005;37(9):617-619.

May 23, 2022

"Blended Learning” Models and Their Effectiveness

by Hannah Black, PharmD, PGY1 Pharmacy Practice Resident, Baptist Memorial Health-North Mississippi

Many of us are familiar with the term “blended learning.” While it is easy to assume that this teaching model simply involves a combination of in-class and online instruction, there are many different ways of accomplishing it. Although blended learning models are now commonplace (thank you, COVID-19), a great deal of research has been published in medical education journals over the last four decades.1 Many studies have documented the effectiveness of blended learning in health professions education, but given that blended learning methods vary substantially, which strategies are most effective?


The Journal of Medical Internet Research published a systematic review and meta-analysis examining the effectiveness of blended learning compared to traditional learning in health professions education.1 Blended learning was stratified into different types of learning support, defined as follows:

  • Offline Learning – the use of personal computers to assist in delivering stand-alone multimedia materials without the need for an internet connection.
    • Videos and audio-visual learning materials (as long as the learning activities did not rely on internet connection)
  • Online Support – all online materials used in learning courses.
    • Educational platforms (learning management system, LMS like Blackboard)
  • Digital Education – a wide range of teaching and learning strategies exclusively based on the use of electronic media and devices
    • Facilitates remote learning for training purposes
  • Computer-Assisted Instruction – the use of audio-visual material to augment instruction.
    • Multimedia presentations, live synchronous virtual sessions via a web-based learning platform, synchronous or asynchronous discussion forums
  • Virtual Patients – interactive computer simulations of real-life clinical scenarios

The primary objective of this study was to evaluate the effectiveness of blended learning in achieving knowledge outcomes compared with traditional teaching strategies.1 Of the 3,389 articles identified in MEDLINE, 56 studies met the inclusion criteria, with a total of 9,943 participants. Most of the participants were medical students. Other participant subgroups included nursing, pharmacy, physiotherapy, dentistry, and interprofessional education.

Offline Blended Learning vs Traditional Learning

Some benefits of offline learning have been suggested, such as unrestricted knowledge transfer and enhanced accessibility. This type of learning gives students more flexibility to learn at a convenient pace, place, and time, which can improve retention of content. However, this study showed no significant difference in knowledge outcomes when compared to traditional teaching methods. It was noted that the majority of studies in this group were in nursing. These results were consistent with a previous meta-analysis on offline digital instruction.2

Online Blended Learning vs Traditional Learning

Online blended learning gives students more experience building competency in things that require repeated practice, such as EKG and imaging interpretation. The internet has provided students with an abundance of resources that can be used with the click of a button, so why not use it to the learner’s advantage? As expected, this study did show a significant advantage in knowledge outcomes of online blended learning versus traditional learning alone. Using the internet to deliver instruction does not come without challenges. Learning is highly dependent on the student’s ability to cope with technical difficulties and comfort using computers and navigating the internet.

Digital Learning vs Traditional Learning

Digital learning, or “eLearning,” is increasingly used in health professions education to improve access to training and communication.3 However, the pooled effect for knowledge outcomes in this study suggests no significant difference.1 The analysis was broken into subgroups, and the medicine subgroup showed a positive effect of digital learning compared with the control group.1 I feel this concept is not one to ignore because it facilitates remote learning, which could help address the shortage of health professionals in settings with limited resources.1

Computer-Assisted Instruction Blended Learning vs Traditional Learning

Computer-assisted instruction can provide students with innovative methods of instruction for skills like physical examination techniques.8 The pooled effect for knowledge outcomes in this study suggested a significant improvement. Participants in one study reported difficulties accessing the course due to problems with the university’s internet, so the online discussion board was not used to its full potential.5 One could argue that similar problems could emerge even in a traditional learning setting, where students may choose not to engage in discussion or may feel too intimidated to do so.

Virtual Patient Blended Learning vs Traditional Learning

Virtual patients are widely used in simulation-based instruction. These simulations can serve as a precursor to bedside learning or be used when direct patient contact is not possible. The groups with supplementary virtual patient learning support showed a significant improvement in knowledge outcomes compared to traditional learning.1 These results reinforce the findings of a similar meta-analysis showing that virtual patients have a positive impact on skill development and problem solving.3

When all 56 studies were combined, the pooled effect size reflected a significantly positive effect on knowledge acquisition in favor of blended learning versus traditional learning in health professions education.1 A possible explanation is that blended learning allows students to review materials at their own pace and as often as necessary. This reinforces the belief that the outcomes of blended learning depend most on student characteristics and motivation rather than on the instructional delivery method.

In my opinion, one of the most interesting findings from this study comes from the subgroup analysis. Among the top three subgroups, the pooled effect size was 0.91 for medicine studies, 0.75 for nursing studies, and 0.35 for dentistry studies.1 This reiterates that the effectiveness of blended learning is complex and depends on learner characteristics and the needs of the student population. One tool that can be used to develop and implement a personalized blended learning curriculum is the six-step Kern cycle,6 described below:

  1. Problem identification – The first step begins with the identification and analysis of a specific healthcare need or group of needs. It could relate to the needs of the provider, or the needs of society in general.
  2. Targeted needs assessment – The second step involves assessing the needs of your group of health professional students, which may differ from the needs of providers or society in general.
  3. Formulating goals and learning objectives – Once the needs have been clearly identified, goals and objectives should be written starting with broad goals, then moving to specific, measurable objectives.
  4. Selecting educational strategies – After objectives have been finalized, the content and methods can be selected that will help to achieve the educational objectives.
  5. Implementation – In this step the finalized curriculum is implemented.
  6. Evaluation and feedback – This final step is important to help continuously improve the curriculum and gain support to drive the ongoing learning of participants.

 Overall, this meta-analysis reinforces the notion that blended learning has a positive effect on knowledge outcomes in healthcare education. However, it also indicates that different methods of conducting blended courses could demonstrate differing effectiveness based on the student population, their needs, and the learning objectives.1 When strategically developed and implemented, I believe blended learning enhances outcomes.

References

  1. Vallée A, Blacher J, Cariou A, Sorbets E. Blended learning compared to traditional learning in medical education: systematic review and meta-analysis. Journal of Medical Internet Research. 2020;22(8):e16504.
  2. Posadzki P, Bala MM, Kyaw BM, et al. Offline digital education for postregistration health professions: systematic review and meta-analysis by the Digital Health Education Collaboration. Journal of Medical Internet Research. 2019;21(4):e20316.
  3. Kononowicz AA, Woodham LA, Edelbring S, et al. Virtual patient simulations in health professions education: systematic review and meta-analysis by the Digital Health Education Collaboration. Journal of Medical Internet Research. 2019;21(7):e14676.
  4. Song L, Singleton ES, Hill JR, Koh MH. Improving online learning: student perceptions of useful and challenging characteristics. The Internet and Higher Education. 2004;7(1):59–70.
  5. Al-Riyami S, Moles DR, Leeson R, Cunningham SJ. Comparison of the instructional efficacy of an internet-based temporomandibular joint (TMJ) tutorial with a traditional seminar. British Dental Journal. 2010;209(11):571–6.
  6. Kern D. Curriculum Development for Medical Education: A Six-Step Approach. Baltimore, MD: Johns Hopkins University Press; 2022.
  7. George PP, Papachristou N, Belisario JM, et al. Online eLearning for undergraduates in health professions: a systematic review of the impact on knowledge, skills, attitudes and satisfaction. Journal of Global Health. 2014;4(1).
  8. Tomesko J, Touger-Decker R, Dreker M, Zelig R, Parrott JS. The effectiveness of computer-assisted instruction to teach physical examination to students and trainees in the health sciences professions: a systematic review and meta-analysis. Journal of Medical Education and Curricular Development. 2017;4:2382120517720428.

May 4, 2022

Portraying Social Constructs that Influence Health in Patient Cases

by Jewlyus Grigsby PharmD, PGY1 Community Pharmacy Practice Resident, University of Mississippi School of Pharmacy

One of the most common ways health profession programs assess students’ knowledge is through patient cases intended to mirror real-life practice scenarios. These cases are meant to place students in a “what would you do?” simulation and facilitate the development of their critical thinking and clinical skills. They are used during in-class discussions, on exams, in clinical skills competitions, in interviews, and for professional development. When designing these cases, faculty consider a variety of factors such as the severity of the patient’s symptoms, lab values, comorbidities, allergies and intolerances, and even family history. One set of factors that must be carefully considered when creating a case is the patient’s race, ethnicity, nationality, and socioeconomic status. These factors are social constructs, and therefore influence perception, decision making, and (all too often) health outcomes. In August 2021, the American Medical Association published updated guidelines about how to report race and ethnicity in the medical literature. These guidelines state that the words and terms used must be “accurate, clear, and precise and must reflect fairness, equity, and consistency.”1 Furthermore, the guidelines provide guidance on how to address sex and gender, sexual orientation, age, and socioeconomic status in research reports, review articles, and case reports. The goal is to reduce unintentional bias within the medical and scientific literature. However, while we now have a guideline instructing health care researchers and educators on how best to report these social constructs in the literature, how should they be portrayed in the classroom setting and during experiential courses?

Ensuring the appropriate portrayal of diversity in patient cases should start with a careful reflection on the objectives of the lecture or topic being taught. This is especially important because test questions are often developed from the learning objectives. When writing learning objectives, one must ask what participants should be able to do as a result of the lecture, what the audience needs to know, and what the take-home message is. By including objectives that relate to the social determinants of health, instructors can introduce diversity into patient cases and help students practice disease state management with patients from diverse backgrounds. Here are three examples of how to structure objectives that include some of these social factors:

  1. Create a treatment plan for patients within the confines of the state’s Medicaid medication formulary.
  2. Design a medication regimen that accounts for and is consistent with a patient’s religious beliefs and practices.
  3. Compare and contrast the prevalence of medication allergies and intolerances present in specific racial and ethnic groups.

These objectives challenge students to analyze a patient’s financial status, religious beliefs, and race/ethnicity in the context of the treatment regimen and medication characteristics.

After establishing the objectives for a presentation and determining whether specific social factors should be incorporated, the next step is to design the cases that will be used during the in-class activities and on exams. The cases should highlight the medical conditions under consideration but also show how political, economic, and social factors contribute to the patient’s overall health outcomes. It is also important to ensure the case does not reinforce biases and avoids stereotypes. This can be challenging because there is a fine line between something that might be more common in a particular population and a stereotypical patient presentation. For example, psoriasis is more common in Caucasian patients and diabetes is more common in African Americans. However, not all diabetes-related cases should be about an African American patient, and not all psoriasis cases should feature a Northern European! These diseases occur in people of all racial and ethnic backgrounds, but there may be some differences in presentation, clinical features, and severity that can be explored by featuring patients from various backgrounds.

One group, a non-profit organization, produces cases for courses in medical schools in the United States. They design their cases using an approach called “structural competency” defined as: 

the trained ability to discern how a host of issues defined clinically as symptoms, attitudes, or diseases also represent the downstream implications of a number of upstream decisions about such matters as health care and food delivery systems, zoning laws, urban and rural infrastructures, medicalization, or even about the very definitions of illness and health.2

Based on this definition, the group produced a guide to assist educators in the implementation of the cases and how to discuss race and culture in the classroom.2

Using the learning objectives above, we could construct a patient case to explore a range of issues. Here is an example that a teacher might create:

RS is a 30 YO bisexual African American male with type 2 diabetes, hypertension, and dyslipidemia. He is coming to clinic for the first time since being hospitalized due to diabetic ketoacidosis. His diabetes is uncontrolled and he probably doesn’t have health insurance. His family history includes type 2 diabetes, stroke, and heart failure. He states that he drinks very little water and because he works all the time in a factory, he eats a typical Southern diet: high calorie and high carbohydrates with little to no vegetables. What medication regimen would you recommend in this case? What are some non-pharmacological interventions you would suggest?

This is a suitable case for evaluating a patient newly diagnosed with diabetes; however, it perpetuates stereotypes and can reinforce some implicit biases that many practitioners have. First, the introductory sentence states the patient’s sexual orientation. This information really isn’t necessary to answer the key questions. Nonetheless, patients sometimes disclose personal information during a clinic visit or hospital stay. Although it does not contribute information that is useful when addressing the key questions in the case, it might be an opportunity to introduce students to a patient whose sexual orientation may be different than their own. However, the manner in which the patient’s sexual orientation is included doesn’t flow with the narrative of the case. Also, the case alludes to the possibility that this patient is uninsured, but based on the objectives, we should indicate that the patient is on Medicaid. Lastly, the patient’s diet is described in a stereotypical manner. Instead of labeling this a “Southern diet” that all African Americans in the South consume, it would be better to describe the patient’s diet without ascribing it to the patient’s race or ethnicity. So here’s a way to revise the case without perpetuating these biases and stereotypes:

RS is a 30 YO African American male with type 2 diabetes, hypertension, and dyslipidemia. He is coming to the clinic for the first time after being hospitalized for diabetic ketoacidosis. He has trouble getting his medications because of his Medicaid plan’s limited formulary, and normally his boyfriend helps him pay for his medications. His family history includes type 2 diabetes, stroke, and heart failure. When asked what he has eaten over the past 24 hours, he indicates he did not eat breakfast, ate a chicken sandwich meal from Chick-fil-A for lunch, and had fried chicken with bread for dinner. What medication regimen would you recommend in this case? What are some non-pharmacological interventions you would suggest?

The new case removes the patient’s sexual orientation from the introductory statement, but it is still alluded to later in the case. The case introduces access to medications as a potential problem. Also, there is specific information about the patient’s eating habits rather than sweeping generalizations. These changes do not alter the case entirely, but they do remove some of the stereotypical elements and biases. In order to introduce students to the social determinants of health, social constructs need to be included in patient cases, but the cases must be constructed in a way that reduces bias while reflecting the diversity of the patients we serve.

References

  1. Flanagin A, Frey T, Christiansen S. Updated Guidance on the Reporting of Race and Ethnicity in Medical and Science Journals. JAMA 2021; 326(7): 621. Available at: https://jamanetwork.com/journals/jama/article-abstract/2783090 [Accessed 28 April 2022].
  2. Krishnan A, Rabinowitz M, Ziminsky A, Scott S, Chretien K. Addressing Race, Culture, and Structural Inequality in Medical Education. Academic Medicine 2019; 94(4): 550-555.

April 1, 2022

Case-based Learning From Two Perspectives: Learner and Teacher

by Madison Parker, PharmD, PGY-1 Pharmacy Practice Resident, University of Mississippi Medical Center

Who enjoys being proved wrong or having to learn the hard way? The rhetorical answer is no one! However, in the last couple of months, it has happened to me time and time again. I recently graduated from pharmacy school. I matched for a PGY-1 pharmacy residency at the medical center associated with my alma mater. Wanting to be a well-rounded pharmacist and a successful preceptor, I decided to participate in an elective academia rotation. I quickly learned how different things are on the “other side.” As a student, I never understood the time commitment and detail that went into teaching a class and developing cases.

As a student, I did not enjoy the “case-based approach.” I did not understand why we were going to school if we were essentially just teaching ourselves. What I didn’t realize at the time was how much I was learning and growing as a health professional by grappling with cases. Hindsight always seems to be 20/20! Case-based learning made me dig far deeper than typical lectures ever did during pharmacy school. I was no longer just memorizing a drug side effect to regurgitate it back on a multiple-choice test. It was challenging, and it made me think well beyond “the right answer.” I had to learn how to pivot when a treatment was contraindicated or what to do next if a patient suffered a side effect. Essentially, I learned how to develop contingency plans to better take care of my future patients.

I have also learned about Bloom’s Taxonomy during my teaching experience and the “cognitive skills” that case-based learning requires. Lectures rely on regurgitating information, and the goal is to have students “remember” and “understand,” whereas case-based learning requires the student to “analyze,” “evaluate,” and “create.”1

During case-based learning, the student is provided a detailed clinical case or scenario that they need to work through and discuss. This typically involves a small group rather than a large lecture hall. Case-based learning, like typical lectures, should still include learning objectives, but the teacher won’t always disclose all of the objectives before the case discussion occurs.1 This non-disclosure allows the learner to think for themselves. Case-based teaching dates back to the early 1900s; Dr. James Lorrain Smith, a professor at the University of Edinburgh, is thought to have been the first teacher to use it, during his pathology course.2

One study surveyed health professional students about their opinions toward case-based learning. There were 520 students invited to participate, drawn from various professional schools including medicine, pharmacy, nursing, and social work. Students were required to work through the cases as teams during the course. They were then given a nine-item survey about their satisfaction with the small-group, case-based learning format, rating each item from 1 (‘strongly disagree’) to 5 (‘strongly agree’). Ratings were reported as means: e-learning discussions (3.54 ± 0.99), small group learning experiences (3.94 ± 0.88), and panel discussions (3.76 ± 0.91). Based on these satisfaction scores, one can infer that case-based learning can be challenging but also rewarding for the learner.3

Another study examined medical students in their pre-clinical years from 2015-2018 at Stanford who chose to enroll in an optional case-based learning course. This course was led by a facilitator and involved a small group of students who would discuss a prospective patient case. At the end of the course, the medical students were asked to participate in a pre- and post-intervention study reflecting on their clinical skills. The control sample included medical students who did not participate in the course; non-participants were encouraged to complete the pre- and post-intervention surveys as well. A 14-item survey was given to assess participants’ self-reported skills, including the ability to report, interpret, manage, and educate, as well as course-specific skills and objectives. A 5-point Likert scale was utilized, with 1 indicating ‘strongly disagree’ and 5 indicating ‘strongly agree.’ Two surveys were administered; the first was completed within two weeks before the first session of the optional course, and the second within two weeks after the final session. The difference between the post-intervention and pre-intervention scores was calculated. The intervention group showed a more positive change in the following categories: understanding how clinicians arrive at a diagnosis, using a step-by-step approach in a longitudinal primary care setting, and how to ultimately share information with their patients.4

There are many benefits associated with case-based teaching. It challenges health professional students to use their problem-solving skills before encountering real patients in their clinical years. This in turn allows students to practice and sharpen their skills so that they know how to grapple with real problems and challenges using the same resources that practitioners use when faced with the unknown.4 As a future preceptor, I am a big fan of case-based teaching!

In my opinion, cases should be created by experts in the field of practice. Cases should be constructed in a way that sparks students’ interest in “the real world.” Case-based learning should be facilitated, but by whom? In my experience, it doesn’t have to be an expert in the field, just someone with a general knowledge of the subject matter. However, it is helpful for facilitators to have a guide created by the case author. The case guide should clearly state the objectives the students should achieve and give “tips for success” in the written materials.

In summary, case-based teaching is effective and encourages higher order thinking. It is particularly effective in health professions education, giving students a chance to practice in a safe environment where “no harm” will arise from a poorly conceived or ill-informed decision. Case-based learning should be extensively used in every health-related curriculum, as the benefits and positive results are well established.

References:

  1. Armstrong, P. Bloom’s Taxonomy. Vanderbilt University Center for Teaching. 2010.
  2. McLean SF. Case-Based Learning and its Application in Medical and Health-Care Fields: A Review of Worldwide Literature. Journal of Medical Education and Curricular Development. 2016;3:JMECD.S20377.
  3. Curran VR, Sharpe D, Forristall J, Flynn K. Student satisfaction and perceptions of small group process in case-based interprofessional learning. Medical Teacher. 2008;30(4):431-433.
  4. Waliany S, Caceres W, Merrell SB, Thadaney S, Johnstone N, Osterberg L. Preclinical curriculum of prospective case-based teaching with faculty- and student-blinded approach. BMC Med Educ. 2019;19(1):31.

March 25, 2022

Assisting Students with Disabilities During Experiential Education

by George Lamare Haines, PharmD, PGY1 Community Pharmacy Resident, The University of Mississippi School of Pharmacy

There is only one way to look at things until someone shows us how to look at them with different eyes.

—Pablo Picasso

At times it is hard to see the problems that face others. Often, when a problem doesn’t affect a person, they don’t perceive it as a problem, or even recognize that it exists, because they don’t have to deal with it. This is certainly true when it comes to people with disabilities. There are many things that an able-bodied person takes for granted and never even considers. When it comes to students in college, Title II of the Americans with Disabilities Act (ADA) protects people with disabilities from discrimination by universities, community colleges, and vocational schools.1 Most of us are at least somewhat familiar with accommodations for students with disabilities in the classroom setting, but it is far less common to see these considerations in experiential learning environments.

Every educator tries their best to determine the most appropriate teaching methods for the largest number of students. For most programs, there are special accommodations made for students with learning disabilities in the classroom, like providing extra time during testing or having someone read the exam questions aloud. When students with disabilities enter professional programs, they will be required to participate in experiential education that places them in environments similar to those that they will work in after completion of their program. These “non-academic” settings, which are not under the control of the university or college/school, can be challenging for students with disabilities.

When the University of Colorado School of Medicine was faced with this challenge, they took steps to ensure that their students were set up for success. To illustrate, the school made special accommodations for a third-year medical (M3) student who uses a wheelchair. The student was scheduled to start an Operative/Perioperative clerkship. Before the start of the student’s M3 year, the student met with the medical school dean to discuss requirements, barriers, and reasonable accommodations for the clerkship. The dean then met with preceptors for the clerkship to inform them of the student’s disabilities and to develop a plan for an optimal experience, which included selecting clerkships that would allow for maximal physical access and participation. By putting in this extra effort, the student was able to fully participate in all required clerkships and went on to complete the degree with honors.2

Because the student was proactive, there was effective communication and reasonable accommodations were made so that they could complete their clerkship. Early communication is the key here. As with most issues, addressing them as early as possible prevents real problems later. Often administrators have to do the groundwork to ensure that learners with disabilities are able to complete the requirements of an experience. These steps are important for both physical and learning disabilities. Students with learning disabilities are often hesitant to report them because of stigma and shame, or they may not understand the impact of their disability and the potential benefits of sharing the information with their preceptors.3

Preceptors and faculty in experiential education administration can determine reasonable accommodations for students if they are given adequate time, resources, and knowledge of the disability.4 There are five basic principles that should guide institutions in ensuring that reasonable accommodations are provided. The accommodations should be based on a reliable diagnosis; they must mitigate the aspects of the disability that affect student competencies; they should be tailored to each experiential site; they must ensure collaboration and communication among students, staff, preceptors, and administration; and, most importantly, they must uphold privacy. If an accommodation falls short on any of these, it cannot be considered reasonable.4 Often, accommodations for a student with a learning disability can be made with minor adjustments to the environment, policies, and procedures, while students with physical disabilities may require significant adjustments to the environment. By having proactive policies and procedures in place, preparing preceptors for what to expect, and monitoring student learning outcomes, students with disabilities have the best chance for success during experiential education.4

A recent commentary published in the American Journal of Pharmaceutical Education provides a stepwise approach to addressing these needs.5 The first step is to create a system for students to submit a request when entering the experiential program. Once the student has submitted the request, the program is then responsible for exploring accommodation options and sites that either already meet the requirements of the accommodation or that can reasonably accommodate the request. The next step is applying and fully implementing these accommodations. This will look different for different locations and will depend on the needs of the student. For example, a student who does not have sufficient strength may be accommodated by shortening the length of the rotation day but extending the total number of days in order to meet the required number of experiential hours. Another example would be to avoid rapid-fire questioning for a student who struggles with processing information.4 A practice walkthrough by both the student and preceptor may also be useful before the start of the rotation to allow the student to become familiar with the environment and what to expect when they start the experience. The final, and possibly most important, step is to monitor the effectiveness of the accommodation. Continued communication between the preceptor, student, and experiential program director is essential to quickly address oversights and ensure the accommodation is effective.5

When we start looking at these required experiences from the perspective of a student with a disability, we see problems that we didn’t know were there. It takes courage for students to tell you what their needs are. Open, honest communication seems to be the key to addressing the needs of students with disabilities, especially in experiential education.

References:

  1. Americans with Disabilities Act of 1990; 42, USC §§ 12101 et seq.
  2. Malloy-Post R, Jones TS, Montero P, et al. Perioperative Clerkship Design for Students With Physical Disabilities: A Model for Implementation. Journal of Surgical Education. 2022; 79(2): 290-94.
  3. Vos S, Kooyman C, Feudo D, et al. When Experiential Education Intersects with Learning Disabilities. Am J Pharm Educ 2019; 83(8): Article 7468.
  4. Vos S, Sandler L, Chavez R, et al. Help! Accommodating learners with disabilities during practice-based activities. J Am Coll Clin Pharm; 2021; 4(6): 730-37.
  5. Kieser M, Feudo D, Legg J, et al. Accommodating Pharmacy Students with Physical Disabilities During the Experiential Learning Curricula. Am J Pharm Educ 2022; 86(1): Article 8426.

March 24, 2022

Should We Adopt a Two-tier Grading System in Health Professions Education? Benefits and Practical Considerations

by Mary Kathryn Vance, PharmD, PGY-1 Pharmacy Practice Resident, University of Mississippi Medical Center

Grades have long been a cornerstone of educational systems, giving students and educators a way to measure the achievement of learning objectives within courses. Grades were first instituted in the 1700s in Europe to assign a rank order among students. By the late 1800s, several American universities had adopted a grading system with “passing” rates ranging from 26-75%. Eventually, this transitioned to the tiered grading system we recognize today, where an A generally means the student has scored at least 90% on the assessment (or received at least 90% of available points in the course), a B means 80-90%, a C means 70-80%, and so forth. Grades typically are attached to a descriptor. For instance, an A might signify an exceptional level of achievement, a B a good but not outstanding level of performance, a C a fair level, and a D significant performance deficiencies that are still passing.1 While this is still the system widely employed by the majority of Doctor of Pharmacy programs in the United States, some programs have adopted a pass/fail or two-level grading system.
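
As a concrete (and deliberately simplified) illustration of the difference between the two systems, the short Python sketch below maps a percentage score to a multi-tier letter grade and to a two-level outcome. The cut points and descriptors are assumptions for illustration; actual programs set their own.

  # Simplified sketch of multi-tier vs two-level grading.
  # Cut points and descriptors are illustrative only; programs set their own.
  def letter_grade(percent: float) -> str:
      if percent >= 90: return "A (exceptional)"
      if percent >= 80: return "B (good)"
      if percent >= 70: return "C (fair)"
      if percent >= 60: return "D (deficient but passing)"
      return "F (failing)"

  def two_level_grade(percent: float, minimum_pass: float = 70) -> str:
      # A two-level system keeps only one cut point: the minimum pass level.
      return "Pass" if percent >= minimum_pass else "No Pass"

  print(letter_grade(84), "|", two_level_grade(84))  # B (good) | Pass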

Several studies have shown that students in health professions programs, including pharmacy students, experience anxiety, depression, and stress at higher rates than their peers. This places students at a higher risk of developing burnout, which is characterized by exhaustion and a diminished sense of accomplishment.2,3 Moreover, multitiered grading systems can foster unhealthy competitive environments among students. Two-level grading systems have been proposed as a potential way to mitigate stress, reduce competition, and increase students’ well-being. A survey with nearly 1200 first- and second-year medical student respondents found that students in schools using grading scales with three or more categories had higher levels of stress, emotional exhaustion, and depersonalization when compared to students in schools using two-level grading systems. Students in schools with multi-tier grades were also more likely to have seriously considered dropping out of school.4 Another study conducted at Mayo Medical School compared students from classes before and after implementation of a two-level grading system. Students graded with the two-level system had less perceived stress and greater group cohesion than their multilevel peers.5

One concern that educators express about two-level grading systems is that they can negatively impact academic performance. Students’ motivation to learn the material might be decreased because they may not have to understand the concepts as deeply to get a passing grade. Some evidence suggests this concern is more theoretical than real. At the University of Virginia School of Medicine, the first two years of the curriculum were changed from graded to pass/fail. When student performance was compared before and after the change, no differences were observed in subsequent course grades, grades during clerkships, or scores on the United States Medical Licensing Examination (USMLE) Step 1 and Step 2 Clinical Knowledge boards.6 Similar results were seen at the Mayo Medical School, where there was no difference in USMLE Step 1 board scores before and after changing from a multilevel to a two-tier grading system.5

While they do not appear to reduce students’ achievement during school, two-level systems may better position students to become self-regulated learners. Health professionals are expected to engage in a process of continuous learning throughout their careers. This may be difficult for some students after transitioning from a system with strong extrinsic motivators (i.e. grades) to professional life where the individual must muster the internal motivation to figure out what, how, and when to learn. Helping students develop into self-regulated learners while still in school lays the foundation for this to continue throughout their careers and ultimately increases their knowledge and skills to provide better patient care.7

Another potential disadvantage of two-level systems is a decreased probability for students to match with residency programs. The American Society of Health-System Pharmacists (ASHP), the organization responsible for pharmacy residency program accreditation, will soon require that all accredited pharmacy residency programs develop procedures for evaluating the academic performance of applicants from pass/fail (two-tier grading) institutions.8 There is still the potential that students from institutions with two-tier grading systems could be seen as less desirable or competitive. However, this effect was not seen in a study that examined the effect of pass/fail grading for advanced pharmacy practice experiences (APPEs) on residency match rates at 100 pharmacy schools in the United States over the course of 3 years.9 Unadjusted analyses showed no difference in match rates between students from schools with multilevel and two-level grading systems. After adjusting for potential confounders, two-level grading was actually associated with higher match rates during one of the three years.9 Similar rates of success in residency placement were also seen in the study conducted at the University of Virginia School of Medicine before and after their transition to a two-tier grading system.6
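
For readers who want to see what “unadjusted” versus “adjusted” comparisons look like in practice, here is a minimal Python sketch on simulated data. The variables, confounders, and effect sizes are invented for illustration and are not taken from the match-rate study cited above.

  import numpy as np
  import pandas as pd
  from scipy.stats import chi2_contingency
  import statsmodels.formula.api as smf

  # Simulated records: APPE grading system (1 = pass/fail) and residency match,
  # plus two invented confounders. None of this is real study data.
  rng = np.random.default_rng(0)
  n = 300
  df = pd.DataFrame({
      "two_level": rng.integers(0, 2, n),
      "public_school": rng.integers(0, 2, n),      # hypothetical confounder
      "class_size": rng.normal(120, 25, n),        # hypothetical confounder
  })
  logit_p = -0.2 + 0.1 * df["two_level"] + 0.3 * df["public_school"]
  df["matched"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

  # Unadjusted comparison: chi-square test on grading system vs match.
  chi2, p_unadjusted, _, _ = chi2_contingency(pd.crosstab(df["two_level"], df["matched"]))

  # Adjusted comparison: logistic regression controlling for the confounders.
  model = smf.logit("matched ~ two_level + public_school + class_size", data=df).fit(disp=False)

  print(f"Unadjusted chi-square p = {p_unadjusted:.3f}")
  print(model.summary().tables[1])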

Despite the potential benefits, two-tier grading systems have not been widely implemented in pharmacy education, and where they have been implemented, there are some inconsistencies. A study examining the implementation of two-tier grading systems within Doctor of Pharmacy programs found that the programs varied in the terminology used to describe student achievement levels, minimum pass levels, and whether a class rank or GPA was calculated, among other factors.10 This lack of uniformity leads to questions as to how best to implement two-tier grading systems.

Experiential courses such as introductory and advanced pharmacy practice experiences would seem to lend themselves well to a two-tier grading system. These types of courses tend to vary in their rigor and requirements based on the practice site, which already makes the letter grades assigned to a student’s performance difficult to interpret. There are a variety of labels that could be used in a two-tier system, such as pass/fail, pass/no pass, or satisfactory/unsatisfactory. These labels haven’t been formally evaluated, but the connotations of “fail” and “unsatisfactory” would seem to be more negative than “no pass.”

Converting non-experiential courses to a two-level system is controversial. In schools where this has been done, numerical grades given to assignments and assessments are used to calculate a student’s class rank. This could allow high achievers to be rewarded and give residency programs a way to compare applicants. We clearly need additional studies about two-tier grading systems to determine their benefits and risks and how to best execute them.

References

  1. Cain J, Medina M, Romanelli F, Persky A. Deficiencies of Traditional Grading Systems and Recommendations for the Future. Am J Pharm Educ 2022; 86 (2): Article 8850.
  2. Brazeau CMLR, Shanafelt T, Durning SJ, et al. Distress Among Matriculating Medical Students Relative to the General Population. Academic Medicine. 2014;89(11):1520-1525.
  3. Geslani GP, Gaebelein CJ. Perceived Stress, Stressors, and Mental Distress Among Doctor of Pharmacy Students. Social Behavior and Personality: an international journal. 2013;41(9):1457-1468.
  4. Reed DA, Shanafelt TD, Satele DW, et al. Relationship of Pass/Fail Grading and Curriculum Structure With Well-Being Among Preclinical Medical Students: A Multi-Institutional Study. Academic Medicine. 2011;86(11):1367-1373.
  5. Rohe DE, Barrier PA, Clark MM, et al. The Benefits of Pass-Fail Grading on Stress, Mood, and Group Cohesion in Medical Students. Mayo Clinic Proceedings. 2006;81(11):1443-1448.
  6. Bloodgood RA, Short JG, Jackson JM, Martindale JR. A Change to Pass/Fail Grading in the First Two Years at One Medical School Results in Improved Psychological Well-Being. Academic Medicine. 2009;84(5):655-662.
  7. White CB, Fantone JC. Pass–fail grading: laying the foundation for self-regulated learning. Adv in Health Sci Educ. 2010;15(4):469-477.
  8. American Society of Health-System Pharmacists. (2021). ASHP Accreditation Standard for Postgraduate Residency Programs Draft Guidance.
  9. Pincus K, Hammond AD, Reed BN, Feemster AA. Effect of Advanced Pharmacy Practice Experience Grading Scheme on Residency Match Rates. Am J Pharm Educ 2019; 83(4): Article 6735
  10. Spiess JP, Walcheske E, MacKinnon GE, MacKinnon KJ. Survey of Pass/Fail Grading Systems in US Doctor of Pharmacy Degree Programs. Am J Pharm Educ. 2022;86(1): Article 8520.