One thousand, four hundred and forty. That’s the minimum number of Advanced Pharmacy Practice Experience (APPE) hours the Accreditation Council for Pharmacy Education (ACPE) mandates a pharmacy student complete before graduation.1 Over the course of at least thirty-six weeks (and sometimes more), students are exposed to a variety of practice settings, institutions, and situations designed to develop their knowledge, skills, and abilities. In short, these experiences build their competence to practice as a pharmacist after graduation. The ACPE Standards state that APPEs should be designed to “hone practice skills, professional judgment, behaviors, attitudes and values, confidence, and sense of personal and professional responsibility.” Schools are also required to formally assess the achievement of APPE competencies using validated assessments, and student performance must be documented at key time points throughout these experiences.
[Image source: https://pharmacy.uiowa.edu/students/academic-programs/doctor-pharmacy/pep/appe]
The CAPE educational outcomes developed by the American
Association of Colleges of Pharmacy (AACP) are a set of goals that every pharmacy
curriculum should focus on achieving. These educational outcomes are
linked to Entrustable Professional Activities (EPAs) that all pharmacy
graduates should be able to perform.2 (For more on Entrustable Professional Activities, see the post by Andrew Mays.) Students should have
ample opportunity to practice these activities, become proficient, and
demonstrate mastery before becoming licensed pharmacists. This can be
problematic in that the heavily didactic nature of the first three years of the
pharmacy curriculum results in few opportunities to practice these EPAs. Too often, students are assessed on the EPAs during APPE rotations before
they have had an opportunity to master them.
When I reflect on my experiences as a pharmacy student, the assessment
of learning on APPE rotations involved a series of assignments that had to be
completed by the end of each rotation. An
example of this would be a set of questions asking you to reflect on interprofessional
teams and the benefits of working with different professions at your current
practice site. While most assignments were site-specific, several (like the one
above) were repeated for multiple rotations. Additionally, specific objectives
were set forth and students were asked to provide evidence of assignments or
activities they completed that enabled them to meet those objectives. An
example of this would be to “evaluate and interpret patient data.” A student could
then provide details of working patients up, reviewing medical records, or conducting
patient interviews. This gave students an opportunity for reflection while
providing concrete examples of progress that the APPE preceptors could then
base their end-of-rotation evaluations on. However, completing these assignments
and documenting these examples was often time-consuming. By the final APPE, the process
felt cumbersome, especially the assignments we had to repeat on
multiple rotations.
This process of assessing student performance
raises several questions. First, how do we ensure each student meets the
required competencies for each rotation? With practice settings and sites
varying significantly, assessing each student on basic competencies can be
difficult. Moreover, different preceptors have different expectations. All this variability makes it very difficult
to create a consistent assessment process that does not depend on where each
student happens to train. Second, how do we ensure an assessment tool can be
applied in a variety of APPE rotations without omissions or redundancy? Requiring
the same assignments and reassessing the same set of skills for a student who is
taking two community rotations puts a strain on the student and preceptor. But we must find a way to ensure the student
is developing on each rotation. Finally, how do we measure competencies such as
the EPAs? Should they be rated “acceptable”? Or “completed”? Should a student be required
to “complete” them by the end of each APPE or by the end of all APPE
experiences?
Several institutions have tried to
address these questions. The System of Universal Clinical Competency Evaluation
in the Sunshine State (SUCCESS) is an internet-based APPE assessment tool
created by the colleges/schools of pharmacy in Florida.3 Under this
system, preceptors rate students as “excellent”, “competent”, or “deficient”
on each competency at the end of each APPE, or select “no opportunity” if a
competency was not observed. The school then converts these ratings into the
student’s grade, applying a correction factor for students who are earlier in
their APPE schedule. The tool also allows preceptors to weight each competency
based on its importance and frequency in the practice setting and site, giving
preceptors the ability to focus on the learning goals that are most
relevant. Another such tool was created by faculty at the University of
Colorado Skaggs School of Pharmacy after the addition of 14 ability-based
outcomes to their curriculum.4 By polling current preceptors, they
were able to determine which competencies and outcomes were frequently observed
and how important they were to students’ success on each type of APPE. These
responses were used to create APPE-specific tools to ensure students met
rotation goals that aligned with the ability-based outcomes of the curriculum.
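To make the mechanics of a SUCCESS-style weighted conversion more concrete, here is a minimal sketch in Python of how preceptor ratings might be rolled up into a single rotation score. The point values, weights, and function names are hypothetical illustrations, not the actual scoring rules used by the Florida schools.

```python
# Hypothetical sketch of a SUCCESS-style weighted rollup of preceptor ratings.
# The rating-to-point mapping and weights are invented for illustration only.

# Assumed mapping from preceptor rating to points (not from the source).
RATING_POINTS = {"excellent": 2, "competent": 1, "deficient": 0}

def rotation_score(ratings, weights):
    """Return a weighted score in [0, 1] for one APPE rotation.

    ratings -- dict of competency -> rating string; "no opportunity" means
               the competency was not observed and is excluded.
    weights -- dict of competency -> preceptor-assigned weight reflecting
               importance/frequency at the practice setting and site.
    """
    earned = 0.0
    possible = 0.0
    for competency, rating in ratings.items():
        if rating == "no opportunity":
            continue  # skip competencies the preceptor never observed
        weight = weights.get(competency, 1.0)
        earned += weight * RATING_POINTS[rating]
        possible += weight * max(RATING_POINTS.values())
    return earned / possible if possible else None

# Example: a community APPE where drug information is weighted more heavily.
ratings = {
    "patient counseling": "excellent",
    "drug information": "competent",
    "sterile compounding": "no opportunity",
}
weights = {"patient counseling": 1.0, "drug information": 2.0, "sterile compounding": 1.0}
print(rotation_score(ratings, weights))  # ~0.67, which the school would convert to a grade
```

A correction factor for students earlier in their APPE schedule could then be applied to a raw score like this before the school assigns the final grade.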
It’s clear that assessing the
performance of APPE students is a crucial, yet complex, task. Based on the two
methods documented above, implementing an effective evaluation method requires
the active participation of preceptors in developing a tool that is specific to
each APPE experience. Preceptor evaluations of students need to be specific to
the setting and site but must also relate to the overarching ACPE standards and
CAPE outcomes.
I believe that monthly preceptor
evaluations of students and their progress toward or achievement of learning
objectives are necessary to ensure each APPE experience is helping to develop the
student’s competence. However, rather than completing a series of monthly (and
sometimes redundant) assignments, a series of unique assignments completed over
the ENTIRE year coupled with specific ability-based assessments might be a better
strategy. This can reduce assignment
fatigue and still provide appropriate documentation that each student can
competently perform the EPAs and other educational outcomes before they
graduate. It would be great to see some research to determine the validity of this
approach.
References
- Accreditation Council for Pharmacy Education. Accreditation Standards and Key Elements for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree (Standards 2016). 2015.
- Haines ST, Pittenger AL, Stolte SK, et al. Core Entrustable Professional Activities for New Pharmacy Graduates. Am J Pharm Educ. 2017; 81(1): Article S2.
- Reid LD, Nemire R, Doty R, et al. An Automated Competency-Based Student Performance Assessment Program for Advanced Pharmacy Practice Experiential Programs. Am J Pharm Educ. 2007; 71(6): Article 128.
- Gilliam EH, Brunner JM, Nuffer W, et al. Design and Content Validation of Setting-Specific Assessment Tools for Advanced Pharmacy Practice Experience Rotations. Am J Pharm Educ. Published online ahead of print March 6, 2019. Article 7067.