April 22, 2013

Grades – How Important Are They?


by Justine Beck, Pharm.D., PGY1 Pharmacy Practice Resident, Walter Reed National Military Medical Center

The type of evaluation system used by an academic institution, pass/fail versus assignment of grades, has been a point of controversy for decades. I hadn’t put much thought into this issue, since all of my education was completed at institutions that used a traditional grading system, where overall performance was summarized by calculating a grade point average (GPA). However, this year I was no longer an applicant but rather a participant in the residency selection process. When reviewing and compiling the information on the residency applications, I came across a few pharmacy schools that use a pass/fail evaluation system and, therefore, do not report a GPA. At first I was taken aback, unsure how to compare the academic performance of the applicants from these schools to applicants graduating from more traditional programs. My natural instinct was to question whether an applicant who ‘passed’ pharmacy school would perform the same in a residency program as an applicant who had a numeric GPA.


With an overwhelming number of applicants to pharmacy residency programs,1 an applicant from a program that uses the pass/fail grading system may be at a disadvantage when competing against applicants who have a GPA.  Admittedly, the most important criterion used when making selection decisions for residency programs is the personal interview.  However, there are several pre-screening hurdles that applicants must jump over before an interview is offered.

While there is a paucity of literature specific to pharmacy regarding the impact of pass/fail grading, there are some data from medical residency programs. Dietrick et al. surveyed general surgery residency program directors to determine whether pass/fail versus competitive grading systems affected an applicant’s ability to compete for a residency training position. The results demonstrated that 89% of program directors in general surgery preferred a transcript with grades over a pass/fail evaluation system. Also, 81% of the survey respondents thought that medical students’ ability to compete for a residency position was adversely affected by the pass/fail method of evaluation. Interestingly, 72% of the respondents stated that letters of recommendation most frequently misled them when choosing a candidate for a residency position.2

Another survey, conducted in Ontario, found that 66% of program directors felt that students applying to their program from a school that used a pass/fail system would be disadvantaged.3  Moss et al. reported that residents from medical schools that reported grades scored significantly better on a performance index than residents from schools that used a pass/fail system. Additionally, no resident from a school that used a pass/fail system ranked above the 87th percentile, and 82% of those who ranked below the 15th percentile came from pass/fail schools.4

Advocates for a pass/fail grading system reason that grades discourage collaboration and rely too heavily on extrinsic motivation. Intrinsic motivation is learning prompted by genuine interest and enjoyment, whereas extrinsic motivation is driven by external rewards, such as grades and honor society inductions. Further, they argue, pass/fail grading systems improve student well-being by reducing stress, anxiety, and depression. Interestingly, over the years many schools that adopted pass/fail grading have reverted to multi-tiered grading systems (e.g., fail/pass/honors/high honors).5  Despite the potential benefits of pass/fail grading, the preference seems to be for an evaluation system that can differentiate among students.

Inevitably, grades matter. The much-maligned GPA is the only way to sum up a student’s academic achievement in a quantifiable form. Peter Filene wrote in The Joy of Teaching that “grades can be used as a pedagogical whip to reinforce the mentality of working-to-get-a-grade, or they can be used in creative ways as carrots to encourage learning.”6 I believe the real challenge is finding ways to use grades to stimulate learning rather than merely as a quantifiable measure of success or failure. Students need feedback to stimulate self-improvement. Developing unique and creative ways to evaluate students would help achieve the dual aims of differentiating performance and cultivating intrinsic motivation to learn.

References
1. National Matching Services Inc. 2012. ASHP Resident Matching Program, Match Statistics. Accessed March 17, 2013.
2. Dietrick JA, Weaver MT, Merrick HW. Pass/fail grading: a disadvantage for students applying for residency. Am J Surg 1991;162(1):63-66.
4. Moss TJ, Deland EC, Maloney JV Jr. Selection of medical students for graduate training: pass/fail versus grades. N Engl J Med 1978;299(1):25-7.
5. Spring L, Robillard D, Gehlbach L, Moores Simas TA. Impact of pass/fail grading on medical students’ well-being and academic outcomes. Med Educ 2011;45:867-877.
6. Filene P. The Joy of Teaching: A Practical Guide for New College Instructors. Chapel Hill: University of North Carolina Press, 2005. Chapter 8, Evaluating and grading; p.93-111.

Open-note vs. Closed-book Exams


by Bonnie Li, Doctor of Pharmacy Candidate, University of Maryland School of Pharmacy

Open-note, cheat sheet, or closed-book exams: which test format is best for students? In December 2012, Neal Conan of NPR’s Talk of the Nation spoke with Afshin Gharib and William Phillips, associate professors of psychology at Dominican University of California, about an experiment they conducted. It all started with an argument about which kind of exam is best.1 One professor preferred administering open-note tests while the other let his students use a “cheat sheet.” In the experiment, students in an introductory psychology course were given either an open-book, a cheat-sheet, or a closed-book exam.2 An unannounced closed-book quiz was given two weeks later to test retention of the content, and the students were also asked about their anxiety level before each exam.

While initial grades were higher in the open-book group, retention scores across the three exam formats were not statistically different. Additionally, the researchers found that although the students’ level of organization on their cheat sheets correlated with higher initial test scores, it did not correlate with higher scores on the follow-up pop quiz. Good students still outperformed poor ones regardless of exam type, perhaps because weaker students spent more time looking up notes and reading rather than actively completing the test. These findings confirm the results of an older experiment by Agarwal, which found no real differences in retention a week after either an open- or closed-book exam.3 While these studies are prone to type II error because of their small sample sizes, they nonetheless raise interesting questions about the benefits of open-note and cheat-sheet exams. Why not alleviate student anxiety by allowing open notes if the results are not significantly different from closed-book tests?

Benefits and Pitfalls

There are some pitfalls to open-note exams. Because students are allowed to use class notes and textbook resources during the exam, they might hunt for pre-built answers, treating the test like a “scavenger hunt” rather than synthesizing concepts from class.4 This behavior can be discouraged if instructors write complex or scenario-specific questions. Another disadvantage is that teachers will likely need to spend more time grading exams and writing new complex questions every year. Allowing students to use computers with internet access during a test also runs the risk of students communicating with one another to obtain answers.

Liska and Simonson from the University of Wisconsin described positive results from using open-textbook and open-note exams in a business statistics class.5 The authors found it valuable that teachers were challenged to write questions requiring interpretation, analysis, and critical thinking. Students were less anxious, even though the open-note exams were no easier than closed-book exams. In short, open-note exams emphasized applying concepts and thinking critically.

There can also be benefits to using “cheat sheets” during exams. Unlike open-note exams, cheat sheets contain a limited amount of information. Maryellen Weimer contends that these condensed versions of their notes force students to prioritize and organize content from the class.6 In creating these sheets in preparation for the test, students may engage in discussions with each other (and possibly the professor) about the material, thereby further solidifying their understanding of the course content. In other words, the creation of cheat sheets may enhance learning!

Anecdotal Experience

Throughout pharmacy school, I have taken closed-book, open-note, cheat sheet, take-home, and even group exams. In classes with open-note exams, I felt less pressure to furiously jot down notes in class, and I spent more time actively listening to the presenter. I felt I was applying information and learning during an open-note exam. Indeed, knowing how to apply information actually felt more enjoyable than regurgitating memorized facts. My self-esteem was also better when I did well on an open-book versus a closed-book exam.

There are teachers who believe that an exam should assess students’ knowledge – the stuff stored away in their heads – and no more. In these circumstances, the closed-book exam is probably best. For example, it’s important to know the pathophysiology of atrial fibrillation and the mechanism of action of beta blockers, so a closed-book exam would require me to memorize this information. However, when it comes to therapeutics, where the answer is not always black and white, an open-note exam might be a better option. As a future pharmacist, I will need the skills to discriminate the relevant side effect from a list of ten in Micromedex, or to determine which drug-drug interactions really need to be addressed in a patient taking multiple medications. In a world where content is readily available, the most important skill health professionals must possess is the ability to find and interpret information from the right sources. I understand that there will always be situations where we have to think on our feet, but I also see the need to ask the right questions and double-check information from the right places. I think open-note exams can help prepare pharmacy students by encouraging them to find and analyze information in a time-sensitive manner.

So the next time you think about creating a test, consider your objectives. Do you want to test students on their recall of content or its application?

References

1. Cheat Sheet Or Open Book: Putting Tests To The Test: NPR [Internet]. Talk of the Nation. National Public Radio, 2012.  Accessed April 17, 2013.
2. Gharib A, Phillips W, Mathew N. Cheat Sheet or Open-Book? A Comparison of the Effects of Exam Types on Performance, Retention, and Anxiety. Psychology Research. 2012;2(8):469–478. 
3. Agarwal PK, Karpicke JD, Kang SHK, Roediger HL, McDermott KB. Examining the testing effect with open- and closed-book tests. Applied Cognitive Psychology 2008;22(7):861–876.
4. Golub E. PCs in the Classroom and Open Book Exams. Ubiquity 2005;6(9):1–4.
5. Liska T, Simonson J. Open-Text and Open-Note Exams. The Best of The Teaching Professor. Madison, WI: Magna Publications; 2005. p. 83.
6. Weimer M. Crib Sheets Help Students Prioritize and Organize Course Content. Faculty Focus [Internet], 2013. Accessed April 17, 2013.

Encouraging Participation Across Campuses


by Kalin Clifford, PharmD, PGY-2 Geriatric Pharmacy Practice Resident, University of Maryland School of Pharmacy

Distance learning is becoming the norm as improvements in technology allow students hundreds of miles away to attend lectures synchronously delivered from the main campus. However, getting students at distant campuses to participate in classroom discussions can be very challenging. How can educators engage not only the students in the physical classroom, but also those attending hundreds of miles away? Several tools have been evaluated that purport to increase the effectiveness of the educator and enhance learner participation at distant campuses during synchronous instruction.

Recently I began teaching in an elective course that uses video-teleconferencing technology to link students on two campuses. One of my main concerns is effectively engaging students at the distant campus. After all, they are part of the same class, and active participation in class discussions is an essential component of the learning. After my first class session in this course, I became aware that I’d failed to involve the distant campus during the discussion. My pharmacy school did not have any distant campuses, and all didactic instruction was delivered at one site, so this method of delivery via video-teleconferencing was completely new to me. When I talked with some of the guest lecturers in the course, many of them said it was difficult to connect with the students at the distant campus. Clearly there is a need for proven methods to increase student participation and engagement across campuses.

Distance education has emerged as an alternative to “traditional” methods of delivering instruction because it increases flexibility and provides access to more students. Distant campuses must provide the same quality of instruction for students. Therefore, new methods and techniques are needed to increase participation not only within a classroom, but also with distant classrooms.2  Several methods to increase learner participation across distant campuses during synchronous activities have been described, including:
·  Using Audience Response Systems (ARS)
·  Creating a Randomized Online Discussion Registration Process
·  Learning Management Systems with voice over internet protocol (VOIP)

The first method that may be useful in increasing student engagement is the audience response system (ARS). Using response devices (aka "clickers") that students were required to purchase, Clauson et al. found that 81.3% of students believed the clickers improved the overall class experience.3  Students felt the ARS encouraged greater participation (89.3%), improved the clarity of the subject matter (71.1%), and provided greater anonymity (89.8%).3  The majority of students (85.3%) thought the ARS increased ease of participation and student focus when lectures were delivered synchronously from a different campus.3  ARS may be an option for campuses considering distance education.

A second technique is the use of a web-based program that randomly selects students to participate using an online class registration log. This system allows students to register when they are present for class, and each is assigned a number for the day. When the instructor poses a question to the audience and clicks on the screen, the program randomly selects a participant (by number) to respond. Students receive participation credit as an incentive to both attend and participate in each class. Mehvar found that 75-90% of students believed they were more prepared for class, more likely to attend class, and more attentive during the lecture.4  Approximately 80% of students from both campuses agreed that the required participation improved overall learning. It is important to note that students received credit for participating; they were not scored on the correctness of their answers.4  This system can be useful in larger classrooms, where most students are reluctant to raise their hands.
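For readers curious about the mechanics, the core of such a tool is simple. The sketch below is a minimal, hypothetical illustration (not Mehvar’s actual program, whose details are not described here): students check in to a daily log, each receives a number, and a button click selects one registered student at random.

```python
import random

class ParticipationLog:
    """Daily registration log that pairs each present student with a number."""

    def __init__(self):
        self.roster = {}  # assigned number -> student name

    def register(self, name):
        """Student checks in at the start of class and receives a number for the day."""
        number = len(self.roster) + 1
        self.roster[number] = name
        return number

    def pick_participant(self):
        """Instructor clicks; the program picks one registered student at random."""
        number = random.choice(list(self.roster))
        return number, self.roster[number]


log = ParticipationLog()
for student in ["Avery", "Jordan", "Sam"]:  # hypothetical names
    log.register(student)

print(log.pick_participant())  # e.g., (2, 'Jordan')
```

Because credit was awarded for responding rather than for answering correctly, a real tool would only need to record that the selected student participated, not whether the answer was right.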

The third option is a learning management system with VOIP. In this method, students “log on” to a secure online classroom and view the lecture simultaneously as it is presented to students on the main campus. The system also provides a “chat log” in which distant students can type questions for the instructor, who can answer them during a break or pause in the lecture. Henriksen and Roche found that students did not raise their hands to contribute to classroom discussions, even when strongly encouraged to do so by faculty.5 Both distant and campus-based students were more likely to use the chat log, apparently because they could submit their questions anonymously.5

Many of these techniques have been shown to be effective with students on distant campuses; however, cost should also be considered. More often than not, the university will need to provide support and funding for these systems.

Several methods to increase student engagement across multiple campuses have been reported in the literature. These techniques can be effective; however, not all universities have the financial resources to acquire them. Any program considering distant campuses should evaluate these tools as a means to improve overall student learning and development.

References:
1.  Hussain I. A study of learners’ reflection on andragogical skills of distance education tutors. International Journal of Instruction. 2013;6(1):123-38.
2.  Stewart DW, Brown SD, Clavier CW, Wyatt J. Active-learning processes used in US pharmacy education. Am J Pharm Educ. 2011;75(4):Article 68.
3.  Clauson KA, Alkhateeb FM, Singh-Franco D. Concurrent use of an audience response system at a multi-campus college of pharmacy. Am J Pharm Educ. 2012;76(1):Article 6.
5.  Henriksen B, Roche V. Creation of medicinal chemistry learning communities through enhanced learning technology and interdisciplinary collaboration. Am J Pharm Educ. 2012;76(8):Article 158.

April 11, 2013

I, I and II, or I, II and III?


by Adrian Wong, Pharm.D., PGY1 Pharmacy Practice Resident, The Johns Hopkins Hospital

Recently graduated and staring at the computer screen in front of me, I once again repeated what I had done many times in pharmacy school – crammed.  I had received warnings about how horrific the Multistate Pharmacy Jurisprudence Examination (MPJE) was from all my mentors and peers.  I was truly dreading the outcome.  Examinations were never my strong suit and I feared those multiple-multiple choice questions that seem to appear on these high stakes exams all too frequently.  Regardless of the name they are given – K-type, complex multiple-choice (CMC), or complex-response questions – they all evoke the same feeling of dread.  If I need to jog your memory, an example is shown here:

Question:  Based on the best available evidence, which of the following is the most appropriate medication to initiate for management of this patient’s congestive heart failure?

I.   Metoprolol succinate
II.  Metoprolol tartrate
III. Atenolol

a.    I only
b.    III only
c.     I and II only
d.    II and III only
e.    I, II and III

After my experience with these questions, it always seems to come down to one of two answers.  Even using an educated guess, I never seemed to get the “right” answer.  From my experience with multiple-choice questions, the answer is rarely ever all of the above.  So why did this format of question come to be?  Who came up with this traumatizing format?  What is the data behind this torture?

Based on my research, the complex multiple-choice (aka K-type) question was introduced by the Educational Testing Service in 1978.1  This question format was designed to accommodate situations in which there is more than one correct choice, much as in real life. These questions also appear to be more difficult than comparable “traditional” multiple-choice questions.2  Therefore, in the world of health professions, where multiple correct answers may exist, and in an attempt to increase the difficulty of board examination questions, the CMC format was adopted by many professional testing services and persists today.

Weaknesses of this format exist.  Albanese evaluated the use of Type-K questions and identified several limitations including:2

1.    Increased likelihood of “cluing” of secondary choices
2.    Lower test score reliability
3.    Greater difficulty constructing questions
4.    Extended time required to answer questions

“Cluing” results when a test-taker is able to narrow down the choices based on the wording of the question or the available answer options. For example, my thought process for the question above helped me narrow the choices solely by looking at the question, or “stem.” The question asks for only one “most appropriate” answer (assuming, of course, that the test-writer has written a grammatically correct statement), as denoted by “is” rather than “are.” Thus, as a savvy test-taker, I would gravitate toward choices “a” and “b.” An additional clue is that two of the options are similar (metoprolol succinate vs. tartrate), and one of them is likely to be the correct answer. Cluing may therefore lower test score reliability, because a score depends in part on how skilled the test-taker is at exploiting these clues.2

Additional studies have further illustrated the limitations of this assessment format. One study examined the amount of time needed to complete a CMC-based test compared to a multiple true-false (MTF) test.3  On average, it took 3.5 times as long to complete the CMC-based test as it did the MTF test.

However, after evaluating this literature, I will begrudgingly admit that CMC questions, under certain circumstances, could be effective despite their inherent weaknesses. Researchers at one pharmacy school evaluated CMC questions scored with a partial-credit system and compared it to traditional dichotomous (right vs. wrong) scoring.4  The instructors designed a test to examine student knowledge of nonprescription drugs and administered it to 150 student pharmacists in their second professional year. The purpose of the study was to optimize the measurement of student pharmacist knowledge without penalizing guessing or incorrect responses. Partial-credit scoring was accomplished by assigning a tiered score based on descending “best” answers. Test items were sent to an external review panel to establish content validity. Parameters evaluated in the study included item difficulty, item discrimination (i.e., the ability to distinguish between low- and high-ability students), and the coefficient of effective length (CEL), a measure of how many more questions a test would need in order to produce the same reliability as another scoring method. The authors found that partial-credit scoring was more reflective of actual student knowledge. There were no statistically significant differences between the two methods with regard to item discrimination, but the CEL was greater with dichotomous scoring. Indeed, the findings indicate that dichotomous scoring would require 26% more questions to achieve the same reliability in measuring students’ actual knowledge of the subject matter. The authors recommend further study of this partial-credit scoring method for CMC questions, including its ability to predict student achievement and its effect on student confidence.
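To make the scoring comparison concrete, here is a minimal sketch under my own assumptions, not the authors’ published algorithm, of how dichotomous and partial-credit scoring might treat the same CMC response. The option labels and tiered point values are hypothetical.

```python
def dichotomous_score(response, key):
    """Traditional right-or-wrong scoring: full credit only for the keyed answer."""
    return 1.0 if response == key else 0.0


def partial_credit_score(response, tiers):
    """Tiered scoring: descending credit for progressively less complete answers."""
    return tiers.get(response, 0.0)


# Hypothetical CMC item: the keyed answer is "c" (I and II only);
# "a" (I only) is partially correct, so it earns partial credit.
tiers = {"c": 1.0, "a": 0.5}

for response in ["c", "a", "d"]:
    print(response,
          dichotomous_score(response, key="c"),
          partial_credit_score(response, tiers))
# c -> 1.0 under both methods; a -> 0.0 vs. 0.5; d -> 0.0 under both
```

Under dichotomous scoring the partially correct response earns nothing, which is exactly the penalty for near-miss answers that the partial-credit approach was designed to soften.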

Alternatives to traditional multiple-choice testing that have been evaluated in the literature include open-ended, uncued (UnQ) items, which allow the test-taker to select an answer from more than 500 possible responses. This type of test has been used for Family Practice board examinations. One study of more than 7,000 family practice residents found the UnQ format to be a more reliable method for determining a physician’s competence.

The best mode of assessment probably depends on the material being tested. In my experience, the open-response format provides the best indication of a student’s knowledge, but like any test, the questions must be carefully worded. The biggest weakness of open-response, essay-type exams is the time required to grade them [Editor’s note: as well as the inherent subjectivity required when judging the “correctness” of the student’s answers]. To my chagrin, the use of CMC questions will likely continue on licensing examinations for healthcare professionals.

References:

1.  Haladyna TM. The effectiveness of several multiple-choice formats. Appl Measure Educ 1992;5:73-88.
2.  Albanese MA. Type K and other complex multiple-choice items: an analysis of research and item properties. Educ Measure Issues and Practice 1993;12:28-33.
3.  Frisbie D, Sweeney DC. The relative merits of multiple true-false achievement tests. Journal of Educational Measurement 1982;19:29-35.