May 8, 2013

Curving Exam Scores – Does it Do More Harm Than Good?


by Thanh-Van (Vicky) Nguyen, Pharm.D., PGY 1 Pharmacy Practice Resident, Kaiser Permanente Mid-Atlantic

When students hear the terms “curve” and “grades” used together in a sentence, they automatically assume that grades will be scaled upwards.  In fact, this is not always the case.  The act of scoring on a curve has been performed for years and is commonplace in higher education, particularly in science curricula.  However, grading on the curve has multiple meanings and, as with all other practices, there are pros and cons to grading on a curve.

Typically, curving an exam is achieved by adding points to everyone’s score in order to shift the average upward.  The score of the student(s) who performed best on the exam is boosted to 100% and the other exam scores are raised by the same amount.  This boosts all scores on the exam and often translates into higher grades for everyone.  If the procedure is rarely performed and students are not accustomed to curving, they won’t anticipate receiving additional points and will likely put full effort into studying.  But when curving becomes routine, potential pitfalls emerge.  Students may become less motivated to perform at their best if they know that they will automatically receive additional points.  The question then becomes whether or not curved grades accurately reflect knowledge and performance.  By curving grades, are we merely boosting students’ egos only to send them off into the world less prepared?  Does it create a false sense of security?
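The add-points procedure described above can be sketched in a few lines of Python. The scores and the helper name are hypothetical, for illustration only:

```python
def curve_to_top(scores):
    """Curve exam scores by adding the gap between the top score and 100%.

    Every student receives the same bonus, so relative standing is preserved."""
    bonus = 100 - max(scores)
    return [score + bonus for score in scores]

# Hypothetical class: the best score (92%) becomes 100%, so everyone gains 8 points.
curved = curve_to_top([92, 85, 70, 61])
print(curved)  # [100, 93, 78, 69]
```

Because the same bonus is added to every score, the spread between students is unchanged; only the average shifts upward.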

There are certain circumstances that may warrant curving exam scores.  For example, when creating exams, professors may misjudge the difficulty or clarity of their questions.  In these instances, it is simply unfair to punish students with point deductions due to poorly written questions.  In other situations, the subject matter may not have been taught well, resulting in poor student performance.  When the decision is made to curve scores for these reasons, it’s important to resolve the underlying problem.  This can be as simple as rewording future exam questions, modifying one’s teaching method, or providing additional instruction.  Unfortunately, some instructors move forward without conducting a root cause analysis or taking any corrective action.

Some proponents argue that curving can prevent grade inflation and ensure that grades are appropriately distributed among students.  In one form of curving, instructors assign grades based on a normal distribution.  The grades are distributed such that students who score near the average receive a high C or a low B.  Students above the mean receive an A or B.  Students below the mean receive a C or D.  Low outliers are assigned an F.  This form of curving exists but it’s rarely used.  Assigning an “F” to someone who scored 82% on an exam where the class average was 93% would likely be met with outrage.  This method of curving penalizes students for not performing as well as their peers – even when they demonstrate reasonably good mastery of the material.
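The strict normal-distribution scheme can be sketched as a z-score cutoff table. The cutoffs below are illustrative assumptions, not a standard; the point is to show how an 82% in a high-scoring class ends up at the bottom of the curve:

```python
import statistics

def grade_on_normal_curve(scores):
    """Assign letter grades by distance from the class mean, measured in
    standard deviations (the normal-distribution curving described above).
    The z-score cutoffs are illustrative assumptions, not a standard."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)

    def letter(score):
        z = (score - mean) / sd
        if z >= 1.5:
            return "A"
        if z >= 0.5:
            return "B"
        if z >= -0.5:
            return "C"
        if z >= -1.5:
            return "D"
        return "F"

    return {score: letter(score) for score in scores}

# Hypothetical class averaging about 93%: the 82% sits well below the mean
# and earns a D or F under this scheme, despite reasonable mastery.
grades = grade_on_normal_curve([93, 95, 91, 97, 89, 82])
```

This makes the objection concrete: a score that would be a solid B under absolute grading becomes a failing grade purely because of the peer comparison.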

The procedures used to curve scores are not standardized and practices vary among professors and institutions.  Some professors review exam questions and curve based on the number of questions that a substantial portion of students answered incorrectly.  Other professors simply bump the highest grade up to 100% and scale the rest by the same amount.  Whether all scores should be curved across the board is another consideration.  Curving can also be done selectively to boost a few students’ scores or to reduce the number of students who fail.  This practice calls into question the fairness of grading and smacks of favoritism.  Regardless, there is a lack of consensus regarding the best method of curving.  There comes a point when curving no longer accurately represents student learning but instead becomes a manipulation of numbers.
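One version of the item-review approach, awarding points back for questions that a large share of the class missed, might look like the following sketch. The 50% threshold and the one-point-per-question refund are assumptions for illustration, not a stated policy:

```python
def refund_bad_questions(answer_matrix, threshold=0.5):
    """Flag questions that a substantial share of the class missed and
    compute a per-student point refund (a sketch of item-review curving;
    the threshold and one-point refund are illustrative assumptions).

    answer_matrix[i][j] is True if student i answered question j correctly."""
    n_students = len(answer_matrix)
    n_questions = len(answer_matrix[0])

    # A question is flagged when more than `threshold` of students missed it.
    flagged = [
        j for j in range(n_questions)
        if sum(1 for row in answer_matrix if not row[j]) / n_students > threshold
    ]

    # Each student gets back one point per flagged question they missed.
    refunds = [sum(1 for j in flagged if not row[j]) for row in answer_matrix]
    return flagged, refunds
```

Unlike an across-the-board bonus, this approach ties the adjustment to specific questions that may have been poorly written, which at least points the instructor toward the items that need revision.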

There is a time and place when curving grades may be appropriate but the practice should not be commonplace.  When curving scores becomes a routine practice, it’s time to re-evaluate the teaching and evaluation methods.

References

1.  Kulick G, Wright R. The Impact of Grading on a Curve: A Simulation Analysis. International Journal for the Scholarship of Teaching and Learning. 2008;2(2):1-17. 

May 3, 2013

Motivating the Unmotivated


by Kristin Ho, Pharm.D., PGY1 Pharmacy Practice Resident, Walter Reed National Military Medical Center

Over the past year, I’ve served as a co-preceptor for students who are completing advanced pharmacy practice experiences.  I find it is much easier to interact with students who are motivated and want to complete the rotation at our institution.  It’s challenging when students lack motivation and my inexperience as a preceptor doesn’t make it any easier!  But I’m not alone.  According to a survey conducted at the University of California, San Francisco School of Pharmacy, many preceptors don’t feel very confident in their ability to identify and manage the unmotivated student.  A majority of the survey respondents (61.5%) stated they had difficulty determining the reason why a student was unmotivated and 69.1% wanted more training on how to engage and motivate students.1

What is motivation?

Motivation is "a student’s willingness, need, desire, and compulsion to participate in, and be successful in, the learning process."2 There are two forms of motivation: intrinsic and extrinsic. Intrinsic motivation comes from an internal desire – formulated from both cognitive and emotional processes in the brain – to perform a task.  In other words, the student perceives the task to be rewarding in and of itself.  Extrinsic motivation is driven by external factors unrelated to the task.  Under these circumstances, the task is performed to gain some reward or avoid a punishment that is associated with, but not intrinsically a part of, the task.3 Intrinsically motivated students tend to do better because they are eager and willing to learn without inducement. Conversely, extrinsically motivated students must be encouraged, persuaded, cajoled, or, in extreme cases, coerced to perform the task.

Whether the lack of motivation is attributed to intrinsic or extrinsic factors, it is important to identify the reasons why students are unmotivated in order to appropriately address the problem.  Some reasons why students lack motivation, along with clues that can help you identify them, are listed below:4

Clue: Student engages in negative self-talk about abilities and/or makes faulty attributions to explain poor performance
Reason: Lack of confidence

Clue: Student procrastinates, complains verbally, frequently seeks the teacher’s help, and shows other avoidant behaviors
Reason: Effort needed to complete the work seems too great or unrealistic

Clue: Student requires praise or rewards as a ‘pay-off’ in order to apply greater effort
Reason: Fails to see a pay-off in doing the assigned work

Clue: Student displays indifferent or hostile behavior toward the instructor or preceptor
Reason: Negative relationship with the instructor or preceptor

How to motivate unmotivated students?

If a student’s lack of motivation stems from fear of failure, preceptors should encourage the student to focus on improvements and help him or her evaluate progress through self-critique. This method, called attribution retraining, helps students look for explanations for their successes and failures. The goal of attribution retraining is to help students concentrate on tasks rather than being distracted by fear of failure.  Preceptors and instructors can help the student identify alternative methods or approaches to a problem instead of giving up, and can attribute the student’s failures to ineffective strategies rather than a lack of ability.5  For example, a student may attribute poor clinical judgments to an inherent lack of ability. If the student believes he or she cannot succeed during the rotation, there is less motivation to strive for success. If a student perceives writing SOAP notes as too difficult, the preceptor can use attribution retraining by encouraging the student to practice with hypothetical case studies so the task becomes easier when the student encounters real patients.

Both positive and negative feedback influence motivation.  When a preceptor praises a student, this extrinsic motivator boosts self-confidence. Preceptors should acknowledge sincere effort even when the student’s performance is less than stellar. If the student’s performance is weak, the preceptor should provide feedback for improvement as well as assurance that the student can improve and succeed over time. Before providing feedback, ask students to reflect on their perceived strengths and weaknesses to determine whether their self-assessment is accurate. If the preceptor and student are in agreement, the preceptor can affirm the strengths and provide encouragement. This should be followed by a discussion of perceived weaknesses.  This gives the preceptor insight into the areas the student identifies as needing improvement and facilitates goal setting for future performance.6

Most importantly, preceptors should display enthusiasm in teaching and a personal interest in the student to build a positive relationship. This can be achieved by tailoring the rotation to the student’s interest. Students are naturally more motivated to succeed when their interests are considered in the rotation plan. Therefore, constructing approaches to help the student realize how each learning activity relates to his or her personal and professional goals can improve motivation. For example, if a student has accepted a community pharmacy position and has no interest in acute care, it might be helpful to include more patient counseling during the rotation.  This learning activity would provide the student with more one-on-one patient interactions and boost confidence when speaking to patients.

Motivation is a powerful force. As preceptors and instructors, we may find it challenging to motivate unmotivated students. However, identifying unmotivated students by paying attention to clues, and then addressing the problem with an appropriate strategy, is a valuable teaching skill. This resonates with me!  One of my preceptors reminds me that it’s easy to teach intrinsically motivated students, but the truly great preceptor is one who can increase the unmotivated student’s desire to learn … and achieve the intended learning outcomes.

References
1.  Mitra A, Robin CL, Peter AJ, et al. Development needs of volunteer pharmacy practice preceptors. Am J Pharm Educ 2011;75: Article 10.
2.  Bomia L, Beluzo L, Demeester D, et al. The impact of teaching strategies on intrinsic motivation. Educ Resour Inf Cent. 1997. ED418925
3.  Ryan RM, Deci EL. Intrinsic and Extrinsic Motivations:  Classic Definitions and New Directions.  Contemp Educ Psychol. 2000; 25 :54-67.
4. Wright, J. Six reasons why students are unmotivated (and what teachers can do). Intervention Central [Internet]. 2011. Accessed April 11, 2013.
5.  Lumsden LS. Student motivation to learn. Educ Resour Inf Cent. 1994. ED370200
6.  Orsmond P, Merry S, Reiling K. Biology students’ utilization of tutors’ formative feedback: a qualitative interview study. Assess Eval Higher Educ. 2005; 30:369-86.

April 22, 2013

Grades – How Important Are They?


by Justine Beck, Pharm.D., PGY1 Pharmacy Practice Resident, Walter Reed National Military Medical Center

The type of evaluation system used by an academic institution, pass/fail versus assignment of grades, has been a point of controversy for decades.  I hadn’t put much thought into this issue, since all of my education was completed at institutions that utilized a traditional grading system and where the overall performance was determined by calculating a grade point average (GPA).  However, this year I was no longer an applicant but rather a participant in the residency selection process.  When reviewing and compiling the information on the residency applications, I came across a few pharmacy schools that use a pass/fail evaluation system and, therefore, do not report a GPA.  At first I was taken aback, unsure how to compare the academic performance of the applicants from these schools to applicants who were graduating from more traditional programs.  My natural instinct was to question whether an applicant who ‘passed’ pharmacy school would perform the same in a residency program as an applicant who had a numeric GPA.


With an overwhelming number of applicants to pharmacy residency programs,1 an applicant from a program that uses the pass/fail grading system may be at a disadvantage when competing against applicants who have a GPA.  Admittedly, the most important criterion used when making selection decisions for residency programs is the personal interview.  However, there are several pre-screening hurdles that applicants must jump over before an interview is offered.

While there is a paucity of literature available specific to pharmacy regarding the impact of pass/fail grading, there is some data related to medical residency programs.  Dietrick et al. polled general surgery residency program directors to determine whether pass/fail versus competitive grading systems affected an applicant’s ability to compete for a residency training position.  The results demonstrated that 89% of program directors in general surgery preferred a transcript with grades over a pass/fail evaluation system. Also, 81% of the survey respondents thought that the medical students’ ability to compete for a residency position was adversely influenced by the pass/fail method of evaluation. Interestingly, 72% of the respondents stated letters of recommendation most frequently misled them in choosing a candidate for a residency position.2

Another survey conducted in Ontario found that 66% of program directors felt that students applying to their program from a school that used a pass/fail system would be disadvantaged.3   Moss et al. reported that residents from medical schools that reported grades performed significantly better on an application performance index than those from schools that used a pass/fail system.  Additionally, no residents from a school that used a pass/fail system ranked above the 87th percentile, and 82% of those who ranked below the 15th percentile came from pass/fail schools.4

Advocates for a pass/fail grading system reason that grades discourage collaboration and rely too heavily on external motivation.  Intrinsic motivation is learning prompted by true interest and enjoyment, whereas extrinsic motivation is based on external rewards, such as grades and honor society inductions.  Further, they argue, pass/fail grading systems improve student well-being by reducing stress, anxiety, and depression. Interestingly, over the years, many schools that adopted the pass/fail grading system have reverted to multi-tiered grading systems (e.g. pass/fail/honors/high honors).5   Despite the potential benefits of pass/fail grading, it seems that the preference is for an evaluation system that can differentiate students.

Inevitably, grades matter.  The much maligned GPA is the only way to sum up a student’s academic achievement in a quantifiable form.  Peter Filene wrote in The Joy of Teaching that, “grades can be used as a pedagogical whip to reinforce the mentality of working-to-get-a-grade, or they can be used in creative ways as carrots to encourage learning.” 6 I believe the real challenge is finding ways to use grades as a means to stimulate learning rather than a quantifiable measure of success or failure.  Students need feedback to help stimulate self-improvement.  Developing unique and creative ways to evaluate students would help achieve the dual aims of differentiating performance while cultivating intrinsic motivation to learn.

References
1. National Matching Services Inc. 2012. ASHP Resident Matching Program, Match Statistics. Accessed March 17, 2013.
2. Dietrick JA, Weaver MT, Merrick HW.  Pass/fail grading: a disadvantage for students applying for residency. Am J Surg 1991;162(1):63-66.
4. Moss TJ, Deland EC, Maloney JV Jr. Selection of medical students for graduate training: pass/fail versus grades. N Engl J Med 1978;299(1):25-7.
5. Spring L, Robillard D, Gehlbach L, Moores Simas TA. Impact of pass/fail grading on medical students’ well-being and academic outcomes. Med Educ 2011;45:867-877.
6. Filene P. The Joy of Teaching: A Practical Guide for New College Instructors. Chapel Hill: University of North Carolina Press, 2005. Chapter 8, Evaluating and grading; p.93-111.

Open-note vs. Closed-book Exams


by Bonnie Li, Doctor of Pharmacy Candidate, University of Maryland School of Pharmacy

Open-note, cheat sheet, or closed-book exams---which test format is best for students? In December 2012, Neal Conan from NPR’s Talk of the Nation spoke with associate professors of psychology Afshin Gharib and William Phillips from Dominican University of California about an experiment they conducted.  It all started over an argument about which kind of exam is best.1 One professor preferred administering open-note tests while the other let his students use a “cheat sheet.” During the experiment, students in an introductory psychology course were given either an open-book, a cheat sheet, or a closed-book exam.2  An unannounced closed-book quiz was given two weeks later to test retention of the content. The students were also asked about their anxiety level before each exam. While initial grades were higher in the open-book group, the retention scores across all three exam formats were not statistically different.  Additionally, the researchers found that while the students’ level of organization on their cheat sheets correlated with higher initial test scores, it did not correlate with higher scores on the follow-up pop quiz. Good students still outperformed poor ones regardless of exam type. This might be because weaker students spent more time looking up notes and reading rather than actively completing the test. The findings confirm the results of an older experiment by Agarwal and colleagues that found no real differences in retention a week after administering either an open- or closed-book exam.3  While the results of these studies are prone to type II error due to their small sample sizes, they nonetheless raise interesting questions about the benefits of open-note and “cheat sheet” exams. Why not alleviate student anxiety by allowing open notes if the results are not significantly different from closed-note tests?

Benefits and Pitfalls

There are some pitfalls to open-note exams.  Because students are allowed to use class notes and textbook resources during the exam, they might hunt for pre-built answers like a “scavenger hunt” rather than synthesizing concepts from class.4  This kind of behavior can be prevented if instructors write complex or scenario-specific questions. Another disadvantage is that teachers would likely need to spend more time grading exams and writing new complex questions every year.  Allowing students to use computers with internet access during a test also runs the risk of students communicating with one another to obtain answers.

Liska and Simonson from the University of Wisconsin described the positive results from using open-textbook and open-note exams in a business statistics class.5 The authors found it valuable that teachers were challenged to write questions requiring interpretation, analysis, and critical thinking. The students were less anxious, even though open-note exams were not easier than closed-book exams. Open-note exams emphasized applying concepts and critical thinking.

There can also be benefits to using “cheat sheets” during exams.  Unlike open-note exams, cheat sheets contain a limited amount of information. Maryellen Weimer contends that these condensed versions of class notes force students to prioritize and organize content from the course.6  In creating these sheets in preparation for the test, students may engage in discussions with each other (and possibly the professor) about the material, thereby further solidifying their understanding of the course content.  In other words, the creation of cheat sheets may enhance learning!

Anecdotal Experience

Throughout pharmacy school, I have taken closed-book, open-note, cheat sheet, take-home, and even group exams. In classes with open-note exams, I felt less pressure to vigorously jot down notes, and I spent more time actively listening to the presenter.  I felt I was applying information and learning during an open-note exam.  Indeed, applying information actually felt more enjoyable than regurgitating memorized facts.  My self-esteem was also better when I did well on an open-book versus a closed-book exam.

There are teachers who believe that an exam should assess students’ knowledge – the stuff stored away in their heads – and no more.   In these circumstances, the closed-book exam is probably best.  For example, it’s important to know the pathophysiology of atrial fibrillation and the mechanism of action of beta blockers, so a closed-book exam would require me to memorize this information. However, when it comes to therapeutics, for which the answer is not always black and white, an open-note exam might be a better option. As a future pharmacist, I will need the skills to discriminate the clinically relevant side effect from a list of ten on Micromedex, or to determine which drug-drug interactions really need to be addressed in a patient on multiple medications. In a world where content is readily available, the most important skill health professionals must possess is the ability to find and interpret information from the right sources. I understand that there will always be situations where we have to think on our feet, but I also see the need to ask the right questions and double-check information from the right places. I think that open-note exams can help prepare pharmacy students by encouraging them to find and analyze information in a time-sensitive manner.

So the next time you think about creating a test, consider your objectives. Do you want to test students on their recall of content or its application?

References

1. Cheat Sheet Or Open Book: Putting Tests To The Test: NPR [Internet]. Talk of the Nation. National Public Radio, 2012.  Accessed April 17, 2013.
2. Gharib A, Phillips W, Mathew N. Cheat Sheet or Open-Book? A Comparison of the Effects of Exam Types on Performance, Retention, and Anxiety. Psychology Research. 2012;2(8):469–478. 
3. Agarwal PK, Karpicke JD, Kang SHK, Roediger HL, McDermott KB. Examining the testing effect with open- and closed-book tests. Applied Cognitive Psychology 2008;22(7):861–876.
4. Golub E. PCs in the Classroom and Open Book Exams. Ubiquity 2005;6(9):1–4.
5. Liska T, Simonson J. Open-Text and Open-Note Exams. The Best of The Teaching Professor. Madison, WI: Magna Publications; 2005. p. 83.
6. Weimer M. Crib Sheets Help Students Prioritize and Organize Course Content.  Faculty  Focus [Internet], 2013. Accessed April 17, 2013.