April 22, 2013

Encouraging Participation Across Campuses


by Kalin Clifford, PharmD, PGY-2 Geriatric Pharmacy Practice Resident, University of Maryland School of Pharmacy

Distance learning is becoming the norm as improvements in technology allow students hundreds of miles away to attend lectures synchronously delivered from the main campus.  However, student participation in classroom discussions across distant campuses can be very challenging.  How can educators engage not only the students in the physical classroom, but also the students attending hundreds of miles away?  Several tools have been evaluated that purport to increase the educator's effectiveness and enhance learner participation during synchronous instruction on distant campuses.

Recently I began teaching in an elective course that utilizes video-teleconferencing technology and links students on two campuses. One of my main concerns is effectively engaging students at the distant campus.  After all, they are part of the same class, and active participation in the class discussions is an essential component of the learning.  After my first class session in this course, I became aware that I had failed to involve the distant campus during the discussion.  My pharmacy school did not have any distant campuses, and all didactic instruction was delivered at one site, so this method of delivery, via video-teleconferencing, was completely new to me.  When I talked with some of the guest lecturers in the course, many stated that it was difficult to connect with the students at the distant campus. Clearly there is a need for proven methods that can increase student participation and engagement across campuses.

Distance education has emerged as an alternative to “traditional” methods of delivering instruction because it increases flexibility and provides access to more students.1  Distant campuses must receive the same quality of instruction.  Therefore, new methods and techniques are required to increase participation not only within a classroom, but also across distant classrooms.2  Several methods to increase learner participation across distant campuses during synchronous activities have been described, including:
·  Audience response systems (ARS)
·  A randomized online discussion registration process
·  Learning management systems with voice over internet protocol (VoIP)

The first method that may be useful in increasing student engagement is the audience response system (ARS).  Using response devices (aka "clickers") that students were required to purchase, Clauson et al. found that 81.3% of students believed the clicker improved the overall class experience.3  Students felt the ARS encouraged greater participation (89.3%), improved the clarity of the subject matter (71.1%), and provided greater anonymity (89.8%).3  The majority of students (85.3%) thought the ARS increased ease of participation and student focus when lectures were delivered synchronously from a different campus.3  ARS may be an option for campuses considering distance education.

A second technique is the use of a web-based program that randomly selects students to participate using an online class registration log.  This system allows students to register when they are present for class, and each is assigned a number for the day. When the instructor poses a question to the audience, he or she clicks on the screen and the program randomly selects a participant (by number) to respond.  Students receive participation credit as an incentive to both attend and participate in each class.  Mehvar found that 75-90% of students believed they were more prepared for class, more likely to attend class, and more attentive during the lecture.4  Approximately 80% of students on both campuses agreed that requiring participation improved overall learning. It is important to note that students received credit for participating; they were not scored on the correctness of their answers.4  This system can be useful in larger classrooms where most students are reluctant to raise their hands.
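To make the mechanics concrete, below is a minimal Python sketch of how such a registration-and-random-selection tool might work.  The class, roster, and names are hypothetical illustrations, not the actual program Mehvar describes.

import random

class ParticipationLog:
    """Hypothetical sketch: a daily registration log that randomly calls on students."""

    def __init__(self):
        self.roster = {}       # today's number -> student name
        self.next_number = 1

    def register(self, student):
        """A student present for class registers and is assigned a number for the day."""
        number = self.next_number
        self.roster[number] = student
        self.next_number += 1
        return number

    def pick_participant(self):
        """When the instructor clicks, a registered number is drawn at random."""
        number = random.choice(list(self.roster))
        return number, self.roster[number]

# One hypothetical class session
log = ParticipationLog()
for name in ["Avery", "Blake", "Casey"]:
    log.register(name)

number, student = log.pick_participant()
# Credit is awarded for participating, not for answering correctly.
print(f"Student #{number} ({student}), please respond to the question.")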

The third option is a learning management system with VoIP.  In this method, students “log on” to a secure online classroom.  The students view the lecture simultaneously as it is presented to students on the main campus.  This system also provides a “chat log” where distant students can type questions for the instructor, who can answer them during a break, or pause, in the lecture. Henriksen and Roche found that students did not raise their hands to contribute to classroom discussions, even when strongly encouraged to do so by faculty.5 Distant and campus-based students were more likely to use the chat log, apparently because they could submit their questions anonymously.5
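Conceptually, the chat log is just an anonymous question queue that the instructor drains at a pause in the lecture.  Here is a minimal sketch under that assumption; the design and names are mine, not Henriksen and Roche's.

from collections import deque

class ChatLog:
    """Hypothetical sketch of an anonymous question queue for a VoIP classroom."""

    def __init__(self):
        self._queue = deque()

    def submit(self, question):
        # No student identifier is stored; anonymity is apparently what
        # made the chat log more approachable than raising a hand.
        self._queue.append(question)

    def drain(self):
        # At a break or pause, the instructor works through the backlog.
        while self._queue:
            yield self._queue.popleft()

log = ChatLog()
log.submit("Could you repeat the dosing example?")
log.submit("How does this apply to patients with renal impairment?")
for question in log.drain():
    print("Instructor reads:", question)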

Many of these techniques have been shown to be effective with students on distant campuses; however, the question of cost should be considered. More often than not, the university will need to provide support and funding for these systems.

Several methods to increase student engagement across multiple campuses have been reported in the literature. These techniques are effective, but not all universities have the financial resources to acquire them.  Any program considering distant campuses should evaluate these tools as a means of improving overall student learning and development.

References:
1.  Hussain I. A study of learners' reflection on andragogical skills of distance education tutors. International Journal of Instruction. 2013;6(1):123-38.
2.  Stewart DW, Brown SD, Clavier CW, Wyatt J. Active-learning processes used in US pharmacy education. Am J Pharm Educ. 2011;75(4):Article 68.
3.  Clauson KA, Alkhateeb FM, Singh-Franco D. Concurrent use of an audience response system at a multi-campus college of pharmacy. Am J Pharm Educ. 2012;76(1):Article 6.
4.  Mehvar R. A participation requirement to engage students in a pharmacokinetics course synchronously taught at a local and distant campus. Am J Pharm Educ. 2010;74(7):Article 118.
5.  Henriksen B, Roche V. Creation of medicinal chemistry learning communities through enhanced learning technology and interdisciplinary collaboration. Am J Pharm Educ. 2012;76(8):Article 158.

April 11, 2013

I, I and II, or I, II and III?


by Adrian Wong, Pharm.D., PGY Pharmacy Practice Resident, The Johns Hopkins Hospital

Recently graduated and staring at the computer screen in front of me, I once again did what I had done many times in pharmacy school: crammed.  I had received warnings from all my mentors and peers about how horrific the Multistate Pharmacy Jurisprudence Examination (MPJE) was, and I was truly dreading the outcome.  Examinations were never my strong suit, and I feared those multiple-multiple-choice questions that seem to appear all too frequently on these high-stakes exams.  Regardless of the name they are given, K-type, complex multiple-choice (CMC), or complex-response questions, they all evoke the same feeling of dread.  In case I need to jog your memory, an example is shown here:

Question:  Based on the best available evidence, which of the following is the most appropriate medication to initiate for management of this patient’s congestive heart failure?

I.   Metoprolol succinate
II.  Metoprolol tartrate
III. Atenolol

a.    I only
b.    III only
c.     I and II only
d.    II and III only
e.    I, II and III

In my experience with these questions, it always seems to come down to one of two answers.  Even with an educated guess, I never seemed to pick the “right” one.  And from my experience with multiple-choice questions, the answer is rarely ever “all of the above.”  So how did this question format come to be?  Who came up with this traumatizing format?  What are the data behind this torture?

Based on my research, the complex multiple-choice (aka K-type) question was introduced by the Educational Testing Service in 1978.1  This question format was designed to accommodate situations in which there is more than one correct choice, much as in real life.  These questions also appear to be more difficult than comparable “traditional” multiple-choice questions.2  Therefore, in the health professions, where multiple correct answers may exist, the CMC format was adopted by many professional testing services in an attempt to increase the difficulty of board examination questions, and it persists today.

Weaknesses of this format exist.  Albanese evaluated the use of K-type questions and identified several limitations, including:2

1.    Increased likelihood of “cluing” of secondary choices
2.    Lower test score reliability
3.    Greater difficulty constructing questions
4.    Extended time required to answer questions

“Cluing” occurs when a test-taker is able to narrow down the choices based on the wording of the question or the available answer options.  For example, my thought process for the question above narrowed the choices solely by looking at the question, or “stem.”  The question asks for only one “most appropriate” answer (assuming, of course, that the test-writer has written a grammatically correct statement), as denoted by “is” rather than “are.”   Thus, as a savvy test-taker, I would gravitate toward choices “a” and “b.”  An additional clue is that two of the choices are similar (metoprolol succinate vs. tartrate), and one of them is likely to be the correct answer.  In this way, cluing may lower test score reliability, because the results depend in part on how savvy a test-taker one is.2

Additional studies have further illustrated the limitations of this assessment format.  One study compared the amount of time needed to complete a CMC-based test with the time needed for a multiple true-false (MTF) test.3   On average, it took 3.5 times as long to complete the CMC-based test.

However, after evaluating this literature, I will begrudgingly admit that CMC questions, under certain circumstances, could be effective despite their inherent weaknesses.  Researchers at one pharmacy school evaluated the use of CMC questions using a partial-credit scoring system and compared it to traditional dichotomous (right vs. wrong) scoring.4  The instructors designed a test to examine student knowledge of nonprescription drugs and administered it to 150 student pharmacists in their second professional year.  The purpose of the study was to optimize the measurement of student pharmacist knowledge without penalizing guessing or incorrect responses.  Partial credit was awarded by assigning a tiered score based on descending “best” answers.  Test items were sent to an external review panel to assess content validity.  Parameters evaluated in this study included item difficulty, item discrimination (i.e., the ability to distinguish low- and high-ability students), and the coefficient of effective length (CEL), a measure of how many more questions a test would need in order to produce the same reliability as another scoring method.  The authors found that with partial-credit scoring, the test was more reflective of actual student knowledge.  There were no statistically significant differences between the two methods with regard to item discrimination, but the CEL was greater with dichotomous scoring: the findings indicate that dichotomous scoring would require 26% more questions to achieve the same reliability in measuring students' actual knowledge of the subject matter.  The authors recommend further studies of this partial-credit scoring method for CMC questions, including its ability to predict student achievement and its effect on student confidence.
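To illustrate the difference between the two scoring methods, here is a minimal sketch using a hypothetical tiered rubric for the metoprolol question shown earlier; the tier values are invented for illustration and are not taken from the study.

def dichotomous_score(response, key):
    """All-or-nothing scoring: full credit only for the single keyed answer."""
    return 1.0 if response == key else 0.0

def partial_credit_score(response, tiers):
    """Tiered scoring: descending credit for progressively weaker answers."""
    return tiers.get(response, 0.0)

# Hypothetical rubric: "a" (I only) is the best answer; "c" (I and II only)
# still reflects partial knowledge of beta-blockers in heart failure.
tiers = {"a": 1.0, "c": 0.5}

for response in ["a", "c", "e"]:
    print(response,
          dichotomous_score(response, key="a"),
          partial_credit_score(response, tiers))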

Alternatives to traditional multiple-choice testing that have been evaluated in the literature include open-ended, uncued (UnQ) items, which allow the test-taker to select an answer from more than 500 possible responses.  This format has been used on Family Practice board examinations.  One study of more than 7,000 family practice residents found UnQ items to be a more reliable method for determining a physician's competence.

The best mode of assessment probably depends on the material being tested.  In my experience, the open-response format provides the best indicator of a student's knowledge, but like any test, the questions must be carefully worded.  The biggest weakness of open-response, essay-type exams is the time required to grade them [Editor’s note:  As well as the inherent subjectivity required when judging the “correctness” of the students’ answers].   To my chagrin, the use of CMC questions will likely continue on licensing examinations for healthcare professionals.

References:

1.  Haladyna TM. The effectiveness of several multiple-choice formats. Appl Meas Educ. 1992;5:73-88.
2.  Albanese MA. Type K and other complex multiple-choice items: an analysis of research and item properties. Educ Meas Issues Pract. 1993;12:28-33.
3.  Frisbie D, Sweeney DC. The relative merits of multiple true-false achievement tests. J Educ Meas. 1982;19:29-35.