April 2, 2013

Computerized Adaptive Testing


by David Cannon, Pharm.D., Clinical Instructor, University of Maryland School of Pharmacy

Unique assessment tools have always been fascinating to me.  Once, when I was taking a practice exam consisting of 25 questions on pharmacy law, the following message appeared:  “You scored a 56%, you passed!” How could that be, I thought?  Surely the minimum passing score for a state law exam could not be that low! But, as it turned out, this exam was an adaptive test. While the computer was reporting the percentage of questions I answered correctly, behind the scenes it was doing calculations based on the difficulty and weight of the questions.  Once I began to peel back the surface of these complicated algorithms, I wanted to learn more.  But first, let’s review some basics about assessment …

The purpose of an exam, including high-stakes exams used to make state licensure decisions, is to use the assessment data (answers to test items) to make inferences about the learner.  Assessment is best approached by first considering what the end requirements of the learner are, then thinking about what actions, jobs, or thoughts would illustrate mastery of those requirements. Deciding up front what the goals of the assessment are makes the process of actually creating it much easier.1

Evidence-Centered Design (ECD) utilizes a series of key questions to analyze the assessment design. Table 1 is a good example of a set of questions recommended by Mislevy et al.:1

Table 1:

a. Why are we assessing?
b. What will be said, done, or predicted on the basis of the assessment results?
c. What portions of a field of study or practice does the assessment serve?
d. Which knowledge and proficiencies are relevant to the field of study or practice?
e. Which knowledge or proficiencies will be assessed?
f. What behaviors would indicate levels of proficiency?
g. How can assessment tasks be contrived to elicit behavior that discriminates among levels of knowledge and proficiency?
h. How will the assessment be conducted and at what point will sufficient evidence be obtained?
i. What will the assessment look like?
j. How will the assessment be implemented?

Taken from Automated Scoring of Complex Tasks in Computer-based Testing1

ECD draws parallels to instructional design in that these questions do not necessarily need to be asked in order, the output of each question should be considered when examining the others, and the questions should be revisited as necessary.1 To understand how evidence-centered design is utilized in creating assessments, the assessment tool must be broken down into its individual components.

When designing assessments used for licensing examinations, many domains of knowledge are tested. A domain is a complex of knowledge or skills that is valued, in which there are recognizable features of good performance, situations during which proficiency can be exhibited, and relationships between knowledge and performance.1  In a high-stakes examination, like a state board licensure exam, it is not sufficient for an examinee to be competent in only one domain but not the others. To test proficiency in each of the domains, smaller subunits of the assessment called “testlets” are used. Testlets typically contain a group of related assessment items designed to elicit the behaviors associated with the domain.3  It is vital to understand how these examinations are designed from an evidence-based perspective in order to evaluate the validity of computerized adaptive testing.

So what is a computerized adaptive test anyway?  A computerized adaptive test (CAT) is an assessment tool that utilizes an iterative algorithm with the following steps (a rough sketch in code follows the list):2
1) Search the available items in the testlet domain for an optimal item based on the student’s ability
2) Present the chosen item to the student
3) The student either gets the item right or wrong
4) Using this information as well as the responses on all prior items, an updated ability estimate of the student is determined
5) Repeat steps 1-4 until some termination criterion is met
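
To make those steps concrete, here is a minimal sketch in Python of how the loop might be wired together. It assumes a simple one-parameter (Rasch) item response model and a crude grid search for the ability estimate; the names (run_cat, estimate_theta, answer_item) and the item bank of bare difficulty values are illustrative only, not taken from any real CAT engine.

import math

def prob_correct(theta, difficulty):
    # 1PL (Rasch) model: probability that a student of ability theta
    # answers an item of the given difficulty correctly.
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def estimate_theta(responses):
    # Crude maximum-likelihood ability estimate via grid search.
    # responses is a list of (difficulty, answered_correctly) pairs.
    grid = [x / 100.0 for x in range(-400, 401)]   # theta from -4 to +4
    def log_likelihood(theta):
        return sum(math.log(prob_correct(theta, b)) if correct
                   else math.log(1.0 - prob_correct(theta, b))
                   for b, correct in responses)
    return max(grid, key=log_likelihood)

def run_cat(item_bank, answer_item, max_items=25):
    # Steps 1-4 above: pick the most informative remaining item, present it,
    # record right/wrong, then re-estimate ability from all responses so far.
    theta, responses, used = 0.0, [], set()
    while len(used) < min(max_items, len(item_bank)):
        # Step 1: under the 1PL model the most informative item is the one
        # whose difficulty is closest to the current ability estimate.
        i = min((j for j in range(len(item_bank)) if j not in used),
                key=lambda j: abs(item_bank[j] - theta))
        used.add(i)
        # Steps 2 and 3: present the item and record the response.
        correct = answer_item(item_bank[i])
        responses.append((item_bank[i], correct))
        # Step 4: update the ability estimate using all responses so far.
        theta = estimate_theta(responses)
    return theta, responses

Step 5, the termination rule, is handled here only by capping the number of items; a more realistic stopping criterion is sketched after the next paragraph.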

CAT is utilized in many high-stakes licensing examinations such as the National Council Licensure Examination (NCLEX), which is required by most states for nurses before they can practice. In the case of the NCLEX, after each item is answered by the examinee a calculation is done in the background that determines an estimate of the person’s competency based on the difficulty of the item answered. The computer then selects a slightly more difficult question (or an easier one, if the item was missed) and applies the algorithm again, creating a new estimate of the candidate’s competency. This is repeated until the computer reaches a predetermined cutoff (with a 95% confidence interval in the case of the NCLEX3) for minimum competency or until the number of test items has been exhausted. Put another way, the exam will cease when the algorithm has determined with 95% certainty that the student’s ability falls above or below a minimum competency standard. Check out this VIDEO of how the algorithm works behind the NCLEX.3
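
The NCSBN describes the 95% confidence rule only at a high level, so the snippet below is just one common way such a rule can be implemented: build a confidence interval from the standard error of the ability estimate and stop once the whole interval sits above or below the passing standard. The cut_score value and the 1PL assumptions carry over from the sketch above and are illustrative, not the NCLEX’s actual parameters.

import math

def prob_correct(theta, difficulty):
    # Same 1PL (Rasch) model as in the earlier sketch.
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def stopping_decision(theta, responses, cut_score, z=1.96):
    # Test information under the 1PL model is the sum of p*(1-p) over the
    # administered items; the standard error is 1/sqrt(information).
    info = sum(prob_correct(theta, b) * (1.0 - prob_correct(theta, b))
               for b, _ in responses)
    se = 1.0 / math.sqrt(info) if info > 0 else float("inf")
    if theta - z * se > cut_score:      # confidently above the standard
        return "pass"
    if theta + z * se < cut_score:      # confidently below the standard
        return "fail"
    return "continue"                   # still too uncertain; keep testing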

Now you might be wondering how you create an adaptive test.  It’s a pretty complicated process that involves breaking down the different subject areas you want to test into separate domains.  Then you’d need to develop an item bank for each domain; content experts would come together to decide which items should be included while at the same time evaluating their appropriateness and difficulty/weight (a toy example of what such a bank might look like appears below).  A great free resource for creating your own adaptive test can be found here.4  The NCLEX is a great example of how computerized adaptive testing brings together the ideas of evidence-centered design and instructional design by helping educators assess their students with greater accuracy.
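
For a sense of the raw material involved, here is one possible shape for such an item bank, keyed by domain, with the difficulty value the content experts would assign to each item. The domains, item IDs, stems, and numbers are entirely made up for illustration.

# A toy item bank grouped by domain (the pool each testlet draws from).
item_bank = {
    "pharmacy_law": [
        {"id": "LAW-001", "stem": "Which schedule applies to ...?", "difficulty": -0.8},
        {"id": "LAW-002", "stem": "A prescriber may delegate ...?", "difficulty": 0.3},
    ],
    "pharmacotherapy": [
        {"id": "RX-014", "stem": "Select the first-line agent for ...?", "difficulty": 1.1},
        {"id": "RX-027", "stem": "Identify the major drug interaction ...?", "difficulty": -0.2},
    ],
}

# The selection step from the earlier sketch would then draw from one
# domain's list at a time, so every examinee is measured in every domain.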

References:

1.  Mislevy RJ, Steinberg LS, Almond RG, Lukas JF. Concepts, Terminology, and Basic Models of Evidence-Centered Design. In: Williamson D, Mislevy R, Bejar I, eds. Automated Scoring of Complex Tasks in Computer-based Testing. 1st ed. Mahwah, NJ: Lawrence Erlbaum Associates; 2006: 15-47.
2.  Thissen D, Mislevy RJ. Testing Algorithms. In: Wainer H, ed. Computerized Adaptive Testing: A Primer. Mahwah, NJ: Lawrence Erlbaum Associates; 2000.
3.  Computerized Adaptive Testing (CAT). National Council of State Boards of Nursing. Accessed on: March 11, 2013.
4.  Software for developing computer-adaptive tests. Assessment Focus. Accessed on: March 26, 2013.

March 29, 2013

Teaching Emotional Competence through Simulation


by Matt Newman, PharmD, PGY1 Pharmacy Practice Resident, The Johns Hopkins Hospital

Recent graduates of Doctor of Pharmacy programs are likely familiar with the role of simulation in education. Activities like patient counseling laboratories and clinical skills practicums are common, even if the formats may vary. While these instructional methods are useful and are aimed at providing a true-to-life experience, one aspect of pharmacy practice in the “real world” is not easily taught: social and emotional compassion and competence.

As a pharmacy student, I participated in a somewhat dreaded patient counseling lab during “angry week.”  In this session, rather than being presented with a calm and cooperative patient needing advice about smoking cessation or seeking help with the high cost of co-payments, the patient was irate about a perceived medication error that had occurred with her prescription. Sitting in the counseling room, I remember a distinct feeling of uneasiness as I pondered the best way to manage the patient’s emotional state and formulate an appropriate response to her concerns.  I have found that the best way to improve this skill set is through experience.

You may have had a similar lab in pharmacy school.   And you probably already have some idea of what social emotional competence means and how you might demonstrate it in practice.  Exact definitions vary, but emotional intelligence is considered “the overlap between emotion and intelligence,” or, “the intelligent use of emotions.”1 It is a set of skills used to read, understand, and react effectively to emotional signals sent by others and oneself.2  While this sounds obvious, assessing one’s ability to utilize this skill set can be difficult. There is, however, an increasing body of literature regarding social emotional intelligence, which demonstrates an expanding awareness of its importance.

While the need for emotional intelligence in healthcare has been thoroughly described, there has been little research about how best to teach and measure it, especially among pharmacy students.1 A group of instructors at one pharmacy school sought to measure the development of social emotional competence in students before and after a series of simulated patient counseling activities. To do so, a group of first-year students was asked to complete the Social Emotional Development Index (SED-I) before and after participation in mock patient consultations. Students were also graded on a scale of 0-3 for social emotional competence.

The SED-I is a self-assessment tool in which participants respond to statements such as “I take the lead role,” “People know that I care about them,” and “I act without considering another person’s perspective” with the goal of assessing the respondent’s SED in four domains: connection, influence, consideration, and awareness. In this study, students took the SED-I at baseline and after performing two mock counseling sessions on topics such as smoking cessation, nonprescription medications, blood pressure, and blood glucose monitoring. Statistical analysis demonstrated a significant positive correlation between students’ patient counseling assessment scores (as judged by the instructors) and their self-assessment using the SED-I.1 In other words, students who performed better in the lab also scored higher on the SED-I.

These results indicate the potential utility of the SED-I as a tool to evaluate the development of social emotional intelligence. While not surprising, it is useful to note that the students who performed better on the lab activities were the ones who had more developed social emotional intelligence.  This reinforces the current understanding that this type of intelligence is important during patient interactions. The authors noted that pharmacy curricula are effective at teaching core knowledge and technical skills, but social skills may be more important in order to influence patient behavior.1  A possible limitation of the study is the use of second-year pharmacy students as the “patients” in the counseling sessions; the use of professional actors would have made the counseling sessions more realistic.

Another group of pharmacy faculty studied students’ perceptions of emotional intelligence material used in a communications course.3 Objectives for this course included “Define an emotional concept,” “Relate how self-confidence would be beneficial to the Director of Pharmacy in a large hospital,” and “Describe the characteristics of people who are competent in communication skills and relate how these characteristics would benefit the pharmacist who manages a staff of 20.”  In addition to traditional didactic content, a patient counseling activity similar to the previous study was used to teach the core principles. Instructors reviewed a video recording of the counseling activity with the students, and noted the empathic responses used. They also role-modeled for the students. Additionally, students were asked to answer two reflective questions and course content was assessed on a formal examination. Student responses were mostly positive: they recognized the importance of these skills and the need to apply them to practice. It is interesting to note that the authors mention the lack of standardized tools available to assess the students. Perhaps the SED-I could have been used to assess student performance in this type of educational activity.

I wonder how my patient counseling lab experience may have been different if the SED-I, or another measure of social emotional intelligence, had been used. While most may not think about social emotional intelligence in day-to-day interactions, awareness of the concept is important. The same skills needed for effective patient counseling would also be useful in ensuring productive interactions with many others, including peers, family, and members of the medical team.

Using simulation to teach social skills is useful, but it is not without caveats. Using students or actors as “patients” during patient counseling labs is a great way for students to gain experience and confidence in their interactions. However, it is difficult to emulate real emotions and personalities as they will be experienced in the clinical setting.  Finding the best method of evaluating students’ social and emotional development is a work in progress. Regardless, incorporating instruction on social emotional intelligence into pharmacy curricula is important, and simulated patient interactions serve as a reasonable substitute for real-life experience.

References
1.  Galal S, Carr-Lopez S, Seal CR, et al. Development and assessment of social and emotional competence through simulated patient consultations. Am J Pharm Educ. 2012;76: Article 132.
2.  Romanelli F, Cain J, Smith KM. Emotional intelligence as a predictor of academic and/or professional success. Am J Pharm Educ. 2006;70: Article 69.
3.  Lust E, Moore FC. Emotional intelligence instruction in a pharmacy communications course. Am J Pharm Educ. 2006;70: Article 06.