December 8, 2020

The Importance of Post-Exam Quality Assurance

by Karmen Garey, PharmD, PGY-1 Baptist Memorial Hospital – North Mississippi Pharmacy Resident, University of Mississippi School of Pharmacy

From the students’ perspective, once they hit “submit” after completing an exam they think “Thank goodness that’s done!” However, for teachers, there is still some critical work to do. Now it’s time to review the performance data to ensure the examination was fair and measured what was intended. Here are a few tips and strategies to assess the quality of an exam.

Make certain the exam (as a whole) is a “good” one 

Before the exam is administered to students, a good exam should be written with the following goals in mind:1,2

  • An exam should address multiple levels of Bloom’s taxonomy — from knowledge recall to application and analysis.
  • The exam should include a variety of questions that test a range of concepts that map back to the learning objectives.
  • The consistency of the exam's performance over time is important. An exam should perform consistently from year to year, even when some of the questions change.
  • An exam should measure the learning outcomes and course material it was designed to test.

Make certain the questions included on the exam are “good” ones

There are two types of questions that should be included on exams: mastery questions and discriminating questions.  Mastery questions are those questions that students are expected to excel on.3 This type of question is typically a "knowledge level" question in Bloom's Taxonomy. These questions often test factual recall and the recognition of fundamental material.2  Students might call them "gimme questions"; however, teachers include these questions to ensure that students have a firm understanding of basic but critically important concepts and facts.  Discriminating questions, on the other hand, are intended to identify students who have a deeper knowledge of the material and separate students into different performance levels (e.g., identify "A", "B", and "C" students).  Higher-performing students are expected to answer these questions correctly more often than lower-performing students.  This type of question often targets the comprehension, application, analysis, synthesis, or evaluation cognitive level in Bloom's taxonomy and requires an in-depth knowledge of the subject matter.2

Next, let's look at the distractors.  Does each question include appropriate distractors?3 A distractor is an answer choice that, while wrong, appears plausible. A good distractor should be clear and concise and should be similar in structure and content to the correct response. Savvy test-takers learn to spot answers that seem different in some way, so even small variations in the style, subject matter, and length of the answer choices can provide clues.

Next, is the question stem clearly written?  Is it clear what the learner is being asked?  Or is the question open to interpretation?  When writing questions, it is important to ensure that the question cannot be misconstrued.  Sometimes students will overthink a question and try to find a hidden meaning when there is none. To avoid this problem, use words that are unambiguous and avoid phrasing that could be cryptic.

Finally, is the answer to the question correctly keyed?  If a lot of students selected the "wrong" answer, it's possible that the question was miskeyed.  While this does not happen often, it does happen! So it is always a good idea to double-check that the correct answer was selected on the answer key.

There are some other things to consider as you look at the post-exam performance data.  How did the exam scores look last year? While a group of students performing much better or much worse than the previous year's students is not always an indication that the exam is invalid, it should prompt additional questions.

  • Was the material taught in a manner that was different from previous years?
  • Was the exam formatted or delivered differently?
  • Could the students this year have been less (or better) prepared in some way to comprehend the material?
  • Is cheating suspected?
  • If there are multiple instructors, did students receive different messages about the content?

The answers to these questions may not be obvious or even relevant, but it is something to keep in mind.

Use the post-exam statistical analysis to identify problem questions3

As technology becomes a more integral part of exam delivery, it enables a wealth of data that can be used for post-exam quality assurance. Most post-exam statistical analysis tools report similar elements; however, the names may be slightly different. ExamSoft is among the most common exam delivery tools available today and routinely reports these statistics:

  • Item Difficulty represents the difficulty of a question. It reports the proportion of students who correctly answered the question; the lower the value, the more difficult the question. There is no single target value for item difficulty. Instead, the value should be compared against the intent behind the question. For example, if the teacher wants the item to be a mastery question, the difficulty should be 0.90 to 1.00, with very few students getting the question wrong.  If the question is meant to separate those who have a firm grasp on the material from those who don't, lower values are acceptable. An instructor may have a difficulty "cutoff" in mind where anything below 0.6 (for example) prompts additional analysis of the question.
  • Upper/Lower 27%, Discrimination Index, and Point Biserial are each calculated differently, but they report a similar concept. Stated simply, they all determine whether the top performers on the exam achieved better results on a question compared to those who did not perform well. If the top performers don't outperform the poorer performers, the question should be assessed to determine why.
    • Upper 27% / Lower 27% - what percentage of the top 27%  vs. the bottom 27% of performers got the question correct.
    • Discrimination Index – this represents the difference in performance between the best performers vs. the lowest performers.
    • Point Biserial – indicates whether answering a specific item correctly correlates with doing well on the exam overall.  In other words, does performance on this question predict whether a student did well (or not so well) on the exam? A simple way these statistics can be computed is sketched below.
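To make these statistics concrete, here is a minimal sketch in Python showing how item difficulty, an upper/lower 27% discrimination index, and a point biserial could be computed from a 0/1 scoring grid. The response matrix is made up for illustration, and this is not the code ExamSoft uses internally.

```python
# A minimal item-analysis sketch; the 0/1 response matrix below is made up:
# rows = students, columns = questions (1 = correct, 0 = incorrect).
import numpy as np
from scipy.stats import pointbiserialr

responses = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
])
totals = responses.sum(axis=1)                  # each student's total exam score

order = np.argsort(totals)                      # students ranked lowest to highest
k = max(1, int(round(0.27 * len(totals))))      # size of the upper/lower 27% groups
lower, upper = order[:k], order[-k:]

for q in range(responses.shape[1]):
    item = responses[:, q]
    difficulty = item.mean()                    # item difficulty: proportion correct
    discrimination = item[upper].mean() - item[lower].mean()   # upper 27% minus lower 27%
    r_pb, _ = pointbiserialr(item, totals)      # point biserial vs. total exam score
    print(f"Q{q + 1}: difficulty={difficulty:.2f}, "
          f"discrimination={discrimination:.2f}, point biserial={r_pb:.2f}")
```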

 

Correlation with Overall Exam    Point Biserial
Very good                        >0.3
Good                             0.2-0.29
Moderate                         0.09-0.19
Poor                             <0.09



So, let’s look at the statistical analysis from two example questions. 

  • This was a mastery question — students are expected to do well on this question. It’s a fundamental concept that all students should know.
  • The Discrimination Index = 0.04, which indicates almost no discrimination between the top and bottom performers. That is expected here: because it's a mastery question, we anticipated that nearly all students would perform well, so the question should not discriminate between the best and worst performers.
  • The Point Biserial = 0.10, indicating that performance on this question correlates only moderately with doing well on the exam overall. Again, the top and bottom performers performed quite similarly on this question, so there won't be a strong correlation between performance on this question and the overall exam score.
  • If this question was not intended to be a mastery question, perhaps the material was taught particularly well … or maybe there was cheating involved.

Now let’s take a look at a question where only 66% of the students selected the correct response.

  • Item difficulty = 0.66 so 66% of the students selected the correct response. This is not a bad thing but it is important to make sure the students who understood the material were more likely to get this question right.
  • This is intended to be a discriminating question, so let's make certain it's actually discriminating between the best and worst performers.
  • Look at the Upper vs. Lower 27%: 82% of the top performers got this question correct. Only 46% of those who performed the poorest on this exam got this question correct.
  • Discrimination Index: 0.36. This question did a good job discriminating between the best and worst performers on this exam.
  • Point Biserial = 0.28. Performance on this question has a good correlation with a student's overall exam performance.

While there are no hard rules for how to analyze an examination, the strategies I’ve outlined in this blog post are some of the best practices every teacher should follow. It is important to follow a systematic process and establish “cut-offs” in advance. The key is to be clear and consistent from exam to exam.

References

  1. Brame C. Writing Good Multiple Choice Test Questions. 2013. Accessed December 3, 2020.
  2. Omar N, Haris SS, Hassan R, Arshad H, Rahmat M, Zainal NFA, et al. Automated Analysis of Exam Questions According to Bloom's Taxonomy. Procedia - Social and Behavioral Sciences. 2012;59:297–303. Accessed December 1, 2020.
  3. Ermie E. Psychometrics 101: Know What Your Assessment Data Is Telling You. ExamSoft. 2015. Accessed November 18, 2020.

A Hopeful Pharmacist-Led Educational Program to Reduce Prescription Errors

by Spencer Harris, Doctor of Pharmacy Candidate, University of Mississippi School of Pharmacy

Summary and Analysis of:  Gursanscky J, Young J, Griffett K, Liew D, Smallwood D. Benefit of targeted, pharmacist-led education for junior doctors in reducing prescription writing errors - a controlled trial. Journal of Pharmacy Practice and Research. 2018;48(1):26–35.

Writing a safe and properly formatted prescription is no easy task. Not only does the prescriber need to include the patient's name, date of birth or address, the date written, the name of the drug, the dose, the dosage form, the directions for use, the quantity, the number of refills, and the signature of the authorizing provider, but the prescriber must also ensure the prescription is safe for the patient. Factor in the multitude of patients a physician sees, the innumerable questions she receives, the monotony of writing dozens of prescriptions every day, and the many other variables that add stress on her shoulders, and it's understandable that there will be an error here and there. While understandable, it is not something that can be accepted or overlooked. Each year, according to the FDA's MedWatch website, more than one hundred thousand reports about medication errors are documented. A subset of these reports relate to prescribing errors, both in the sense of missing information and of prescribing inappropriate therapy.  These errors affect patient health outcomes; this is inexcusable. I have witnessed these errors firsthand, as I am sure nearly every person who has worked in a pharmacy has.

Educational programs might be one way to address this problem. But an educational program must be efficient and compatible with the constant bustle of healthcare, where there is no time to waste. It is for this reason that I read the study by Gursanscky and his colleagues from Monash University in Australia with high hopes.


The investigators implemented a pharmacist-led approach to teaching junior physicians (who write a notably large proportion of prescriptions in teaching hospitals) about prescription writing.  They compared this approach to an online education program (based on the National Inpatient Medication Chart Training course) and to a control group that did not receive any additional instruction. The study was a cluster-randomized trial that enrolled all junior doctors in the general medical units at an Australian tertiary hospital (twelve interns and four registrars). The junior physicians were divided equally into four-person groups that were randomly assigned to either the pharmacist-led intervention (one group), the e-learning intervention (one group), or the control arm (two groups).

The pharmacist-led intervention consisted of three very brief (10-minute) sessions per week for four weeks.  During these sessions, a clinical pharmacist discussed types of errors, their frequency, and severity. Over the four weeks, the pharmacist discussed each error type, why it was unsafe, its consequences, and how to avoid it. Following each tutorial, the pharmacist addressed participant questions. A full report on the intervention can be found in the original study.

Data were collected for three weeks before the intervention and for four weeks during the intervention. The outcome of interest was the prescription error rate in each group. An error was defined as a prescription that had incomplete patient or prescriber details or that was "illegible, incomplete, or incorrect." The pre- and post-intervention error rates were then compared using a chi-square analysis.
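To illustrate the chi-square comparison described above, here is a minimal sketch. The order counts are hypothetical (they only roughly mirror the pharmacist-led group's rates in Table 1) and are not the study's actual data.

```python
# A minimal sketch of a pre- vs post-intervention error-rate comparison using a
# chi-square test on a 2x2 table (orders with an error vs error-free orders).
# The counts below are hypothetical, not the study's data.
from scipy.stats import chi2_contingency

errors_pre, total_pre = 290, 500      # hypothetical pre-intervention orders (rate 0.58)
errors_post, total_post = 185, 500    # hypothetical post-intervention orders (rate 0.37)

table = [
    [errors_pre,  total_pre - errors_pre],     # pre-intervention: errors, error-free
    [errors_post, total_post - errors_post],   # post-intervention: errors, error-free
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"pre rate = {errors_pre / total_pre:.2f}, "
      f"post rate = {errors_post / total_post:.2f}, p = {p:.2g}")
```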

The results (n= 9,657 prescriptions analyzed) showed that the pharmacist-led group had a significantly lower rate of errors in the post-intervention period. Interestingly, the error rates in both the control group and the e-learning group increased significantly in the post-intervention period.

Table 1: Rate of Errors per Total Orders Before and After the Intervention Period

                     Control    E-learning    Pharmacist-led
Pre-intervention     0.49       0.58          0.58
Post-intervention    0.59       0.63          0.37
p-value              <0.001     0.025         <0.001

This study addresses a real-world problem that negatively impacts patients and places a substantial burden on the healthcare system. Additionally, the study clearly describes the design of the educational intervention and the outcome measures (e.g., the definition of a prescription-writing error and the methods of data collection).  The number of prescriptions analyzed over the course of the study is very large (n=9,657). With a sample that large, the uncertainty in the measured error rates is small, although there is always the possibility of bias in how prescriptions were selected. This study also has some flaws that weaken its conclusions. Specifically, the sample of prescribers is small: only sixteen physicians, four per group.  The study duration was also relatively short, approximately two months. These shortcomings may explain the odd and significant increase in the error rate in the e-learning and control groups. Why would a course designed by professionals to instruct providers on how to write prescriptions result in a higher prescription error rate? Of course, the e-learning course could be poorly designed in some way, but I believe the more likely explanation is the small number of participants in that group.  Thus, the changes in error rates observed in the control and pharmacist-led intervention groups might be due to chance as well.

Personally, I believe a pharmacist-led approach can and should result in a lower error rate, but I believe this study must be replicated on a larger scale before any conclusions can be made about the effectiveness of this approach. Nonetheless, the study is still relevant. The reason is simple: preventable medication errors are being made all over the world, and they lead to problems that directly affect patients. Until this problem is solved, we should be looking for answers and taking action to find good practices for reducing these errors. While this study is not of the highest quality, the intervention is simple and practical to implement.

Therefore, I urge those who are involved in the training of prescribers to use this study as a template to provide pharmacist-led instruction on prescription-writing. A successful program should include frequent but brief tutorials with an opportunity to ask questions. We must actively make efforts to provide our patients with the high-quality healthcare that they deserve.

References

  1. Gursanscky J, Young J, Griffett K, Liew D, Smallwood D. Benefit of targeted, pharmacist-led education for junior doctors in reducing prescription writing errors - a controlled trial. Journal of Pharmacy Practice and Research. 2018; 48(1):26–35.
  2. Working to Reduce Medication Errors [Internet]. U.S. Food and Drug Administration. FDA; 2019.  Accessed October 23, 2020.

Educating Older Adults Reduces Inappropriate Benzodiazepine Use

by Hallie Butler, Doctor of Pharmacy Candidate, University of Mississippi School of Pharmacy

Review and Analysis of: Tannenbaum C, Martin P, Tamblyn R, Benedetti A, Ahmed S. Reduction of Inappropriate Benzodiazepine Prescriptions Among Older Adults Through Direct Patient Education: The EMPOWER Cluster Randomized Trial. JAMA Intern Med. 2014;174(6):890–898.

Shared decision making has been encouraged because it not only uses evidence but also considers the patient's preferences and values to help choose the most effective therapy. The American Board of Internal Medicine initiated the Choosing Wisely campaign to assist providers and patients when deciding which therapies should be discontinued. The idea is that we need to de-escalate or discontinue therapies that are unnecessary or may cause harm in older adults (those older than 65 years of age). The American Geriatrics Society took part in this campaign and recommends against the use of benzodiazepines as a treatment for insomnia in older adults. The reason: benzodiazepines cause cognitive impairment and substantially increase the risk of falls and fractures. Unfortunately, benzodiazepines are commonly prescribed.  While research has consistently shown that the risks of benzodiazepines in the elderly far outweigh their benefits, older adults are more likely to be prescribed medications from this class than younger adults.2 Even though physicians are aware of the risks of benzodiazepines, more than 50% of them continue to prescribe them to their older patients. The objective of the EMPOWER trial was to implement and measure the effectiveness of a direct-to-patient education program for older adults receiving long-term benzodiazepine therapy. In this study, the investigators assessed rates of dose reduction and cessation of benzodiazepine use.1


This study was a 2-arm, parallel-group, pragmatic cluster randomized controlled trial. Thirty community pharmacies participated; each had 20% or more of its patients aged 65 or older. There were 303 participants in this study, ranging in age from 65 to 95 years. The pharmacies were randomly assigned to either the intervention or control group. The pharmacists, patients, evaluators, and prescribers were all blinded to the outcome assessment. To be eligible, a patient had to have at least 5 active prescriptions, with at least one being a benzodiazepine, and had to have received a benzodiazepine refill for three consecutive months prior to study enrollment. Patients who had a diagnosis of severe mental illness or dementia, who had a prescription for an antipsychotic, cholinesterase inhibitor, or memantine in the previous three months, or who resided in a long-term care facility were excluded.1

The educational intervention included a booklet grounded in self-efficacy and social learning theory. Each of the participants completed a self-assessment about their opinions on benzodiazepine use and then received information on the harms associated with their use. Knowledge statements were presented with the purpose of creating cognitive dissonance regarding the safety of benzodiazepine use. The participants were also educated about certain drug interactions and listened to peer champion stories to promote self-efficacy. The study team discussed treatment options with the patients that were equally or more effective substitutes and educated them on how to taper off their benzodiazepine. The taper schedule was based on a 21-week protocol. The protocol was picture-based and showed the diminishing dose from a full pill to a half pill, and finally to a quarter pill. The participants were encouraged to speak with their providers and/or pharmacist about deprescribing. All of the reading material was written at a sixth-grade level and in 14-point font,1 which should make it accessible to nearly all participants.  The control group received usual care, and there was no active effort to educate these participants about the risks of benzodiazepine use.

Complete cessation of benzodiazepine use at six months was the primary outcome. To be classified as complete cessation, the patient must have had no benzodiazepine prescriptions or renewals at the time of the six-month follow-up, sustained for at least three consecutive months. The investigators verified this using pharmacy profiles. The study team defined a dose reduction as a reduction of at least 25% compared with baseline, sustained for at least three consecutive months. Every participant had a complete follow-up at their pharmacy at six months. One research nurse and one investigator, who were blinded to group allocation, used a protocol to independently assess the outcomes.1

Complete cessation was achieved in 27% of intervention participants vs 5% of controls. Participants who received the intervention had roughly an 8-fold higher likelihood of discontinuing benzodiazepine therapy. In addition, 11% of the intervention group reduced their benzodiazepine dose. This study suggests that teaching older adults with an evidence-based approach, in a way that makes them question the safety and necessity of benzodiazepine use, is a safe and effective method to address over-prescribing. In previous studies that did not include a direct patient educational program, efforts to have physicians deprescribe benzodiazepines had a smaller impact.
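As a rough back-of-the-envelope check on these figures, the sketch below recomputes crude effect measures from the reported 27% and 5% cessation proportions. It ignores clustering by pharmacy and any covariate adjustment, so it approximates rather than reproduces the trial's published estimate.

```python
# Crude comparison of the reported six-month cessation proportions (27% vs 5%).
# These calculations ignore clustering and adjustment, so they only approximate
# the effect estimate reported in the trial.
p_intervention, p_control = 0.27, 0.05

risk_ratio = p_intervention / p_control                                   # ~5.4-fold higher cessation rate
odds_ratio = (p_intervention / (1 - p_intervention)) / \
             (p_control / (1 - p_control))                                # ~7.0 crude odds ratio
number_needed_to_educate = 1 / (p_intervention - p_control)               # ~4.5 patients per extra cessation

print(f"RR = {risk_ratio:.1f}, OR = {odds_ratio:.1f}, NNT = {number_needed_to_educate:.1f}")
```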

Systematically recruiting participants through community pharmacies is just one of the many strengths of this study.1 Other strengths include the blinding of all participants and the objective assessment of drug discontinuation rates using pharmacy records. I believe one weakness of this study was the six-month time frame for patient follow-up. With a longer follow-up period, the intervention could have proven to be more or less effective; there might have been a higher discontinuation rate or, perhaps, a high relapse rate.

Educators should pay attention to this particular study for several reasons. The patient education techniques these researchers used had a significant impact on patient behavior. This is a major accomplishment, as many older adults are very reluctant to stop benzodiazepine use.2 The educational intervention was well designed: it included different forms of instruction and promoted self-efficacy.  Promoting self-efficacy can also help patients manage other chronic illnesses, such as hypertension and diabetes. Patients must believe that they can make a difference in their health outcomes. A picture-based drug-tapering protocol is a great instructional tool because it is friendly to all ages, languages, and health literacy levels. A larger font should also be used when distributing materials to older adults, as many of them have visual impairments. The strategies employed in this study can be applied in a wide array of disease states and can serve as a model for getting patients more involved in their care.

References

  1. Tannenbaum C, Martin P, Tamblyn R, Benedetti A, Ahmed S. Reduction of Inappropriate Benzodiazepine Prescriptions Among Older Adults Through Direct Patient Education: The EMPOWER Cluster Randomized Trial. JAMA Intern Med. 2014;174(6):890–898.
  2. Pereto A. Data: Seniors prescribed benzodiazepines most often. Athena Health. Accessed November 25, 2020.

December 7, 2020

Training Pharmacy Students to Manage Opioid Overdoses and Administer Naloxone

by Cole Sisson, Doctor of Pharmacy Candidate, University of Mississippi School of Pharmacy

Summary and Analysis of: Kwon M, Moody AE, Thigpen J, Gauld A. Implementation of an Opioid Overdose and Naloxone Distribution Training in a Pharmacist Laboratory Course. Am J Pharm Educ 2020; 84 (2): Article 7179.

Opioid overdoses caused almost 47,000 deaths in the US in 2018 and, according to the CDC, the number of deaths has been growing since 1999.1 With the continuing increase in deaths due to prescribed and synthetic opioids, it is more important than ever that Americans be knowledgeable about and have access to overdose reversal agents like naloxone, which is a life-saving medication when administered correctly to someone experiencing an overdose. Naloxone is commonly carried by emergency medical personnel and first responders, but the average person can be trained on its use.  Widespread availability of naloxone increases the likelihood that someone will have access to this medication when needed. Naloxone dispensing and training are especially important in community settings like pharmacies; however, many patients (and even some pharmacists) are reluctant to use naloxone due to a lack of confidence using an injectable medication and the stigma related to opioid use. Integrating training about opioid overdoses and naloxone prescribing into pharmacy school curricula can increase knowledge among new pharmacists entering the profession, who can then advocate for increased use and availability of these rescue medications.

At the Notre Dame of Maryland School of Pharmacy, Kwon and faculty colleagues designed, implemented, and evaluated an opioid overdose education and naloxone distribution (OEND) program.2 They designed a program based on the 5 E’s learning method: Engage, Explore, Explain, Elaborate, and Evaluate.  To measure knowledge and attitudinal change, the investigators used the Opioid Overdose Knowledge Scale (OOKS) and Attitude Scale (OOAS) before and after the OEND program. The faculty engaged a class of P3 pharmacy students in a patient care laboratory session consisting of four parts: an interactive introductory presentation, a hands-on session with various placebo forms of naloxone, a large group review of the information learned in the first two parts, and then a patient counseling and overdose care scenario to test the newly learned skills. The students received prompt feedback after completing the scenarios. Afterward, the students took the post-test OOKS and OOAS evaluations.


Fifty-six students completed the OEND program. When compared to baseline, the mean OOKS score increased significantly (p<0.001) in each knowledge domain, including risk factors for overdose, signs of overdose, actions to care for an overdose victim, and general knowledge about naloxone. Similarly, the mean OOAS score increased significantly (p<0.001) from pre- to post-test, with the largest mean increases in the categories of self-perceived confidence in counseling and dispensing naloxone and counseling on how to rouse and stimulate someone experiencing an overdose. As a longitudinal measure of knowledge retention, the pharmacy faculty also included naloxone counseling and overdose care in the students' final examination that semester. The students were required to counsel a standardized patient on a randomly selected naloxone dosage form and, in another station, care for a standardized patient who was experiencing an apparent overdose. The mean total score was very high on both of these stations, and nearly all students achieved at or above the passing score. While this was not a direct re-administration of the standardized Opioid Overdose Knowledge Scale, it served as a good proxy for retained knowledge.
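The article's exact statistical test is not restated in this summary; a paired pre/post comparison such as the Wilcoxon signed-rank test is one common choice for questionnaire scores like the OOKS and OOAS. The sketch below uses made-up scores purely for illustration.

```python
# A minimal paired pre/post comparison with made-up OOKS-style scores.
# These are not the study's data, and the original article's exact test
# is not restated in this summary.
import numpy as np
from scipy.stats import wilcoxon

pre = np.array([30, 28, 33, 25, 31, 29, 27, 32])    # hypothetical pre-training scores
post = np.array([38, 36, 40, 34, 39, 35, 33, 41])   # hypothetical post-training scores

stat, p = wilcoxon(post, pre)                        # paired, non-parametric comparison
print(f"median change = {np.median(post - pre):.1f}, p = {p:.4f}")
```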

This study evaluated the effectiveness of a well-designed instructional program and used standardized questionnaires (the OOKS and OOAS) to assess learning. The immediate results following the completion of the program showed significant increases in pharmacy student knowledge and attitudes related to managing an opioid overdose and dispensing naloxone.  While retention of this material was very strong, students were informed that these topics would be tested during the final examination, so it is possible that students did not retain this information so much as relearned it for the exam. This program was implemented with one student cohort at one pharmacy school, so additional studies will be needed to determine the generalizability of these findings to other colleges/schools of pharmacy. 

Similar OEND programs have been implemented and evaluated, but none of the reports are as robust as the study by Kwon and colleagues. Monteiro et al. evaluated an interprofessional workshop focused on increasing students' knowledge, skills, and attitudes related to opioid misuse.  The interprofessional teams included health professional students from medicine, nursing, pharmacy, physical therapy, and social work. While this study only assessed pre- and post-intervention OOKS scores among the medical students, the results demonstrated significant improvements in knowledge.3 In another study, Schartel et al. evaluated the success of a program for P1 pharmacy students in a lab course.  However, they only taught and evaluated the use of one naloxone dosage form and, while knowledge improved significantly, they did not assess changes in student attitudes.4

Pharmacists are among the most accessible health professionals, and many patients ask a pharmacist about a health issue before seeking care from a physician. Implementing training programs in pharmacy curricula can help bridge gaps in access and increase community awareness about managing opioid overdoses.  Training pharmacists to dispense naloxone products and teach patients how to use them can help slow the escalating number of deaths in the US due to the opioid crisis. Interactive and well-designed programs like the one implemented by Kwon and colleagues are an effective way to improve both knowledge of and attitudes toward managing opioid overdoses.

References

  1. Understanding the Epidemic [Internet]. Centers for Disease Control and Prevention; 2020. Accessed December 6, 2020. https://www.cdc.gov/drugoverdose/epidemic/index.html
  2. Kwon M, Moody AE, Thigpen J, Gauld A. Implementation of an Opioid Overdose and Naloxone Distribution Training in a Pharmacist Laboratory Course. Am J Pharm Educ 2020; 84 (2): Article 7179.
  3. Monteiro K, Dumenco L, Collins S, et al. An interprofessional education workshop to develop health professional student opioid misuse knowledge, attitudes, and skills. J Am Pharm Assoc 2017; 57 (2): S113–S117.
  4. Schartel A, Lardieri A, Mattingly A, Feemster AA. Implementation and assessment of a naloxone-training program for first-year student pharmacists. Curr Pharm Teach Learn. 2018; 10 (6): 717-722.

December 6, 2020

Supportive Counseling and Its Impact on Expecting Mothers

by Layla Langdon, Doctor of Pharmacy Student, University of Mississippi School of Pharmacy

Summary and Analysis of: Esfandiari M, Faramarzi M, Nasiri-Amiri F, et al. Effect of supportive counseling on pregnancy-specific stress, general stress, and prenatal health behaviors: A multicenter randomized controlled trial [Internet]. Patient Education and Counseling 2020;103 (11): 2297-2304 

This article caught my attention because we have been studying women's health and the impact of the mother's behaviors and stress on a developing baby. Also, as a student pharmacist, I am very interested in pursuing a career in pediatrics, and a child's health really starts in the womb. This study attempted to demonstrate the impact of an educational support program on a woman's pregnancy-related and general stress as well as her prenatal health behaviors. Pregnancy-related stress is often the result of worrying about maternal and fetal health, parental responsibility, physical symptoms, labor pain, childbirth, and the cost of raising a child.1 All of these factors weigh on a woman, take a toll on her health, and can lead to poor pregnancy outcomes. Using supportive counseling to supplement usual antenatal care, this study aimed to reduce maternal stress and promote healthy behaviors that would benefit the mother and the developing child.


To test this theory, pregnant women between 6 and 32 weeks of gestation with no comorbidities were recruited to participate in this randomized, controlled study. The participating women were divided into two groups of 40 participants each. Women in both groups completed four questionnaires at baseline: the Revised Prenatal Distress Questionnaire (NuPDQ), the Spielberger State-Anxiety Inventory (STAI-Y), the Prenatal Health Behaviors Scale (PHBS), and the Perceived Stress Scale (PSS-14).  In addition, all of the women provided a saliva sample to measure the salivary cortisol concentration. Each participant was advised to fast and avoid alcohol for at least 24 hours before the salivary sample was taken. Changes in the NuPDQ, STAI-Y, and PHBS were the primary outcomes for this study, and the PSS-14 and the salivary cortisol assay were considered secondary outcomes.

The control group received only usual antenatal care based on Iranian national guidelines. Each participant in this group received midwifery examinations, assessments of the mother’s and fetus’s health, and education about personal hygiene, sexual activity, signs of a high-risk pregnancy, common pregnancy complaints, nutritional and medicinal supplements, and use of fertility health services. In addition to usual antenatal care, the intervention group received weekly supportive counseling conducted by a female expert psychologist. These supportive counseling sessions consisted of face-to-face instruction with 12 to 14 women in each group. This gave the women the opportunity to interact with one another.  During these sessions, they discussed their stress and anxiety.  The instructor also designed group work and guided exercises to address unhealthy behaviors. The program targeted pregnancy-related worries such as health problems and costs, parental responsibility, physical symptoms, infantile health, parenting, labor pain, and childbirth phobia. Six weeks after completing the educational program, all participants in both groups again completed the four questionnaires and provided a salivary sample to measure their cortisol.

The results revealed significant improvements in the mean NuPDQ, STAI-Y, PHBS, and PSS-14 scores in the intervention group, including in the subscales of these instruments, when compared to the control group. Specifically, there were large effect-size improvements in the medical and financial problems, infant health, physical symptoms, and labor and delivery subscales of the NuPDQ and in the four subscales of the PHBS (see Table 1). The salivary cortisol levels declined in both the intervention and control groups, but there was no significant difference between groups in the mean change.

Table 1. Mean Pre (T0) and Post (T1) Scores and Differences for Selected Outcomes Following an Educational Support Program for Pregnant Women

                                          Intervention                 Control
                                          T0      T1      Change       T0      T1      Change
Primary Outcomes
  NuPDQ                                   11.85   5.6     -6.97        9.42    11.32   2.62
  STAI-Y                                  44.4    35.8    -7.2         40.65   41.82   0.52
  PHBS
    Harmful health behavior               4.17    2.42    -1.72        4.37    4.82    0.42
    Health-promoting behavior             20.2    23.67   3.53         20.45   20      -0.51
    Harmful physical activity             5.52    3.6     -1.91        5.57    5.62    0.03
    Health-promoting physical activity    3.97    7.07    2.88         3.1     2.95    0.06
Secondary Outcomes
  PSS-14                                  23.45   16.82   -7.20        21.82   21.77   -0.53
  Salivary cortisol                       23.32   20.25   -3.32        17.57   14.98   -2.61

One of the strengths of this study was the use of four different questionnaires to evaluate the effect of supportive counseling on pregnancy-specific stress, general stress, and health behaviors. Another strength is that the supportive counseling was provided in small groups of only 12 to 14 participants, which allowed each participant to develop relationships with other pregnant women who may be experiencing the same struggles. This study also aimed at improving each participant's self-esteem and maximizing their adaptive skills. These are important objectives because pregnant women often feel incapable of birthing and raising a child. One weakness of this study is that the questionnaires were all based on self-evaluation. The authors also do not discuss the sustainability of the program and do not report outcomes after delivery, so the health outcomes of the babies are unknown. The findings of this study probably should not be generalized to complicated pregnancies.  And while salivary cortisol was included as a measurement of stress, it does not correlate well with psychological stress.

In future studies, it would be helpful for each participant to complete a session with a mental health professional. This would allow a more personalized assessment and help the participants identify and analyze the specific stressors they are experiencing. The addition of this session could also serve as an external evaluation; although it is a subjective measurement similar to the self-evaluations, an assessment performed by a mental health professional would be consistent for all participants. Future studies should also gather data throughout the entire pregnancy, including delivery, and for three months postpartum.  This is important to truly determine the long-term effect of supportive counseling on pregnancy-related stress and outcomes.

A similar study analyzed the effect of a supportive intervention in pregnant women who screened positive for depression on the Edinburgh Postnatal Depression Scale (EPDS >12).3 In this study, the intervention group received the same number of counseling sessions, six visits, but over eight weeks. That study also concluded that supportive counseling in addition to usual prenatal care improved outcomes; specifically, the participants reported improvements in depressive symptoms, depression severity, and quality of life. Another study found that supportive counseling improved patients' satisfaction during delivery.4 Although these studies had minor differences in the number of counseling sessions provided, the program duration, and the number of participants, they all concluded that supportive counseling subjectively improved pregnancy-related stress.

While the supportive counseling program appears to have been effective, it would have been helpful if the intervention were described in more detail. This would allow other health professionals, such as pharmacists and nurses, to implement a similar program. Nevertheless, this study is important because it demonstrated the benefits of adding supportive counseling to usual prenatal care. Supportive counseling may also improve the health of the fetus and allow for a smoother birthing experience. Overall, I believe that providing supportive counseling to pregnant women should be the standard of care during all pregnancies.

 

References

  1. Esfandiari M, Faramarzi M, Nasiri-Amiri F, et al. Effect of supportive counseling on pregnancy-specific stress, general stress, and prenatal health behaviors: A multicenter randomized controlled trial [Internet]. Patient Education and Counseling 2020;103 (11): 2297-2304.
  2. Nast I, Bolten M, Meinlschmidt G, Hellhammer DH. How to Measure Prenatal Stress? A Systematic Review of Psychometric Instruments to Assess Psychosocial Stress during Pregnancy. Paediatric and Perinatal Epidemiology. 2013;27(4):313–22.
  3. Neighmond P. To Prevent Pregnancy-Related Depression, At-Risk Women Advised To Get Counseling [Internet]. National Public Radio; 2019. Accessed October 19, 2020.
  4. Segre LS, Brock RL, O'Hara MW. Depression treatment for impoverished mothers by point-of-care providers: A randomized controlled trial. J Consult Clin Psychol 2015; 83 (2): 314-24.
  5. Pasha H, Basirat Z, Hajahmadi M, Bakhtiari A, Faramarzi M, Salmalian H. Maternal expectations and experiences of labor analgesia with nitrous oxide. Iranian Red Crescent Med J 2012; 14 (12): 792-7.