The Statistical Errors in the Acupuncture for C-Section Trial

We need non-pharmacologic pain-control options after C-section, but this acupuncture study has some concerning red flags.

As surgeries go, Cesarean sections are pretty painful.

And treatment of post-operative C-section pain is tricky; your pharmacopeia is limited as the new mother may want to be relatively alert to engage in childcare, and you need to worry about the pharmacokinetics of medications getting into breast milk and the new baby.

Non-drug methods of post-C-section pain control are thus sorely needed, which prompted researchers from the Medical University of Greifswald in Germany to conduct this study, a randomized trial that suggests acupuncture is effective in treating post-operative pain among women who just had a C-section.

Full disclosure: I’ve been in touch with the lead author of this study as I have concerns about some of the statistics. I’ll get to that in a moment, but first let me give you the run-down as reported.

Acupuncture, as it is traditionally understood, is the process of inserting needles in specific parts of the body in order to manipulate the flow of energy fields and promote health. But this study eschews the mystical aspects of the practice. This paper has no mention of Qi, meridians, or Traditional Chinese Medicine energy points.

No, this paper hangs its biologic plausibility hat on the idea that stimulation of the vagus nerve can mediate pain relief through central processes.

It’s not crazy. There is some evidence that stimulation of receptors in one part of the body might attenuate pain in other parts of the body — this might be part of the mechanism of how capsaicin, when applied topically, can relieve pain.

So, by framing acupuncture in a way that is more consistent with our mechanistic understanding of the universe, can we give the technique a fair shake? Here’s the setup.

One hundred twenty women — mean age 31, all white — who were about to undergo an elective C-section were randomized to acupuncture or placebo acupuncture.

OK, this is critical. Randomizing people to acupuncture versus usual care is a real problem, since it is obvious to them that they are getting acupuncture, which can have a strong placebo effect given the mystical cultural associations the practice has.

Most good randomized trials of acupuncture compare “real” acupuncture to “sham” acupuncture. In these designs, needles are placed in the body regardless — but in the real acupuncture arm they go in the traditional energy spots, and in the sham arm they go elsewhere. Meta-analyses of acupuncture trials that include a sham of this type tend to conclude the same thing: real acupuncture is better than usual care, but sham acupuncture is just as good. In other words, it’s not where you stick the needles; it’s the sticking of needles, or the whole experience surrounding the sticking of needles.

But this trial didn’t take that approach. Rather, in both arms, needles were put in the same places. But in the placebo group, the needles didn’t actually penetrate the skin.

The procedure involved putting 4 tiny needles in both ears.

Women in the placebo group got a simulated prick from a sharp probe and a similar bandage but nothing was left in the skin.

Women also got needles or placebo needles placed at 6 points on the body.

The primary outcome, as described in the paper, is “pain intensity on movement” on post-operative day 1. It is a decent outcome, as getting people moving post-op is so critical, though there doesn’t appear to be a standard “movement” used to elicit that pain. I will note that in the trial registry entry, the primary outcome is described as “Pain intensity as measured by numeric rating scale 1–10,” with no mention of movement.

And when you look at the pain scores, that outcome — pain with movement — seems quite well-chosen. There was no difference in maximum pain level, or minimum pain level between the groups. Nor was there any difference in pain on discharge or satisfaction with pain control.

There was no difference in the percent of women who noted that pain disturbed their sleep, or their mood, or their movement.

Secondary outcomes looked at drug dosing as a proxy for breakthrough pain, and there was no difference in acetaminophen or diclofenac dosing.

OK, so from a pain standpoint we have one positive outcome among many, but it happened to be the primary outcome (albeit not clearly prespecified). Is a mean intensity of 4.7 in the real acupuncture group versus 6.0 in the placebo acupuncture group clinically meaningful? Ideally, people with well-controlled pain are going to give you scores below 4.

Of course, pain is highly subjective, and thus highly subject to placebo effects.

What is somewhat less subjective is mobilization: getting up and out of bed, into a chair, or standing. Early mobilization is a goal for many women after C-section, and for most surgeons too. And here there really does seem to be a difference between the real and placebo acupuncture groups. (The paper also includes a third, non-contemporaneous usual-care group.)

So up until this point, I felt like this study showed some modest effects, and the lack of any magical thinking was refreshing.

But then I came across something that, frankly, got me worried. And it surprised me that the editors over at JAMA Network Open didn’t catch it.

A critical component of any acupuncture vs. placebo acupuncture study is an assessment of how well the placebo worked — an assessment of blinding. If individuals knew which arm of the study they were in, or even suspected it, it could dramatically alter the results — both through direct placebo effects and even a desire to prove that the therapy actually works.

Fortunately, the authors asked the women which group they thought they were in. Twenty-five of 58 women in the acupuncture group thought they were in the acupuncture group; just 11 of 55 women in the placebo group thought the same (some women apparently didn’t answer).

Now, the authors describe this difference — 43% awareness versus 20% — as not statistically significant, with a p-value of 0.08 according to Fisher’s exact test.

Something needled me about that p-value though, so I checked their math.

Actually, this difference is quite statistically significant. The p-value is 0.009 using Fisher’s exact test, not 0.08. You can check it yourself — there are multiple online calculators that will do this for you.
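For the curious, this recalculation is easy to reproduce. The sketch below uses only Python’s standard library to compute a two-sided Fisher exact p-value by the usual probability-ordering convention (sum the probabilities of all tables no more likely than the observed one, with margins fixed); the cell counts are the ones quoted above, and the variable names are my own.

```python
from math import comb

# Blinding assessment as a 2x2 table ("yes" = believed they
# received real acupuncture), counts as quoted in this commentary:
#                       yes   no
# acupuncture group:     25   33   (n = 58)
# placebo group:         11   44   (n = 55)
a, b = 25, 33
c, d = 11, 44

row1, row2 = a + b, c + d   # group sizes: 58 and 55
col1 = a + c                # total "yes" answers: 36
n = row1 + row2             # 113 respondents

def table_prob(x):
    """Hypergeometric probability of a table with x 'yes' answers
    in the acupuncture group, all margins held fixed."""
    return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

p_obs = table_prob(a)

# Two-sided Fisher exact p-value: sum the probabilities of every
# possible table that is no more probable than the observed one.
lo, hi = max(0, col1 - row2), min(col1, row1)
p_value = sum(p for p in (table_prob(x) for x in range(lo, hi + 1))
              if p <= p_obs * (1 + 1e-9))

print(f"two-sided Fisher exact p = {p_value:.4f}")
```

Running this gives a p-value of roughly 0.009 — consistent with the online calculators, and nowhere near the 0.08 reported in the paper.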

So either the p-value they report is wrong, or the number of women who knew they got real acupuncture is wrong. Something here doesn’t add up, and when you are conducting a trial in a space that has been so inundated with shady science over the years, you really have to check your work.

As I mentioned, I reached out to the lead author Dr. Taras Usichenko to ask about the discrepancy. He said he wasn’t able to reach his statistician before my deadline, but even if there was significantly more awareness in the true acupuncture group, it wouldn’t change his opinion of the results. He wrote:

You may be right in thinking that the distribution indicated a degree of unblinding; however, this assessment was only performed at the end of the study period when participants had benefited from the intervention.

He goes on to write that this may reflect that women realized they were getting real acupuncture because acupuncture was effective, as opposed to the idea that it was observed to be effective because women knew they were getting real acupuncture.

Of course, if that’s the case, why assess for adequacy of blinding anyway?

Dr. Usichenko also said he will ask JAMA Network Open to publish a correction if his team determines there is an error in that table.

I think the best course here would be for Dr. Usichenko’s team to release a deidentified analytic dataset so independent statisticians could attempt to replicate the results — transparency is always a good thing.

Ok, so what do we do with this paper? We’ve got an important scientific question — how to treat pain post C-section. We have an intervention that might be modestly effective, especially in terms of mobilization postoperatively. But we also have some red flags like the weird specification of the primary outcome, and what I suspect is a pretty significant failure of blinding.

In other words, this study supports the idea that acupuncture may just be an elaborate placebo.

Dr. Usichenko suggested as much in his email to me writing:

So even if the mechanism of action is entirely expectation … it is still of potential interest considering the improved outcomes.

If it is safe and helps, why not use it — even if we’re just exploiting the placebo effect? I mean, if a given patient asks for it, I suppose it’s fine, but remember that one of the ways science benefits humankind is by advancing our understanding. If we close our eyes and pretend acupuncture works in a way that it does not, future studies will be following the wrong scientific path — testing hypotheses that are doomed to failure. If it’s a placebo, fine — let’s figure out exactly how placebos work and exploit that mechanism for pain control.

I look forward to further updates on this study as the authors investigate the statistical anomalies.

A version of this commentary first appeared on




Medicine, science, statistics. Associate Professor of Medicine at Yale University. New book “How Medicine Works and When it Doesn’t” for pre-order now.

F. Perry Wilson, MD MSCE
