Without evidence of benefit, an intervention should not be presumed to be beneficial or safe.

- Rogue Medic

Misleading Research

In my last post, RSI Problems – What Oversight?, I quoted from an article by Danny Robbins, High-risk EMS procedure gets a low level of oversight.[1] In that article is one quote that really requires a post of its own.

One physician willing to speak out on the evils of RSI – though not on the evils of bad medical oversight – is Dr. Henry Wang.

“My gut feeling is that, for every one of these cases, there’s probably a handful of others you never hear about,” said Henry Wang, an assistant professor of emergency medicine at the University of Pittsburgh who has closely examined intubation by EMS personnel.(article)

Unfortunately, Dr. Wang does not evaluate quality – only quantity. He and his accomplice, Dr. Donald Yealy, have studied the average number of intubations performed by medics in Pennsylvania. They found that the number was 1 per medic per year, on average.[2]
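
To make the limits of that number concrete, here is a minimal Python sketch with made-up figures (none of these counts come from the study): a heavily skewed roster and a perfectly uniform one can both produce an average of one tube per medic per year.

from statistics import mean, median

# Hypothetical yearly intubation counts for two 100-medic rosters (made-up numbers,
# not data from the study). Both average exactly one tube per medic per year.
skewed = [0] * 90 + [10] * 10   # most medics intubate no one; a few intubate often
uniform = [1] * 100             # every medic intubates exactly once

print(mean(skewed), median(skewed))    # mean 1, median 0 - the "typical" medic never intubated
print(mean(uniform), median(uniform))  # mean 1, median 1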

A reasonable researcher might look at how different medical directors deal with the low average number. How do you maintain a skill that is used so infrequently? It apparently has not occurred to Dr. Wang that one might train successfully for low-frequency procedures. Maybe he did not have the right hunch?

Another study from Dr. Wang and Dr. Yealy was How many attempts are required to accomplish out-of-hospital endotracheal intubation?[3]

Remember the study that looked at epinephrine use in cardiac arrest and used the government’s death index to assess their patients?[4] Dr. Wang, again. This time without Dr. Yealy.

Dr. Wang and Dr. Yealy sent a letter to the journal Academic Emergency Medicine: Human patients or simulators for teaching endotracheal intubation: whom are we fooling?[5] In it, they criticize a study of the use of simulators for intubation training instead of OR (Operating Room) practice. What were their complaints? That the real world of EMS presents a variety of intubation settings and that the simulator is not real. Well, the experience of intubating in the OR is also not like the real world of EMS. Another criticism of the use of the simulator is that OR training is the traditional “proven” method of intubation training. Of course, they did not provide any evidence that it has been proven superior to any other method of training. Science? Hardly.

If you are looking for a study of the effect of lax medical supervision, which is common in their home state of Pennsylvania, you will not find them evaluating that. They study things that can be counted, so that the numbers can be put in a computer, and out pops an answer. It must be right. Anything that cannot be counted doesn’t matter. Quality cannot be counted.

It is as if Lord Kelvin[6] were working in EMS today.

Almost 30 years ago, Alvin Feinstein coined the phrase “the curse of Kelvin” to refer to the unthinking and inappropriate worship of quantifiable information in medicine. Lord Kelvin (who was addressing physicists, not physicians) had been quoted as saying, in effect, that if your knowledge could not be expressed in numbers, then it was of a meager and unsatisfactory kind. Health care, because of its desire to be “scientific,” has not only been stricken by the curse of Kelvin, but has positively embraced it; this despite the fact that many prominent scientists (including some of Kelvin’s contemporaries, eg, Darwin or Virchow) succeeded while completely ignoring his advice. Thus, we see a proliferation of scales and measurement instruments aimed at quantifying the hitherto unquantifiable – for example, patient satisfaction.[7]

Perhaps I am being a bit unfair to Dr. Wang and to Dr. Yealy. They are the dominant EMS researchers in Pennsylvania. Their research is revered in Pennsylvania. There must be some value in these quantitative studies. Some value in nasty letters criticizing important EMS research for not being traditional enough.

Well, how do you determine the value of research?

The important thing in research is the predictive value of the results.

Predictive value?

Unfortunately for Dr. Wang and Dr. Yealy, the research that they have spent the most time on – the research that is viewed as sacrosanct – has no predictive value. If you know that a medic averages only one tube per year (mean average) in Pennsylvania, that tells you nothing about an individual medic’s ability to safely and successfully manage an airway.

Explain predictive value.

You study something for a reason.

Yeah, you want to learn about it.

What is more worth knowing about something than how it will behave in specific circumstances?

OK, but how does that work with intubation?

If you want to be able to tell if a medic will do a good job managing an airway –

Any medical director should want to know that.

Well, you want some research that helps you to figure this out.

So this helps medical directors?

Not just medical directors, but anyone looking at research.

Give me an example.

If you count up the number of chest tubes inserted by emergency physicians in a state and divide by the number of emergency physicians in the state, what does that tell you that would help you predict the skill of any one physician on the next chest tube placement?

Not much.

You will find places where medics have very low intubation success rates (maybe even some unrecognized esophageal intubations), and other areas where it is rare for a tube to be missed and where an unrecognized esophageal intubation is something they only read about when it happens elsewhere.

Well, that is covered by the average. That’s why they call it an average.

Yes, but how will that help you identify the places where the care is good?

But this research did provide some useful information.

Nothing that a high school student couldn’t have done with access to the same records.

So why do people pay attention to this?

One of my medical directors said that they are very persuasive in person.

Isn’t the same thing true for all salesmen?

Yes, and doctors are not immune to this kind of influence.

What can we do to avoid being misled?

Look at the way the research was done: did the methods bias the results toward some preconceived notion? Did the study ask any important questions that would help to improve patient care, to recognize any problems that are correctable, to identify better treatments, . . . ?

Well, doesn’t this show that paramedics do not get enough tubes?

Maybe, but what does the average tell us about the intubation skill of medics at any one service?

Not much.

So, what was the point?

Scare a lot of people with a number that is presented as unacceptable.

Why is it unacceptable?

If you don’t intubate enough, you won’t be good at intubation?

Yes, this is probably true, but did they look at what the medics do to maintain skill levels where intubation is infrequent?

No. I guess they just assumed that nothing is done.

And that is one of the big problems with medical oversight. It is often provided by someone who does nothing to make sure the medics are skilled to begin with and then does nothing to maintain any skill the medics might have.

Don’t these medical directors care about their patients?

Do these medical directors even view the patients harmed by their medics as their patients?

I guess not.

How is it that there are services where medics intubate better than the ED physicians and other services where the medics seem to be blindfolded and playing pin the tube on the trachea?

Maybe they should have been evaluating different methods of assuring quality, instead of just counting tubes like Count von Count from Sesame Street.

Exactly. They criticize researchers who do important work in looking at alternative methods of obtaining airway practice.

But that is important.

They waste their time averaging every medic in the state, when any good medical director will already know which medics need practice and which do not.

Good point. I suppose this does not really answer any questions on intubation.

No, it does not, but we will cover predictive value in more depth, so that you can see how far out in left field they really are.

In another post?

At least one post.
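
Until then, here is a minimal Python sketch of the predictive value problem, using made-up service names and figures (not data from any study): identical volume averages, very different performance, so the average cannot tell you which service will manage the next airway well.

# Hypothetical figures (made up, not study data): three services share the same
# average number of tubes per medic, yet their first-pass success rates differ widely.
services = {
    "Service A": {"tubes_per_medic": 1.0, "success_rate": 0.96},
    "Service B": {"tubes_per_medic": 1.0, "success_rate": 0.58},
    "Service C": {"tubes_per_medic": 1.0, "success_rate": 0.91},
}

# Knowing only the volume average, every service looks identical, so that number
# cannot predict how any one medic or service will manage the next airway.
for name, stats in services.items():
    print(name, "tubes/medic:", stats["tubes_per_medic"],
          "first-pass success:", stats["success_rate"])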

Other posts about this:

RSI Problems – What Oversight?

More RSI Oversight

Intubation Confirmation

More Intubation Confirmation

RSI, Intubation, Medical Direction, and Lawyers.

RSI, Risk Management, and Rocket Science

Footnotes:

^ 1 RSI procedure gets low level of oversight in Texas
The Star-Telegram article is no longer maintained at their site, but EMS1.com has what I believe is the full article on their site. This was published in various abbreviated formats by various news organizations. The abbreviated articles usually were attributed to AP or some other news organization, rather than to Danny Robbins.
High-risk EMS procedure gets a low level of oversight at JEMS.com

Now apparently only available at Free Republic.

^ 2 Wang HE, Kupas DF, Hostler D, Cooney R, Yealy DM, Lave JR.
Procedural experience with out-of-hospital endotracheal intubation.
Crit Care Med. 2005 Aug;33(8):1718-21.
PMID: 16096447 [PubMed – indexed for MEDLINE]

^ 3 Wang HE, Yealy DM.
How many attempts are required to accomplish out-of-hospital endotracheal intubation?
Acad Emerg Med. 2006 Apr;13(4):372-7. Epub 2006 Mar 10.
PMID: 16531595 [PubMed – indexed for MEDLINE]

^ 4 Wang HE, Min A, Hostler D, Chang CC, Callaway CW.
Differential effects of out-of-hospital interventions on short- and long-term survival after cardiopulmonary arrest.
Resuscitation. 2005 Oct;67(1):69-74.
PMID: 16146669 [PubMed – indexed for MEDLINE]

^ 5 Wang HE, Yealy DM.
Human patients or simulators for teaching endotracheal intubation: whom are we fooling?
Acad Emerg Med. 2006 Feb;13(2):232; author reply 232-3. No abstract available.
PMID: 16461753 [PubMed – indexed for MEDLINE]

^ 6 William Thomson, 1st Baron Kelvin (Lord Kelvin)
Wikipedia article

^ 7 Wears RL.
Patient satisfaction and the curse of Kelvin.
Ann Emerg Med. 2005 Jul;46(1):11-2. No abstract available.
PMID: 15988418 [PubMed – indexed for MEDLINE]


Comments

  1. I find the 1 per annum tube statistic to be suspect. Perhaps this is due to a large number of volunteer medics who do not work a significant number of shifts/year. Remember: “Statistics are like bikinis. What they reveal is suggestive, but what they conceal is vital.” – Aaron Levenstein

  2. Oh, and I almost forgot my favorite quote: 98.3% of all statistics are made up.

  3. Actually, the doctors are pretty good at counting and feeding the numbers into the computer, so the statistic should be accurate. Then again, maybe their methodology is flawed. They found that the highest number of tubes for any one medic was 23 (only 1 medic), with only one medic at each of the next few numbers. This is something I found surprising. Before moving to PA, I was tubing more patients than that in a year, working only about 60 hours a week. This is supposed to cover every active medic in PA.

  4. Accurate doesn’t mean the same as meaningful. My point is that they counted medics with at least 1 patient contact for the year in their study. Not exactly discriminating, is it? This is not my definition of “active”. My point is that the volunteer who does a handful of calls a year will skew the numbers.

  5. “Accurate doesn’t mean the same as meaningful.” True. And that works in more than one way. This is not a meaningful representation of the experience level of the “average medic,” since it is not looking at the average medic, but adding up the documented tubes and averaging that number across all medics. Knowing what number the quantifiers arrived at tells you nothing about the ability of any medic to intubate. The formula X = A times B times C (where X = the likelihood of a successful intubation and A, B, and C are things that are documented that affect intubation success) does not exist. Well, it might exist in a video game about emergency care, because they have to come up with some formula for the game to look as if it is doing something interesting. This research falls into the same category.
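
A minimal Python sketch of the skew described in the comments above, again using made-up numbers rather than the study’s data: once every medic with at least one patient contact is counted, the mean is pulled toward the low-volume end and stops describing anyone in particular.

from statistics import mean, median

# Made-up yearly intubation counts (not study data): a few busy full-time medics
# plus a larger pool counted only because they had at least one patient contact.
full_time = [22, 20, 18, 15, 12]
low_volume = [0] * 40 + [1] * 10

everyone = full_time + low_volume
print("mean:", round(mean(everyone), 2))   # about 1.76 - dragged down by the low-volume pool
print("median:", median(everyone))         # 0 - the middle medic in this pool got no tubes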