And so the creep began. Six years later, the National Academy of Sciences issued its own report, including illustrative diagrams, instructing medical personnel in the proper performance of closed-chest cardiac massage and CPR. Soon thereafter, all cardiac arrests in U.S. hospitals were treated with these methods. Resurrecting the dead became medicine’s obsession—and not just inside hospitals. Many U.S. communities funded “mobile intensive care units” that enabled ambulance personnel to deliver CPR outside the hospital, too. In other words, CPR became—and remains today—the “default” treatment for every person who dies. Unless you explicitly forbid it, you will leave this world the same way: Doctors or ambulance personnel will pump on your chest, put a breathing tube down your throat, squeeze oxygen into your lungs, jab you with needles, and electrocute your heart.

Anyone who has watched these things done to a frail, demented, or terminally ill 90-year-old person understands just how crazy, and creepy, this can be. How did the CPR technique pioneered by the Hopkins doctors (which continues to save many lives today) become an accepted final rite of passage for everyone?

Three things doctors and researchers didn’t understand in 1960 contributed to CPR creep. First, the Hopkins researchers, in their initial study, treated a very narrow spectrum of patients; most were young, healthy people (including several children) whose hearts had stopped during elective surgery, victims of anesthesia mishaps. Their impressive success rate (70 percent) was much higher than the success rate for patients who suffer cardiac arrest in hospital today (5 to 15 percent), most of whom are elderly people with advanced heart disease and other serious conditions. (Recent large studies involving only elderly patients have documented CPR survival rates as low as zero and as high as 18 percent, with up to one-quarter of all survivors suffering permanent brain damage.) We now know that many new treatments, studied initially in a narrow spectrum of patients, aren’t nearly as successful when used in a broader patient population. Ignorance of this “spectrum bias” potentiates creep.

Second, CPR creep reflects our failure to understand the difference between efficacy and effectiveness. The efficacy of a medical treatment refers to whether it can achieve its desired effect when studied under the ideal conditions of a research study. In contrast, the effectiveness of a medical treatment measures how well it performs in the “real world,” where conditions are far from ideal. For example, the first study of out-of-hospital CPR in 1967 found that 50 percent of cardiac arrest victims in Belfast were resuscitated successfully. But when doctors in New York tried to replicate these spectacular results on the streets of lower Manhattan, their “mobile ICU” ambulances were able to save only 6 percent of cardiac arrest victims. (Recent studies have shown that survival rates after out-of-hospital CPR range from 2 percent in urban Chicago to almost 20 percent in suburban Seattle.) Medical innovations that prove efficacious in one research setting often are ineffective elsewhere. Confusing efficacy with effectiveness promotes creep.