Patient Safety Research: Creating Crisis

By ACSH Staff — Jan 10, 2005
The November 11, 2004 issue of New England Journal of Medicine (NEJM) celebrated the fifth anniversary of the release by the Institute of Medicine (IOM) of the monograph To Err is Human.(1) The NEJM editorial by Drew Altman, Ph.D. (Kaiser Family Foundation), Carolyn Clancy, M.D. (U.S. Agency for Healthcare Research and Quality), and Robert Blendon, Sc.D. (Harvard School of Public Health) repeated the old assertion of a patient safety crisis in which 44,000 to 98,000 patients died in American hospitals each year due to preventable medical errors.(2)

Altman and colleagues applauded massive public and private spending in recent years to improve patient safety. But they also asked the rhetorical question "How can we increase confidence in health care as we continue to address safety and quality?" They reported that more than half of public respondents said they were dissatisfied with the quality of care in America, up from 44% dissatisfaction the year that the crusade for safety began. They expressed frustration about how best to collect and analyze statistics on safety.

The authors confidently asserted that "if we do not expand and accelerate current efforts, we can expect future surveys to reveal a persistent lack of confidence in the safety and quality of the nation's health care system." What if the safety surveys and the reporting of them are exaggerating the risks to patients and are themselves causing heightened fear of hospital errors?

Two years after To Err Is Human, the IOM released Crossing the Quality Chasm: A New Health System for the 21st Century (March 1, 2001) as a follow-up. The IOM offered a plan for "revamping" American healthcare, with thirteen solutions, including such vague ideas as "continuous healing relationships," "making safety a system property," and "identifying medical priority conditions," along with creating "a national quality forum" to direct, administer, and coordinate quality and safety efforts. The two IOM reports suggested that they were responding to a crisis and that a national patient safety center and a billion-dollar "innovation fund" were necessary first steps. Federal funding followed in 2001, with $100 million made available for safety research.

Flawed Research

The Texas Medical Foundation (TMF), one of the fifty or more Medicare audit organizations assessing healthcare quality across the United States, conducted annual studies in the late 1980s and early 1990s, studies far larger than those used by the IOM. For example, the TMF compiled a three-year series of studies covering 318,000 cases from 400 hospitals, compared to the IOM's 45,000 cases from seventy-eight hospitals in two separate one-year studies in three different states. The TMF data for times and circumstances comparable to those in the IOM studies show no patient safety crisis -- injury-due-to-negligence rates of less than 0.5%, a level of negligence that cannot be avoided in complicated healthcare situations.

The annual national death totals constantly quoted from the IOM and subsequent safety reports are 44,000 to 98,000. But what if the IOM, in its capacity as the de facto think tank arm of the National Academy of Sciences, is using flawed statistics for political ends -- to make the case for greater government intervention in healthcare? The IOM papers and research on patient safety were funded by a federal agency, the Agency for Healthcare Research and Quality (AHRQ), known throughout most of the 1990s as the Agency for Health Care Policy and Research. That organization lost its original job of writing clinical guidelines and was devoid of purpose before the safety movement was energized; it reinvented itself after the 1999 IOM safety report as the guardian of patient safety. The more evidence for a patient safety crisis the safety movement discovers, the stronger the case for AHRQ's existence.

The death numbers pronounced in the IOM monograph of 1999 and repeated since were based on two research projects of a Harvard Medical School group: the first looked at fifty-one New York hospitals in 1984 and was published in NEJM in 1991;(3,4) the second examined hospital care in twenty-eight Utah and Colorado hospitals in 1992 and was published in 1999 and 2000.(5,6,7,8,9) Although neither study asserted 44,000 or 98,000 deaths anywhere in its data, the IOM authors created these numbers, and the media have used them as if they were hard data -- despite the range of hypothetical deaths, 44,000 to 98,000, varying by more than 100%, and despite a serious mistake by the IOM: exaggerating a national projection of 25,000 deaths from the later Utah/Colorado study.

USA Today's November 30, 1999 front-page headline proclaimed "Medical mistakes 8th top killer," and the accompanying article reported: "Medical errors kill more Americans than traffic accidents, breast cancer, or AIDS, Institute of Medicine officials said Monday as they called for a sweeping 'systems' approach to make medicine safer."(10)

As a safety expert, I was outraged to read the prepublication release of the 1999 IOM monograph because I had analyzed the 1991 report of the Harvard study in New York that the IOM was relying on and found it to be deeply flawed. Still, when the IOM report came out in 1999, organized medicine had already quietly agreed to play a supportive role in any government-proposed safety program, fearing the alternative of an even more direct regulatory role for government. But the patient safety crusade hadn't counted on an honest Harvard physician/attorney named Brennan.

Voices of Dissent Are Heard

Troyen Brennan, M.D., J.D. -- a lead Harvard researcher on the two studies that were the backbone of the IOM report and the source of the negligence death numbers that scared so many -- asserted in an essay in NEJM that the research of the Harvard group was weak and was being misused by the IOM. Brennan wrote:

--"I have cautioned against drawing conclusions about the numbers of deaths in these studies."

--"The ability of identifying errors is methodologically suspect."

--"In both studies (New York and Utah/Colorado) we agreed among ourselves about whether events should be classified as preventable...these decisions do not necessarily reflect the views of the average physician, and certainly don't mean that all preventable adverse events were blunders."(11)

Another major segment of patient safety research relied on by the IOM in their 1999 announcement of a crisis was research on adverse drug events (ADEs), meaning undiscovered or uncorrected mistakes in prescribing and administration of medications and fluids. However, that research is frequently weak. It is clear that the ADE research dredges for numbers and exaggerates effects by including "possible" drug events and expected-and-unavoidable drug events.

So say Jerry Avorn, M.D., and David Bates, M.D., of Brigham and Women's Hospital in Boston, writing in the Journal of the American Medical Association (JAMA). Dr. Avorn says in an editorial about a couple of ADE reports: "These two studies push hard at the boundaries of clinical epidemiology and health services research, and a skeptic might wonder whether the envelopes of these disciplines might not have gotten a bit nicked in the process."(12) Dr. Bates, in an editorial commenting on another drug event study, says problems exist in studies of ADEs, such as whether they are properly identified and evaluated and whether ADEs are really avoidable in a practical sense, particularly in severely ill patients.(13) The millions of drug administrations daily in American hospitals present a potential for error, but also an opportunity for research data dredging and manipulation, including the creation of a "crisis."

There Are Only Three Large Patient Safety Studies

Three comprehensive studies of negligence or patient safety in American hospital inpatient care have been conducted:

--The first was a study by Don Harper Mills, M.D., J.D., a pathologist/attorney for the California Medical Association, who, with three associate attorney/physicians, looked at care in California hospitals in 1974.(14)

--The second study examined care in New York hospitals in 1984 and was conducted by a Harvard group that included Dr. Brennan and Lucian Leape, M.D.(15)

--The third study reported on patient care in Utah and Colorado hospitals in 1992; it was the Harvard group's second study, led by Dr. Brennan.(16)

The first study: The methods of the California study are summarized in a piece I did for Texas Medicine.(17) In that study, Mills and his colleagues found adverse outcomes in 4% of cases, negligence in 0.8% of the total, and a net serious-negligence incident rate of 0.2%. At the time no crisis was proclaimed; now we have crisis and hand-wringing, well publicized by the press.

The second study: Review and discussion of the New York hospitals study in the mainstream press announced estimates of 80,000, 100,000, and 180,000 patient deaths across the U.S. and as many as 3 million inpatient injuries. Of the 154 adverse event deaths in the study (0.51% of the total sample), seventy (almost half of the deaths and 0.23% of the total sample) were judged by the authors to be due to negligence. But a death, according to the authors, could be judged negligent even if it came only one day early in a terminally ill patient. Furthermore, it is well known in peer review that bad outcomes bias judgments of negligence: finding fault is more common when a death occurs, even if the criticism is quibbling. This bias is difficult to eliminate and inflates the numbers. Nonetheless, the authors took the negligent injuries and deaths they discerned to be representative of hospital negligence throughout the nation, multiplying by 1,200 to extrapolate the amount of damage being done in the American hospital population as a whole.(18,19) A small number in New York -- exaggerated slightly before being multiplied by 1,200 -- quickly becomes a crisis. Lost in the shuffle are variations in the makeup of the population from state to state -- and the panic-reducing fact that only some 10% of the injuries the New York study identified resulted in any serious or permanent injury. Even if the biased New York numbers were taken at face value, serious annual negligent patient injury would be only 19,800 across the country, not 198,000. Similarly, while the Harvard group estimated some 44,000 to 98,000 negligent deaths per year, the larger Texas Medical Foundation study of 1989 through 1992, discussed below, yields a national projection of fewer than 20,000 deaths due to negligence(20,21,22) -- and that in a nation that sees more than 25 million hospital admissions a year.
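The scaling argument above is easy to make concrete. The sketch below uses only figures from the preceding paragraph (seventy negligent deaths, a 1,200-fold multiplier, a 10% serious-injury share, and the 198,000 headline figure); the 70-times-1,200 projection is an illustration of the extrapolation step, not a calculation the Harvard authors published in this form.

```python
# A minimal sketch of the extrapolation arithmetic described above.
# All figures come from the article's discussion of the Harvard/New York
# study; this illustrates the scaling argument, not a re-analysis.

ny_negligent_deaths = 70     # negligent deaths judged in the New York sample
scaling_factor = 1_200       # multiplier used to project the sample nationally

# Step 1: a small sample count becomes a national figure.
national_projection = ny_negligent_deaths * scaling_factor
print(f"{national_projection:,}")  # 84,000

# Step 2: the counterpoint -- only ~10% of the injuries the New York study
# identified were serious or permanent, so a headline figure of 198,000
# serious injuries shrinks accordingly.
headline_injuries = 198_000
serious_share = 0.10
print(f"{int(headline_injuries * serious_share):,}")  # 19,800
```

The point of the sketch is that any upward bias in the small-sample count is magnified 1,200-fold before it reaches a headline, while the mitigating 10% figure is rarely applied at all.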

The third study: Dr. Brennan conceded in his 2000 NEJM essay that the Harvard study of Utah and Colorado involved ambiguous judgments by him and one of his co-authors, Dr. E.J. Thomas, as to what constituted negligence, and that their judgments did not meet the standard requirements for statistical agreement between evaluation methods.(23,24,25) Twenty-two family practitioners and internists did the physician reviews -- sixteen for Colorado charts and six for Utah charts. They identified 418 adverse events from Colorado and 169 from Utah, a total of 587 (3.9% of all cases), later revised down to 459. All physician-determined adverse events were then reviewed by Drs. Brennan and Thomas to identify "preventable" adverse events. Preventable adverse events numbered 265 (1.9% of the total), and negligence events totaled 169 (1.1% of the total). The authors made no comment in their original report about how they dealt with their own internal disagreements over what constituted negligence, except for a vague reference to "working out" any differences. So much for achieving objective judgments of negligence and preventability. Multiple independent reviews are the usual means of increasing reliability; consensus single reviews, as were done in Utah/Colorado, artificially eliminate any source of variance, disguising doubt about the conclusions. (By contrast, the TMF reviews discussed below were repeated at multiple levels by individuals, committees, and then supervising committees before final judgment.)

These are but a few of the problems with the two major Harvard patient safety studies, yet on this shaky foundation the idea of a "crisis" was built, and quickly taken up by activists, media, and politicians.

As part of my analysis of these patient safety studies, I analyzed the TMF studies of more than 300,000 hospital admissions, finding a significant negligence injury rate of less than 0.25% in a Medicare population far more elderly, frail, and ill than the Harvard study cohorts. Those results, published in the TMF's regular newsletter and available to any researcher who cared to look, are clearly in conflict with the crisis propaganda of the IOM.(20,21)

A reasonable healthcare policy wonk should hesitate to turn the system upside down and destroy public confidence when the IOM research is so weak and other data does not support crisis chatter.

The Political Fallout

In 1992, after the reports of the 1991 Harvard New York study, there was a flurry of activity and media reaction. Rep. Pete Stark announced a crisis in a letter to his congressional colleagues, encouraging immediate intervention, but there was no response. Even at that early stage, Troyen Brennan and Paul Weiler of the Harvard group warned Stark in a letter that the deaths in their study were, in many cases, deaths of critically ill patients who were dying anyway, and that while the authors judged those deaths to be premature because of some mismanagement, they were premature only by a matter of days. In fact, many of the criticisms of the research I have voiced here -- and that Brennan articulated in April 2000 in NEJM -- were included as caveats in the text of the original research articles, including the unreliability of reviewers, the small number of serious injuries, and the tendency to find fault in death cases.

But by 1997-98, under political pressure, the American Medical Association had formed the National Patient Safety Foundation, the Veterans Administration had formed a special commission on patient safety, and in late 1998 the Pew Health Professions Commission, asserting that physician competence was generally impaired, called for additional periodic tests of physicians' skills, separate from any state and specialty activities.(26) Sidney Wolfe, M.D., Ralph Nader's medical mouthpiece, applauded these developments but warned that more needed to be done.

In contrast to Brennan and his allies, Dr. Lucian Leape, a member of the Harvard/New York Hospital Medical Practice study group (described by an American Medical News reporter as a "pioneer" in patient safety), said in 1998, "I think we have a real movement here, not just a fad." Leape also voiced a desire to win public support "without scaring them." Weeks later he apparently changed his mind about scare tactics, though, and began a chant repeated throughout 1999 that the American inpatient safety problem was "equal to three jumbo jets crashing every two days."(27)

Mark Chassin, M.D. (a career apparatchik, co-chair of the IOM roundtable on quality during 1998 and 1999, previously an agency director with the New York Department of Health, later in charge of quality assurance at Mount Sinai Hospital in New York), announced to a seminar in April 1999 that medical care in the United States was "pretty mediocre."(28) Dr. Chassin, Dr. Leape, and other safety experts held a seminar for the press in early 1999 because they were concerned that the public and the media were unaware of the magnitude of the patient safety problem. The journalists responded that it was a complicated issue with little public appeal.(29) Attendees at another patient safety conference in summer 1999 at the Annenberg Center were stunned at the lack of interest in serious presentations and the focus on safety sloganeering and propaganda.(30) That same summer, the journal Public Health Reports ran an interview with Leape and an attorney from the Pew Health Commission group that emphasized the existence of a crisis and related old anecdotes about a member of the press killed by a mistaken Adriamycin dosage and another patient who had the wrong leg amputated, plus a new anecdote about Paul Ellwood, M.D. -- a prominent managed-care theoretician -- receiving poor orthopedic care for a fractured leg.(31)

The September 13, 1999 issue of American Medical News soon reported a rally of consumer advocacy groups described as a "loose-knit coalition of patients' rights and injury prevention groups" who demonstrated in front of the AMA headquarters, demanding more sensitivity to the status of victims of healthcare negligence and improved stature for victim advocacy groups. More than a dozen local and national groups, including Families Advocating Injury Reduction, the Association for Responsible Medicine, the American Society for Action on Pain, and the Coalition for Post-tubal Women, were present, alleging that they represented the more than 300,000 patients harmed every year as a result of healthcare treatment. Soon, patients in wheelchairs were telling their horror stories before Congress. And in a healthcare system as large as ours, there will always be horror stories. Those anecdotes are not solid statistics, but when they are perceived as confirmation of statistical studies (however weak the actual data may be), the idea of a mounting crisis is easily ingrained, with misguided regulation and needless public fear likely to follow.

NOTES

1 Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; prepublication, November 1999; final publication, 2000.

2 Altman DE, Clancy C, Blendon RJ. Improving patient safety -- five years after the IOM report. NEJM 2004;351:2041-43.

3 Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study I. NEJM 1991;324:370-376.

Leape LL, Brennan TA, Laird NM, et al. The nature of adverse events in hospitalized patients: results of the Harvard Medical Practice Study II. NEJM 1991;324:377-384.

4 Localio AR, Lawthers AG, Brennan TA, et al. Relation between malpractice claims and adverse events due to negligence: results of the Harvard Medical Practice Study III. NEJM 1991;325:245-251.

5 Gawande AA, Thomas EJ, Zinner MJ, Brennan TA. The incidence and nature of surgical adverse events in Colorado and Utah in 1992. Surgery. 1999;126:66-75.

6 Thomas EJ, Studdert DM, Newhouse JP, et al. Costs of medical injuries in Utah and Colorado. Inquiry 1999;36:255-264.

7 Thomas EJ, Brennan TA. Incidence and types of preventable adverse events in elderly patients: population based review of medical records. BMJ 2000;320:741-744.

8 Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 2000;38:261-271.

9 Studdert DM, Thomas EJ, Burstin HR, Zbar BI, Orav EJ, Brennan TA. Negligent care and malpractice claiming behavior in Utah and Colorado. Med Care 2000;38:250-260.

10 Davis B, Appleby J. Medical mistakes 8th top killer. USA Today, November 30, 1999:1.

11 Brennan TA. The Institute of Medicine report on medical errors -- could it do harm? NEJM 2000;342:1123-1125.

12 Avorn J. Putting adverse drug events into perspective [editorial]. JAMA 1997;277:341-342.

13 Bates DW. Drugs and adverse drug reactions: how worried should we be? [editorial]. JAMA 1998;279:1216-1217.

14 Mills DH. Medical insurance feasibility study. West J Med 1978;128:360-365.

15 supra notes 3-4.

16 supra notes 5-9.

17 Dunn JD. Patient safety in America: comparison and analysis of national and Texas patient safety research. Texas Medicine (Oct) 2000; 96:66-74.

18 Caplan RA, Posner KL, Cheney FW. Effect of outcome on physician judgments of appropriateness of care. JAMA 1991;265:1957-1960.

19 Leape LL, Lawthers AG, Brennan TA, Johnson WG. Preventing medical injury. QRB 1993;19:144-149.

20 Quantum. Annual report to the provider: April 1, 1989-March 31, 1990. Austin, TX: Texas Medical Foundation; 1990:6, fig 10.

21 Quantum. Annual report to the provider. April 1, 1990-March 31, 1991. Austin, TX: Texas Medical Foundation; 1991:7, fig 10.

22 Quantum. Annual report to the provider: April 1, 1991-March 31, 1992. Austin, TX: Texas Medical Foundation; 1992:7.

23 Fleiss JL. The measure of interrater agreement. In: Statistical Methods for Rates and Proportions. New York, NY: John Wiley and Sons; 1981:chap 13.

24 Fleiss JL. Reliability of measurement. In: The Design and Analysis of Clinical Experiments. New York, NY: John Wiley and Sons; 1986:chap 1.

25 Goldman RL. The reliability of peer assessments of quality of care. JAMA 1992;267:958-960.

26 Medical Staff Briefing. December 1998, Opus Communications, Marblehead, MA. p. 7.

27 Prager LO. Initiatives chip away at error rate. AMNews Dec. 7, 1998, p. 10.

28 Prager LO. World's best healthcare can be pretty mediocre. AMNews May 17, 1999, p. 13.

29 Moore JD. Healthcare leaders strategize quality. Modern Healthcare, April 19, 1999, p. 70.

30 Personal communication from one of the presenters. The meeting reportedly had the flavor of a rally rather than a serious symposium.

31 A conversation on medical injury. Public Health Rep (July-Aug) 1999;114:302-317.

Dr. Dunn is an Emergency Physician and Peer Review Consultant who lives in Brownwood, TX and teaches emergency medicine at Darnall Army Community Hospital, Fort Hood, TX.