Whether you’ve been to a morbidity and mortality conference (M&M for those on the inside) or simply watched a fictional version on TV, the ritual is unmistakable. A junior physician, usually a resident, presents the relevant facts of a case in which a patient suffered a poor outcome. Snapshots of the history, physical exam, labs, imaging, and the hospital course are displayed on a PowerPoint presentation. The presenting resident is visibly unnerved. The attending on the case may share a few details, but largely stands back. Faculty shower the resident with questions about his knowledge of the case, and sometimes his knowledge of the pertinent medical literature. Invariably, one person asks a question that has already been addressed in the presentation. A senior attending shares an anecdote of a loosely related case, ending with an even more loosely related teaching point.
The formulaic nature of M&M is a nod toward the medical profession’s centuries-long history of systematically examining the roots of medical error. Ernest Codman instituted the first morbidity and mortality conferences in the early 1900s at the Massachusetts General Hospital. By 1916, the American College of Surgeons had established a case report system that ascribed responsibility for adverse outcomes. In 1983, the Accreditation Council for Graduate Medical Education (ACGME) mandated that M&M be a part of every residency training curriculum.
Historically, the conference has not been considered benign. The faculty’s cross-examination of the resident has often been perceived as unnecessarily harsh: an attack on the resident for the mistake. One famous senior physician allegedly asked a resident who had presented a case in which a patient died, “Why didn’t you just take a gun and shoot him?”
Partly in reaction to its historically punitive nature, and partly as a consequence of the patient safety movement sweeping hospitals over the last 15 years, the emphasis of M&M has shifted in many centers to focus on the failings of the health care system, not the physician. The “Swiss Cheese Model” is often invoked, in which an adverse event is interpreted as the consequence of several individual failures across multiple departments, processes, and even pieces of equipment.
But when we focus only on processes, systems, and workflows, we miss a sizable chunk of the complex error puzzle. Sure, the systems we create are often faulty, but in a field in which cognitive demand is high, we can’t ignore a huge contributor to medical error: the human brain. Often, this contribution comes in the form of biases and heuristics, phenomena that have served humanity well over the course of evolution and that, to a certain extent, help us become master clinicians. Heuristics allow us to make snap decisions about a scenario, like avoiding a growling dog on the sidewalk or, in medicine, instinctively knowing at first glance that a patient looks sick, without thoroughly understanding why. But the same cognitive processes can also lead us to cling to a diagnosis even when conflicting evidence arises (the “anchoring bias”) or to assume that a current patient has the same disease as a previous patient who looked the same way (the “availability heuristic”).
The removal of human error from the discussion in many modern M&M conferences has led some to propose refocusing the conference on cognitive error in medical decision-making. This is called “metacognition,” or, colloquially, thinking about thinking. The approach recognizes that clinical reasoning lies at the core of patient safety and that a “cognitive autopsy” is critical to improving patient care.
Although focusing on human error may seem to represent a shift back to harshly ascribing blame to individuals, it is quite the opposite. Even the most intelligent and experienced physicians fall prey to cognitive mishaps. The key is to normalize these errors and help learners recognize that the effects of cognitive biases are inherent to all humans. Ultimately, the goal is to teach the next generation of physicians to identify and overcome their own cognitive errors.
It’s time to include human error in physician decision-making on the patient safety agenda.