
All men make mistakes, but a good man yields when he knows his course is wrong, and repairs the evil. The only crime is pride.

— Sophocles, Antigone

(Thanks to my friend Michelle Tanner, MD, who contributed immensely to this article.)

In the post Cognitive Bias, we went over a list of cognitive biases that may affect our clinical decisions. There are many more, and sometimes these biases are given different names. Rather than use the word bias, many authors, including the thought-leader in this field, Pat Croskerry, prefer the term cognitive dispositions to respond (CDR) to describe the many situations where clinicians’ cognitive processes might be distorted, including the use of inappropriate heuristics, cognitive biases, logical fallacies, and other mental errors. The term CDR is thought to carry less of a negative connotation; indeed, physicians have been resistant to interventions aimed at increasing awareness of, and reducing errors due to, cognitive biases.

After the publication of the 2000 Institute of Medicine report To Err Is Human, which attributed up to 98,000 deaths per year to medical errors, many efforts were made to reduce errors in our practices and systems. Development of multidisciplinary teams, computerized order entry, clinical guidelines, and quality improvement task forces have attempted to lessen medical errors and their impact on patients. We have seen an increased emphasis on things like medication safety cross-checking, reduction in resident work hours, and the use of checklists in hospital order sets or ‘time-outs’ in the operating room. But most serious medical errors actually stem from misdiagnosis. Yes, every now and again a patient might have surgery on the wrong side or receive a medication that interacts with another medication, but at any given time, up to 15% of patients admitted to the hospital are being treated for the wrong diagnosis – with interventions that carry risk – while the actual cause of their symptoms remains unknown and likely untreated. To Err Is Human noted that postmortem causes of death differed from antemortem diagnoses in 40% of autopsies! How many of those deaths might have been prevented if physicians had been treating the correct diagnosis?

Most of these failures of diagnosis (probably two-thirds) are related to CDRs, and a lot of work has been done since 2000 to elucidate various causes and interventions, but physicians have been resistant to being told that there might be a problem with how they think. Physicians love to blame medical errors on someone or something else – thus the focus has been on residents’ lack of sleep or medication interaction checking. Reducing physicians’ resistance to feeling criticized is a prime reason why Croskerry and others prefer the term cognitive disposition to respond over negative words like bias or fallacy. I’m happy with either term because I’m not sure that relabeling will change the main problem: physicians tend to be a bit narcissistic and therefore resistant to the idea that all of us are biased and that all of us have to actively work to monitor those biases and avoid making decisions that are overly influenced by them.

We make poor decisions for one of two reasons: either we lack information or we don’t apply what we know correctly. Riegelman, in his 1991 book Minimizing Medical Mistakes: The Art of Medical Decision Making, called these ‘errors of ignorance’ and ‘errors of implementation.’ One of the goals of To Err Is Human was to create an environment where medical errors were attributed to systemic rather than personal failures, hoping to make progress in reducing error by de-emphasizing individual blame. Our focus here, of course, is to evaluate the errors of implementation. Graber et al., in 2002, further categorized diagnostic errors into three types: no-fault errors, system errors, and cognitive errors. No-fault errors will always happen (as when our hypothetical physician failed to diagnose mesenteric ischemia despite doing the correct work-up). System errors have been explored heavily since the publication of To Err Is Human. But the cognitive errors remain, and understanding our CDRs (our biases, etc.) is the first step to reducing this type of error.

Croskerry divides the CDRs into the following categories:

  • Errors of overattachment to a particular diagnosis
    • Anchoring, confirmation bias, premature closure, sunk costs
  • Errors due to failure to consider alternative diagnoses
    • Multiple alternatives bias, representativeness restraint, search satisficing, Sutton’s slip, unpacking principle, vertical line failure
  • Errors due to inheriting someone else’s thinking
    • Diagnosis momentum, framing effect, ascertainment bias, bandwagon effect
  • Errors in prevalence perception or estimation
    • Availability bias, ambiguity effect, base-rate neglect, gambler’s fallacy, hindsight bias, playing the odds, posterior probability error, order effects
  • Errors involving patient characteristics or presentation context
    • Fundamental attribution error, gender bias, psych-out error, triage cueing, contrast effect, yin-yang out
  • Errors associated with physician affect, personality, or decision style
    • Commission bias, omission bias, outcome bias, visceral bias, overconfidence, vertical line failure, belief bias, ego bias, sunk costs, zebra retreat

Some additional biases mentioned above include the bandwagon effect (doing something just because everyone else does, like giving magnesium to women in premature labor), the ambiguity effect (picking a diagnosis or treatment because more is known about it, like the outcome), the contrast effect (minimizing the treatment of one patient because her problems pale in comparison to the last patient’s), belief bias (accepting or rejecting data based on its conclusion or whether it fits with what one already believes, rather than on the strength of the data itself), ego bias (overestimating the prognosis of your patients compared to that of others’ patients), and zebra retreat (not pursuing a suspected rare diagnosis out of fear of being viewed negatively by colleagues or others for wasting time, resources, etc.).

We are all vulnerable to cognitive dispositions that can lead to error. Just being aware of this is meaningful and can make us less likely to make these mistakes, but we need to do more. We need to actively work to de-bias ourselves. Let’s look at some strategies for this (adapted from Croskerry):

Develop insight/awareness: Education about CDRs is a crucial first step to reducing their impact on our clinical thinking, but it cannot stop with reading an article or a book. We have to look for examples of them in our own practices and integrate our understanding of CDRs into our quality improvement processes. We need to identify our biases and how they affect our decision-making and diagnosis formulation. An analysis of cognitive errors (and their root causes) should be a part of every peer review process, quality improvement meeting, and morbidity and mortality conference. Most cases reviewed in these formats are selected because a less than optimal outcome occurred; the root cause (or at least a major contributor) in most such cases was a cognitive error.

Consider alternatives: We need to establish forced consideration of alternative possibilities, both in our own practices and in how and what we teach; considering alternatives should be a part of how we teach medicine routinely. Always ask the question, “What else could this be?” Ask yourself, ask your learner, ask your consultant. The ensuing conversation is perhaps the most educational thing we can ever do. Even when the diagnosis is obvious, always ask the question. This needs to become part of the culture of medicine. 

Metacognition: We all need to continually and actively examine and reflect on our thinking processes, and not just when things go wrong. Even when things go right, it is a meaningful and important step to consider why they went right. We focus too much on negative outcomes (this itself is a form of bias); consequently, we develop a skewed sense of what contributed to the negative outcome. So try thinking about what went right as well, reinforcing the good things in our clinical processes.

Decrease reliance on memory: In the pre-computer days, a highly valued quality in a physician was a good memory. Unfortunately, medical schools today still emphasize this skill, selecting students who might excel in rote memorization but lag behind in critical thinking skills. In the 1950s, memory was everything: there was no quick way of searching the literature, comprehensively checking drug interactions, finding the latest treatment guidelines, etc. But today, memory is our greatest weakness. Our memories are poor and biased, and there is more data to master than ever before in order to be a good doctor. So stop relying on your memory. We need to encourage the habitual use of cognitive aids, whether that’s mnemonics, practice guidelines, algorithms, or computers. If you don’t treat a particular disease every week, then look it up each time you encounter it. If you don’t use a particular drug all the time, then cross-check the dose and interactions every time you prescribe it. Even if you do treat a particular disease every day, you should still do a comprehensive literature search every 6 months or so (yearly at the very least).

Many physicians are sorely out of date in their treatment practices. What little new information they learn often comes from the worst sources: drug and product reps, throwaway journals, popular media, and even TV commercials. Education is a life-long process. Young physicians need to develop the habits of life-long learning early. Today, this means relying on electronic databases, practice guidelines, etc., as part of daily practice. I, for one, use PubMed at least five times a day (and I feel like I’m pretty up-to-date in my area of expertise).

Our memories, as a multitude of psychological experiments have shown, are among our worst assets. Stop trusting them.

Specific training: We need to identify specific flaws in our thinking and specific biases and direct efforts to overcome them. For example, the area that seems to contribute most to misdiagnosis relates to a poor understanding of Bayesian probabilities and inference, so specific training in Bayesian probabilities might be in order, or learning from examples of popular biases, like distinguishing correlation from causation, etc. 

Simulation: We should use mental rehearsal and visualization as well as practical simulation/videos exhibiting right and wrong approaches. Though mental rehearsal may sound like a waste of your time, it is a powerful tool. If we appropriately employ metacognition, mental rehearsal of scenarios is a natural extension. Remember, one of our goals is to make our System 1 thinking better by employing System 2 thinking when we have time to do so (packing the parachute correctly). So a practical simulation in shoulder dystocia, done in a System 2 manner, will make our “instinctual” responses (the System 1 responses) better in the heat of the moment when the real shoulder dystocia happens. A real shoulder dystocia is no time to learn; you either have an absolute and definitive pathway in your mind of how you will deliver the baby before it suffers permanent injury or you don’t. But this is true even for things like making differential diagnoses. A howardism: practice does not make perfect, but good practice certainly helps get us closer. A corollary of this axiom is that bad practice makes a bad doctor; unfortunately, a lot of people have been packing the parachute incorrectly for many years and they have gotten lucky with the way the wind was blowing when they jumped out of the plane. 

Cognitive forcing strategies: We need to develop specific and general strategies to avoid bias in clinical situations. We can use our clinical processes and approaches to force us to think and avoid certain biases, even when we otherwise would not. Always asking the question, “What else could this be?” is an example of a cognitive forcing strategy. Our heuristics and clinical algorithms should incorporate cognitive forcing strategies. For example, an order sheet might ask you to provide a reason why you have elected not to use a preoperative antibiotic or thromboembolism prophylaxis. It may seem annoying to have to fill that out every time, but it makes you think. 

Make tasks easier: Reduce task difficulty and ambiguity. We need to train physicians in the proper use of relevant information databases and make these resources available to them. We need to remove as many barriers as possible to good decision making. This may come in the form of evidence-based order sets, clinical decision tools and nomograms, or efficient utilization of evidence-based resources. Bates et al. list “ten commandments” for effective clinical decision support. 

Minimize time pressures: Provide sufficient quality time to make decisions. We fall back on System 1 thinking when we are pressed for time, stressed, depressed, etc. Hospitals and clinics should promote an atmosphere where appropriate time is given, so that System 2 critical thinking can occur when necessary, without further adding to the stress of a physician who already feels over-worked, under-appreciated, and behind. I won’t hold my breath for that. But clinicians can do this too. Don’t be afraid to tell a patient “I don’t know” or “I’m not sure” and then get back to them after finding the data you need to make a good decision. We should emphasize this idea even for simple decisions. Our snap, instinctive answers are usually correct (especially if we have been packing the parachute well), but we should always take the time to deliberate when deliberation is warranted. For example, in education, you might consider always using a form of the One-minute Preceptor. This simple tool can turn usually non-educational patient “check-outs” into an educational process for both you and your learner.

Accountability: Establish clear accountability and follow-up for decisions. Physicians too often don’t learn from cases that go wrong. They circle the wagons and go into ego-defense mode, blaming the patient, the nurses, the resident, or really anyone but themselves. While others may have contributed to what went wrong, you can really only change yourself. We have to keep ourselves honest (and when we don’t, we need honest and not-always-punitive peer review processes to provide feedback). Physicians unfortunately learn little from “bad cases,” or “crashes,” and they learn even less from “near-misses.” Usually, for every “crash” a physician has, there have been several near-misses (or, as George Carlin called them, “near-hits”). Ideally, we would learn as much from a near-miss as from a crash, and, in doing so, hopefully reduce the number of both. We cannot wait for things to go wrong to learn how to improve our processes.

Using personal or institutional databases for self-reflection is one way to be honest about outcomes. I maintain a database of every case or delivery I do; I can use this to compare any number of metrics to national, regional, and institutional averages (like primary cesarean rates, for example). We also need to utilize quality improvement conferences, even in nonacademic settings. Even when things go right, we can still learn and improve.

Feedback: We should provide rapid and reliable feedback so that errors are appreciated, understood, and corrected, allowing calibration of cognition. We need to do this for ourselves, our peers, and our institutions. Peer review processes should use modern tools like root-cause analysis, and utilize evidence-based data to inform the analysis. Information about potential cognitive biases should be returned to physicians with opportunities for improvement. Also, adverse situations and affective disorders that might lead to increased reliance on CDRs should be assessed, including things like substance abuse, sleep deprivation, mood and personality disorders, levels of stress, emotional intelligence, communications skills, etc. 

Leo Leonidas has suggested the following “ten commandments” to reduce cognitive errors (I have removed the Thou shalts and modified slightly):

  1. Reflect on how you think and decide.
  2. Do not rely on your memory when making decisions.
  3. Have an information-friendly work environment.
  4. Consider other possibilities even though you are sure you are right.
  5. Know Bayesian probability and the epidemiology of the diseases (and tests) in your differential.
  6. Rehearse both the common and the serious conditions you expect to see in your speciality.
  7. Ask yourself if you are the right person to make this decision.
  8. Take time when deciding; resist pressures to work faster than accuracy allows.
  9. Create accountability procedures and follow-up for decisions you have made.
  10. Use a database for patient problems and decisions to provide a basis for self-improvement.

Let’s implement these commandments with some examples:

1. Reflect on how you think and decide.

Case: A patient presents in labor with a history of diet-controlled gestational diabetes. She has been complete and pushing for the last 45 minutes. The experienced nurse taking care of the patient informs you that she is worried about her progress because she believes the baby is large. You and the nurse recall your diabetic patient last week who had a bad shoulder dystocia. You decide to proceed with a cesarean delivery for arrest of descent. You deliver a healthy baby weighing 7 lbs and 14 ounces.

What went wrong?

  • Decision was made with System 1 instead of System 2 thinking.
  • Ascertainment bias, framing effect, hindsight bias, base-rate neglect, availability, and probably a visceral bias all influenced the decision to perform a cesarean. 
  • This patient did not meet criteria for an arrest of descent diagnosis. Available methods of assessing fetal weight (like an ultrasound or even palpation) were not used and did not inform the decision. Negative feelings of the last case influenced the current case.

2. Do not rely on your memory when making decisions.

Case: A patient is admitted with severe preeclampsia at 36 weeks gestation. She also has Type IIB von Willebrand’s disease. Her condition has deteriorated, and the consultant cardiologist has diagnosed cardiomyopathy and recommends, among other things, diuresis. You elect to deliver the patient. Worried about hemorrhage, you recall a patient with von Willebrand’s disease from residency, and you order DDAVP. She undergoes a cesarean delivery, develops severe thrombocytopenia and flash pulmonary edema, and is transferred to the intensive care unit, where she develops ARDS and dies.

What went wrong?

  • Overconfidence bias, commission bias. The physician treated an unusual condition without looking it up first, relying on faulty memories. 
  • DDAVP is contraindicated in patients with cardiomyopathy/pulmonary edema. DDAVP may exacerbate severe thrombocytopenia in Type IIB vWD. It also may increase blood pressure in patients with preeclampsia.

3. Have an information-friendly work environment.

Case: You’re attending the delivery of a 41-week gestation fetus with meconium-stained amniotic fluid (MSAF). The experienced nurse offers you a DeLee trap suction. You inform her that based on recent randomized trials, which show no benefit and potential for harm from deep suctioning for MSAF, you have stopped using the trap suction, and that current neonatal resuscitation guidelines have done away with this step. She becomes angered, questions your competence in front of the patient, and tells you that you should ask the Neonatal Nurse Practitioner what she would like for you to do.

What went wrong?

  • Hindsight bias, overconfidence bias on the part of the nurse.
  • The work environment is not receptive to quality improvement based on utilizing data, and instead values opinion and anecdotal experience. This type of culture likely stems from leadership that does not value evidence-based medicine, and from institutions that promote ageism, hierarchy, and paternalistic approaches to patient care. An information-friendly environment also means having easy access to the appropriate electronic information databases; but all the databases in the world are useless if the culture doesn’t promote their routine utilization.

4. Consider other possibilities even though you are sure you are right.

Case: A previously healthy pregnant woman at 29 weeks gestation presents with a headache and is found to have severe hypertension and massive proteinuria. You start magnesium sulfate. Her blood pressure is not controlled after administering the maximum dose of two different antihypertensives. After administration of betamethasone, you proceed with cesarean delivery. After delivery, the newborn develops severe thrombocytopenia, and the mother is admitted to the intensive care unit with renal failure. Later, the consultant nephrologist diagnoses the mother with new-onset lupus nephritis.

What went wrong?

  • Anchoring, availability, confirmation bias, premature closure, overconfidence bias, Sutton’s slip, or perhaps search satisficing. In popular culture, these biases are summed up with the phrase, If your only tool is a hammer, then every problem looks like a nail.
  • The physician failed to consider the differential diagnosis. 

5. Know Bayesian probability and the epidemiology of the diseases (and tests) in your differential.

Case: A 20-year-old woman presents at 32 weeks gestation with a complaint of leakage of fluid. After taking her history, which sounds like the fluid was urine, you estimate that she has about a 5% chance of having ruptured membranes. You perform a ferning test, which is 51.4% sensitive and 70.8% specific for ruptured membranes. The test is positive, and you admit the patient and treat her with antibiotics and steroids. Two weeks later she has a failed induction leading to a cesarean delivery. At that time, you discover that her membranes were not ruptured.

What went wrong?

  • Premature closure, base-rate neglect, commission bias.
  • The physician has a poor understanding of the positive predictive value of the test that was used. The PPV of the fern test in this case is very low, but when the test came back positive, the patient was treated as if the PPV were 100%, without considering the post-test probability of the diagnosis.
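
The arithmetic behind this case can be made explicit with a short Bayes’ theorem calculation (a minimal sketch using only the numbers given in the case; the function name is mine):

```python
def post_test_probability(pretest, sensitivity, specificity):
    """Probability of disease given a positive test (the positive
    predictive value), computed via Bayes' theorem."""
    true_positives = pretest * sensitivity
    false_positives = (1 - pretest) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Case 5: 5% pretest probability; fern test 51.4% sensitive, 70.8% specific.
ppv = post_test_probability(pretest=0.05, sensitivity=0.514, specificity=0.708)
print(f"{ppv:.1%}")  # prints "8.5%"
```

Even with a positive fern test, the post-test probability of ruptured membranes is only about 8.5% – a long way from the near-certainty with which the patient was treated.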

6. Rehearse both the common and the serious conditions you expect to see in your speciality.

Case: You are attending the delivery of a patient who has a severe shoulder dystocia. Your labor and delivery unit has recently conducted a simulated drill for managing shoulder dystocia and though the dystocia is difficult, all goes well with an appropriate team response from the entire staff, delivering a healthy newborn. You discover a fourth degree laceration, which you repair, using chromic suture to repair the sphincter. Two months later, she presents with fecal incontinence.

What went wrong?

  • Under-emphasis of the seemingly less important problem. This is a form of contrast bias: we are biased toward emphasizing “life and death” scenarios, sometimes at the expense of unusual but less dramatic problems. Simulation benefited the shoulder dystocia response; rehearsal could also have benefited the repair of the fourth-degree laceration.

7. Ask yourself if you are the right person to make this decision.

Case: Your cousin comes to you for her prenatal care. She was considering a home-birth because she believes that the local hospital has too high a cesarean delivery rate. She says she trusts your judgment. While in labor, she has repetitive late decelerations with minimal to absent variability starting at 8 cm dilation. You are conflicted because you know how important a vaginal delivery is to her. You allow her to continue laboring and two hours later she gives birth to a newborn with Apgars of 1 and 4 and a pH of 6.91. The neonate seizes later that night.

What went wrong?

  • Visceral bias.
  • In this case, due to inherent and perhaps unavoidable bias, the physician made a poor decision. This is why we shouldn’t treat family members, for example. But this commandment also applies to the use of consultants. Physicians need to be careful not to venture outside their scope of expertise (overconfidence bias).

8. Take time when deciding; resist pressures to work faster than accuracy allows.

Case: A young nurse calls you regarding your post-operative patient’s potassium level. It is 2.7. You don’t routinely deal with potassium replacement. You tell her that you would like to look it up and call her back. She says, “Geez, it’s just potassium. I’m trying to go on my break.” Feeling rushed, you order 2 g of potassium chloride IV over 10 minutes (this is listed in some pocket drug guides!). The patient receives the dose as ordered and suffers cardiac arrest and dies.

What went wrong?

  • Overconfidence bias.
  • Arguably, the physician’s main problem is a lack of knowledge, but feeling pressured, he deviated from what should have been his normal habit and did not look it up (if this scenario seems far-fetched, it was taken from a case report from Australia).

9. Create accountability procedures and follow-up for decisions you have made.

Case: Your hospital quality review committee notes that you have a higher than average cesarean delivery wound infection rate. It is also noted that you are the only member of the department who gives prophylactic antibiotics after delivery of the fetus. You change to administering antibiotics before the case, and see a subsequent decline in wound infection rates.

What went wrong?

  • Nothing went wrong in this case. Peer review worked well, but it required the physician to be receptive to it and a willing participant in continuous quality improvement processes. It also required the non-malignant utilization of peer review. The situation might have been avoided entirely if the physician had better habits of continuous education.

10. Use a database for patient problems and decisions to provide a basis for self-improvement.

Case: You track all of your individual surgical and obstetric procedures in a database which records complications and provides statistical feedback. You note that your primary cesarean delivery rate is higher than the community and national averages. Reviewing indications, you note that you have a higher than expected number of arrest of dilation indications. You review current literature on the subject and decide to reassess how you decide if a patient is in active labor (now defining active labor as starting at 6 cm) and you decide to give patients 4 hours rather than 2 hours of no change to define arrest. In the following 6 months, your primary cesarean delivery rate is halved.

What went wrong?

  • Again, nothing went wrong. This type of continuous quality improvement is the hallmark of a good physician. But it must be driven by data (provided by the database) rather than by subjective recall of outcomes. We must promote a culture of using objective data rather than memory and perception to judge the quality of care that we provide. Additionally, we must be open to the idea that the way we have always done things might not be the best way, and we must look continuously for ways to improve. This is another skill that is strengthened with metacognition.
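
A minimal sketch of how such a database-driven self-audit might work (the record fields, sample data, and benchmark value here are all hypothetical, invented purely for illustration):

```python
# Hypothetical delivery records; in practice these rows would come from a
# personal or institutional case database (field names are illustrative).
records = [
    {"mode": "cesarean", "primary": True,  "indication": "arrest of dilation"},
    {"mode": "cesarean", "primary": False, "indication": "repeat cesarean"},
    {"mode": "vaginal",  "primary": False, "indication": None},
    {"mode": "vaginal",  "primary": False, "indication": None},
]

def primary_cesarean_rate(records):
    """Fraction of all deliveries that were primary cesareans."""
    primaries = sum(1 for r in records if r["mode"] == "cesarean" and r["primary"])
    return primaries / len(records)

rate = primary_cesarean_rate(records)
benchmark = 0.22  # illustrative comparison value, not a quoted national statistic
if rate > benchmark:
    # An objective trigger for self-review: next, tally the indications
    # (e.g., arrest of dilation) that are driving the excess.
    print(f"Primary cesarean rate {rate:.0%} exceeds benchmark {benchmark:.0%}")
```

The point is not the code but the habit: an objective number, compared against an external benchmark, triggers the review of indications rather than relying on memory of how one’s cases have gone.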

Trowbridge (2008) offers these twelve tips for teaching avoidance of diagnostic errors:

  1. Explicitly describe heuristics and how they affect clinical reasoning.
  2. Promote the use of ‘diagnostic time-outs’.
  3. Promote the practice of ‘worst case scenario medicine’.
  4. Promote the use of a systematic approach to common problems.
  5. Ask why.
  6. Teach and emphasize the value of the clinical exam.
  7. Teach Bayesian theory as a way to direct the clinical evaluation and avoid premature closure.
  8. Acknowledge how the patient makes the clinician feel.
  9. Encourage learners to find clinical data that doesn’t fit with a provisional diagnosis; Ask ‘‘What can’t we explain?’’
  10. Embrace Zebras.
  11. Encourage learners to slow down.
  12. Admit one’s own mistakes.

The Differential Diagnosis as a Cognitive Forcing Tool

I believe that the differential diagnosis can be one of our most powerful tools for overcoming bias in the diagnostic process. But the differential diagnosis must be made at the very beginning of a patient encounter to provide mental checks and raise awareness of looming cognitive errors before we are flooded with sources of bias. The more information we learn about the patient, the more biased we potentially become. The traditional method is to form the differential as the patient’s story unfolds, usually after the history and physical; yet this may lead to multiple cognitive errors. Triage cueing from the patient’s first words may lay the groundwork for availability, anchoring, confirmation bias, and premature closure. The most recent and most common disease processes will be easily retrieved from memory, limiting the scope of our thinking merely by their availability.

With bias occurring during the patient interview, by default – through System 1 thinking – we may begin to anchor on the first and most likely diagnosis without full consideration of other possibilities. This causes us to use the interviewing process to seek confirmation of our initial thoughts, and it becomes harder to consider alternatives. Scientific inquiry should not seek confirmation of our hypothesis (or our favored diagnosis), but rather evidence by which to reject the other possibilities. Once we’ve gathered enough data to confirm our initial heuristic thinking, we close in quickly, becoming anchored to our diagnosis. A simple strategy to prevent this course of events is to pause before every patient interview and contemplate the full scope of possibilities; that is, to make the differential diagnosis after learning the chief complaint but before interviewing the patient. By using the chief complaint given on the chart, a full scope of diagnostic possibilities can be considered, including the most likely, the most common, the rare, and the life-threatening. This will help shape the interview with a larger availability of possibilities and encourage history-taking that works to exclude other diagnoses. Here’s a howardism:

You can’t diagnose what you don’t think of first.

Having taught hundreds of medical students how to make differential diagnoses, I have always been impressed by how easy it is to bias them into excluding even common and likely diagnoses. For example, a patient presents with right lower quadrant pain. The student is biased (because I am a gynecologist), so the differential diagnosis focuses only on gynecologic issues. When taking the history, the student then fails to ask about anorexia, migration of the pain, etc., and fails to consider appendicitis as a likely or even a possible diagnosis. The history and physical were limited because the differential was not broad enough. In these cases, triage cueing becomes devastating.

If bias based merely on my speciality is that profound, imagine what happens when the student opens the door and sees the patient (making assumptions about class, socioeconomic status, drug-dependency, etc.), then hears the patient speak (who narrows the complaint down to her ovary or some other source of self-identified pain), then takes a history too narrowly focused (not asking broad review of system questions, etc.). I have considered lead poisoning as a cause of pelvic/abdominal pain every time I have ever seen a patient with pain, but, alas, I have never diagnosed it nor have I ever tested a patient for it. But I did exclude it as very improbable based on history.

For further reading:

  • Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775-780.
  • Wachter R. Why diagnostic errors don’t get any respect – and what can be done about them. Health Aff. 2010;29(9):1605-1610.
  • Newman-Toker DE, Pronovost PJ. Diagnostic errors: the next frontier for patient safety. JAMA. 2009;301(10):1060-1062.
  • Croskerry P. Cognitive forcing strategies in clinical decision making. Ann Emerg Med. 2003;41(1).
  • Graber M, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21:535-557.
  • Croskerry P. A universal model of diagnostic reasoning. Acad Med. 2009;84(8):1022-1028.
  • Redelmeier DA. The cognitive psychology of missed diagnoses. Ann Intern Med. 2005;142:115-120.