“Mistakes are the growing pains of wisdom.” – William Jordan
Given the scope of the medical errors problem, the knee-jerk response is: we need more
reporting, so we can prevent them! The hope is that transparency will drive change, and once
we understand the problem better, we will be able to fix it. This makes intuitive sense, which
is why most hospitals have error reporting systems.
Error reporting systems

These are called incident reporting (IR) systems, and the reports come from the medical staff – the doctors and nurses who are taking care of the patient. IR systems are passive forms of surveillance, relying on the willingness of the medical staff to report errors. While these systems are low cost, the experience with them has been disappointing. Medical staff often don’t bother to report because:
* They were too busy and lacked the time
* They did not know whom to report to, or how
* They “forgot” to report, or found the form too long and detailed
* They felt the error was too trivial
This is why only 10-20% of errors are ever reported; of those that are reported, only 5-10% actually cause harm to patients. While well-organized IR systems can yield important insights, they can also waste substantial resources chasing red herrings that divert attention from more important problems.
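The arithmetic behind these percentages is worth making explicit. A minimal Python sketch, using the reporting and harm rates quoted above (the 1,000-error denominator is purely a hypothetical illustration):

```python
# Hypothetical illustration: apply the quoted rates to an assumed
# pool of 1,000 errors to see how few reports describe actual harm.
def reported_and_harmful(total_errors, report_rate, harm_rate):
    """Return (reports filed, reports describing patient harm)."""
    reports = total_errors * report_rate
    return reports, reports * harm_rate

low = reported_and_harmful(1000, 0.10, 0.05)   # (100.0, 5.0)
high = reported_and_harmful(1000, 0.20, 0.10)  # (200.0, 20.0)
```

In other words, even at the optimistic end, only a small slice of the reports a hospital collects concerns events that actually harmed a patient.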
Medical errors are so common that the admonition to “report everything” is silly – the system would quickly accumulate unmanageable mountains of data, and caregivers would spend so much of their time reporting that they would have no time left to care for their patients. Good reporting systems are:
* Easy to use
* Designed to encourage voluntary reporting by rewarding those who report errors (“good catch” awards)
* Backed by a team that reviews and addresses the problems reported
After all, the goal of an IR system is not data collection, but meaningful improvement in safety and quality.
Error reports, whether filed on paper or through the Web, and whether sent to the hospital’s safety officer or to a government agency, can be divided into three main categories:
* Anonymous – No identifying information is asked of the reporter. Although anonymity encourages reporting, it also prevents follow-up questions from being asked and answered, which limits these systems’ clinical value.
* Confidential – The identity of the reporter is known but shielded from the authorities. Such systems tend to capture more useful data than anonymous ones, because intelligent follow-up questions can be asked.
* Open – All people and places are publicly identified. These systems have a relatively poor track record in healthcare, because the potential for unwanted publicity and blame is strong, and it is often easy for individuals to cover up errors (even under “mandatory” reporting).
We can do a better job of reporting errors if we are willing to learn from the aviation industry. The Aviation Safety Reporting System (ASRS) is the linchpin of the modern aviation industry’s impressive record of safety. The ASRS has four traits:
* Ease of reporting
* Timely analysis and feedback
* Third-party administration
* Regulatory action
These four traits have led to its success, and until we can replicate them, our medical error reporting systems will leave a lot to be desired.
Improving safety

Sharing stories of reported errors so that we can learn from them is an important part of improving safety. This was traditionally done in hospital M&M (morbidity and mortality) conferences, a generations-old ritual that provided a forum for doctors to confess their mistakes and help their colleagues avoid making similar ones.
Increasingly, hospitals and other healthcare organizations are also required to report significant errors to the government, accreditation bodies, and regulators. These errors are called “sentinel events” or “never events”: serious, preventable errors that should never occur if the appropriate safety measures have been properly implemented. They are the kind of mistake that should never happen in medical care, and they serve as signals significant enough to trigger a full investigation into the cause of the incident.
“Never events” are framed in the negative and carry a huge psychological burden. A healthier alternative is the “always event,” which represents a positive, affirming behavior that can motivate us to improve patient safety. Some basic examples of “always events” include:
* Identifying patients using more than one source
* Using active identification, which involves asking the patient to state her name
* Compulsory “readbacks” of verbal medication orders
* Writing down orders communicated over the phone
* Tracking key imaging, lab, and pathology results
* Making critical information available at handoffs and transitions in care
* Transparent disclosure of adverse outcomes to patients and families
Standardizing and validating “always events” is a better way of creating a positive long-term culture of patient safety. Measuring errors is tricky, and tracking progress in patient safety is difficult, because we are trying to capture the absence of harm; safety is a “dynamic nonevent”. Given these limitations, hospitals need more effective ways to track and reduce harm. More active methods of surveillance include retrospective chart review, direct observation, and trigger tools.
The IHI Global Trigger Tool for Measuring Adverse Events provides an easy-to-use method for accurately identifying harm: a retrospective, focused review of a random sample of inpatient hospital records, using “triggers” (or clues) to identify possible adverse events. Tracking adverse events over time is a useful way to tell whether the changes being implemented are actually making care safer.
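A trigger-based review like this can be sketched in a few lines of code. The sketch below is a simplified illustration, not the IHI tool itself: the trigger list, record format, and function names are all hypothetical, and the real tool uses carefully defined trigger modules and trained human reviewers.

```python
import random

# Illustrative triggers only; the real IHI tool defines specific
# trigger modules (care, medication, surgical, and so on).
TRIGGERS = {"naloxone given", "transfer to icu", "return to surgery"}

def screen_sample(records, sample_size, seed=0):
    """Draw a random sample of charts and flag those containing a
    trigger for focused manual review. A trigger is a clue that an
    adverse event may have occurred, not a confirmed harm."""
    rng = random.Random(seed)
    sample = rng.sample(records, min(sample_size, len(records)))
    flagged = []
    for rec in sample:
        hits = TRIGGERS & set(rec["events"])
        if hits:
            flagged.append((rec["chart_id"], sorted(hits)))
    return flagged
```

The point of the design is that triggers are clues, not verdicts: flagged charts still go to a human reviewer, but the random sample plus trigger screen keeps the workload far below reviewing every chart.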
Analyzing the error

After we find a significant error, it needs to be analyzed so we can prevent it from recurring. Root cause analysis (RCA) is a structured method designed to answer three basic questions: what happened, why did it happen, and what can be done to prevent it from happening again? RCA is a tool that helps identify the underlying factors that precipitate an error or near miss.
It digs deeper into an issue by repeatedly asking “why” until no additional logical answer can be identified; at that point, you have reached a root cause. RCA focuses on systems and processes, not on individual performance. The goal is to identify the factors that led to the error, and to suggest solutions that can prevent similar errors from causing harm in the future.
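The “ask why until the answers run out” loop can be made concrete with a small sketch. Everything here is a hypothetical illustration: the causal chain is invented, and a real RCA is a facilitated team discussion, not a lookup table.

```python
def five_whys(event, causes, max_depth=10):
    """Follow 'why' answers until none remains; the last entry is
    treated as the root cause. `causes` maps each finding to the
    deeper factor behind it (a stand-in for the team's discussion)."""
    chain = [event]
    while chain[-1] in causes and len(chain) < max_depth:
        chain.append(causes[chain[-1]])
    return chain

# Invented example: a dosing error traced back to a system factor,
# not an individual's performance.
causes = {
    "patient received 10x dose": "infusion pump programmed incorrectly",
    "infusion pump programmed incorrectly": "confusing pump interface",
    "confusing pump interface": "no standard pump model across units",
}
chain = five_whys("patient received 10x dose", causes)
# chain[-1] -> "no standard pump model across units"
```

Note that the chain deliberately ends at a systems-level factor; stopping at “the nurse made a mistake” would defeat the whole point of RCA.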
While RCA is retrospective (performed after the event), an alternative, prospective approach is failure modes and effects analysis (FMEA). FMEA has been widely used in other high-risk industries and has been advocated by the Institute of Medicine as a means of analyzing a system to identify its failure modes and the possible consequences of failure (its effects), so as to prioritize areas for improvement.
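One common way FMEA prioritizes failure modes is a “risk priority number” (RPN): severity times likelihood of occurrence times difficulty of detection, each scored on a 1-10 scale. A minimal sketch, with invented failure modes and scores for a hypothetical medication-ordering process:

```python
def risk_priority(severity, occurrence, detection):
    """Classic FMEA risk priority number. Each factor is scored 1-10;
    a higher detection score means the failure is harder to catch
    before it reaches the patient."""
    return severity * occurrence * detection

# Invented failure modes and scores for illustration only.
modes = [
    ("illegible handwritten order", 7, 6, 4),   # RPN 168
    ("look-alike drug names", 8, 4, 7),         # RPN 224
    ("wrong patient selected in EHR", 9, 2, 5), # RPN 90
]
ranked = sorted(modes, key=lambda m: risk_priority(*m[1:]), reverse=True)
# ranked[0] is the failure mode to address first
```

The highest-RPN mode is tackled first; here the combination of moderate frequency and poor detectability pushes look-alike drug names to the top, even though wrong-patient selection is more severe.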
Finally, a very useful source of safety insights is the analysis of malpractice cases. These “closed claim” analyses are a rich source of information for teaching about both medical errors and patient safety.