
The Patient Safety Group (PSG) of the Royal College of Surgeons of Edinburgh (RCSEd) are delighted to lend our enthusiastic support to the sixth World Patient Safety Day (WPSD). This event, established by the World Health Organisation (WHO) in 2019, takes place on 17 September every year. It helps to raise global awareness amongst all stakeholders about key patient safety issues and to foster collaboration between patients, healthcare workers, healthcare leaders and policy makers to improve patient safety. Each year a new theme is selected to highlight a priority patient safety area for action.
The theme set by the WHO for this year’s WPSD is “Improving diagnosis for patient safety”, recognising the vital importance of correct and timely diagnosis in ensuring patient safety and improving health outcomes.
Diagnostic errors are a significant challenge in the healthcare system, often leading to adverse patient outcomes, increased healthcare costs, and a loss of trust in medical professionals. Studies suggest that diagnostic errors contribute to approximately 10% of patient deaths and up to 17% of adverse events in hospitals. In a field where precision is paramount, even a small margin of error can have catastrophic consequences. Fortunately, advancements in Artificial Intelligence (AI) offer promising solutions to these longstanding issues. AI's ability to process vast amounts of data, identify patterns, and support decision-making processes positions it as a powerful tool in reducing diagnostic errors and improving patient care.
Diagnostic errors can be broadly categorised into the following domains:
- Failure to detect: Missing the presence of a disease or condition
- Delayed diagnosis: Identifying the disease or condition, but later than the ideal diagnostic window
- Misdiagnosis: Failing to identify the correct disease or condition
There are many factors which can lead to diagnostic errors. These can include human factors (cognitive biases, workload and time pressures, communication failures), equipment failures (ageing and outdated technologies) or logistical failures (paper systems, waiting lists, staffing limitations). Certainly, there won't be a single solution to resolve all these challenges, but the rise of increasingly effective AI systems has signalled an opportunity to leapfrog multiple challenges simultaneously and support clinicians to make more accurate and timely diagnoses in a cost-effective way.
The uses of AI in reducing diagnostic errors:
Data Processing and Pattern Recognition
AI relies on large volumes of data, which it can process at far higher speeds than clinicians can. By analysing electronic health records, medical images and lab results, AI may be able to recognise patterns which are more difficult for human observers to perceive. In an earlier blog on remote diagnostics, we described the need for studies such as the Enhanced monitoring using sensors after surgery (EMUs) study, which will use physiological data to help detect post-operative complications. One of the secondary outcomes of this study will be to examine time-stamped sensor clinical event data to determine relationships between physiological waveforms and patient deterioration. With the use of AI predictive algorithms, this data can also be used to predict deterioration based on micro-changes in a patient's waveform data. This would allow deterioration to be recognised earlier, allowing more time to intervene.
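To make the idea concrete, the sketch below flags sustained drift in a vital-sign stream away from a calibration baseline. It is a deliberately minimal illustration of the principle, not the EMUs study's method: the window size, z-score threshold and run length are arbitrary assumptions, and real predictive models are far more sophisticated.

```python
# Minimal sketch: flag sustained deviation of a vital-sign stream from a
# baseline calibrated on the first `window` samples. Thresholds are
# illustrative assumptions, not values from any clinical study.
from statistics import mean, stdev

def flag_deterioration(samples, window=20, z_threshold=3.0, run_length=3):
    """Return indices where readings sit more than z_threshold standard
    deviations from the calibration baseline for run_length samples in a row."""
    baseline = samples[:window]
    mu, sigma = mean(baseline), stdev(baseline)
    alerts, run = [], 0
    for i in range(window, len(samples)):
        if sigma > 0 and abs(samples[i] - mu) / sigma > z_threshold:
            run += 1
            if run >= run_length:
                alerts.append(i)
        else:
            run = 0
    return alerts

# Heart rate stable around 72 bpm, then a sustained climb.
stream = [72, 71, 73, 72, 70, 72, 71, 73, 72, 71,
          72, 73, 71, 72, 70, 72, 71, 72, 73, 72,
          85, 90, 95, 100, 105, 110, 115, 120]
print(flag_deterioration(stream))  # first alert at index 22
```

Requiring a run of consecutive out-of-range samples, rather than alerting on a single reading, is a simple guard against transient sensor noise.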
Reducing Cognitive Biases
Human decision-making in healthcare is often influenced by cognitive biases, such as anchoring—where clinicians rely too heavily on initial information—or availability bias, where recent experiences disproportionately shape decision-making. AI can mitigate these biases by offering evidence-based suggestions and highlighting diagnoses that might otherwise be overlooked.
Decision support
AI offers clinicians an invaluable ‘re-look’, providing decision support by cross-referencing patient symptoms and test results with historical patient databases. This capability is particularly beneficial in complex cases where multiple conditions might present similar symptoms and careful consideration of details is needed to differentiate between them.
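A toy sketch of this cross-referencing idea is shown below: candidate conditions are ranked by how closely their typical symptom sets overlap with a patient's presentation. The conditions, symptom sets and similarity measure here are invented for illustration; real decision-support systems draw on far richer data and models.

```python
# Illustrative sketch: rank candidate conditions by symptom overlap with a
# patient's presentation. The "database" below is a hypothetical toy example.

HISTORICAL_CASES = {
    "appendicitis":    {"abdominal pain", "fever", "nausea", "anorexia"},
    "cholecystitis":   {"abdominal pain", "fever", "murphy sign", "jaundice"},
    "gastroenteritis": {"abdominal pain", "nausea", "vomiting", "diarrhoea"},
}

def rank_differentials(presenting, cases=HISTORICAL_CASES):
    """Rank conditions by Jaccard similarity (shared symptoms over all
    symptoms) between the presentation and each typical symptom set."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return sorted(((jaccard(presenting, symptoms), condition)
                   for condition, symptoms in cases.items()), reverse=True)

patient = {"abdominal pain", "fever", "nausea"}
for score, condition in rank_differentials(patient):
    print(f"{condition}: {score:.2f}")
```

The value of such a ‘re-look’ is less in the top match than in the ranked list itself, which prompts the clinician to weigh plausible alternatives before settling on a diagnosis.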
Training and Education
AI has the potential to revolutionise surgical training and education, enhancing patient safety by ensuring that clinicians are well prepared to manage everyday conditions. The Creating new models of laparoscopic surgery skills acquisition and assessment (CAMELs) study, which is being run by the University of Edinburgh in association with Wellcome Leap, aims to predict and assess operating room performance based on operative performance ratings derived from box-trainer simulation lab exercises and in-theatre video assessments. By analysing video data and correlating it with performance metrics, AI can identify specific areas where surgical trainees may need additional practice or instruction, thereby tailoring training to each individual's needs. In doing so, this work seeks to optimise training pathways for surgeons and ensure that simulated training supports real-world surgical practice.
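The statistical core of correlating a simulator-derived metric with expert ratings can be sketched very simply. The metric, the numbers and the expected relationship below are invented for illustration and are not CAMELs study data.

```python
# Illustrative sketch: Pearson correlation between a simulator metric
# (instrument path length; shorter is more economical) and expert ratings.
# All data are invented; this is not output from any real study.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

path_length_cm = [410, 365, 300, 280, 240, 215]  # six simulated trainees
expert_rating = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5]   # higher = better performance
print(f"r = {pearson(path_length_cm, expert_rating):.2f}")  # strongly negative
```

A strong negative correlation in such data would suggest the simulator metric tracks real performance, which is the kind of validation needed before using it to tailor individual training.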
While AI holds great promise in reducing diagnostic errors, it is not without challenges and ethical considerations:
- Data Quality and Bias: AI systems are only as good as the data they are trained on. If the training data is biased or incomplete, the AI may produce inaccurate or biased results, potentially leading to new diagnostic errors. This is especially relevant in the context of our diverse patient populations, who are not always well represented in traditional medical databases² (most of which are largely composed of data from high-income Caucasian patients). Diversifying training data and ensuring models are available across variably resourced contexts will help alleviate this disparity.
- Transparency and Trust: Clinicians and patients must trust AI-driven diagnostic tools. However, many AI models operate as "black boxes," making it difficult to understand how they arrive at their conclusions. This lack of transparency can be a barrier to adoption and can require a human clinician to take responsibility for AI-generated guidance. Questions about accountability for patient safety may come into play as these technologies develop.
- Integration into Clinical Workflows: For AI to be effective, it must be seamlessly integrated into existing clinical workflows. This integration requires significant changes in healthcare infrastructure, training for healthcare professionals and sustainable funding structures.
The potential for AI to significantly reduce diagnostic errors is immense. As AI technologies advance, we can anticipate even greater accuracy, speed, and reliability in diagnostics, potentially leading to a future where diagnostic errors are rare and patient care is more timely and precise. To realise this potential, the focus must be on improving data quality, ensuring transparency, and building trust among healthcare professionals. With ongoing research, collaboration between AI developers and healthcare providers, and a commitment to addressing ethical concerns, AI is poised to revolutionise diagnostics and substantially reduce the burden of diagnostic errors in healthcare.
Written by Afra Jiwa and Malcolm Cameron
References:
- Committee on Diagnostic Error in Health Care, Board on Health Care Services, Institute of Medicine, The National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. (Balogh EP, Miller BT, Ball JR, eds.). National Academies Press (US); 2015. Accessed August 26, 2024. http://www.ncbi.nlm.nih.gov/books/NBK338596/
- Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. npj Digit Med. 2023;6(1):1-3. doi:10.1038/s41746-023-00858-z