New guidelines for reporting on experiments with medical AI aim to ensure that such research is transparent, rigorous, and reliable.
What’s new: Spirit-AI and Consort-AI are complementary protocols designed to improve the quality of clinical trials for AI-based interventions.
How it works: The guidelines are intended to address concerns of doctors, regulators, and funders of technologies such as the Google tumor detector shown above.
- Spirit-AI calls for clinical trials to observe established best practices in medicine. For example, it asks that researchers clearly explain the AI’s intended use, the version of the algorithm to be used, where its input data comes from, and how the model would contribute to doctors’ decisions.
- Consort-AI aims to ensure that such studies are reported clearly. Its provisions largely mirror those of Spirit-AI.
- Both sets of recommendations were developed by more than 100 stakeholders worldwide. Researchers at the University of Birmingham and University Hospitals Birmingham NHS Foundation Trust led the effort.
Behind the news: Less than 1 percent of 20,500 studies of medical AI met benchmarks for quality and transparency, according to a 2019 study by researchers involved in the new initiatives.
Why it matters: These protocols could help medical AI products pass peer and regulatory reviews faster, so they can help patients sooner.
We’re thinking: The medical community has set high standards for safety and efficacy. Medical AI needs to meet — better yet, exceed — them. But the technology also poses new challenges, such as explainability, and a comprehensive set of standards must address such issues as well.