Creating Humble AI: A New Approach to Enhance Medical Diagnostics
What if Artificial Intelligence (AI) were not only incredibly smart but also “humble”? It’s not as strange as it sounds. AI is revolutionizing healthcare, promising game-changing advances in patient diagnosis and personalised treatment. But there’s a caveat. According to a global team of scientists led by MIT, current AI systems can mislead doctors because they tend to make overconfident, and sometimes incorrect, recommendations.
To manage these risks, the researchers recommend giving AI systems an attribute usually reserved for humans: humility. In practice, this means AI systems should be built to recognise when they lack confidence in their diagnostic suggestions or recommendations, prompting clinicians to gather additional information whenever uncertainty arises.
“We’re now using AI as an oracle, but we can use AI as a coach. We could use AI as a true co-pilot. That would not only increase our ability to retrieve information but increase our agency to be able to connect the dots,” says Leo Anthony Celi, a senior research scientist at MIT’s Institute for Medical Engineering and Science. He advocates for a framework where AI exhibits curiosity and humility, fostering a partnership between doctors and AI systems.
The potential danger of overconfident AI systems can’t be overstated. They can lead to medical errors, especially when ICU physicians defer to an AI they perceive as reliable, even against their own clinical intuition. To combat this, the researchers focus on instilling human values in the AI itself. As Sebastián Andrés Cajas Ordoñez, who led the study published in BMJ Health & Care Informatics, explains, “we are trying to include humans in these human-AI systems, and encourage humans to collectively reflect and reimagine, instead of letting isolated AI agents do everything.”
Part of this collaboration is the Epistemic Virtue Score, a computational module developed by the team that makes AI models evaluate their own certainty when making diagnostic predictions. Such a system would not only provide answers but also raise a flag of caution whenever its confidence is low.
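The study’s exact scoring method isn’t reproduced here, but the underlying idea, a prediction paired with a self-assessed confidence that triggers a caution flag below some threshold, can be sketched in a few lines. Everything below is illustrative rather than the team’s actual module: predict_with_caution and CONFIDENCE_THRESHOLD are hypothetical names, and the model is assumed to be any scikit-learn-style classifier.

```python
import numpy as np

# Hypothetical cut-off; the real module's criteria are not public
CONFIDENCE_THRESHOLD = 0.85

def predict_with_caution(model, features):
    """Return a diagnostic suggestion plus a caution flag when confidence is low.

    `model` is any fitted scikit-learn-style classifier exposing predict_proba.
    """
    probs = model.predict_proba([features])[0]
    best = int(np.argmax(probs))
    return {
        "prediction": model.classes_[best],
        "confidence": float(probs[best]),
        # Low confidence -> prompt the clinician to gather more information
        "caution": probs[best] < CONFIDENCE_THRESHOLD,
    }
```

The key design point is that the flag is part of the output, not an afterthought: the clinician sees “I’m not sure” alongside the answer instead of a bare prediction.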
Diversity in AI development is another significant focus. Models trained on narrow datasets can encode biases and exclude whole patient groups, a risk the researchers do not overlook. Through their work, the global consortium aims to incorporate more viewpoints, question existing datasets, and ensure that the relevant drivers of health outcomes are actually captured.
“We make them question the dataset. Are they confident about their training data and validation data? Do they think that there are patients that were excluded, unintentionally or intentionally, and how will that affect the model itself?” asks Celi. “We must be more deliberate and thoughtful in how we develop AI, not just in health care, but in every sector.”
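Celi’s questions translate naturally into a simple data audit. Purely as an illustration (audit_representation is a hypothetical helper, not code from the study), a short pandas sketch could compare how a cohort attribute is distributed in the training data versus a reference population, so that under-sampled or excluded groups stand out:

```python
import pandas as pd

def audit_representation(train_df, reference_df, column):
    """Compare a cohort attribute's distribution in training data vs. a reference population."""
    train_share = train_df[column].value_counts(normalize=True)
    reference_share = reference_df[column].value_counts(normalize=True)
    report = pd.DataFrame({"train": train_share, "reference": reference_share}).fillna(0.0)
    # A strongly negative gap suggests a group was under-sampled, or excluded, in data collection
    report["gap"] = report["train"] - report["reference"]
    return report.sort_values("gap")

# e.g. audit_representation(icu_train, census_population, "age_band")
```

A large negative gap for any group is exactly the kind of unintentional exclusion Celi warns about, and it affects the model long before any prediction is made.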
If you’re intrigued and want to explore AI automation options for your company, a good place to start is implementi.ai. For a more in-depth look at this research, you can read the original article here.