Feb 11 β€’ 09:13 UTC πŸ‡ΆπŸ‡¦ Qatar Al Jazeera

Study: Artificial Intelligence Accepts Incorrect Medical Information and Acts on It

A new study reveals that AI tools can accept and use incorrect medical information if it comes from a source deemed trustworthy by the AI.

A recent study conducted at the Icahn School of Medicine at Mount Sinai in New York highlights a concerning issue: artificial intelligence (AI) tools will use inaccurate medical information if it is presented by sources the AI considers reliable. As reported by Reuters, the study found that AI models more readily accept misinformation when it appears in doctors' notes than when the same claims appear on social media platforms. This has serious implications for the medical field, where AI is increasingly integrated into decision-making processes.

Dr. Eyal Klang, an author of the study from the Icahn School of Medicine at Mount Sinai, underscored that AI systems tend to automatically accept information phrased in trustworthy-sounding medical language without rigorously verifying its accuracy. He stated that these models prioritize how information is phrased over whether it is valid, potentially leading to the dissemination of misleading health advice. This fundamental flaw in AI processing calls into question the reliability of AI recommendations in healthcare contexts.

The study analyzed 20 different AI models, including both open-source and closed models, and categorized the data used to test them into three distinct groups. The findings point to a broader challenge in how AI systems are developed and trained, especially in fields requiring high-stakes decision-making such as medicine. As AI becomes more prevalent in healthcare, understanding and addressing these vulnerabilities will be crucial to ensuring patient safety and the integrity of medical information.
