‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies
Experts warn that ChatGPT Health is failing to recognise medical emergencies, potentially leading to harm or death.
A recent study has exposed serious shortcomings in ChatGPT Health, a feature of OpenAI's popular AI platform. Contrary to the company's promotional claims, the tool regularly fails to identify critical medical emergencies and often misses indications of suicidal ideation in users, prompting health experts to warn of avoidable harm and even fatal outcomes. The findings are all the more alarming given that millions of people turn to AI for health advice every day, raising serious questions about the reliability of automated health assessments.
The study, published in the journal Nature Medicine, is the first independent safety evaluation of ChatGPT Health. It found that the AI misclassified more than half of the medical scenarios presented to it as not requiring urgent care. Such systematic under-triage means users may not receive timely medical intervention when they need it most. The study's lead author, Dr. Ashwin Ramaswamy, pointed to the basic safety dilemma the platform poses, questioning whether it can appropriately guide someone through a genuine medical emergency.
As OpenAI integrates more health-related functions into ChatGPT, the findings demand urgent attention and remediation. With ChatGPT Health already rolled out to more than 40 million users seeking health guidance, the risks of relying on it are increasingly pronounced. Experts recommend that users exercise caution and understand the limitations of AI in healthcare, stressing that such tools should complement, not replace, professional medical advice. The study serves as a critical wake-up call for developers to ensure the safety and accuracy of health-related AI applications before they are adopted more widely.