Mar 6 • 07:01 UTC 🇧🇷 Brazil G1 (PT)

Does ChatGPT Work for Medical Emergencies? Study Finds Failures and Racial Bias, Raising Questions About AI Use

A study has found that ChatGPT Health underestimates the level of care needed in over half of medical emergencies and highlights issues like racial bias in its recommendations.

A recent study evaluating the use of ChatGPT Health for symptom analysis and medical advice has revealed significant shortcomings in its performance. According to the research, conducted by medical professionals at the Icahn School of Medicine at Mount Sinai in New York, the AI tool recommended a lower level of care than needed in more than half of the emergency cases assessed. This raises serious questions about the reliability of AI in critical healthcare situations, where accurate triage is vital.

The study, published in the prestigious scientific journal Nature, also identified racial bias in the responses generated by the tool. The researchers further noted that the AI's recommendations could be swayed by external comments from family members, skewing its assessment of symptoms. Such biases could exacerbate healthcare disparities by producing unequal treatment based on race or socioeconomic status, and they cast doubt on whether AI tools can be used effectively across diverse populations.

Ashwin Ramaswamy, the study's lead researcher, expressed concern about the implications of these findings, stressing that AI diagnostic errors are especially critical in severe medical cases. The results underscore the need for more rigorous testing and oversight of tools like ChatGPT Health before they can be safely integrated into clinical practice, particularly in emergency settings where accurate and prompt decision-making is paramount.