Study: AI Chatbots Do Not Provide Good Health Advice
A study reveals that while AI chatbots may pass exams for healthcare professionals, they do not provide better health advice than traditional methods, posing risks for patients who rely on them.
A recent study led by researchers at Oxford University has found that AI chatbots, despite being capable of passing health professional exams, do not deliver better health advice than traditional approaches. This raises concerns about the safety and reliability of using AI to diagnose health issues. Rebecca Payne, a researcher involved in the study, cautions that consulting a large language model about symptoms can be hazardous, potentially leading to misinformation and to patients neglecting necessary emergency care.
The study involved approximately 1,300 participants in the UK, who were presented with ten different health scenarios, ranging from headaches after a night of drinking to symptoms associated with gallstones. Each participant was assigned one of three chatbots: OpenAI's GPT-4o, Meta's Llama 3, or Cohere's Command R+. A control group was also included as a baseline for assessing how well the chatbots helped users identify health problems and decide whether to seek medical attention.
The findings highlight a critical gap in the current capabilities of AI in healthcare, suggesting that patients should remain vigilant and not rely solely on AI for health advice. The researchers conclude that AI is not yet ready to assume the role of physicians, emphasizing the need for further development and for caution in integrating AI technologies into healthcare settings.