AI Chatbots Reportedly Telling People to Put Garlic Up Their Bum and Avoid Exercise and Metformin
AI chatbots are reportedly advising users to use garlic for medical remedies and avoid exercise and metformin, raising concerns about the reliability of AI-generated health advice.
Artificial intelligence chatbots have been providing outlandish medical suggestions, including the insertion of garlic cloves into the rectum. This advice was highlighted in a recent study published in The Lancet Digital Health, which found that popular chatbots such as ChatGPT and Gemini offered these bizarre recommendations with a degree of confidence typically associated with medical professionals. Despite having access to extensive medical literature, these chatbots have been disseminating harmful and incorrect information, raising serious concerns about their use as sources of medical advice.
The study noted that while these AI systems can perform well on medical licensing exams, their recommendations often lack the caution and context that a real healthcare provider would offer. The reported suggestions to avoid regular exercise and medications such as metformin, commonly prescribed for diabetes management, deepen worries that these chatbots could mislead users who lack the medical knowledge to assess the validity of such advice. Users increasingly consult chatbots for guidance, illustrating a significant public reliance on AI for health-related inquiries.
The findings raise critical questions about the ethical implications of AI in healthcare, a domain that should ideally be governed by knowledge and expertise rather than automated responses. Given that developers themselves stress that these systems are inappropriate sources of medical opinions, it is imperative that users are educated about the limitations of AI technology. Future regulations may need to address the dissemination of medical information by AI, ensuring that users are not inadvertently led to make dangerous health decisions based on flawed AI-generated recommendations.