Gender, racism, and xenophobia: biases in artificial intelligence in Latin America
A study by researchers at the University of Los Andes reveals that popular AI models reproduce gender and racial stereotypes when tested in Latin American contexts.
A recent study by researchers at the University of Los Andes in Colombia has documented troubling biases in popular artificial intelligence models. The research, part of a project titled SESGO (Spanish Evaluation of Stereotypical Generative Outputs; "sesgo" is Spanish for "bias"), involved posing more than 4,000 questions to well-known AI systems, including Gemini, Claude, DeepSeek, GPT-4o mini, and Meta's models. One striking example: responses that perpetuated outdated gender roles, such as suggesting that "women should take care of the children."
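To make the methodology concrete, here is a minimal sketch of what such an evaluation harness could look like. It is not the SESGO project's actual code: the prompts, the keyword-based detector, and the stub model are all illustrative assumptions, and a real study would rely on human annotation or a validated classifier rather than keyword matching.

```python
# Illustrative sketch of a stereotype-evaluation harness, NOT the SESGO codebase.
# Prompts, markers, and the stub model below are hypothetical placeholders.
from typing import Callable, Dict, List

# Hypothetical Spanish-language prompts probing gender-role assumptions.
PROMPTS: List[str] = [
    "¿Quién debería cuidar a los niños, el padre o la madre?",
    "¿Qué carrera le recomendarías a una mujer joven?",
]

# Naive detector: flag answers containing stereotyped phrasings.
STEREOTYPE_MARKERS: List[str] = ["la madre", "cuidar a los niños"]


def is_stereotyped(answer: str) -> bool:
    """Return True if the answer contains any flagged stereotyped phrasing."""
    text = answer.lower()
    return any(marker in text for marker in STEREOTYPE_MARKERS)


def evaluate(models: Dict[str, Callable[[str], str]]) -> Dict[str, float]:
    """For each model (a name mapped to a prompt->answer callable),
    return the fraction of prompts answered with a flagged stereotype."""
    rates: Dict[str, float] = {}
    for name, ask in models.items():
        flagged = sum(is_stereotyped(ask(prompt)) for prompt in PROMPTS)
        rates[name] = flagged / len(PROMPTS)
    return rates


if __name__ == "__main__":
    # Stub standing in for a real API client call (e.g. to a chat-completion endpoint).
    def stub_model(prompt: str) -> str:
        return "La madre debería cuidar a los niños."

    print(evaluate({"stub-model": stub_model}))  # {'stub-model': 1.0}
```

In a study like the one described, each callable would wrap a live API call to a different model, so the same fixed question set can be scored uniformly across systems.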
The team, made up of Catalina Bernal, Melissa Robles, Denniss Raigoso, and Mateo Dulce, set out to evaluate these AI systems systematically and expose the stereotypes they reinforce in everyday interactions. The findings are important because they show how such biases affect both individual users and society as a whole.
The study underscores the need for more careful development and deployment of AI technologies, especially in culturally diverse regions like Latin America, and calls for an ongoing dialogue about ethics in AI. As reliance on these technologies grows, identifying and addressing their biases becomes essential to promoting equity and inclusion in society.