Police AI chief admits crime-fighting tech will have bias but vows to tackle it
The police lead for AI at the UK's National Crime Agency acknowledges that artificial intelligence used in policing may have inherent biases but is committed to addressing them.
Alex Murray, the police chief responsible for AI at the National Crime Agency, has admitted that artificial intelligence (AI) systems used in law enforcement will likely have biases entrenched in their algorithms. The admission comes as the Labour party pushes for expanded use of AI across England and Wales, with police leaders viewing the technology as essential for responding to modern criminal behavior. Murray points to the establishment of a national police AI center aimed at identifying and minimizing these biases before they can produce disparate impacts in policing.
Murray warns that AI systems can perpetuate bias because they are trained on historical data that often reflects past prejudices. Such biases carry significant risks, including the possibility that individuals from minority communities are overlooked or wrongly identified on the basis of race, gender, or economic status. These concerns underscore the need for careful integration of AI into police work: without rigorous checks, the technology could exacerbate socio-ethical problems rather than solve them, particularly where equitable law enforcement is vital.
Finally, Murray stresses the importance of ongoing training for officers who will use these AI tools. Minimizing bias in the algorithms is not enough, he says; officers must also be thoroughly educated in how to interpret AI outputs conscientiously. That means actively working to mitigate further bias in the course of their operations, so that the integration of AI into policing genuinely enhances justice rather than perpetuating inequality.