Recognizing content created by artificial intelligence is becoming increasingly difficult; Safer Internet Day reminds us of the risks of technology
On Safer Internet Day, participants struggled to distinguish human-written text from text generated by artificial intelligence, highlighting the growing challenges posed by AI.
On Safer Internet Day, an event dedicated to promoting safer online experiences, attendees took a Turing-style test to gauge their ability to distinguish texts written by humans from those generated by AI. The results showed that 40% of participants failed to identify human-written content, illustrating how blurred the line between human and AI communication has become. This underscores the challenges individuals face in the digital age, particularly with the rise of increasingly sophisticated AI technologies.
Maija Katkovska, director of the Latvian Safer Internet Centre, emphasized that the difficulty of recognizing AI-generated content extends beyond text: young people also struggle to discern manipulated or artificially created images. This poses significant internet safety risks, including potential harm from cyberbullying and digital harassment, as tools for collecting public information about minors become ever more accessible. The Safer Internet Centre's current initiatives focus on addressing these risks and raising young users' awareness of their digital footprints.
In response to these challenges, Katkovska encouraged discussions about teen accounts and the monitoring tools that can help young people navigate social media responsibly. The organization offers a wide range of educational materials aimed at strengthening media literacy among youth, promoting a better understanding of online safety, and helping young people manage their online presence by being mindful of what they post on the internet.