Artificial intelligence makes it easier for hackers to unmask anonymous social media accounts
A new study warns that artificial intelligence has made it significantly easier for malicious hackers to uncover the real identities of anonymous social media accounts.
Researchers Simon Lerman and Daniel Palecka have conducted a study showing that large language models (LLMs), the technology behind chatbots like ChatGPT, can match anonymous accounts to real identities across platforms by analyzing their public posts. The advance raises privacy concerns because LLMs make sophisticated deanonymization attacks cheap enough to run at scale, where previously they were economically unfeasible.
In their experiment, the researchers fed an AI system anonymous accounts and measured how much information it could gather about the people behind them. They illustrated the risk with a hypothetical user who posted about academic struggles and walking their dog: individually innocuous details like these can be aggregated into a profile that narrows down, and ultimately reveals, the user's identity, underscoring how fragile social media anonymity has become.
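The aggregation idea can be sketched in miniature. The toy Python below is not the researchers' method: a real attack would use an LLM to extract attributes from free text, whereas this sketch uses hard-coded keyword rules and entirely made-up posts, profiles, and usernames, purely to show how small clues combine into a linkage score.

```python
# Toy illustration of attribute-based account linkage.
# All posts, profiles, and names are hypothetical; a real attack would
# replace the keyword rules with LLM-driven attribute extraction.

POSTS_BY_ANON_USER = [
    "Failed my organic chemistry midterm again, grad school is rough.",
    "Morning walk with my golden retriever before lab.",
]

# Hypothetical public profiles, each with self-described attributes.
CANDIDATE_PROFILES = {
    "alice_example": {"grad student", "dog owner", "chemistry"},
    "bob_example": {"photographer", "cat owner"},
}

# Crude keyword-to-attribute rules standing in for an LLM's inference.
KEYWORD_ATTRIBUTES = {
    "midterm": "grad student",
    "grad school": "grad student",
    "chemistry": "chemistry",
    "retriever": "dog owner",
}

def extract_attributes(posts):
    """Collect coarse attributes hinted at across a user's posts."""
    found = set()
    for post in posts:
        text = post.lower()
        for keyword, attribute in KEYWORD_ATTRIBUTES.items():
            if keyword in text:
                found.add(attribute)
    return found

def best_match(attributes, profiles):
    """Rank candidate identities by attribute overlap with the anon user."""
    scores = {name: len(attributes & attrs) for name, attrs in profiles.items()}
    return max(scores, key=scores.get), scores

attrs = extract_attributes(POSTS_BY_ANON_USER)
match, scores = best_match(attrs, CANDIDATE_PROFILES)
```

Here the anonymous poster's two casual posts yield three attributes, which overlap far more with one candidate profile than the other. The point of the study is that LLMs perform this extraction-and-matching step automatically, over millions of candidates, at negligible cost.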
The findings call for a fundamental reassessment of what can be considered private on the internet: if AI can identify users this easily, the implications for personal privacy and online anonymity are serious. As AI capabilities continue to evolve, privacy advocates increasingly worry about the security of personal information and its potential misuse for identity theft or harassment.