The 'common' AI-related crimes Australians fear most
A report reveals that a majority of Australians are concerned about being victims of AI-enabled crimes, particularly deepfakes and hacking incidents.
A recent report from the Australian Institute of Criminology has highlighted widespread concern among Australians about AI-related crime. More than half of adults surveyed said they feared being harmed by AI technologies, and nearly as many worried about becoming victims of crimes facilitated by AI. These fears centred on the potential misuse of AI to track individuals' locations, gain unauthorised access to personal devices or accounts, and impersonate or deceive people.
The report indicates that concern is particularly acute around AI-generated deepfake content, which more than 30% of respondents cited as a major worry. This reflects growing awareness of how advanced AI technologies can facilitate fraud and threaten individuals' privacy and safety, and points to a need for greater public awareness and regulatory responses to these emerging threats.
Overall, as AI continues to permeate everyday life, the fears expressed by Australians underscore the need for proactive measures to mitigate the associated risks. Whether through education, policy-making, or technological safeguards, there is a pressing need to build trust and ensure that AI development proceeds responsibly and in the public interest.