Selective Memory AI: Major Platforms Ignore Publishers' Rights
A recent study highlights how major AI models fail to credit their sources, particularly Canadian journalism, raising concerns over intellectual property rights.
A report by Laura Hazard Owen of Nieman Lab discusses a recent Canadian study that examined the performance of major AI models, including ChatGPT, Claude, Gemini, and Grok. The study revealed a significant shortcoming in how these systems attribute news information to its original sources, showing a worrying lack of acknowledgment, especially given that they rely heavily on Canadian news content for training. The findings indicated that while the models answered questions about local politics and current affairs accurately, they seldom disclosed the original source to users.
The study, co-authored by McGill University professor Taylor Owen, points to the systematic absorption of Canadian journalism by these AI models. In a test based on 2,267 real news stories in both English and French, the models provided accurate responses in 74% of cases, yet an alarming 92% of those accurate answers lacked any attribution to the original source. The absence of proper sourcing raises ethical and legal questions about the use of content created by journalists and the potential infringement of intellectual property rights.
This situation creates an urgent need for greater accountability among AI systems and platforms that utilize journalistic content. As these technologies continue to develop, the implications for media and publishing rights become more pronounced, signaling a critical moment for regulators and the industry to address these challenges. The study highlights the importance of protecting the integrity of journalism in the age of artificial intelligence, urging major platforms to rethink their practices regarding sourcing and crediting original content creators.