Mar 18 • 18:32 UTC 🇨🇦 Canada National Post

Study finds AI trains itself on Canadian journalism but doesn’t always give credit

A study reveals that AI models trained on Canadian journalism often fail to credit their sources, raising concerns about the integrity of news reporting.

A recent study by the Quebec-based Centre for Media, Technology and Democracy has highlighted how artificial intelligence models are using Canadian journalism for training without adequately acknowledging the sources. The researchers tested four major AI models with 2,267 authentic Canadian news stories, in both English and French, to evaluate whether the models recognize and credit the original journalism they absorbed during training.

In the initial phase of the study, researchers from McGill University found that although the AI models were capable of processing the news stories, they often failed to provide proper attribution. This lack of credit poses significant ethical dilemmas for the industry, suggesting that the valuable contributions of journalists might go unrecognized in the age of AI, potentially undermining the future of journalism as a profession.

The second part of the study involved querying the AI models about 140 specific recent articles from seven Canadian outlets. Researchers aimed to determine whether the AI-generated responses could serve as acceptable substitutes for traditional journalism, and again assessed the extent to which the models credited their sources. The findings underscore growing concerns about AI's use of journalistic content and its implications for news credibility, as misleading or unattributed information could proliferate, posing risks to informed public discourse.
