Mar 17 • 05:53 UTC 🇫🇮 Finland Ilta-Sanomat

BBC: Social Media Giants Have Deliberately Allowed Harmful Content on Their Platforms

Social media giants Meta and TikTok have intentionally allowed harmful content on their platforms in order to increase user engagement, insiders have told the BBC.

Recent reports from the BBC allege that social media giants Meta (owner of Facebook and Instagram) and TikTok have knowingly allowed harmful content to circulate on their platforms in a bid to boost user engagement. Insiders claimed that company leadership instructed employees to permit more harmful content, including misogynistic posts and conspiracy theories; at Meta, this was reportedly done to compete with TikTok's growing popularity. According to an engineer who worked for Meta, the decision was driven by declining stock prices.

In the interview, Meta researcher Matt Motyl said that Instagram's Reels feature in particular has been a conduit for bullying, harassment, hate speech, and incitement to violence. These revelations suggest that prioritizing engagement over user safety could have serious consequences for public discourse and for the mental well-being of users exposed to damaging content. The findings echo wider concerns about the responsibility of social media companies to moderate content and protect users from harmful influences.

These allegations highlight a significant ethical dilemma facing social media platforms: balancing user engagement against the risks of allowing harmful content. As more users voice concern about their safety and the content they encounter online, pressure is growing on these companies to reassess their content moderation policies, particularly in an environment where competition with platforms like TikTok is fierce.
