Mar 21 • 06:00 UTC 🇧🇷 Brazil G1 (PT)

How TikTok and Meta Ignored Safety to Win the Engagement Battle, According to Former Employees

Former employees say TikTok and Meta prioritized engagement over user safety, allowing harmful content to spread on their platforms as the two companies competed for user attention.

In a recent exposé by the BBC, several whistleblowers and former employees of TikTok and Meta (the parent company of Facebook and Instagram) disclosed that the social media giants compromised user safety by permitting increasingly harmful content in their feeds. Internal research reportedly showed that provocative and controversial content drove higher user engagement, and executives knowingly prioritized such material as they competed for attention amid TikTok's explosive growth.

A Meta engineer said he received directives from company leadership to push more borderline harmful content into user feeds as a strategy to compete with rivals like TikTok. The directive reportedly extended to content involving misogyny and conspiracy theories, raising serious ethical concerns about the consequences of such decisions for user well-being and the broader social media environment.

The revelations underscore a troubling pattern in the social media industry, in which companies may sacrifice user safety to boost engagement metrics. As the impact of online content on mental health and society gains wider recognition, these admissions pose a challenge to regulators and raise questions about accountability across the industry.