Whistleblowers and former insiders say TikTok’s explosive rise set off a fierce battle among social media companies to capture user attention, pushing platforms such as Meta’s Facebook and Instagram to adjust their algorithms in ways that allowed more borderline or harmful content to circulate, prioritising engagement and market share even while publicly committing to stronger safety measures.

Social media companies made decisions that allowed more harmful content to appear in user feeds as they competed for engagement following TikTok’s rapid growth, whistleblowers and former insiders said in interviews with the BBC.
More than a dozen whistleblowers described how companies including TikTok and Meta, the owner of Facebook and Instagram, took risks with safety issues including violence, sexual blackmail and extremist content while competing for users’ attention.
A Meta engineer told the BBC that senior managers had instructed teams to allow more “borderline” harmful content in user feeds in order to compete with TikTok. The category includes material such as misogynistic posts and conspiracy theories.
“They sort of told us that it’s because the stock price is down,” the engineer said.
A TikTok employee also gave the broadcaster access to internal dashboards tracking user complaints and described how staff were sometimes instructed to prioritise cases involving politicians over reports of harmful posts involving children.
According to the employee, decisions were made to “maintain a strong relationship” with political figures to avoid potential regulation or bans.
The accounts appear in a BBC documentary, Inside the Rage Machine, which examines how social media companies responded to TikTok’s rise. TikTok’s short-video recommendation system transformed the industry and triggered intense competition among platforms to replicate its success.
Matt Motyl, a former senior researcher at Meta, said Instagram Reels, the company’s short video feature launched in 2020, was introduced without sufficient safeguards.
Internal research shared with the BBC indicated that comments on Reels had significantly higher rates of harmful content than other areas of Instagram, including bullying, harassment, hate speech and incitement to violence.
Motyl said he provided the broadcaster with internal research documents highlighting harms linked to social media algorithms.
One study cited by the BBC said Facebook’s recommendation system offered content creators a “path that maximizes profits at the expense of their audience’s wellbeing” and noted that the “current set of financial incentives our algorithms create does not appear to be aligned with our mission” of bringing people closer together.
The document added that Facebook could “choose to be idle and keep feeding users fast-food, but that only works for so long”.
Meta rejected the whistleblowers’ claims.
“Any suggestion that we deliberately amplify harmful content for financial gain is wrong,” the company said in a statement.
TikTok also disputed the allegations, calling them “fabricated claims” and saying it invests heavily in technology designed to prevent harmful content from being viewed.
Ruofan Ding, a former machine learning engineer who worked on TikTok’s recommendation system from 2020 to 2024, described the algorithm as difficult to fully control.
“We have no control of the deep-learning algorithm in itself,” Ding said.
Engineers working on recommendation systems typically focus on technical signals rather than the content itself, he said.
“To us, all the content is just an ID, a different number.”
Ding said engineers relied on content moderation teams to remove harmful posts before algorithms could promote them, comparing the relationship to different teams responsible for components of a car.
“There’s the team that are responsible for the acceleration, the engine, right? So we expect the team working on the braking system was doing a good job,” he said.
However, as TikTok frequently updated its algorithm to improve engagement and gain market share, Ding said he began noticing more “borderline” content appearing after users had watched videos for extended periods.
Borderline content generally refers to material that may be harmful but does not violate the law, including racist, misogynistic or conspiratorial posts.
Teenagers interviewed by the BBC said tools designed to prevent unwanted content from appearing in their feeds were often ineffective and that violent or hateful material continued to be recommended.
One teenager, Calum, said he had been “radicalised by algorithm” from the age of 14 after being repeatedly shown inflammatory content.
“The videos energised me, but not really in a good way,” he said. “They just made me very kind of angry.”
A TikTok trust and safety employee identified as Nick told the BBC that the volume of cases handled by moderation teams made it difficult to protect users effectively, particularly teenagers and children.
“If you’re feeling guilty on a daily basis because of what you’re instructed to do, at some point you can decide, should I say something?” he said.
Nick said internal dashboards sometimes showed cases involving politicians receiving higher priority than complaints related to minors experiencing harassment or sexual exploitation.
He said such prioritisation reflected a desire to maintain a “strong relationship” with governments and political leaders.
TikTok rejected the suggestion that political content was prioritised over child safety.
“Specialist workflows for certain issues do not result in the deprioritisation of child safety cases, which are handled by dedicated teams within parallel review structures,” the company said.
Nick advised parents to keep children away from the platform.
“Delete it, keep them as far away as possible from the app for as long as possible,” he said.
The competition intensified in 2020 when Meta launched Instagram Reels in response to TikTok’s rapid growth during the COVID-19 pandemic.
Motyl said the goal was to replicate TikTok’s highly engaging format as quickly as possible.
“Meta’s products are used by north of three billion people and the more time they can keep you on there, the more ads they sell, the more money they make,” he said.
“But it’s very important that they get this stuff right, because when they don’t, really bad things happen.”
Brandon Silverman, founder of the social media analytics platform CrowdTangle, which Facebook acquired in 2016, said Meta leadership appeared deeply concerned about competitive pressure from TikTok.
“When he feels like there are potential competitive forces there’s no amount of money that is too much,” Silverman said of Meta Chief Executive Mark Zuckerberg.
Silverman said safety teams sometimes struggled to secure approval for additional staff while the company expanded its Reels operation.
A former Meta engineer identified as Tim said that as the company sought to compete with TikTok, restrictions on borderline content were relaxed.
“People started becoming paranoid and reactive and they were like, let’s just do whatever we can to catch up,” he said.
Internal research cited by the BBC suggested algorithms tend to promote content that provokes strong reactions.
“Given the disproportionate engagement, our algorithms presume that users like that content and want more of it,” the study said.
Silverman said Meta’s leadership eventually adopted a more defensive position regarding criticism of its role in online polarisation.
“Nobody’s saying you’re responsible for all polarisation,” he said. “We’re just saying you contribute to it, and probably in ways where you don’t have to.”

