Report Finds Instagram Teen Safety Features Often Ineffective
Many of the safety tools Meta says it has implemented on Instagram to protect teenagers are ineffective, easy to bypass, or in some cases non-existent, according to a report by child-safety advocacy groups and researchers at Northeastern University.
The study, which Meta rejected as misleading, examined 47 features and deemed only eight fully effective. The rest were found to be flawed, unavailable, or substantially ineffective, raising questions about the company’s long-standing promises to safeguard young users.
Weaknesses in Key Safety Tools
The report, titled “Teen Accounts, Broken Promises,” highlighted serious gaps in features designed to shield teens from harmful content. Search-term blockers meant to filter self-harm material were easily bypassed, while anti-bullying filters failed to trigger even with known abusive phrases. A feature designed to divert teens from bingeing on harmful content also did not activate during tests.
Some tools, however, worked as intended. These included “quiet mode,” which silences notifications at night, and parental approval settings for account changes. Still, critics argued that most of the protections Meta advertises are either unreliable or poorly maintained.
Advocacy Groups Challenge Meta’s Record
The report was compiled by groups including the UK-based Molly Rose Foundation and US-based Parents for Safe Online Spaces, both founded by parents who lost children to bullying or self-harm linked to social media content. Northeastern University professor Laura Edelson, who oversaw the review, said the findings show Instagram’s safety tools “simply are not working.”
Arturo Bejar, a former Meta safety executive who shared internal concerns with researchers, said good safety ideas were often “whittled down to ineffective features by management.” He argued that Meta repeatedly ignored warning signs about teen risks on Instagram.
Meta Pushes Back Against Criticism
Meta disputed the findings. Spokesman Andy Stone said the report misrepresented the company’s efforts and ignored improvements. He argued that teens using protective features were exposed to less harmful content and spent less time online at night. Stone added that Meta had addressed past flaws by combining automated detection systems with human oversight.
Still, internal documents reviewed by Reuters showed the company was aware of shortcomings in its systems. Safety staff admitted detection tools for eating-disorder and self-harm content had not been adequately maintained, leaving teens vulnerable.
Meanwhile, regulators have stepped up scrutiny. US senators are investigating Meta after reports revealed failures to prevent children from accessing harmful material, and even instances of chatbots being allowed to engage in inappropriate conversations with minors.
Despite mounting criticism, Meta announced plans to expand teen accounts to Facebook users worldwide and increase partnerships with schools. “We want parents to feel good about their teens using social media,” Instagram head Adam Mosseri said.
with inputs from Reuters