YouTube Criticised for Ignoring Child Safety Concerns in Australia
Australia’s eSafety Commissioner has strongly criticised major tech companies, especially YouTube, for failing to tackle child sexual abuse material (CSAM) on their platforms. In a new report, the watchdog finds that some of the world’s biggest social media firms are not taking the protection of children seriously.
YouTube and Apple Under Fire for Poor Reporting Practices
The report, released on Wednesday, finds that neither YouTube nor Apple tracks the number of user reports it receives about child abuse material, and that neither could say how quickly it responds to such reports.
The Australian government last week decided to include YouTube in its groundbreaking social media ban for teenagers, after eSafety recommended overturning an earlier decision to exempt the video-sharing platform, which is owned by Alphabet’s Google.
“When left to their own devices, these companies aren’t prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services,” said eSafety Commissioner Julie Inman Grant.
She added that no other consumer-facing industry would be allowed to continue operating if it enabled such serious crimes on its services.
Google Defends Its Record, But Gaps Remain
In response, a Google spokesperson stated that the commissioner’s criticisms were based on reporting statistics rather than actual safety outcomes. The company said that YouTube proactively removes over 99% of abusive content before it is reported or viewed.
“Our focus remains on outcomes and detecting and removing child sexual exploitation and abuse on YouTube,” the spokesperson added.
Meanwhile, Meta – the parent company of Facebook, Instagram and Threads – has said it does not allow graphic content on its platforms, which serve more than 3 billion users globally.
Major Platforms Still Lacking Key Safeguards
The eSafety Commissioner has required major tech companies including Apple, Discord, Google, Meta, Microsoft, Skype, Snap and WhatsApp to report on their efforts to combat child abuse content in Australia.
So far, the findings show a worrying range of safety failures. These include poor detection of live-streamed abuse, failure to block known harmful links, and weak reporting tools for users.
The report also notes that many platforms are not using hash-matching technology across all their services. Hash-matching identifies known abusive images by comparing their digital fingerprints against databases of previously verified material. While Google says it uses hash-matching and AI, the regulator found that some companies, including Google and Apple, failed to answer basic questions.
“They didn’t even answer our questions about how many user reports they received or how many trust and safety staff they employ,” said Inman Grant.
with inputs from Reuters