Content Moderators Demand Mental Health Support Amid Rising Online Trauma
Content moderators from the Philippines to Turkey are uniting to demand better mental health support as they face growing exposure to disturbing online content. These workers, who remove harmful material from Meta's platforms and TikTok, report severe health problems, including anxiety, insomnia and, in some cases, suicidal thoughts.
A Filipino moderator said their nightly sleep had dropped from seven hours to four since starting the job, while others reported loss of appetite and heightened stress. Non-disclosure agreements prevent moderators from discussing specific content, but examples include videos of people being burned alive, images of dead babies in Gaza, and graphic photos from the Air India crash in June.
Pushing for Safer Working Conditions
Tech companies often outsource moderation to third parties but face growing pressure to address the mental toll on workers. Meta, which owns Facebook, WhatsApp, and Instagram, has previously settled lawsuits in the US and faces ongoing legal challenges in Kenya and Ghana over moderators’ mental health.
In April, content moderators launched the Global Trade Union Alliance of Content Moderators in Nairobi. Their first demand is for tech companies to adopt mental health protocols, including trauma training and exposure limits.
“They say we’re the ones protecting the internet, keeping kids safe online,” the Filipino moderator said. “But we are not protected enough.”
Rising Pressure in a Hazardous Job
Globally, tens of thousands of moderators spend up to 10 hours daily reviewing harmful content, often with significant psychological impacts. Berfin Sirin Tunc, a TikTok moderator in Turkey employed by Telus Digital, said she has suffered from nightmares, increased smoking, and a loss of focus.
Although some companies provide psychological support, moderators say it often lacks depth, amounting to brief breathing exercises or short “wellness breaks.” Even these breaks are hard to take, workers say, because of constant pressure to keep up with content queues.
“If you don’t return quickly, your team leader will ask where you are, saying the queue is growing,” Tunc said, adding that supervisors treat moderators like machines.
In statements, Telus Digital and Meta said employee well-being is a priority, pointing to the availability of 24/7 healthcare support.
Calls for Protocols and Fair Treatment
Moderators report an increase in violent videos, particularly on Facebook, after policy changes to support “free expression.” Telus Digital said distressing content represents under 5% of reviewed material, but the impact on workers remains significant.
Job insecurity adds to this stress. Many fear losing work as companies adopt AI-powered moderation. Earlier this year, Meta ended its US fact-checking programme and cut 2,000 Barcelona-based moderation jobs.
In Turkey, Tunc fears being fired after friends were dismissed for union activities. Fifteen Turkish workers are suing Telus, alleging they were dismissed for organising protests and joining a union. Telus said it respects workers’ right to organise and that the dismissals were based on performance, according to a May report by Turkey’s Labour Ministry.
Toward a Healthier Moderation System
Moderators in low-income countries say low wages, high workloads, and inadequate mental health support could be addressed if companies adopted the Global Alliance’s eight protocols. These include exposure limits, realistic quotas, living wages, 24/7 counselling, and the right to unionise.
Telus Digital said it is already aligned with these demands, while Meta said it conducts audits to ensure moderators receive the required support.
New EU regulations, including the Digital Services Act and the AI Act, may give moderators stronger legal protections by requiring tech firms to address risks to workers in their supply chains.
“Someone has to do this job and protect social media,” said Tunc. “With better conditions, we can do this better. If you feel like a human, you can work like a human.”
with inputs from Reuters