New Zealand Tool Targets Extremism Support Through AI Intervention
People who display violent extremist tendencies on AI platforms may soon receive targeted support through a new tool under development in New Zealand. The initiative aims to redirect such individuals towards human and chatbot-based deradicalisation assistance instead of simply cutting off access.
This effort reflects growing concern over platform safety, especially as lawsuits increasingly accuse AI companies of failing to prevent or even enabling violent behaviour. Consequently, developers and policymakers are exploring more proactive and supportive responses.
Expanding Crisis Support Into Extremism Prevention
ThroughLine, a startup already working with major AI companies, currently helps redirect users flagged for risks such as self-harm, domestic violence, or eating disorders. It is now exploring ways to extend this system to address violent extremism as well.
The company has begun discussions with an anti-extremism initiative formed after New Zealand’s 2019 terrorist attack. This collaboration would involve expert guidance while ThroughLine builds a specialised intervention chatbot. The idea is still in development, and the founder confirmed that no clear timeline has been established.
At present, ThroughLine operates a global network of 1,600 helplines across 180 countries. When AI systems detect signs of crisis, users are routed to appropriate local human-run services. However, the founder noted that the range of issues people now disclose has expanded significantly alongside the rise of AI chatbots.
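To make the routing model concrete, the sketch below shows one plausible way such a system could match a flagged crisis to a local human-run service. It is purely illustrative: the directory, function names, and crisis categories are assumptions for the example, not ThroughLine's actual API or data.

```python
# Hypothetical sketch of crisis-to-helpline routing, loosely modelled on the
# behaviour described above. All names and data are illustrative; none of
# this reflects ThroughLine's real implementation.
from dataclasses import dataclass

@dataclass
class Helpline:
    name: str
    country: str        # ISO country code, e.g. "NZ"
    categories: set     # crisis types this service handles

# A stand-in for a real directory of ~1,600 helplines across ~180 countries.
HELPLINE_DIRECTORY = [
    Helpline("Example Crisis Line", "NZ", {"self_harm", "domestic_violence"}),
    Helpline("Example Support Service", "NZ", {"eating_disorder"}),
]

def route_user(crisis_category: str, country: str) -> Helpline | None:
    """Return a local human-run service for the flagged crisis, if one exists."""
    for helpline in HELPLINE_DIRECTORY:
        if helpline.country == country and crisis_category in helpline.categories:
            return helpline
    return None  # a real deployment would fall back to a global default

service = route_user("self_harm", "NZ")
if service:
    print(f"Routing user to: {service.name}")
```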
Hybrid Model Combines AI and Human Support
The proposed extremism tool will likely use a hybrid approach. A chatbot trained specifically to respond to early signs of radicalisation would engage users, while real-world services would provide further support. Importantly, developers emphasise that the system will rely on expert input rather than generic training data.
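A minimal sketch of that hybrid hand-off logic follows, assuming an upstream risk classifier and a fixed escalation threshold; both the threshold value and the function names are hypothetical, chosen only to illustrate the chatbot-first, human-escalation pattern the developers describe.

```python
# Illustrative sketch of the hybrid model: a specialised chatbot engages
# first, and conversations showing stronger risk signals are handed off to
# real-world services. The threshold and signal names are assumptions, not
# details from the article.

ESCALATION_THRESHOLD = 0.7  # hypothetical cut-off for human handoff

def handle_message(text: str, risk_score: float) -> str:
    """Decide whether the intervention chatbot or a human service responds.

    risk_score is assumed to come from an upstream classifier trained with
    expert input rather than generic training data, per the developers.
    """
    if risk_score >= ESCALATION_THRESHOLD:
        return "handoff_to_human_service"  # route to a local support provider
    return "chatbot_engages"               # early-stage signals stay with the bot

print(handle_message("example message", risk_score=0.4))  # -> chatbot_engages
print(handle_message("example message", risk_score=0.9))  # -> handoff_to_human_service
```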
Testing is already underway, though the release date remains uncertain. Experts also believe the tool could support moderators, parents, and caregivers who want to address harmful behaviour online more effectively.
Researchers argue that this approach acknowledges a key issue: extremism is not only about harmful content but also about the relationships and interactions that shape behaviour. The tool's effectiveness will therefore depend heavily on the quality of follow-up systems and support structures.
Balancing Safety With User Engagement
Developers continue to evaluate whether features such as alerting authorities should be included. They aim to avoid responses that might escalate harmful behaviour while still ensuring safety.
At the same time, there is concern that overly strict moderation could push individuals towards less regulated platforms. Studies suggest that users already migrate to alternative spaces when mainstream platforms increase enforcement.
The founder stressed that many individuals share sensitive thoughts with AI systems that they would not express elsewhere. Abruptly shutting down those conversations may therefore leave them unsupported and potentially at greater risk.
This evolving approach highlights a shift in strategy. Rather than simply blocking harmful users, platforms may increasingly focus on guiding them towards meaningful help and intervention.
With inputs from Reuters

