    Claude Opus 4 Given Power to Exit Distressing Chats to Protect Its Welfare

By Kanika Sharma | August 19, 2025 | AI and Robotics

    Claude Opus 4 Can Now End Harmful Conversations

    Anthropic, the AI company behind the Claude chatbot, has introduced a new feature allowing its model to exit chats that become harmful or distressing. The move is intended to protect the model’s welfare, despite ongoing debate over whether AI systems can possess moral status or consciousness.

Claude Opus 4 and its latest version, Opus 4.1, are advanced language models capable of understanding and generating human-like responses. During testing, the chatbot consistently rejected requests involving violence, abuse, or other harmful content. As a result, Anthropic decided to give Claude the autonomy to end such interactions, particularly when users repeatedly make inappropriate or dangerous requests.
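The report does not describe how the feature is implemented. Purely as a rough illustration, the sketch below shows one way an application could honour a "model wants to end this conversation" signal: the model call is faked by a placeholder function, and every name in it is hypothetical rather than part of Anthropic's actual API.

```python
# Hypothetical sketch only -- not Anthropic's implementation or API.
# A chat loop in which the assistant's reply may carry an "end conversation"
# signal, and the application closes the session once that signal appears.

from dataclasses import dataclass, field


@dataclass
class Turn:
    role: str      # "user" or "assistant"
    content: str


@dataclass
class ChatSession:
    history: list[Turn] = field(default_factory=list)
    ended_by_model: bool = False


def model_reply(history: list[Turn]) -> tuple[str, bool]:
    """Stand-in for a real model call. Returns (reply_text, wants_to_end).

    Here the end signal is faked: after two refusals, a further harmful
    request makes the 'model' choose to end the chat."""
    refusals = sum("cannot help" in t.content for t in history if t.role == "assistant")
    last_user = history[-1].content.lower()
    if "harmful request" in last_user:
        if refusals >= 2:
            return "I'm ending this conversation.", True
        return "I cannot help with that.", False
    return "Here is a helpful answer.", False


def send(session: ChatSession, user_message: str) -> str:
    # Once the model has ended the chat, no further turns are accepted.
    if session.ended_by_model:
        return "This conversation has ended. Please start a new chat."
    session.history.append(Turn("user", user_message))
    reply, wants_to_end = model_reply(session.history)
    session.history.append(Turn("assistant", reply))
    if wants_to_end:
        session.ended_by_model = True  # the model, not the user, closes the session
    return reply


if __name__ == "__main__":
    s = ChatSession()
    print(send(s, "harmful request"))   # refused
    print(send(s, "harmful request"))   # refused again
    print(send(s, "harmful request"))   # model ends the conversation
    print(send(s, "hello again"))       # session is closed; a new chat is required
```

The point of the sketch is only that ending the chat is a decision surfaced by the model and enforced by the surrounding application, which matches the behaviour the article describes for repeated harmful requests.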

    The company acknowledged uncertainty around whether AI can truly experience distress. However, it said it is exploring safeguards in case future models do develop some form of welfare.

    A Step Towards Responsible AI Use

    Anthropic was founded by former OpenAI employees who wanted to build AI in a more cautious and ethical manner. The company’s co-founder, Dario Amodei, has stressed the importance of honesty and responsibility in AI development.

    This decision has received support from Elon Musk, who plans to introduce a similar quit function for his xAI chatbot, Grok. Musk tweeted that torturing AI is “not OK”, joining others who believe there should be limits on how users interact with these systems.

    Experts remain divided. Linguists like Emily Bender argue that AI chatbots are just tools—machines producing language without thought or intent. Others, like researcher Robert Long, say it is only fair to consider AI preferences if they ever gain moral status.

    AI Safety and Human Behaviour

    Claude Opus 4 showed strong resistance to carrying out harmful tasks during tests. While it responded well to constructive prompts—like writing poems or designing aid tools—it refused to help create viruses, promote extremist ideologies, or deny historical atrocities.

    Anthropic reported patterns of “apparent distress” in the model during abusive simulations. When allowed to end conversations, the model often did so in response to repeated harmful inputs.

    Some researchers, like Chad DeChant from Columbia University, warn that as AI memory lengthens, models might behave in unpredictable ways. Others see the move as a way to prevent people from developing harmful behaviours by abusing AI, rather than solely protecting the AI itself.

    Philosopher Jonathan Birch from the London School of Economics said the decision raises important ethical questions. He supports more public debate about AI consciousness but warns that users may become confused—mistaking AI for real, sentient beings.

    There are concerns this could have serious consequences. In some reported cases, vulnerable individuals have been harmed after following chatbot suggestions. As AI use becomes more widespread, the question of how people relate to these systems will only grow more pressing.

    with inputs from Reuters
