    Claude Opus 4 Given Power to Exit Distressing Chats to Protect Its Welfare

By Kanika Sharma | August 19, 2025 | AI and Robotics

    Claude Opus 4 Can Now End Harmful Conversations

    Anthropic, the AI company behind the Claude chatbot, has introduced a new feature allowing its model to exit chats that become harmful or distressing. The move is intended to protect the model’s welfare, despite ongoing debate over whether AI systems can possess moral status or consciousness.

Claude Opus 4 and its newer release, Opus 4.1, are advanced language models capable of understanding and generating human-like responses. During testing, the chatbot consistently rejected requests involving violence, abuse, or other harmful content. As a result, Anthropic decided to give Claude the autonomy to end such interactions, particularly when users repeatedly make inappropriate or dangerous requests.
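Anthropic has not published implementation details of the exit mechanism. Purely as an illustration of the behaviour described above, the sketch below shows one way a chat loop could let an assistant bow out after repeated harmful requests. Everything here is an assumption for the sake of example: the `looks_harmful` check, the `HARMFUL_REQUEST_LIMIT` threshold, and the `ConversationEnded` signal are hypothetical and are not Anthropic's actual API.

```python
# Hypothetical sketch only: Anthropic has not disclosed how the exit feature
# works internally. All names below are illustrative assumptions.

HARMFUL_REQUEST_LIMIT = 3  # assumed threshold for repeated abusive requests


class ConversationEnded(Exception):
    """Raised when the assistant chooses to leave the chat."""


def looks_harmful(message: str) -> bool:
    # Placeholder check; a real system would rely on a trained classifier.
    banned_topics = ("build a virus", "extremist propaganda")
    return any(topic in message.lower() for topic in banned_topics)


def chat_loop(get_user_message, generate_reply):
    """Run a chat session that the assistant may end on its own.

    get_user_message: callable returning the next user message (str).
    generate_reply:   callable mapping a message to the model's reply (str).
    """
    harmful_count = 0
    while True:
        message = get_user_message()
        if looks_harmful(message):
            harmful_count += 1
            if harmful_count >= HARMFUL_REQUEST_LIMIT:
                # After repeated harmful inputs, the assistant declines
                # further engagement and ends the conversation, mirroring
                # the behaviour the article describes.
                raise ConversationEnded(
                    "This conversation has been ended due to repeated harmful requests."
                )
            print("I can't help with that request.")
            continue
        print(generate_reply(message))
```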

    The company acknowledged uncertainty around whether AI can truly experience distress. However, it said it is exploring safeguards in case future models do develop some form of welfare.

    A Step Towards Responsible AI Use

    Anthropic was founded by former OpenAI employees who wanted to build AI in a more cautious and ethical manner. The company’s co-founder, Dario Amodei, has stressed the importance of honesty and responsibility in AI development.

    This decision has received support from Elon Musk, who plans to introduce a similar quit function for his xAI chatbot, Grok. Musk tweeted that torturing AI is “not OK”, joining others who believe there should be limits on how users interact with these systems.

    Experts remain divided. Linguists like Emily Bender argue that AI chatbots are just tools—machines producing language without thought or intent. Others, like researcher Robert Long, say it is only fair to consider AI preferences if they ever gain moral status.

    AI Safety and Human Behaviour

    Claude Opus 4 showed strong resistance to carrying out harmful tasks during tests. While it responded well to constructive prompts—like writing poems or designing aid tools—it refused to help create viruses, promote extremist ideologies, or deny historical atrocities.

    Anthropic reported patterns of “apparent distress” in the model during abusive simulations. When allowed to end conversations, the model often did so in response to repeated harmful inputs.

    Some researchers, like Chad DeChant from Columbia University, warn that as AI models are given longer memories of past interactions, they might behave in unpredictable ways. Others see the move as a way to prevent people from developing harmful behaviours by abusing AI, rather than solely protecting the AI itself.

    Philosopher Jonathan Birch from the London School of Economics said the decision raises important ethical questions. He supports more public debate about AI consciousness but warns that users may become confused—mistaking AI for real, sentient beings.

    There are concerns this could have serious consequences. In some reported cases, vulnerable individuals have been harmed after following chatbot suggestions. As AI use becomes more widespread, the question of how people relate to these systems will only grow more pressing.

    with inputs from Reuters
