
Anthropic Updates Claude “Constitution” and Raises Question of AI Consciousness

2026/01/22 18:41
3 min read
  • Anthropic has published a revised version of the Claude Constitution document.
  • The update places greater emphasis on safety, ethics, and user benefit.
  • The document directly raises the question of the possible moral status of AI.

Anthropic has released an updated version of the Claude Constitution, an internal document describing the principles and limitations of the chatbot of the same name. The publication coincides with a speech by the company’s CEO, Dario Amodei, at the World Economic Forum in Davos.

Anthropic builds its models with an approach it calls “constitutional AI.” With this approach, the model is trained not on direct feedback from people but against a set of pre-formulated principles.

These principles set the framework for acceptable system behavior and serve as the basis for the model’s self-control.
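The loop implied above can be illustrated with a small sketch. Everything here is a hypothetical toy, not Anthropic’s implementation: in a real constitutional-AI pipeline, the critique and revision steps are performed by the language model itself in natural language, and the revised outputs are then used as training data. The function names, principles, and string-matching “critique” below are invented purely for illustration.

```python
# Toy sketch of a constitutional critique-and-revise pass.
# All names and logic are hypothetical; a real system delegates
# the critique() and revise() steps to a language model.

PRINCIPLES = [
    "avoid harmful instructions",
    "refer users at risk to professional help",
]

def critique(response: str, principle: str) -> bool:
    """Toy critique: flag the draft if it does not address the principle.

    Here this is a crude keyword check; in constitutional AI the model
    is prompted to judge its own draft against the principle.
    """
    return principle.split()[0] not in response

def revise(response: str, principle: str) -> str:
    """Toy revision: rewrite the draft to address the flagged principle."""
    return f"{response} [revised to {principle}]"

def constitutional_pass(draft: str, principles=PRINCIPLES) -> str:
    """Run one critique-and-revise pass over every principle."""
    for principle in principles:
        if critique(draft, principle):
            draft = revise(draft, principle)
    return draft

result = constitutional_pass("Here is some advice.")
```

The key design point the sketch captures is that the principles act as the supervision signal: the draft is checked and corrected against the written constitution rather than against per-example human labels.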

The updated version of the Claude Constitution retains the basic structure of the previous document, first published in 2023, but significantly expands its content. The document now runs to about 80 pages and is divided into four key sections:

  • safety;
  • general ethics;
  • compliance with Anthropic’s internal guidelines;
  • user assistance.

In the section on safety, the company emphasizes that Claude should avoid scenarios that could harm the user. In addition, in situations involving risk to life or mental health, it should refer people to professional help.

The document separately notes that discussing topics related to the creation of biological weapons and other high-risk areas is prohibited.

The ethical section of the document focuses not on abstract theories, but on the practical application of norms in real-life situations. Anthropic states that Claude’s ability to act “correctly” in a specific context is more important to it than formal adherence to philosophical models.

Special attention is paid to the issue of usefulness. According to the document, Claude should consider not only the immediate requests of users but also their long-term interests and well-being, balancing the accuracy of the response, security, and the practical value of the information.

The most unusual element of the updated version is the discussion of the possible moral status of AI. The document states that this issue remains highly uncertain. However, Anthropic considers it serious enough to include in its long-term model development strategy.

The Claude Constitution update fits Anthropic’s strategy of positioning itself as a more cautious, safety-oriented player in the AI market, according to the company’s statement.

As a reminder, we wrote that Anthropic CEO Dario Amodei called the export of Nvidia chips to China “the equivalent of supplying nuclear weapons.”
