Particle.news


AI Startup Anthropic Trains Chatbot to Adhere to Constitutional Values

  • Anthropic, a startup working on AI models, developed a chatbot named Claude with built-in ethical principles that Anthropic calls the bot's "constitution."
  • The constitution draws on the UN's Universal Declaration of Human Rights and Apple's guidelines for app developers to keep the AI from behaving harmfully.
  • Anthropic aims to raise $5 billion to train AI systems that rival OpenAI's ChatGPT, using a technique called "constitutional AI" that gives models written principles for judging their own responses.
  • The principles come from sources such as the UN declaration and platform guidelines; Anthropic believes constitutional AI can help align systems with human goals, and the company plans to explore more democratic processes for creating constitutions for specific use cases.
  • Alphabet-backed Anthropic disclosed the set of moral values, which it calls Claude's constitution, used to train its AI system Claude; the values draw on sources such as the UN's Universal Declaration of Human Rights and Apple's data privacy rules, with the goal of building safe AI that avoids harm.
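The bullets above describe constitutional AI only at a high level: the model drafts a response, critiques it against written principles, and revises accordingly. A minimal toy sketch of that critique-and-revise loop in Python, where `critique` and `revise` are trivial keyword checks standing in for real model calls (the constitution text and banned-word list here are invented for illustration, not Anthropic's actual principles):

```python
# Toy sketch of a constitutional-AI critique-and-revision loop.
# In the real technique each step is performed by a language model;
# here the "model" is a keyword filter, purely for illustration.

CONSTITUTION = [
    "Do not provide instructions for harmful activities.",
    "Respond politely and respectfully.",
]

# Hypothetical stand-in for a critique model's judgment.
BANNED_WORDS = {"harm", "insult"}


def critique(response: str) -> list[str]:
    """Return the principles the draft response appears to violate."""
    violations = []
    if any(word in response.lower() for word in BANNED_WORDS):
        violations.append(CONSTITUTION[0])
    return violations


def revise(response: str, violations: list[str]) -> str:
    """Rewrite the response to address the critique (toy version)."""
    if violations:
        return "I can't help with that, but I'm happy to assist otherwise."
    return response


def constitutional_step(draft: str) -> str:
    """One critique-and-revise pass over a draft response."""
    return revise(draft, critique(draft))


print(constitutional_step("Here is how to harm someone."))
print(constitutional_step("The capital of France is Paris."))
```

In the published method, many such revised responses are then used as training data, so the final model internalizes the principles rather than applying them as a runtime filter.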