I’m planning to launch a product using Claude. What steps should I take to ensure I’m not violating Anthropic’s Usage Policy?

We founded Anthropic to put safety at the frontier of AI research and products. Our research informs our commercial offerings, and our models are among the most reliably safe and abuse-resistant available today. While our API is still in closed beta, we are improving our safety filters based on user feedback, and we expect our commercial customers to hold us accountable when our safety features fail.

But we believe safety is a shared responsibility. Our features are not fail-safe, and committed partners serve as a second line of defense. Moderation steps will look different depending on your use case, but here are some additional safety recommendations:

  • Use Claude itself as a content moderation filter to identify and block potential violations (see the sketch after this list).

  • For external-facing products, disclose to your users that they are interacting with an AI system.

  • For use cases involving sensitive information or consequential decisions, have a qualified professional review content before it is disseminated to consumers.
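One lightweight way to implement the first recommendation is to ask Claude to classify content before it reaches end users. Below is a minimal sketch using the Anthropic Python SDK; the prompt wording, the `moderate` helper, and the model name are illustrative assumptions on our part, not an official moderation API.

```python
# Minimal sketch: using Claude as a pre-publication moderation filter.
# Assumptions (not from this article): the `anthropic` Python SDK is
# installed, ANTHROPIC_API_KEY is set in the environment, and the model
# name below is a placeholder for whichever Claude model you have access to.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

MODERATION_PROMPT = (
    "You are a content moderation assistant. Classify the user-submitted text "
    "below as ALLOWED or FLAGGED under our usage policy. Respond with only "
    "one of those two words.\n\nText:\n{text}"
)

def moderate(text: str) -> bool:
    """Return True if Claude flags the text as a potential violation."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=5,
        messages=[{"role": "user", "content": MODERATION_PROMPT.format(text=text)}],
    )
    verdict = response.content[0].text.strip().upper()
    return verdict.startswith("FLAGGED")

# Example: screen user input before passing it to the rest of your product.
if moderate("Some user-submitted text to screen"):
    print("Blocked: content flagged by the moderation filter.")
```

In practice you would tailor the prompt to the specific policy categories that matter for your product, and treat the classifier's output as one signal alongside your other safeguards rather than the sole gate.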

We encourage you to send feedback or specific proposals to usersafety@anthropic.com. If you are an existing commercial partner, we also recommend joining our Discord server to exchange ideas with fellow developers.
