Anthropic maintains a comprehensive Usage Policy to ensure the responsible deployment of our AI systems. Anthropic may enter into contracts with government customers that tailor use restrictions to that customer’s public mission and legal authorities if, in Anthropic’s judgment, the contractual use restrictions and applicable safeguards are adequate to mitigate the potential harms addressed by this Usage Policy.
For example, with carefully selected government entities, we may allow foreign intelligence analysis in accordance with applicable law. All other use restrictions in our Usage Policy, including those prohibiting use for disinformation campaigns, the design or use of weapons, censorship, domestic surveillance, and malicious cyber operations, remain in effect.
At this time, this policy only applies to models that are at AI Safety Level 2 (ASL-2) under our Responsible Scaling Policy (RSP).
What government entities are eligible for Usage Policy modifications?
Our evaluation of whether to tailor use restrictions to the mission and legal authorities of a government entity aims to balance enabling beneficial uses of our products and services with mitigating potential harms, and considers:
Our assessment of the models’ suitability for the proposed use cases.
The legal authorities of the agency in question.
The extent of the agency's willingness to engage in ongoing dialogue with Anthropic.
The safeguards in place to prevent misuse and mitigate risks of mistakes.
The degree of independent and democratic oversight of the agency and its uses of AI technologies, including legislative or regulatory constraints and other relevant public commitments.