Responsible Use of Anthropic's Models: Guidelines for Organizations Serving Minors

At Anthropic, we recognize the unique vulnerabilities and needs of children in digital spaces. To create a safer digital environment and mitigate risks, organizations whose products allow minors to interact directly with Anthropic's API(s) should implement the following safeguards:

1. Additional Technical Measures

Organizations with products serving minors should implement additional safety features tailored to their unique use cases, as they are best situated to understand the specific ways their end users may interact with products that incorporate Anthropic's services. These safety measures may include, but are not limited to:

  • Age verification systems to ensure only intended users can access the product

  • Content moderation and filtering to block inappropriate or harmful content

  • Monitoring and reporting mechanisms to identify and address potential issues

  • Educational resources and guidance for minors on safe and responsible use of the product
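As a minimal illustration of the first bullet, an age gate might compute a user's age from a verified birthdate and route under-18 users into the product's minor-safeguard experience. This is a sketch only: the `ADULT_AGE` threshold, function names, and the use of a birthdate are assumptions for illustration, and real age verification typically relies on an identity or parental-consent provider rather than self-reported dates.

```python
from datetime import date

ADULT_AGE = 18  # assumed threshold; the applicable age is jurisdiction-dependent

def age_on(birthdate: date, today: date) -> int:
    """Return whole years elapsed between birthdate and today."""
    years = today.year - birthdate.year
    # Subtract one year if the birthday has not yet occurred this year.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def requires_minor_safeguards(birthdate: date, today: date) -> bool:
    """True when the user is under ADULT_AGE, so minor safeguards apply."""
    return age_on(birthdate, today) < ADULT_AGE
```

In practice this check would sit alongside, not replace, the other measures listed above (content moderation, monitoring, and educational resources).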

In addition to these organization-specific measures, Anthropic may make available technical measures intended to tailor product experiences for certain end users, including minors. For example, we may provide a child-safety system prompt, which organizations serving minors should implement as part of a comprehensive suite of safety measures. While helpful, these technical measures are not infallible; they should be used in conjunction with the organization's own safety features to ensure a robust approach to child safety.

2. Regulatory Compliance

It is the responsibility of organizations to comply with all applicable child safety and data privacy regulations, such as the Children's Online Privacy Protection Act (COPPA) in the United States. Compliance with these regulations should be clearly stated on the organization's website or similar public-facing documentation.

3. Disclosure Requirements

Organizations must disclose to their users that they are interacting with an AI system rather than a human.

Anthropic will periodically audit organizations for compliance with these safeguards. If an audit finds that your organization has a high violation rate and has not implemented these safety recommendations, we may require you to implement them. Failure to do so when requested, or a continued high violation rate, may lead to the suspension or termination of your account.
