Claude is providing incorrect or misleading responses. What's going on?

In its attempt to be a helpful assistant, Claude can occasionally produce responses that are incorrect or misleading.

This is known as "hallucinating" information, and it's a byproduct of some of the current limitations of frontier generative AI models like Claude. For example, in some subject areas Claude may not have been trained on the most up-to-date information and can get confused when prompted about current events. Claude can also produce quotes that look authoritative or sound convincing but are not grounded in fact. In other words, Claude can write things that appear correct yet are entirely mistaken.

You should not rely on Claude as a singular source of truth, and you should carefully scrutinize any high-stakes advice it gives.

You can use the thumbs down button to let us know if a particular response was unhelpful, or write to us at feedback@anthropic.com with your thoughts or suggestions.

To learn more about how Anthropic's technology works and our research on developing safer, more steerable, and more reliable models, we recommend visiting https://www.anthropic.com/research.
