Discover answers and insights from the Anthropic team
Search results for: token
Cost and Usage Reporting in Console
…choose specific models, months, or API keys. Visual Representation: A chart with input and output token… You can see the chart, token cost, and tool use costs, which will update based on your selections.
How do I log out of all active sessions?
To remove a token and log out of Claude Code, click the trash can icon.
What kinds of documents can I upload to Claude.ai?
Text extraction only, except for multimodal PDFs. Note: Additional token limits apply to these limits.
Building Custom Connectors via Remote MCP Servers
…some solutions like Cloudflare provide remote MCP server hosting with built-in autoscaling, OAuth token…
Using the GitHub Integration
…are central to your current task or project, but avoid selecting unnecessary files to keep within token…
About the Development Partner Program
…tokens. Cache read: -$0.09 per million tokens. To be eligible for this discount, OAuth must… Key information: Only Claude Code input and output tokens from Anthropic’s first-party API are shared… standard API pricing on Claude 3.5 Sonnet, Claude 3.7 Sonnet, and Claude 4.0 Sonnet. Claude Code input tokens…
How large is the Context Window on paid Claude.ai plans?
Claude can ingest 200K+ tokens (about 500 pages of text or more) when using a paid Claude.ai plan.
How large is the Anthropic API’s context window?
The Anthropic API can ingest 200K+ tokens (about 500 pages of text or more).
What is the maximum prompt length?
The context window for Claude Pro and our API is currently 200k+ tokens (about 500 pages of text or 100…
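Because the context window is a hard budget, it can help to count a prompt's tokens before sending it. Below is a minimal sketch using the Anthropic Python SDK's token counting endpoint; the model name, file name, and the 200,000-token figure are illustrative assumptions, not values taken from these articles.

```python
# Minimal sketch, assuming the `anthropic` Python SDK is installed and
# ANTHROPIC_API_KEY is set in the environment. Model name, file name, and
# the 200,000-token budget are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

CONTEXT_WINDOW = 200_000  # rough budget (~500 pages of text)

with open("report.txt", encoding="utf-8") as f:
    long_document = f.read()

# Count the input tokens the request would use before actually sending it.
count = client.messages.count_tokens(
    model="claude-3-5-sonnet-latest",
    messages=[
        {"role": "user", "content": f"Summarize this document:\n\n{long_document}"}
    ],
)

if count.input_tokens <= CONTEXT_WINDOW:
    print(f"{count.input_tokens} input tokens: fits within the context window.")
else:
    print(f"{count.input_tokens} input tokens: too large, trim the document first.")
```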
Anthropic MCP Directory Policy
MCP servers should be frugal with their use of tokens. … The amount of tokens a given tool call uses should be roughly commensurate with the complexity or impact…
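As a rough illustration of that frugality guideline, a tool can return only the few lines a caller actually needs rather than an entire file. The sketch below assumes the official `mcp` Python SDK (FastMCP); the server name, tool, log file, and truncation limit are hypothetical and not part of the directory policy itself.

```python
# Hypothetical sketch of a token-frugal MCP tool, assuming the official
# `mcp` Python SDK (FastMCP). Names and limits are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("log-search")

MAX_CHARS = 2_000  # keep tool output roughly proportional to what the call needs


@mcp.tool()
def search_logs(query: str, limit: int = 5) -> str:
    """Return only the first few matching log lines instead of the whole file."""
    matches = []
    with open("app.log", encoding="utf-8") as f:
        for line in f:
            if query in line:
                matches.append(line.rstrip())
            if len(matches) >= limit:
                break
    result = "\n".join(matches) or "No matches."
    # Truncate defensively so a single tool call never floods the context window.
    return result[:MAX_CHARS]


if __name__ == "__main__":
    mcp.run()
```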