Claude vs Gemini: Anthropic vs Google API Pricing 2026
Compare Anthropic Claude and Google Gemini API pricing. See Claude Opus, Sonnet, Haiku vs Gemini 2.5 Pro, Flash on cost, benchmarks, and value.
Pricing Comparison
| Model | Tier | Input $/1M | Output $/1M |
|---|---|---|---|
| Anthropic (Claude) | | | |
| Claude Opus 4.6 | Flagship | $5.00 | $25.00 |
| Claude Sonnet 4.6 | Mid-tier | $3.00 | $15.00 |
| Claude Opus 4.5 | Flagship | $5.00 | $25.00 |
| Claude Sonnet 4.5 | Mid-tier | $3.00 | $15.00 |
| Claude Haiku 4.5 | Budget | $1.00 | $5.00 |
| Claude Opus 4.1 | Flagship | $15.00 | $75.00 |
| Claude Opus 4 | Flagship | $15.00 | $75.00 |
| Claude Sonnet 4 | Mid-tier | $3.00 | $15.00 |
| Claude Sonnet 3.7 | Mid-tier | $3.00 | $15.00 |
| Claude Haiku 3.5 | Budget | $0.80 | $4.00 |
| Claude 3 Haiku | Budget | $0.25 | $1.25 |
| Google (Gemini) | | | |
| Gemini 3.1 Pro Preview | Flagship | $2.00 | $12.00 |
| Gemini 3 Flash Preview | Mid-tier | $0.50 | $3.00 |
| Gemini 2.5 Pro | Flagship | $1.25 | $10.00 |
| Gemini 2.5 Flash | Mid-tier | $0.30 | $2.50 |
| Gemini 2.5 Flash-Lite | Budget | $0.10 | $0.40 |
| Gemini 2.0 Flash | Budget | $0.10 | $0.40 |
| Gemini 2.0 Flash-Lite | Budget | $0.07 | $0.30 |
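To make the per-token prices concrete, here is a minimal sketch of the cost arithmetic (Python). The prices come from the table above; the 10M input / 2M output token monthly workload is a hypothetical assumption for illustration.

```python
# $ per 1M tokens (input, output), taken from the pricing table above.
PRICES = {
    "Claude Opus 4.6": (5.00, 25.00),
    "Claude Sonnet 4.6": (3.00, 15.00),
    "Claude Haiku 4.5": (1.00, 5.00),
    "Gemini 2.5 Pro": (1.25, 10.00),
    "Gemini 2.5 Flash": (0.30, 2.50),
    "Gemini 2.5 Flash-Lite": (0.10, 0.40),
}

def monthly_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost for the given token volumes at list prices."""
    inp, out = PRICES[model]
    return input_tokens / 1e6 * inp + output_tokens / 1e6 * out

# Hypothetical workload: 10M input + 2M output tokens per month.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 10_000_000, 2_000_000):,.2f}")
```

For that workload mix, Claude Opus 4.6 lands at $100/month versus $8/month for Gemini 2.5 Flash, which is where the cost-ratio claims later in this article come from.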
Quality Benchmarks
Comparing Claude Opus 4.6 vs Gemini 3.1 Pro Preview — the highest-rated model from each provider by Arena ELO.
Key Differences
- Price range (input $/1M): Claude spans $0.25–$15.00; Gemini spans $0.07–$2.00
- Max context window: Claude tops out at 200K tokens; Gemini reaches up to 2M tokens
- Models available: 11 Claude models vs 7 Gemini models in this comparison
- Batch pricing: Anthropic offers 50% off via the Message Batches API; Google has no comparable batch discount
- Prompt caching: Anthropic offers prompt caching to cut costs on repetitive workloads
- Vision support: both providers offer vision-capable models
Our Verdict
Claude leads on quality benchmarks; Gemini offers dramatically better pricing and larger context windows.
Gemini's pricing advantage is dramatic. Gemini 2.5 Flash at $0.30/$2.50 delivers an Arena ELO of 1330 at roughly half the cost of the nearest Claude equivalent (Claude Haiku 3.5 at $0.80/$4.00). Even at the flagship tier, Gemini 2.5 Pro ($1.25/$10.00) is significantly cheaper than Claude Opus 4.6 ($5.00/$25.00).

Claude's advantage is raw quality. Claude Opus 4.6 holds the highest Arena ELO (1440) and HumanEval score (97.2%) of any model in our database. For applications where output quality directly impacts user experience or revenue, the premium may be justified.

Google also wins on context windows: up to 2M tokens versus Anthropic's consistent 200K. For processing very long documents or codebases, Gemini has a clear structural advantage. Anthropic counters with batch and caching pricing that can significantly reduce costs for repetitive workloads.
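The batch-pricing effect mentioned above reduces to simple arithmetic. A minimal sketch (Python), assuming the list prices from the comparison table, Anthropic's 50% batch discount, and a hypothetical 10M input / 2M output token workload:

```python
def cost(input_price: float, output_price: float,
         input_m: float, output_m: float,
         batch_discount: float = 0.0) -> float:
    """Dollar cost; prices are $/1M tokens, volumes in millions of tokens."""
    return (input_m * input_price + output_m * output_price) * (1 - batch_discount)

# Hypothetical workload: 10M input, 2M output tokens.
sonnet_batch = cost(3.00, 15.00, 10, 2, batch_discount=0.5)  # Claude Sonnet 4.6, batched -> $30.00
flash_base = cost(0.30, 2.50, 10, 2)                         # Gemini 2.5 Flash, standard

print(f"Sonnet 4.6 batched: ${sonnet_batch:.2f}, Gemini 2.5 Flash: ${flash_base:.2f}")
```

Even with the batch discount, Gemini 2.5 Flash stays well below Claude Sonnet 4.6 on raw price for this mix; the discount matters most when you are committed to Claude-tier quality anyway.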
Anthropic's Claude and Google's Gemini represent two distinct approaches to the AI API market. Anthropic focuses on a curated lineup with consistently high quality, while Google competes aggressively on price and context window size. Both are strong contenders for production workloads in 2026. At the flagship tier, Claude Opus 4.6 leads on quality benchmarks but costs significantly more than Gemini 2.5 Pro. At the mid tier, the comparison gets more interesting: Claude Sonnet 4.6 ($3.00/$15.00) delivers top-tier quality, while Gemini 2.5 Flash ($0.30/$2.50) offers remarkable performance at roughly 85% less cost. The right choice depends heavily on whether you optimize for quality ceiling or cost efficiency.
Frequently Asked Questions
Is Gemini cheaper than Claude?
Yes, substantially. Gemini 2.5 Flash-Lite costs $0.10/$0.40 per million tokens (input/output), while the cheapest Claude model (Claude 3 Haiku) costs $0.25/$1.25. At every tier, Gemini's list pricing is lower.
Which has better quality, Claude or Gemini?
Claude Opus 4.6 leads in Arena ELO (1440 vs 1390 for Gemini 2.5 Pro) and HumanEval coding benchmarks (97.2% vs 95.0%). Claude is generally preferred for coding and instruction following.
What are the context window differences?
Gemini offers much larger context windows: up to 2M tokens (Gemini 1.5 Pro) and 1M tokens (Gemini 2.x). All Claude models have a consistent 200K token context window.
Do both support batch pricing?
Anthropic offers batch processing at 50% off through Message Batches API. Google does not currently offer a comparable batch discount, though Gemini's base prices are already very competitive.
Which is better for long document processing?
Gemini, due to its larger context windows (up to 2M tokens). If your documents fit within 200K tokens, Claude is an excellent choice with potentially higher output quality.