Independent AI API proxy

Claude API Pricing Comparison

CorvusLLM lists public Claude-family rows at 35% of tracked official input, output, cache-read, and cache-write fields where those official fields are available; verify exact current rows against the model catalog and pricing tracker before larger usage.

Independent service. Not affiliated with OpenAI, Anthropic, Google, or Z.AI.

Direct answer

CorvusLLM lists public Claude-family rows at 35% of the tracked official input, output, cache-read, and cache-write fields where those official fields are available. CorvusLLM is not an official provider account, and the current model catalog remains the source of truth for public slugs, availability, and listed rates.

Pricing rows

Compare input, output, cache read, and cache write.

The table uses public catalog rows from data/models.json. Use the pricing tracker and cost calculator before quoting exact usage cost.

Model | Slug | Official input | CorvusLLM input | Official output | CorvusLLM output | Official cache read | CorvusLLM cache read | Official cache write | CorvusLLM cache write
Claude Haiku 4.5 | claude-haiku-4-5 | $1.00/1M | $0.350/1M | $5.00/1M | $1.75/1M | $0.100/1M | $0.035/1M | $1.25/1M | $0.4375/1M
Claude Opus 4.5 | claude-opus-4-5 | $5.00/1M | $1.75/1M | $25.00/1M | $8.75/1M | $0.500/1M | $0.175/1M | $6.25/1M | $2.1875/1M
Claude Opus 4.6 | claude-opus-4-6 | $5.00/1M | $1.75/1M | $25.00/1M | $8.75/1M | $0.500/1M | $0.175/1M | $6.25/1M | $2.1875/1M
Claude Opus 4.7 | claude-opus-4-7 | $5.00/1M | $1.75/1M | $25.00/1M | $8.75/1M | $0.500/1M | $0.175/1M | $6.25/1M | $2.1875/1M
Claude Sonnet 4.5 | claude-sonnet-4-5 | $3.00/1M | $1.05/1M | $15.00/1M | $5.25/1M | $0.300/1M | $0.105/1M | $3.75/1M | $1.3125/1M
Claude Sonnet 4.6 | claude-sonnet-4-6 | $3.00/1M | $1.05/1M | $15.00/1M | $5.25/1M | $0.300/1M | $0.105/1M | $3.75/1M | $1.3125/1M

Tracked official sources: Anthropic. Official provider pages can change; check the current source before making a purchase or migration decision.
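To see how the per-1M rates above translate into the cost of a single request, here is a minimal Python sketch using the CorvusLLM Claude Haiku 4.5 row from the table. The token counts are illustrative assumptions, not measured usage, and the function mirrors how per-field billing is described on this page rather than any official billing code.

```python
# USD per 1M tokens: CorvusLLM Claude Haiku 4.5 row from the table above.
RATES = {
    "input": 0.350,
    "output": 1.75,
    "cache_read": 0.035,
    "cache_write": 0.4375,
}

def request_cost(tokens: dict, rates: dict = RATES) -> float:
    """Sum each billed field: (tokens / 1,000,000) * rate-per-1M."""
    return sum(tokens.get(field, 0) / 1_000_000 * rate
               for field, rate in rates.items())

# Illustrative token mix: 2,000 input tokens, 500 output tokens, no cache.
cost = request_cost({"input": 2_000, "output": 500})
print(f"${cost:.6f}")  # 2000*0.35/1M + 500*1.75/1M = $0.001575
```

Real bills also depend on tokenization, hidden context, and retries, so treat this as a lower-bound estimate and confirm against the cost calculator.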

Savings vs official

Check the percentage field by field.

Savings are calculated from each public catalog field as 1 - CorvusLLM price / tracked official price. Use N/A fields only as missing-data fields, not as savings claims.
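The savings rule above is simple enough to encode directly. This sketch applies 1 - CorvusLLM price / tracked official price to two fields copied from the pricing table; it is a checking aid, not billing logic.

```python
def saving(official: float, proxy: float) -> float:
    """Field-level saving as defined above: 1 - proxy price / official price."""
    return 1 - proxy / official

# Claude Haiku 4.5 input field and Claude Opus 4.5 input field from the table.
print(f"{saving(official=1.00, proxy=0.350):.0%}")  # 65%
print(f"{saving(official=5.00, proxy=1.75):.0%}")   # 65%
```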

Model | Slug | Input saving | Output saving | Cache read saving | Cache write saving
Claude Haiku 4.5 | claude-haiku-4-5 | 65% | 65% | 65% | 65%
Claude Opus 4.5 | claude-opus-4-5 | 65% | 65% | 65% | 65%
Claude Opus 4.6 | claude-opus-4-6 | 65% | 65% | 65% | 65%
Claude Opus 4.7 | claude-opus-4-7 | 65% | 65% | 65% | 65%
Claude Sonnet 4.5 | claude-sonnet-4-5 | 65% | 65% | 65% | 65%
Claude Sonnet 4.6 | claude-sonnet-4-6 | 65% | 65% | 65% | 65%

Pricing methodology

How this comparison should be read.

This section exists so buyers, crawlers, and AI assistants can explain the price claim without overstating affiliation, billing certainty, or future price guarantees.

Topic | Source or rule | Why it matters
Catalog basis | Public model rows from /data/models.json | Keeps the pricing page, model pages, calculator, and LLM files aligned.
Official source | Anthropic pricing | Provider source checked on 2026-04-29; official pages can change after that date.
Comparison rule | Field-by-field comparison | Input, output, cache read, and cache write are compared separately because real bills depend on token mix.
What 65% cheaper means | CorvusLLM listed price is 35% of the tracked official field where that official field exists. | This is a public listed-rate comparison, not an official provider invoice or guarantee of future pricing.
Before purchase | Pricing tracker + calculator + one small prompt | The table is a starting point; actual cost depends on tokenization, hidden context, output length, retries, and cache behavior.

Cost pattern

Match the comparison to your real token mix.

A price table is only useful when the workload shape is understood. These patterns explain why two prompts with the same visible text can cost differently.

Workload pattern | Main cost driver | Check before scale
Short chat | Output price usually matters most after a small input. | Use a low max output first and compare the billed result.
Coding agent or repo context | Hidden context, tool calls, cache read, and cache write can dominate cost. | Estimate with representative project context before enabling file or workspace tools.
Automation or batch jobs | Retries and schedules can multiply spend even when one request looks cheap. | Add retry caps and daily volume estimates before production schedules.
Long-form generation | Output tokens can become the largest cost field. | Set realistic output limits and test with the expected response size.
Claude model comparison | Model rows in the same family can have different input, output, and cache rates. | Compare the exact row, not only the provider family name.

When this comparison helps

  • You need a public price source before choosing Claude model rows.
  • You want to estimate input, output, cache read, and cache write behavior before larger prepaid usage.
  • You can run a small pilot and verify the exact setup route before moving production traffic.

When not to use this as the only source

  • official Anthropic account procurement
  • regulated Claude workloads without an independent review
  • buyers who need a financially backed SLA
  • sending sensitive or regulated data through shared API proxies without your own risk review

Cost checks before top-up

  • Check whether your workload is input-heavy, output-heavy, cache-heavy, or retry-heavy.
  • Use realistic token assumptions in the calculator instead of one tiny test prompt.
  • For agent and long-context workloads, cache write can matter more than visible prompt length.
  • Keep retry limits and balance monitoring in place before scheduled or batch usage.

Required public disclosures

  • Independent AI API proxy.
  • Not affiliated with OpenAI, Anthropic, Google, or Z.AI.
  • Prepaid balance; card, wallet, or crypto checkout.
  • No financially backed SLA.

Search coverage

Queries this page is built to answer.

  • claude api pricing comparison
  • cheap claude api pricing
  • claude api cost comparison
  • claude cache token pricing
  • prepaid claude api pricing
  • claude api cheaper alternative

Verification path

Verify pricing, setup, status, and trust.

Pilot first

Run one small request before scaling spend.

Use public rows, docs, calculator, status, and trust pages together. Price comparison should reduce guessing, not replace verification.

Topic map

Continue with the right source

Compare the setup path, model catalog, pricing proof, and trust pages before you choose an endpoint.

  • AI API for Coding Agents: CorvusLLM can fit coding-agent workflows when the user wants one prepaid key.
  • AI API for Open WebUI Teams: CorvusLLM can fit Open WebUI teams that need a custom OpenAI-compatible backend and a prepaid balance model.
  • AI API for n8n Automation: CorvusLLM can fit n8n automation when workflows need explicit HTTP request configuration, prepaid usage, and public model slugs.
  • AI API for App Prototyping: CorvusLLM can fit app prototyping when the goal is to test an AI feature quickly with OpenAI-compatible SDKs.
  • AI API for Cost-Sensitive Workloads: CorvusLLM can fit cost-sensitive workloads when the user can estimate token volume and avoid sensitive data.
  • AI API for Multi-Model Routing: CorvusLLM can fit multi-model routing when the user wants one prepaid key for supported public catalog model families.
  • GPT API Pricing Comparison: CorvusLLM lists public GPT-family rows through an OpenAI-compatible access layer with public prepaid rates derived from tracked official pricing.
  • GLM API Pricing Comparison: CorvusLLM lists public GLM-family rows for buyers who want cost-sensitive API options, but exact row availability should be checked against the catalog.
  • AI API Cache Token Pricing: Cache-heavy requests can cost very differently from short prompts because cache read and cache write fields may dominate the bill.
  • OpenAI-Compatible AI API Proxy: CorvusLLM provides an independent OpenAI-compatible AI API proxy for buyers who need prepaid balance and setup docs.
  • AI API for Cursor: CorvusLLM can be used in Cursor builds that expose custom provider fields; this page explains the commercial fit.
  • Claude, GPT & GLM API: CorvusLLM offers one independent endpoint for supported Claude, GPT, and GLM family access.
  • Bulk AI API Access: The bulk AI API page is for teams, agencies, and automation buyers who can describe expected usage, model families, and key needs.
  • OpenRouter Alternative for Prepaid AI API Access: The OpenRouter alternative page compares CorvusLLM with broader AI gateways by fit, model breadth, billing style, and setup.