Independent AI API proxy

Claude API models through CorvusLLM

Use one CorvusLLM key with supported Claude model rows for coding agents, research, writing, and automation through an OpenAI-compatible proxy route.

Independent service. Not affiliated with OpenAI, Anthropic, Google, or Z.AI.

  • 6 public Claude rows: current indexable model catalog from CorvusLLM data.
  • Prepaid balance: top up once and route requests by model slug.
  • Card, wallet, or crypto checkout: available methods appear before order creation.
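Because the route is OpenAI-compatible, existing OpenAI-style client code only needs a different base URL, key, and model slug. A minimal sketch of the request body, assuming a hypothetical base URL and key (substitute the real values from the CorvusLLM setup guide and your dashboard):

```python
import json

# Hypothetical values; replace with the real base URL and key from
# the CorvusLLM setup guide and your account dashboard.
BASE_URL = "https://api.corvusllm.example/v1"
API_KEY = "sk-your-corvusllm-key"

def chat_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat-completions request body.

    Only the model slug changes between rows in this Claude family;
    the rest of the payload stays the same.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = chat_request("claude-haiku-4-5", "Summarize this ticket in one line.")
print(json.dumps(body))
```

The same body would be POSTed to the proxy's chat-completions endpoint with your key in the Authorization header.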
Current catalog

Claude model rows

These rows mirror the public model catalog and pricing tracker. Always confirm live availability before moving production traffic.

Claude Haiku 4.5

Fast Claude row for lightweight automation, summarization, and extraction.

model: claude-haiku-4-5
  • CorvusLLM input: $0.350/1M - official input: $1.00/1M
  • CorvusLLM output: $1.75/1M - official output: $5.00/1M
  • CorvusLLM cache read: $0.035/1M - official cache read: $0.100/1M
  • Pricing reference checked on 2026-04-29 from official provider pricing.

Claude Opus 4.5

Highest-capability Claude row for deep coding, research, and multi-step reasoning.

model: claude-opus-4-5
  • CorvusLLM input: $1.75/1M - official input: $5.00/1M
  • CorvusLLM output: $8.75/1M - official output: $25.00/1M
  • CorvusLLM cache read: $0.175/1M - official cache read: $0.500/1M
  • Pricing reference checked on 2026-04-29 from official provider pricing.

Claude Opus 4.6

Highest-capability Claude row for deep coding, research, and multi-step reasoning.

model: claude-opus-4-6
  • CorvusLLM input: $1.75/1M - official input: $5.00/1M
  • CorvusLLM output: $8.75/1M - official output: $25.00/1M
  • CorvusLLM cache read: $0.175/1M - official cache read: $0.500/1M
  • Pricing reference checked on 2026-04-29 from official provider pricing.

Claude Opus 4.7

Highest-capability Claude row for deep coding, research, and multi-step reasoning.

model: claude-opus-4-7
  • CorvusLLM input: $1.75/1M - official input: $5.00/1M
  • CorvusLLM output: $8.75/1M - official output: $25.00/1M
  • CorvusLLM cache read: $0.175/1M - official cache read: $0.500/1M
  • Pricing reference checked on 2026-04-29 from official provider pricing.

Claude Sonnet 4.5

Balanced Claude row for coding agents, writing, analysis, and day-to-day assistant work.

model: claude-sonnet-4-5
  • CorvusLLM input: $1.05/1M - official input: $3.00/1M
  • CorvusLLM output: $5.25/1M - official output: $15.00/1M
  • CorvusLLM cache read: $0.105/1M - official cache read: $0.300/1M
  • Pricing reference checked on 2026-04-29 from official provider pricing.

Claude Sonnet 4.6

Balanced Claude row for coding agents, writing, analysis, and day-to-day assistant work.

model: claude-sonnet-4-6
  • CorvusLLM input: $1.05/1M - official input: $3.00/1M
  • CorvusLLM output: $5.25/1M - official output: $15.00/1M
  • CorvusLLM cache read: $0.105/1M - official cache read: $0.300/1M
  • Pricing reference checked on 2026-04-29 from official provider pricing.

What Claude is best for

  • Coding assistants and agent loops
  • Longer reasoning and analysis work
  • Writing, summarization, and extraction workflows

How pricing works

CorvusLLM deducts usage from your prepaid balance at its listed input, output, cache-read, and, where applicable, cache-write rates. Official prices are shown as comparison references, not as invoices from the provider.
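The deduction is a straight rate-times-tokens calculation. A minimal sketch using the Claude Haiku 4.5 rates listed on this page (all rates are per million tokens; confirm live rates before relying on the numbers):

```python
# Per-million-token rates for claude-haiku-4-5 as listed on this page.
RATES = {"input": 0.350, "output": 1.75, "cache_read": 0.035}

def cost_usd(input_tokens: int, output_tokens: int,
             cache_read_tokens: int = 0) -> float:
    """Estimate the prepaid-balance deduction for one request."""
    return (
        input_tokens * RATES["input"]
        + output_tokens * RATES["output"]
        + cache_read_tokens * RATES["cache_read"]
    ) / 1_000_000

# 200k input tokens plus 50k output tokens:
print(f"${cost_usd(200_000, 50_000):.4f}")  # $0.1575
```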

Data handling warning

Do not send sensitive or regulated data through shared API proxies. Use direct providers or enterprise-reviewed infrastructure for regulated workloads.

Setup paths

Use the exact base URL and slug

Start with the setup guide that matches your tool, then switch only the model parameter when changing rows in this family.
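Switching rows is a one-field change. A sketch, using the slugs listed above (the request shape follows the OpenAI chat-completions format; the helper name is illustrative):

```python
# Public Claude slugs from the catalog above.
CLAUDE_SLUGS = [
    "claude-haiku-4-5",
    "claude-sonnet-4-5",
    "claude-sonnet-4-6",
    "claude-opus-4-5",
    "claude-opus-4-6",
    "claude-opus-4-7",
]

def with_model(base_request: dict, model: str) -> dict:
    """Return a copy of the request with only the model slug swapped."""
    return {**base_request, "model": model}

base = {
    "model": "claude-haiku-4-5",
    "messages": [{"role": "user", "content": "Explain this diff."}],
}

# Same messages, different row: only the model field changes.
upgraded = with_model(base, "claude-opus-4-6")
print(upgraded["model"])  # claude-opus-4-6
```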

Compare pricing

Check cost before top-up

Use the calculator and source-linked tracker together before moving real workloads onto a third-party proxy route.
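As a rough sanity check, every input rate listed on this page works out to 35% of the official input rate. A sketch of that ratio check, with prices copied from the rows above (confirm against the live tracker before acting on it):

```python
# (corvus, official) input prices per 1M tokens, copied from the rows above.
INPUT_PRICES = {
    "claude-haiku-4-5": (0.350, 1.00),
    "claude-sonnet-4-5": (1.05, 3.00),
    "claude-opus-4-5": (1.75, 5.00),
}

def discount_ratio(corvus: float, official: float) -> float:
    """Fraction of the official price charged by the proxy."""
    return corvus / official

for slug, (corvus, official) in INPUT_PRICES.items():
    print(f"{slug}: {discount_ratio(corvus, official):.0%} of official")
```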

Verify before you buy

Use proof pages for risk checks

These pages explain service limits, support paths, and operational evidence without implying official provider affiliation.

Run a small test before scaling.

Use a low-risk prompt, confirm the exact model slug, then compare output quality and billed usage before larger workloads.