Best for
- OpenAI-compatible apps
- General chat and coding
- Structured output workflows
Use this GPT row for OpenAI-compatible chat, coding, structured output, and app integrations.
Independent service. Not affiliated with OpenAI, Anthropic, Google, or Z.AI.
Use the slug gpt-5.5 when your client should call GPT 5.5. This page is the canonical public CorvusLLM detail page for the GPT 5.5 row. It lists the customer-facing slug, public prepaid rates, official reference rates, source URL, setup routes, and the provider-affiliation boundary.
Use the generated comparison pages for side-by-side checks of slug, price, cache behavior, and fit.
CorvusLLM rates are public prepaid rates. Official rates are source-linked comparison references, not invoices from the provider.
| Usage field | CorvusLLM listed rate | Official reference rate |
|---|---|---|
| Input tokens | $1.75/1M | $5.00/1M |
| Output tokens | $10.50/1M | $30.00/1M |
| Cache read tokens | $0.175/1M | $0.500/1M |
| Cache write tokens | $0.000/1M | $0.000/1M |
Source checked on 2026-04-29 from official provider pricing. Machine-readable row: data/models.json.
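To sanity-check a budget against the listed rates, per-request cost can be computed directly from the table above. A minimal sketch: the rates are the CorvusLLM prepaid rates listed on this page (USD per 1M tokens), and the token counts in the example are illustrative, not real usage.

```python
# Estimate request cost from the CorvusLLM prepaid rates listed above.
# Rates are USD per 1M tokens; the token counts below are illustrative.
RATES_PER_1M = {
    "input": 1.75,
    "output": 10.50,
    "cache_read": 0.175,
    "cache_write": 0.000,
}

def estimate_cost(usage: dict) -> float:
    """usage maps the fields above to token counts for one request."""
    return sum(
        RATES_PER_1M[field] * tokens / 1_000_000
        for field, tokens in usage.items()
    )

# Example: 100k input tokens, 5k output tokens, 40k cache-read tokens.
cost = estimate_cost({"input": 100_000, "output": 5_000, "cache_read": 40_000})
print(f"${cost:.4f}")  # → $0.2345
```

The same arithmetic against the official reference column ($5.00/$30.00/$0.500) gives a comparison figure for the same request.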
Configure your tool with a CorvusLLM base URL, your delivered key, and the exact model slug gpt-5.5.
CorvusLLM is an independent access layer and is not affiliated with OpenAI or the other listed model providers.
The model slug is stable on this page, but the right base URL depends on the tool or SDK you use.
| Configuration field | Value | Use it this way |
|---|---|---|
| Public model slug | gpt-5.5 | Use this exact customer-facing slug in supported clients. |
| OpenAI-compatible base URL | https://base.corvusllm.com/v1 | Use this for OpenAI SDKs, Open WebUI, ChatBox, n8n, Cursor-style custom providers, Windsurf, and similar OpenAI-compatible clients. |
| API key | Your CorvusLLM key | Keep it in environment variables, tool credentials, or the client secret store; do not hardcode it in public code. |
| Pricing source | /data/models.json | Use the public model data and pricing tracker when verifying input, output, cache read, and cache write fields. |
| Exact setup docs | OpenAI-compatible SDKs, Cursor setup, Model catalog docs | Choose the setup guide that matches your client before sending real workloads. |
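The configuration fields above map directly onto an OpenAI-style chat completions request. A minimal stdlib-only sketch that builds (but does not send) such a request: the environment variable name CORVUSLLM_API_KEY is an assumption for illustration, and /chat/completions is the standard OpenAI-compatible endpoint path appended to the base URL.

```python
import json
import os
import urllib.request

# Build an OpenAI-compatible chat request against the CorvusLLM base URL.
# CORVUSLLM_API_KEY is a hypothetical variable name; use whatever your
# secret store provides. The request is constructed but not sent here.
BASE_URL = "https://base.corvusllm.com/v1"
api_key = os.environ.get("CORVUSLLM_API_KEY", "sk-placeholder")

payload = {
    "model": "gpt-5.5",  # exact public slug from the table above
    "messages": [{"role": "user", "content": "Reply with the word: ok"}],
}

request = urllib.request.Request(
    url=f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    method="POST",
)

print(request.full_url)  # → https://base.corvusllm.com/v1/chat/completions
# To actually send it: urllib.request.urlopen(request)
```

OpenAI SDKs accept the same two values as `base_url` and `api_key` constructor arguments, with the slug passed as `model`.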
Use these short answers when you need the canonical slug, endpoint shape, service boundary, and first-test rule.
Use gpt-5.5. This is the public CorvusLLM customer-facing slug for this model row.
Use the /v1 base URL (https://base.corvusllm.com/v1) for OpenAI-compatible clients and verify that the tool lets you set a custom base URL.
No. CorvusLLM is an independent prepaid access layer and is not affiliated with OpenAI, Anthropic, Google, or Z.AI.
Check input, output, cache read, and cache write pricing, then run one small non-sensitive request before larger prompts, repository context, or production automation.
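That first small request can be checked programmatically before scaling up. A hedged sketch, assuming the response is the standard OpenAI-style chat completion object with `model` and `usage` fields; the sample data below is illustrative, not real API output.

```python
# Smoke-check the first small response before moving real workloads:
# confirm the slug echoed back and that usage fields are present.
def smoke_check(response: dict, expected_slug: str = "gpt-5.5") -> list:
    """Return a list of problems found; an empty list means the check passed."""
    problems = []
    if response.get("model") != expected_slug:
        problems.append(f"unexpected model: {response.get('model')!r}")
    usage = response.get("usage", {})
    for field in ("prompt_tokens", "completion_tokens"):
        if field not in usage:
            problems.append(f"missing usage field: {field}")
    return problems

# Illustrative response fragment, not a real reply.
sample = {"model": "gpt-5.5", "usage": {"prompt_tokens": 12, "completion_tokens": 4}}
print(smoke_check(sample))  # → []
```

If the check passes, the usage numbers can then be multiplied against the listed rates to confirm billed cost matches expectations.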
Use sibling rows when you need a different speed, cost, or quality profile inside the same provider family.
Use these public pages when you need trust, status, legal, or data-handling context before buying or scaling.
Confirm the slug, output quality, and billed usage before moving real workloads or long-context prompts.
Move from model selection to exact slugs, cost estimates, billing behavior, and service limits without relying on old screenshots.