Latency troubleshooting

Fix request timeout errors

Direct answer

Timeouts usually come from large context, long output targets, tool-heavy agent loops, client-side timeout limits, or temporary provider latency. Reduce the request to a small non-streaming text call first, then add context, output length, streaming, and tools back one at a time.

Use this page for public troubleshooting only.

Private order, key, and balance details belong in the customer portal or support. Public docs explain the diagnostic path; they never reveal account-specific state.

Error phrases this guide covers

Search tools, logs, and support tickets do not always use the same wording. Treat these phrases as the same troubleshooting family before changing unrelated settings.

  • request timeout
  • gateway timeout
  • long request failed
  • stream ended
  • request took too long

Fast check before changing everything

Run the smallest check that isolates the failing layer. If the small request works, the problem is usually the client configuration, hidden context, permissions, or advanced feature path rather than the whole account.

Small timeout isolation request
curl https://base.corvusllm.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_CORVUSLLM_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5.5","messages":[{"role":"user","content":"Reply with one short sentence."}],"max_tokens":40,"stream":false}'

Common causes

  • The prompt includes many files, long chat history, images, or pasted logs.
  • The client enforces a shorter timeout than the model route needs for the selected request (see the example after this list).
  • Some clients show no progress during streaming until the first chunk arrives, so the app assumes the request is dead.
  • Tool definitions or agent loops make the request much heavier than a normal chat message.
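
If you suspect the client deadline, you can reproduce the symptom deliberately. The sketch below mirrors the isolation request above but caps the client wait at five seconds and asks for a deliberately long answer; both values are illustrative, not recommended settings.

Reproduce a client-side timeout on purpose
curl --max-time 5 https://base.corvusllm.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_CORVUSLLM_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5.5","messages":[{"role":"user","content":"Write a detailed multi-paragraph explanation of network latency."}],"max_tokens":2000,"stream":false}'

If this attempt ends with curl exit code 28 (operation timed out) while the small isolation request succeeds, the bottleneck is the client deadline rather than the account or the route.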

Fix steps

  1. Retry the same model with a tiny non-streaming text prompt and low output limit.
  2. If that works, halve the project context or chat history and test again.
  3. Increase the client timeout only after you confirm the small request succeeds.
  4. Disable streaming and tool calls until plain chat is reliable, then re-enable one advanced feature at a time (a streaming-only example follows this list).
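
As a concrete sketch of steps 3 and 4, the command below keeps the tiny prompt, raises the client deadline, and turns streaming back on while leaving tools, images, and long context out. The 300-second cap is a placeholder to adapt, not an official limit.

Re-enable streaming only, with a generous client deadline
curl -N --max-time 300 https://base.corvusllm.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_CORVUSLLM_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5.5","messages":[{"role":"user","content":"Reply with one short sentence."}],"max_tokens":40,"stream":true}'

The -N flag disables curl's output buffering so streamed chunks print as they arrive. If chunks appear promptly here but your app still looks stuck, the app is probably waiting for the full body instead of reading the stream.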

Verify before retrying production traffic

  • Measure whether the failure happens before the first token or during a long answer (a timing sketch follows this list).
  • Check whether another model family answers the same short prompt quickly.
  • Confirm the app did not send duplicate background retries that make latency look worse.
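
One low-effort way to run the first check is curl's write-out timing variables. The sketch below re-sends the small streaming prompt, discards the body, and reports connection time, time to first byte, and total time; treat the numbers as rough indicators rather than precise provider latency.

Split time to first byte from total response time
curl -s -o /dev/null https://base.corvusllm.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_CORVUSLLM_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5.5","messages":[{"role":"user","content":"Reply with one short sentence."}],"max_tokens":40,"stream":true}' \
  -w "connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n"

A large gap between connect and first byte means the wait happens before the first token; a quick first byte followed by a much larger total means the answer itself is long or the stream stalls partway through.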

Do not use expensive retry loops as a diagnostic tool.

Use one small request first. Large retries can spend balance, hide the original cause, and create confusing logs.

Diagnostic decision tree

Work through these checks in order. The goal is to isolate the failing layer before editing unrelated settings or sending another expensive request.

Minimal request
  Action: Run the smallest check from this page with the same key, endpoint shape, and one public model slug.
  Pass result: The account and basic route probably work; move to client settings, hidden context, tools, or retries.
  Fail result: Fix auth, base URL, balance, model slug, or current route health before testing advanced features.

Client final URL
  Action: Inspect the actual URL or provider profile the client sends, not only the visible settings field.
  Pass result: Continue with request body, model slug, payload size, and feature compatibility checks.
  Fail result: Correct host/base/full-endpoint confusion before changing keys or model families.

Balance movement
  Action: Compare dashboard balance before and after one tiny diagnostic request.
  Pass result: If charged and no answer arrives, collect the support packet before retrying large prompts.
  Fail result: If not charged, focus first on request rejection, wrong endpoint, auth, or client-side failure.

Feature isolation
  Action: Disable streaming, tools, images, file context, long history, and automation loops for one retry.
  Pass result: Re-enable one feature at a time until the failing layer is identified.
  Fail result: Keep the request small and do not use production retries as the diagnostic method.

Route health
  Action: Check Service Status and try a tiny prompt on one nearby public model row if your workflow allows it.
  Pass result: Use a documented fallback only if quality and cost are acceptable.
  Fail result: Wait, switch safely, or contact support with timestamps instead of hammering the failing route.

Prevent it next time

Set a clear request-size budget for production clients. Large context, tools, and streaming should have explicit retry limits, because otherwise one stuck job can create several expensive and slow attempts.
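
As an illustration of an explicit retry limit, the shell sketch below allows at most two attempts, with a fixed per-attempt deadline and a short pause between them. The loop shape, the 120-second cap, and the 5-second pause are placeholders to adapt, not recommended CorvusLLM settings.

Bounded retry sketch with a fixed per-attempt deadline
for attempt in 1 2; do
  # --fail treats HTTP errors as failures; break stops after the first success
  curl --fail --max-time 120 https://base.corvusllm.com/v1/chat/completions \
    -H "Authorization: Bearer YOUR_CORVUSLLM_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model":"gpt-5.5","messages":[{"role":"user","content":"Reply with one short sentence."}],"max_tokens":40,"stream":false}' \
    && break
  sleep 5   # brief pause instead of an immediate, expensive retry
done

A production client should apply the same idea through its own timeout and retry settings rather than a shell loop, so one stuck job cannot fan out into several slow, billable attempts.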

Minimum support packet

Collect these details before opening support. This avoids exposing secrets while giving enough context to match logs and reproduce the public failure path.

  • Timestamp: Use UTC or include the timezone so logs can be matched accurately.
  • Endpoint path: Include /v1, /anthropic, or the exact client route shape involved.
  • Public model slug: Send the customer-facing slug, not a private key, upstream account name, or hidden route.
  • Exact error text: Include the visible request timeout message and any HTTP status shown by the client.
  • Minimal request result: State whether the tiny check on this page works with the same key.
  • Balance movement: State whether balance changed after the failed request or only after retries.
  • Client and feature flags: Name the tool, SDK, streaming setting, image input, tools, file context, or automation loop involved.

When to contact support

Contact support when a minimal reproducible check still fails, when the dashboard history does not match what your client received, or when usage appears charged but no usable answer reached the client.

  • Include timestamp, endpoint path, public model slug, exact error wording, and whether the same key works on a minimal request.
  • Include whether the dashboard balance changed and whether the client retried in the background.
  • Do not send secrets, full API keys, regulated data, or private production prompts in public support messages.

Open the support bot after collecting the reproducible details.

Use these pages to verify the exact base URL, model slug, billing behavior, service status, or broader troubleshooting route before changing unrelated settings.

  • Service Status
  • API overview
  • Docs hub
  • Troubleshooting hub
  • Models & Slugs
  • Billing, Balance & Cache
  • Environment overview
  • Trust Center
  • Tool creation failed: Diagnose CorvusLLM tool creation, function calling, agent file write, workspace patch, streaming tool, and schema-size errors.
  • Model not found: Diagnose CorvusLLM model not found, unknown model, no such model, and 404 slug errors with safe checks, canonical slug fixes, and retry guidance.
  • Invalid API key: Diagnose CorvusLLM invalid API key, unauthorized, 401, and 403 errors with Bearer auth checks, dashboard key verification, and client-profile fixes.
  • Wrong base URL: Diagnose CorvusLLM wrong base URL, double /v1, wrong endpoint path, Claude Code /anthropic, and OpenAI-compatible routing mistakes.