Installation requirements
Install Open WebUI, have admin access to the instance, and decide which model family rows should be visible to your users.
Connect Open WebUI to supported GLM models through CorvusLLM, with a documented endpoint shape, a prepaid balance, current public model slugs, and clear service boundaries.
Independent service. Not affiliated with OpenAI, Anthropic, Google, or Z.AI.
CorvusLLM fits Open WebUI users who need an independent prepaid access layer for supported GLM models, plus a clear base URL, current public model slugs, pricing proof, and setup documentation before they send real prompts.
In the Open WebUI admin settings, create or edit an OpenAI-compatible connection, then paste the CorvusLLM base URL, API key, and the model slugs you want exposed.
GLM models are usually a fit for cost-sensitive automation, general chat, multilingual assistant workflows, and OpenAI-compatible clients that can accept GLM-family slugs.
GLM support should be checked against the live public catalog before production use because not every tool labels GLM models the same way.
```
base_url = "https://base.corvusllm.com/v1"
api_key = "YOUR_CORVUSLLM_KEY"
model = "glm-5.1"
```
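Assuming the standard OpenAI-compatible `/chat/completions` route behind the base URL above, the request a client like Open WebUI sends can be sketched with the standard library only. The key and slug below are the placeholders from this page, not verified values:

```python
import json
import urllib.request

BASE_URL = "https://base.corvusllm.com/v1"
API_KEY = "YOUR_CORVUSLLM_KEY"  # placeholder; keep the real key in a secret store
MODEL = "glm-5.1"               # verify this slug against the public catalog first

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request without sending it."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Say hello in one short sentence.")
# Send with urllib.request.urlopen(req) once the key and slug are confirmed.
```

Building the request separately from sending it makes it easy to inspect the URL, headers, and body before any billed call goes out.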
| Field | Use this | Why it matters |
|---|---|---|
| Where to edit | Open WebUI Admin Panel -> Settings -> Connections or OpenAI-compatible provider settings. | Use the tool-owned settings area instead of hardcoding keys in prompts or visible node text. |
| Base URL | https://base.corvusllm.com/v1 | Use the OpenAI-compatible route for SDKs, Open WebUI, Cursor-style custom providers, ChatBox, n8n, Windsurf, and similar clients. |
| API key | Your CorvusLLM key | Keep it in the tool secret store, credentials area, or server environment variables. |
| Model slug | glm-5.1 or glm-5 | Use public GLM catalog rows and verify availability before larger runs. |
| First test | One small non-sensitive prompt | Confirm endpoint, slug, latency, response format, and billed usage before production or repository-wide context. |
This table separates a working custom-endpoint setup from cases where the tool version, secret handling, or model field will make the connection fail.
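The "first test" row above can be scripted: before trusting the connection, check that a small response has both message content and a billed-usage block. This sketch assumes an OpenAI-style response body with `choices` and `usage` fields:

```python
def looks_like_valid_completion(resp: dict) -> bool:
    """Confirm an OpenAI-compatible response carries text and billed usage."""
    choices = resp.get("choices") or []
    usage = resp.get("usage") or {}
    has_text = bool(choices and choices[0].get("message", {}).get("content"))
    has_usage = "prompt_tokens" in usage and "completion_tokens" in usage
    return has_text and has_usage

# Shape of a typical successful first-test response (values are illustrative):
sample = {
    "choices": [{"message": {"role": "assistant", "content": "Hello."}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 4},
}
```

If the usage block is missing, billed-token accounting cannot be confirmed, which is exactly the situation this check is meant to catch before production traffic.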
| Check | Good state | Avoid this |
|---|---|---|
| Required control | Open WebUI exposes a custom base URL, custom provider, or compatible API host field. | If Open WebUI hides endpoint controls, use the docs fallback instead of guessing hidden settings. |
| Endpoint shape | OpenAI-compatible custom endpoint support is available and can be set to https://base.corvusllm.com/v1. | Do not mix OpenAI-compatible setup values with another request format. |
| Secret handling | The CorvusLLM key is stored in Open WebUI settings, credentials, or server environment variables. | Do not paste API keys into public repositories, prompts, screenshots, or client-side production code. |
| Model slug | The selected value is a public GLM slug from the model catalog. | Do not use private upstream route names, provider-account names, or guessed model aliases. |
| First request | A small non-sensitive prompt succeeds before larger project context, automation, or team rollout. | Do not start with private repositories, regulated data, or high-volume automation before a small test. |
These examples are only starting points. The model catalog and data/models.json are the source of truth for availability, pricing, cache fields, and public slugs.
These checks turn common setup errors into crawlable answers and send private account cases to the right support path only after public checks are exhausted.
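For the secret-handling check, a minimal sketch of reading the key from a server environment variable instead of hardcoding it. The `CORVUSLLM_API_KEY` name is an assumption for illustration, not a documented variable; stripping whitespace also guards against the pasted-with-extra-spaces failure noted below:

```python
import os

def load_corvusllm_key() -> str:
    """Read the API key from the server environment, never from source code."""
    # Variable name is an illustrative assumption, not a CorvusLLM requirement.
    key = os.environ.get("CORVUSLLM_API_KEY", "").strip()
    if not key:
        raise RuntimeError("Set CORVUSLLM_API_KEY in the server environment.")
    return key
```

Failing loudly when the variable is unset surfaces a misconfigured deployment immediately instead of producing confusing authentication errors later.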
| Symptom | First check | Public source |
|---|---|---|
| Custom base URL field is missing | Confirm your Open WebUI version exposes custom provider controls. | Open WebUI docs |
| Model not found | Use a public GLM slug from the current catalog and avoid hidden aliases. | GLM catalog |
| Authentication or balance error | Confirm the delivered CorvusLLM key, account balance, and that the key was not pasted with extra spaces. | Troubleshooting |
| Unexpected cost | Estimate input, output, cache read, and cache write before sending large context or loops. | Cost calculator |
| Timeout or provider unavailable | Retry a tiny non-streaming prompt, then check customer-facing status before changing credentials. | Service Status |
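The timeout row's advice, retry a tiny prompt before changing credentials, can be sketched as a generic backoff helper. This is plain Python around any callable, not a CorvusLLM SDK feature:

```python
import time

def retry(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn with simple exponential backoff before escalating to status checks."""
    last_err = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as err:  # in practice, catch only timeouts and 5xx errors
            last_err = err
            if attempt < attempts - 1:
                time.sleep(base_delay * (2 ** attempt))
    raise last_err
```

Keeping the retried prompt tiny and non-streaming limits billed tokens while you distinguish a transient blip from a real outage worth checking on the status page.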
This is a commercial setup page, not a private account console. Use the supporting pages for exact pricing, operational status, and trust details.
Do not send sensitive or regulated data through shared API proxies. CorvusLLM forwards prompts to upstream model providers for processing and keeps request metadata for billing, abuse prevention, and support diagnostics.
These answers keep setup pages useful for search, AI answers, and first-time buyers without replacing the exact docs.
Open WebUI should use https://base.corvusllm.com/v1 for this GLM setup path. Check the dedicated docs page when the tool changes its custom-provider settings.
No. This page is for using a CorvusLLM prepaid key as an independent access layer. CorvusLLM is not affiliated with Z.AI, and exact supported rows should be verified in the public catalog.
Do not send sensitive or regulated data through shared API proxies without a separate risk review. Start with non-sensitive tests and move larger workloads only after checking trust, status, and billing behavior.
Provider+tool pages are linked together so developers and crawlers can move from a broad query to the exact setup route.
Confirm base URL, key, public slug, latency, output quality, and billed usage before larger prompts, repository context, or automated workflows.
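Billed usage can be sanity-checked before scaling up with a small estimator over the response's usage block. The per-million-token rates below are illustrative placeholders, not CorvusLLM's actual pricing; take real rates from the pricing page:

```python
def estimate_cost(usage: dict, input_per_m: float, output_per_m: float) -> float:
    """Estimate cost in USD from an OpenAI-style usage block.

    Rates are per million tokens and must come from the live pricing page;
    the example rates below are made up for illustration.
    """
    return (
        usage.get("prompt_tokens", 0) / 1_000_000 * input_per_m
        + usage.get("completion_tokens", 0) / 1_000_000 * output_per_m
    )

# With made-up rates of $0.50 in / $1.50 out per million tokens:
cost = estimate_cost({"prompt_tokens": 20_000, "completion_tokens": 5_000}, 0.50, 1.50)
# roughly $0.0175 for this single call at those illustrative rates
```

Running this on the first small test makes the jump to repository-wide context or automated loops a calculated cost rather than a surprise on the balance.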
Compare the setup path, model catalog, pricing proof, and trust pages before you choose an endpoint.