Lumen Docs

LLM providers

Configure the LLMs that projects can use.

Endpoints

All under /providers/*.

GET /providers

Any authenticated user. Returns enabled providers with no secrets. Used by UIs to show "which providers are available" without leaking keys.

Response 200:

{
  "providers": [
    {
      "id": "uuid",
      "name": "DeepSeek",
      "slug": "deepseek",
      "models": ["deepseek-v4-flash", "deepseek-v4-pro"],
      "isDefault": true,
      "isEnabled": true
    }
  ]
}

GET /providers/models

Any authenticated user. Flattened list of every available model across providers.

Response 200:

{
  "models": [
    {
      "id": "deepseek/deepseek-v4-flash",
      "model": "deepseek-v4-flash",
      "provider": "DeepSeek",
      "providerSlug": "deepseek",
      "providerId": "uuid",
      "isDefault": true
    }
  ]
}

Used by the project settings "AI Configuration" tab to populate the model dropdown.
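The composite id field ("providerSlug/model") splits on the first slash, since slugs are kebab-case and never contain one. A minimal sketch in TypeScript (parseModelId is a hypothetical helper, not part of the API):

```typescript
// Split a composite model id like "deepseek/deepseek-v4-flash" into its
// provider slug and model name. A bare model name has no slug part.
function parseModelId(id: string): { providerSlug: string | null; model: string } {
  const slash = id.indexOf("/");
  if (slash === -1) {
    // Bare model name: the resolver must search providers for it.
    return { providerSlug: null, model: id };
  }
  return {
    providerSlug: id.slice(0, slash),
    model: id.slice(slash + 1),
  };
}
```

Splitting on the first slash (not the last) keeps model names containing slashes intact.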

GET /providers/admin

Requires engineer or superadmin. Returns providers with masked API keys (e.g. sk-abc12345...xyz9) for the admin UI. The full key is never returned, even to admins.
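A minimal sketch of that masking, assuming the rule is "first eleven characters, ellipsis, last four", which is what the sk-abc12345...xyz9 sample suggests (maskApiKey is a hypothetical name; the exact split widths are an assumption):

```typescript
// Mask an API key for display: keep a short prefix and suffix, elide the
// middle. Keys too short to mask meaningfully are fully elided.
function maskApiKey(key: string): string {
  if (key.length <= 15) return "...";
  return `${key.slice(0, 11)}...${key.slice(-4)}`;
}
```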

POST /providers

Requires engineer or superadmin. Create a provider.

Request:

{
  "name": "OpenAI",
  "slug": "openai",                              // lowercase, kebab-case, unique
  "type": "official" | "custom",
  "baseUrl": "https://api.openai.com/v1",
  "apiKey": "sk-...",
  "models": ["gpt-4o", "gpt-4o-mini"],           // at least 1
  "isDefault": true                               // optional
}

Setting isDefault: true unsets any existing default, so at most one provider is the default at a time.
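That invariant can be sketched as a pure function (setDefaultProvider is illustrative; the real service presumably enforces this inside a database transaction):

```typescript
interface ProviderFlag {
  id: string;
  isDefault: boolean;
}

// Mark one provider as default and unset every other provider's flag,
// so at most one isDefault: true survives the update.
function setDefaultProvider(providers: ProviderFlag[], defaultId: string): ProviderFlag[] {
  return providers.map((p) => ({ ...p, isDefault: p.id === defaultId }));
}
```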

PATCH /providers/:id

Requires engineer or superadmin. Update any field. Leave apiKey empty to keep the existing key.

DELETE /providers/:id and POST /providers/:id/delete

Requires engineer or superadmin. Both are supported: use the POST variant when requests pass through a Cloudflare proxy in between, and plain DELETE when calling the API directly.

POST /providers/:id/test

Sends a tiny "ping" completion to verify reachability and credentials.

Request:

{
  "baseUrl": "https://...",           // optional — override stored
  "apiKey": "sk-...",                 // optional — override stored
  "model": "gpt-4o"                   // required
}

Response 200:

{
  "ok": true,
  "latencyMs": 534,
  "status": 200,
  "sample": "Pong! Lumen here."     // first ~120 chars of the reply
}

Response on failure:

{
  "ok": false,
  "latencyMs": 12000,
  "status": 0,
  "error": "Request timed out after 30s"
}

Pass "draft" as the :id, with full credentials in the body, to test a provider configuration before saving it.

Provider resolution for chat

When a chat request includes modelId, the resolver walks:

  1. Slug/model format (modelId = "deepseek/deepseek-v4-flash"):
    • Look up provider by slug deepseek
    • Use its baseUrl + apiKey with model deepseek-v4-flash
  2. Bare model name (modelId = "deepseek-v4-flash"):
    • Search all enabled providers where models array contains the name
    • Use the first match
  3. No modelId provided OR no match:
    • Use the default provider (isDefault = true)
  4. No default provider:
    • Use any enabled provider (first by createdAt)
  5. No providers at all:
    • Fall back to env vars LLM_BASE_URL, LLM_API_KEY, LLM_MODEL

If all five steps fail, chat errors with LLM_API_ERROR, because the final fallback ends up sending a request with an empty API key.

Source: apps/api/src/services/llm.ts resolveProvider().
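The five-step walk can be sketched as a pure function. This is an illustration of the documented steps, not the actual resolveProvider() implementation; in particular, which model is sent on the default/fallback path is an assumption flagged in the comments:

```typescript
interface Provider {
  id: string;
  slug: string;
  baseUrl: string;
  apiKey: string;
  models: string[];
  isDefault: boolean;
  isEnabled: boolean;
  createdAt: number;
}

interface ResolvedLlm { baseUrl: string; apiKey: string; model: string }
interface LlmEnv { LLM_BASE_URL?: string; LLM_API_KEY?: string; LLM_MODEL?: string }

function resolveProvider(providers: Provider[], modelId: string | undefined, env: LlmEnv): ResolvedLlm {
  const enabled = providers.filter((p) => p.isEnabled);
  // Model name with any "slug/" prefix stripped, if a modelId was given at all.
  const requested =
    modelId && modelId.includes("/") ? modelId.slice(modelId.indexOf("/") + 1) : modelId;

  // 1. "slug/model" format: look the provider up by slug.
  if (modelId && modelId.includes("/") && requested) {
    const slug = modelId.slice(0, modelId.indexOf("/"));
    const bySlug = enabled.find((p) => p.slug === slug);
    if (bySlug) return { baseUrl: bySlug.baseUrl, apiKey: bySlug.apiKey, model: requested };
  }

  // 2. Bare model name: first enabled provider whose models array contains it.
  if (modelId && !modelId.includes("/")) {
    const byModel = enabled.find((p) => p.models.includes(modelId));
    if (byModel) return { baseUrl: byModel.baseUrl, apiKey: byModel.apiKey, model: modelId };
  }

  // 3 and 4. Default provider, else first enabled provider by createdAt.
  const fallback =
    enabled.find((p) => p.isDefault) ??
    [...enabled].sort((a, b) => a.createdAt - b.createdAt)[0];
  if (fallback) {
    // Assumption: keep the requested model name if one was given, else use
    // the provider's first listed model.
    return { baseUrl: fallback.baseUrl, apiKey: fallback.apiKey, model: requested ?? fallback.models[0] };
  }

  // 5. No providers at all: env vars. An unset key becomes "", which is what
  // later surfaces as LLM_API_ERROR.
  return { baseUrl: env.LLM_BASE_URL ?? "", apiKey: env.LLM_API_KEY ?? "", model: env.LLM_MODEL ?? "" };
}
```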

Permission model

Historically gated by userRole = 'admin'. Now gated by requirePlatformRole(['engineer', 'superadmin']):

  • Engineer — designated role for managing LLM infrastructure
  • Superadmin — bypasses every gate
  • Admin — does not manage LLM providers. Admins manage platform organizations and people, not platform infrastructure; it is a different responsibility.

This split lets an org have admins (who manage people) and engineers (who manage tech) as distinct roles.
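The gate can be sketched as a simple role check (illustrative; the real requirePlatformRole middleware also handles authentication and error responses):

```typescript
type PlatformRole = "admin" | "engineer" | "superadmin";

// True when the user's platform role is in the allowed list.
// Superadmin bypasses every gate, so it passes even when not listed.
function hasPlatformRole(userRole: PlatformRole, allowed: PlatformRole[]): boolean {
  return userRole === "superadmin" || allowed.includes(userRole);
}
```

Under this sketch, hasPlatformRole("admin", ["engineer", "superadmin"]) is false, matching the split described above.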

Gotchas

  1. Default toggle — If no provider is marked default, chat falls back to the first enabled provider by createdAt. This is OK for single-provider setups but means the "first one you add" effectively becomes default. Be explicit and toggle isDefault in the UI.

  2. API key escaping — When setting the API key directly in SQL from a bash shell, $ and other special characters in the key can be expanded or swallowed by the shell (the same problem that bites bcrypt hashes like $2b$12$..., though those are for passwords, not keys). Always prefer the API endpoint (PATCH /providers/:id) or the Edit modal in the UI.

  3. Reranker is hardcoded — The BGE reranker runs inside lumen-embedder, not as an LLM provider. To swap rerankers you change the embedder service, not the provider table.

  4. No streaming test — The /test endpoint sends a non-streaming completion. Some providers don't support streaming or require different auth for streams, so the test flow won't catch those failures; only a real chat request exercises the streaming path.