Lumen Docs

Chat

SSE streaming chat endpoint and supporting conversation APIs.

POST /chat/projects/:projectId (stream)

Server-Sent Events (SSE) endpoint. Requires use tier on the project.

Request:

{
  "message": "What's the network diagram in the latest doc?",
  "conversationId": "uuid-or-omit",   // omit to start a new conversation
  "modelId": "deepseek-v4-flash",     // optional — falls back to project settings, then default provider
  "mode": "docs"                      // "docs" | "web" | "deep"
}

Response stream (Content-Type: text/event-stream):

event: meta
data: {"conversationId":"uuid","messageId":"uuid","model":"deepseek-v4-flash"}

event: sources
data: {"chunks":[{"id":"c1","documentId":"d1","documentName":"network.pdf","pageNumber":3,"snippet":"..."}]}

event: text
data: {"delta":"The "}

event: text
data: {"delta":"network "}

event: text
data: {"delta":"diagram "}

...

event: done
data: {"messageId":"uuid","totalTokens":420}

Errors also flow as SSE events:

event: error
data: {"type":"error","message":"LLM_API_ERROR","detail":"401 Authentication Failed"}
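The stream above can be decoded with a small parser. This is an illustrative sketch, not the actual client code: it splits the `text/event-stream` body on blank lines and pairs each `event:` line with its JSON `data:` payload, using the event and payload names from the examples above.

```typescript
// Typed view of the SSE events documented above.
type ChatEvent =
  | { type: "meta"; data: { conversationId: string; messageId: string; model: string } }
  | { type: "sources"; data: { chunks: unknown[] } }
  | { type: "text"; data: { delta: string } }
  | { type: "done"; data: { messageId: string; totalTokens: number } }
  | { type: "error"; data: { type: string; message: string; detail?: string } };

// Parse a raw SSE buffer: events are separated by a blank line, and each
// event carries an `event:` name line and a `data:` line with a JSON body.
function parseSseEvents(buffer: string): ChatEvent[] {
  const events: ChatEvent[] = [];
  for (const block of buffer.split("\n\n")) {
    const lines = block.split("\n");
    const eventLine = lines.find((l) => l.startsWith("event: "));
    const dataLine = lines.find((l) => l.startsWith("data: "));
    if (!eventLine || !dataLine) continue; // skip comments / keep-alive frames
    events.push({
      type: eventLine.slice("event: ".length).trim(),
      data: JSON.parse(dataLine.slice("data: ".length)),
    } as ChatEvent);
  }
  return events;
}
```

In practice the buffer arrives in chunks, so a real client keeps a rolling buffer and only parses up to the last complete blank-line-terminated block.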

Modes

| Mode | Requires | What happens |
|---|---|---|
| docs | — | RAG over project documents only (default) |
| web | TAVILY_API_KEY env var set | Question is sent to Tavily for web search; results are injected as context |
| deep | TAVILY_API_KEY env var set | Tavily advanced search for research-depth tasks |
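The gating described in the modes table reduces to one environment check: docs is always available, while web and deep both require TAVILY_API_KEY. A minimal sketch (the function name and the injected `env` parameter are illustrative; the real check may live elsewhere in the server):

```typescript
type ChatMode = "docs" | "web" | "deep";

// docs is always on; web and deep both require TAVILY_API_KEY.
// `env` is injected so the logic is testable; the server would pass process.env.
function enabledModes(env: Record<string, string | undefined>): Record<ChatMode, boolean> {
  const hasTavily = Boolean(env.TAVILY_API_KEY && env.TAVILY_API_KEY.length > 0);
  return { docs: true, web: hasTavily, deep: hasTavily };
}
```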

GET /chat/config

Returns which modes are enabled. The frontend uses this to gray out the web and deep modes when TAVILY_API_KEY isn't set.

Response 200:

{
  "docs": true,
  "web": true,
  "deep": true
}
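On the frontend, graying out unavailable modes is a straightforward filter over this response. An illustrative helper (not the actual component code):

```typescript
type ChatConfig = { docs: boolean; web: boolean; deep: boolean };

// Return the modes that should be grayed out in the mode picker,
// i.e. every mode the /chat/config response reports as disabled.
function disabledModes(config: ChatConfig): string[] {
  return (Object.keys(config) as (keyof ChatConfig)[]).filter((mode) => !config[mode]);
}
```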

GET /chat/projects/:projectId/conversations

List conversations for a project. Requires use tier.

Response 200:

{
  "conversations": [
    {
      "id": "uuid",
      "title": "How to expose Supabase...",
      "createdAt": "...",
      "updatedAt": "...",
      "messageCount": 8,
      "_count": {"messages": 8}
    }
  ]
}
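Note that the count appears twice in this response: as the flattened `messageCount` and as the raw Prisma-style `_count.messages`. A defensive client can accept either shape; this sketch assumes the flattened field is preferred, which is an assumption rather than documented behavior:

```typescript
type ConversationSummary = {
  id: string;
  title: string;
  messageCount?: number;           // flattened convenience field
  _count?: { messages: number };   // raw Prisma relation count
};

// Prefer the flattened field, fall back to the _count shape, default to 0.
function messageCountOf(conversation: ConversationSummary): number {
  return conversation.messageCount ?? conversation._count?.messages ?? 0;
}
```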

GET /chat/conversations/:id

Fetch one conversation with all messages.

Response 200:

{
  "conversation": {
    "id": "uuid",
    "title": "...",
    "projectId": "uuid",
    "messages": [
      {
        "id": "uuid",
        "role": "assistant",          // "user" | "assistant"
        "content": "...",
        "createdAt": "...",
        "citations": [
          {"index": 1, "chunkId": "...", "documentName": "...", "pageNumber": 3}
        ]
      }
    ]
  }
}
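The response shapes above can be captured as TypeScript types for the client. The type names are illustrative; the fields mirror the example payload:

```typescript
interface Citation {
  index: number;        // 1-based citation marker as rendered in the message
  chunkId: string;
  documentName: string;
  pageNumber: number;
}

interface ChatMessage {
  id: string;
  role: "user" | "assistant";
  content: string;
  createdAt: string;    // ISO timestamp
  citations?: Citation[]; // present on assistant messages that cite sources
}

interface Conversation {
  id: string;
  title: string;
  projectId: string;
  messages: ChatMessage[];
}
```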

GET /chat/chunks/:chunkId/source

Resolve a citation chunk to its source for the document sidebar. Returns the chunk text, context, and location.

Response 200:

{
  "documentId": "uuid",
  "documentName": "network-diagram.pdf",
  "pageNumber": 3,
  "chunkId": "uuid",
  "snippet": "<the chunk content>",
  "context": "<surrounding paragraph for context>"
}
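A citation click can then be resolved into a sidebar header from this response. The label format below is purely illustrative:

```typescript
type ChunkSource = {
  documentId: string;
  documentName: string;
  pageNumber: number;
  chunkId: string;
  snippet: string;
  context: string;
};

// Human-readable location line for the document sidebar header,
// e.g. "network-diagram.pdf, p. 3".
function sourceLabel(src: Pick<ChunkSource, "documentName" | "pageNumber">): string {
  return `${src.documentName}, p. ${src.pageNumber}`;
}
```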

GET /chat/recent

The user's recent conversations across all projects; powers the home page "recent" list.

Response 200:

{
  "conversations": [
    {
      "id": "uuid", "title": "...", "projectId": "uuid",
      "project": {"id": "...", "name": "...", "color": "..."},
      "updatedAt": "..."
    }
  ]
}

Client (apps/web/lib/api.ts)

Async generator over SSE:

for await (const evt of api.chatStream(projectId, message, conversationId, modelId, "docs")) {
  switch (evt.type) {
    case "meta":
      setCurrentConversation(evt.data.conversationId);
      break;
    case "sources":
      setCitations(evt.data.chunks);
      break;
    case "text":
      appendToMessage(evt.data.delta);
      break;
    case "done":
      markStreamComplete();
      break;
    case "error":
      showError(evt.data.message);
      break;
  }
}

Failure modes

| Symptom | Cause | Fix |
|---|---|---|
| 401 Authentication Failed (auth header format should be Bearer sk-...) | Provider API key empty in DB | Edit the provider and re-enter the key |
| LLM_API_ERROR in the stream's error event | Upstream LLM returned non-2xx | Check provider base URL and model name |
| No meta event ever fires | Project access denied (resolver returned null) | Check the user's tier via the resolver |
| Stream hangs before first token | Embedder cold start | Give it ~500 ms; should resolve |
| Wrong model used | modelId not matched | Check the LLM provider resolution logic |
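The first two rows of the table can be folded into a small triage helper for the stream's `error` event. The mapping mirrors the table's Fix column; the function itself is a sketch to extend, not existing code:

```typescript
type StreamError = { message: string; detail?: string };

// Map an SSE error event to the operator-facing fix from the failure-mode table.
function suggestFix(err: StreamError): string {
  if (err.message === "LLM_API_ERROR") {
    if (err.detail?.includes("401")) {
      // 401 from the provider usually means the stored API key is empty or wrong.
      return "Provider API key is empty or wrong: edit the provider and re-enter the key.";
    }
    // Any other non-2xx from the upstream LLM.
    return "Upstream LLM returned non-2xx: check the provider base URL and model name.";
  }
  return "See runbooks for diagnostic steps.";
}
```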

See runbooks for diagnostic steps.