
Chatbot API Flow

This guide explains how the public chatbot API fits together from an integrator's point of view. It focuses on the flow and data model so you can build a client without guessing how responses should be interpreted.

For the full schema, field-level validation, and complete endpoint reference, see the API reference.

Flow at a glance

  1. Your backend calls POST /api/v1/chat/auth with the chatbot API key and a chatbot_id.
  2. The API returns a visitor Bearer token.
  3. Your client calls GET /api/v1/chat/config to load branding and startup configuration.
  4. Your client sends user input to POST /api/v1/chat/messages and receives a streamed response.
  5. The stream emits structured content as part_delta and part events, then ends with done or error.
  6. If the assistant returns an actionable marker, your client collects the required data and sends it to POST /api/v1/chat/actions.
  7. You can optionally read message history, save a conversation rating, or delete the visitor's data.

Auth and config

The API has two authentication layers:

| Step | Who calls it | Purpose |
| --- | --- | --- |
| POST /api/v1/chat/auth | Your backend | Exchange the API key for a visitor token |
| Visitor-scoped endpoints | Your client or trusted backend | Use the returned Bearer token for config, messages, actions, history, rating, and deletion |

Keep the API key on your server. Do not expose it in browser code, mobile apps, or public frontend bundles.

/auth uses HTTP Basic Auth with:

  • the API key as the username
  • an empty password

When the token expires, call /auth again to create a new visitor token. There is no refresh token flow.

Example: create a visitor token

curl -X POST "https://api.dialogintelligens.dk/api/v1/chat/auth" \
  -u "$CHATBOT_API_KEY:" \
  -H "Content-Type: application/json" \
  -d '{
    "chatbot_id": "shop-bot"
  }'

The response contains a JWT token:

{
  "token": "eyJhbGciOiJIUzI1NiIs..."
}

After that, your client can load config with the Bearer token. GET /api/v1/chat/config returns the public UI configuration for the chatbot, including:

  • name
  • avatar
  • welcome message
  • theme colors
  • popup messages

How message data is structured

Every conversation item is a message. A message contains ordered parts.

For user input, message.parts is how you send text and attachments in one request.

For assistant output, message.parts is also how you render the response. Parts are already structured for you, so the client should render them directly instead of trying to parse raw text.

Mental model

| Term | Meaning |
| --- | --- |
| message | One user, assistant, or system entry in the conversation |
| part | A top-level renderable unit inside a message |
| block | A section inside a rich_text part or table cell |
| span | Inline formatted text inside a paragraph or bullet item |

message
`-- parts[]
    |-- rich_text
    |   `-- blocks[]
    |       |-- paragraph
    |       |   `-- spans[] -> text | bold | strike | link
    |       `-- bullet_list
    |-- image
    |-- table
    |-- products
    `-- show_contact_form

Parts

Common assistant part types are:

  • rich_text for formatted text
  • image for image content
  • table for structured tables
  • products for product cards
  • marker parts such as show_contact_form or request_image_upload

The important detail is that all of these are siblings in the same parts array. That means a single assistant message can look like:

  1. Some text
  2. A marker telling the client to open a form
  3. More text after the marker

Blocks and spans

Inside a rich_text part:

  • blocks describe larger sections such as paragraphs and bullet lists
  • spans describe inline formatting inside those blocks, such as plain text, bold text, struck text, and links

This lets a client render formatted content without having to parse markdown or custom marker syntax.

Example: a message with text and an action

{
  "message_id": "msg_123",
  "role": "assistant",
  "parts": [
    {
      "type": "rich_text",
      "part_id": "part_1",
      "blocks": [
        {
          "type": "paragraph",
          "spans": [{ "type": "text", "text": "Need help with your order?" }]
        }
      ]
    },
    {
      "type": "show_contact_form",
      "part_id": "part_2",
      "fields": [
        { "key": "name", "label": "Your name", "required": true },
        { "key": "email", "label": "Email address", "type": "email", "required": true }
      ]
    },
    {
      "type": "rich_text",
      "part_id": "part_3",
      "blocks": [
        {
          "type": "paragraph",
          "spans": [{ "type": "text", "text": "Fill in the form and we will follow up." }]
        }
      ]
    }
  ]
}

In this example, the form is not separate from the message. It is one ordered part of the message.
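The parts-first rendering described above can be sketched as a small walker over parts, blocks, and spans. The type shapes below are illustrative simplifications of the schema in the API reference, and the bracketed marker placeholder stands in for real UI dispatch:

```typescript
// Illustrative types; the full schema is in the API reference.
type Span = { type: "text" | "bold" | "strike" | "link"; text: string; url?: string };
type Block =
  | { type: "paragraph"; spans: Span[] }
  | { type: "bullet_list"; items: Span[][] };
type Part =
  | { type: "rich_text"; part_id: string; blocks: Block[] }
  | { type: string; part_id: string; [key: string]: unknown };

function renderSpan(span: Span): string {
  switch (span.type) {
    case "bold":
      return `**${span.text}**`;
    case "strike":
      return `~~${span.text}~~`;
    case "link":
      return `${span.text} (${span.url ?? ""})`;
    default:
      return span.text;
  }
}

// Walks parts in order; a real client would emit UI nodes, not strings.
function renderMessage(parts: Part[]): string {
  const lines: string[] = [];
  for (const part of parts) {
    if (part.type === "rich_text") {
      for (const block of (part as { blocks: Block[] }).blocks) {
        if (block.type === "paragraph") {
          lines.push(block.spans.map(renderSpan).join(""));
        } else {
          for (const item of block.items) {
            lines.push("- " + item.map(renderSpan).join(""));
          }
        }
      }
    } else {
      // Marker and media parts are dispatched to dedicated UI in order.
      lines.push(`[${part.type}:${part.part_id}]`);
    }
  }
  return lines.join("\n");
}
```

Because the walker preserves part order, text before and after a marker renders around the marker's UI, exactly as in the example above.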

How streaming works

POST /api/v1/chat/messages returns a Server-Sent Events stream. Because this is a POST endpoint, the browser's built-in EventSource API is not a good fit. Use fetch() directly or a helper such as @microsoft/fetch-event-source.

The stream uses five public event types:

| Event | Meaning |
| --- | --- |
| status | Lifecycle updates such as connected or processing |
| part_delta | Incremental updates for a streamed part |
| part | A finalized structured part |
| done | The final assembled assistant message |
| error | A terminal stream error |

Typical flow:

status(connected)
status(processing)
part_delta / part ...
done or error

part_delta is useful when you want progressive rendering for content such as:

  • rich text
  • products
  • tables

part gives you a finalized structured part. Marker parts typically arrive this way because they do not need progressive updates.

done.message is the final source of truth for the assistant response. If you rendered draft content while streaming, replace or reconcile it with the message from done.
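One way to implement this is to buffer text per part_id during part_delta, then drop all drafts when done arrives. The sketch below assumes each delta carries a text fragment to append; the actual delta format is defined in the API reference:

```typescript
// Draft store for streamed parts. Assumes part_delta carries a text
// fragment to append; check the API reference for the exact delta shape.
class DraftMessage {
  private drafts = new Map<string, string>();
  private final: unknown = null;

  applyPartDelta(partId: string, textFragment: string): void {
    this.drafts.set(partId, (this.drafts.get(partId) ?? "") + textFragment);
  }

  draftText(partId: string): string {
    return this.drafts.get(partId) ?? "";
  }

  // done.message is the source of truth: discard drafts and keep it.
  finalize(message: unknown): void {
    this.final = message;
    this.drafts.clear();
  }

  finalMessage(): unknown {
    return this.final;
  }
}
```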

Example: consume the message stream

import { fetchEventSource } from "@microsoft/fetch-event-source";
 
await fetchEventSource("https://api.dialogintelligens.dk/api/v1/chat/messages", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    message: {
      parts: [{ type: "text", text: "What is your return policy?" }],
    },
    context: {
      page: window.location.href,
    },
  }),
  onmessage(event) {
    const data = JSON.parse(event.data);
 
    switch (event.event) {
      case "status":
        updateStatus(data.status);
        break;
      case "part_delta":
        applyPartDelta(data.part_id, data.delta);
        break;
      case "part":
        renderFinalPart(data.part);
        break;
      case "done":
        replaceDraftMessage(data.message);
        break;
      case "error":
        showError(data.message, {
          code: data.code,
          retryable: data.retryable,
        });
        break;
    }
  },
});

Error handling

POST /api/v1/chat/messages has two different error channels, depending on whether the SSE stream has started. Non-2xx HTTP responses use ApiError; terminal SSE error events use StreamErrorEvent.

For the full error-code catalog and handling examples, see Chatbot API Errors.

How markers and actions work together

Markers are assistant parts that tell the client to do something beyond rendering text.

The key link is part_id:

  • the assistant sends a marker part with a part_id
  • your client renders the related UI
  • when the user completes the action, your client sends that same part_id to /actions

Marker behavior

| Marker part | What the client does | Follow-up |
| --- | --- | --- |
| show_contact_form | Render the fields from the marker | Submit action.type = "contact_form" to /actions |
| show_support_ticket | Render the ticket form and attachment rules from the marker | Submit action.type = "support_ticket" to /actions |
| request_image_upload | Prompt the user to upload an image that matches the marker constraints | Send a later /messages request with an image part |
| request_human_agent | Show a handoff option in your UI | Request livechat handover |
| custom | Handle your own key and optional payload | Client-defined flow |

Only show_contact_form and show_support_ticket are submitted to POST /api/v1/chat/actions.

Example: submit an action

{
  "part_id": "part_2",
  "action": {
    "type": "contact_form",
    "fields": {
      "name": "Jane Doe",
      "email": "jane@example.com",
      "message": "I need help with my order"
    }
  }
}

The same pattern applies to support_ticket, but with the support ticket payload shape defined in the API reference.
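Before calling /actions, a client can validate the user's input against the required flags carried by the marker itself. A sketch, with buildContactFormAction as an illustrative helper name; validation rules beyond "required" are in the API reference:

```typescript
// Field shape follows the show_contact_form marker example in this guide.
type MarkerField = { key: string; label: string; required?: boolean; type?: string };

// Builds the /actions payload, refusing to submit while a required field
// is empty. part_id links the action back to the marker part.
function buildContactFormAction(
  partId: string,
  markerFields: MarkerField[],
  values: Record<string, string>,
): { part_id: string; action: { type: "contact_form"; fields: Record<string, string> } } {
  for (const field of markerFields) {
    if (field.required && !(values[field.key] ?? "").trim()) {
      throw new Error(`Missing required field: ${field.key}`);
    }
  }
  return { part_id: partId, action: { type: "contact_form", fields: values } };
}
```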

Livechat end-to-end

Livechat is the handover path from the AI conversation to a human agent. The public API keeps the visitor experience, your backend setup, outbound webhooks, and agent-system writes separated by authentication type.

There are three livechat actors:

| Actor | Authentication | What it does |
| --- | --- | --- |
| Setup backend | Chatbot API key with HTTP Basic Auth | Configures livechat, webhooks, agent profiles, and queue/list views |
| Visitor client | Visitor Bearer token from /api/v1/chat/auth | Requests handover, polls state, sends customer messages, closes, and rates |
| External agent app | Livechat session Bearer token from webhook | Reads context and sends agent messages, typing updates, and agent-side close |

The livechat endpoints in this guide are the public chatbot API endpoints under /api/v1/chat. The full request and response schemas are in the API reference.

Setup before handover

Before visitors can request a human agent, configure livechat and webhook delivery from a trusted backend using the chatbot API key. Keep this key server-side.

At minimum:

  • Enable livechat for the chatbot.
  • Configure webhook delivery with an HTTPS endpoint and signing secret.
  • Subscribe to the required livechat webhook events.
  • Create or update each agent profile with PUT /api/v1/chat/livechat/agents/{agent_id}.
  • Set attachment retention and availability rules for the chatbot.

Availability is what the visitor sees in GET /api/v1/chat/config under livechat.

| Config field | How the client should use it |
| --- | --- |
| enabled | Show the handover UI only when this is true. |
| configured | Indicates livechat can be made available for this chatbot. |
| availability_status | live means the visitor can request handover now; offline means do not start it. |
| availability_reason | Explains why livechat is live or offline, such as working hours or a manual override. |
| attachments_enabled | Whether visitor and agent livechat messages can include attachments. |
| max_attachment_size_bytes | Per-attachment size limit for livechat uploads. |

If working hours are empty, livechat is treated as always on while enabled. If working hours exist, availability is calculated in the configured timezone. Manual overrides can force livechat live or offline without changing the weekly schedule.
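The availability fields reduce to a simple gate for the handover button. A minimal sketch using only the enabled and availability_status fields from GET /api/v1/chat/config:

```typescript
// Decide whether to show the "talk to a human" option. Field names follow
// the livechat config fields described in this guide.
type LivechatConfig = {
  enabled: boolean;
  availability_status: "live" | "offline";
};

function canRequestHandover(cfg: LivechatConfig): boolean {
  return cfg.enabled && cfg.availability_status === "live";
}
```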

If you use waiting-room forms, GET /api/v1/chat/config returns form references in forms. Fetch the referenced form, submit values before or during the waiting state, and the materialized session values are included in livechat webhooks and agent context responses.

Visitor handover flow

A visitor can request a human handover after your client has a visitor Bearer token.

curl -X POST "https://api.dialogintelligens.dk/api/v1/chat/livechat/handover" \
  -H "Authorization: Bearer $VISITOR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "platform": "website",
    "source": "manual_button",
    "part_id": "part_human_1"
  }'

part_id is optional. Include it when the handover came from a request_human_agent marker so you can connect the livechat session back to the assistant part that prompted it.

On success, the visitor enters waiting:

{
  "livechat_session_id": "livechat_session_42",
  "status": "waiting",
  "platform": "website",
  "language": "english",
  "requested_at": "2025-06-15T14:30:10Z",
  "closed_at": null,
  "closed_by": null,
  "closed_by_agent_id": null,
  "close_reason": null
}

At the same time, Dialoge creates a livechat.handover.requested webhook event for your configured endpoint. That event contains the visitor, session values, detected language, prior conversation context, livechat session id, and a short-lived livechat session token for the agent system.

While the visitor is waiting, your client should:

  • Poll GET /api/v1/chat/livechat/state to detect assignment, agent typing, closure, and feedback availability.
  • Poll GET /api/v1/chat/livechat/messages to merge new livechat messages into the transcript.
  • Continue using the normal AI message stream if you want the assistant to keep helping while the visitor waits.

Once state becomes active, stop sending customer input to POST /api/v1/chat/messages. Send customer livechat messages to POST /api/v1/chat/livechat/messages instead:

{
  "message": {
    "parts": [
      { "type": "text", "text": "Here is the invoice you asked for." },
      {
        "type": "file",
        "data": "JVBERi0xLjQKJeLjz9MKMSAwIG9iago8PC...",
        "mime": "application/pdf",
        "filename": "invoice.pdf"
      }
    ]
  },
  "context": {
    "page": "https://shop.example.com/orders/123"
  }
}

Customer typing indicators use POST /api/v1/chat/livechat/typing with { "is_typing": true } and later { "is_typing": false }. Send typing events only while the livechat state is active.
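The routing rule in this section (AI endpoint until an agent is active, livechat endpoint while active, no writes after closure) can be sketched as a pure function over the livechat state:

```typescript
// State names follow the livechat lifecycle described in this guide.
type LivechatStatus = "inactive" | "waiting" | "active" | "closed";

function customerMessageEndpoint(status: LivechatStatus): string | null {
  switch (status) {
    case "active":
      // A human agent owns the conversation.
      return "/api/v1/chat/livechat/messages";
    case "closed":
      // Message and typing writes are rejected after closure.
      return null;
    default:
      // inactive or waiting: the AI assistant still handles input.
      return "/api/v1/chat/messages";
  }
}
```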

The visitor can close the livechat session with POST /api/v1/chat/livechat/close:

{
  "reason": "Conversation completed"
}

Closing is idempotent. After closure, livechat message and typing writes are rejected. If the state response shows feedback.status: "pending", submit the visitor's post-session rating once with POST /api/v1/chat/livechat/feedback.

External agent flow

The handover webhook is the bridge into your agent system. Handle livechat.handover.requested, verify its signature, then store the event_id as your idempotency key. Automatic retries and manual replays use the same event_id, so processing the same event twice should not create a second ticket or duplicate assignment.
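The event_id idempotency rule can be sketched with a seen-set; a production handler would persist the ids (for example with a unique database constraint) rather than keep them in memory, where they reset on restart:

```typescript
// In-memory stand-in for a durable idempotency store keyed on event_id.
const processedEvents = new Set<string>();

function handleWebhookEvent(
  eventId: string,
  process: () => void,
): "processed" | "duplicate" {
  if (processedEvents.has(eventId)) {
    // Retry or manual replay of an event we already handled.
    return "duplicate";
  }
  processedEvents.add(eventId);
  process();
  return "processed";
}
```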

The webhook payload includes livechat_session.token. Use that token as:

Authorization: Bearer <livechat_session.token>

for agent-side calls:

| Endpoint | Purpose |
| --- | --- |
| GET /api/v1/chat/livechat/agent/context | Fetch latest session values and full transcript for the agent. |
| POST /api/v1/chat/livechat/agent/messages | Send a human-agent message to the visitor. |
| POST /api/v1/chat/livechat/agent/typing | Show or clear the agent typing indicator for the visitor. |
| POST /api/v1/chat/livechat/agent/close | Close the session as the agent. |

The livechat session token is not the chatbot API key and not the visitor token. It is scoped to one livechat session and is short-lived. Treat it as a secret for the active assignment only.

Agent messages include your configured agent_id:

{
  "agent_id": "00000000-0000-4000-8000-0000000000a1",
  "message": {
    "parts": [{ "type": "text", "text": "Hi, I'm Alice. I can help from here." }]
  }
}

The first successful agent message or typing update assigns that agent to the session when no agent is already active. If another agent is already assigned, the API returns a conflict. Agent-authored messages appear in history with role: "agent" and include the configured display name and avatar.

External systems that maintain their own queue can also list sessions with the public livechat session endpoints using the chatbot API key. Use the turn filter to find sessions where the agent system should act next, and use the notification count endpoint for lightweight badge counts.

Webhook events and signing

Livechat webhooks are signed JSON POST requests. Verify the signature against the raw request body before trusting the payload:

import { createHmac, timingSafeEqual } from "node:crypto";
 
function verifyDialogeWebhook({
  rawBody,
  timestamp,
  signature,
  signingSecret,
}: {
  rawBody: string;
  timestamp: string;
  signature: string;
  signingSecret: string;
}) {
  const expected = createHmac("sha256", signingSecret)
    .update(`${timestamp}.${rawBody}`)
    .digest("hex");
  const expectedBuffer = Buffer.from(expected, "hex");
  const signatureBuffer = Buffer.from(signature, "hex");
 
  return (
    expectedBuffer.length === signatureBuffer.length &&
    timingSafeEqual(expectedBuffer, signatureBuffer)
  );
}

The relevant headers are:

| Header | Meaning |
| --- | --- |
| x-di-event | Event type, such as livechat.handover.requested. |
| x-di-timestamp | Timestamp used in the HMAC input. |
| x-di-signature | Lowercase hex HMAC-SHA256 digest. |
| x-di-delivery-id | Unique HTTP delivery attempt id. |
| x-di-attempt | 1-based delivery attempt number. |
| x-di-environment | API key environment, live or test. |

Important livechat events:

| Event type | When it is emitted |
| --- | --- |
| livechat.handover.requested | Visitor requests a human agent. |
| livechat.waiting.customer.message.created | Visitor sends a message while waiting for an agent. |
| livechat.waiting.assistant.message.created | Assistant sends a message while the visitor is waiting. |
| livechat.customer.message.created | Visitor sends a message during active livechat. |
| livechat.customer.typing.started | Visitor starts typing during active livechat. |
| livechat.customer.typing.stopped | Visitor stops typing during active livechat. |
| livechat.session.closed | Visitor or agent closes the livechat session. |
| livechat.feedback.submitted | Visitor submits post-session livechat feedback. |
| visitor.data.deleted | Visitor requests deletion of their chatbot data. |

Not all events are optional subscriptions: required livechat events are always kept in the subscription set while webhook delivery is enabled, so the downstream system can maintain a coherent session state.

Attachments

Livechat message parts can include image and file inputs when attachments are enabled. The API stores the attachment for the configured retention period and returns message parts with signed download URLs:

{
  "type": "file",
  "attachment_id": "attachment_123",
  "filename": "invoice.pdf",
  "mime_type": "application/pdf",
  "size_bytes": 48231,
  "url": "https://api.dialogintelligens.dk/api/v1/chat/livechat/attachments/attachment_123?token=...",
  "url_expires_at": "2025-06-15T14:46:00Z"
}

Do not store signed attachment URLs as permanent file links. Store attachment_id and refetch the message or agent context when you need a fresh URL. Expired or deleted attachments return an API error instead of file bytes.
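A client can compare url_expires_at against the current time to decide when to refetch by attachment_id. A minimal sketch, using the field name from the attachment example above:

```typescript
// True when the signed URL has expired and a fresh one should be fetched
// via the message or agent context endpoints.
function needsFreshUrl(part: { url_expires_at: string }, now: Date = new Date()): boolean {
  return now.getTime() >= new Date(part.url_expires_at).getTime();
}
```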

Livechat state model

Livechat sessions move through this lifecycle:

inactive -> waiting -> active -> closed

inactive means the visitor has no current livechat session. waiting means handover was requested and no agent has joined yet. active means a human agent is assigned and customer input belongs on the livechat message endpoint. closed is terminal for that session.

Use state_version and updated_at from state/session responses to reconcile polling updates. Use closed_by, closed_by_agent_id, and close_reason to explain why a session ended. Use feedback.status to decide whether the visitor can rate the session.
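Treating state_version as a monotonic counter makes out-of-order poll responses safe to ignore. A sketch:

```typescript
// Minimal shape for a polled state response; the full shape is in the
// API reference.
type PolledState = { state_version: number; status: string };

// Keep the incoming update only if it is newer than what we already hold.
function reconcile(current: PolledState | null, incoming: PolledState): PolledState {
  if (current && incoming.state_version <= current.state_version) {
    return current; // stale or duplicate poll response
  }
  return incoming;
}
```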

Webhook event replay

Outbound webhook events are persisted before delivery. API-key callers can list delivery state, inspect attempts, and replay unexpired events through the /api/v1/chat/events endpoints.

Stored webhook events expire 30 days after creation. After expiry, events are no longer listable, fetchable, or replayable. Expired events that were never delivered are marked dead before cleanup.
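The 30-day window can be checked client-side before attempting a replay. A minimal sketch, assuming the event's creation timestamp is available as an ISO string:

```typescript
// Stored webhook events expire 30 days after creation.
const REPLAY_WINDOW_MS = 30 * 24 * 60 * 60 * 1000;

function isReplayable(createdAt: string, now: Date = new Date()): boolean {
  return now.getTime() - new Date(createdAt).getTime() < REPLAY_WINDOW_MS;
}
```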

History, feedback, and deletion

Once a visitor is authenticated, you can also use:

  • GET /api/v1/chat/messages to load paginated message history
  • POST /api/v1/chat/rate to save a 1-5 conversation rating and optional feedback
  • DELETE /api/v1/chat to remove all data for the current visitor

These endpoints use the same Bearer token as config, messages, and actions.
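Client-side, the rating can be checked before the request is sent. A sketch; the payload shape here is an assumption, so check the API reference for the exact field names:

```typescript
// Validates a 1-5 conversation rating before calling POST /api/v1/chat/rate.
// The { rating, feedback } shape is illustrative.
function buildRatingPayload(rating: number, feedback?: string): { rating: number; feedback?: string } {
  if (!Number.isInteger(rating) || rating < 1 || rating > 5) {
    throw new Error("rating must be an integer from 1 to 5");
  }
  return feedback ? { rating, feedback } : { rating };
}
```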

When webhook delivery is configured, DELETE /api/v1/chat also emits a signed visitor.data.deleted webhook after Dialoge has removed the visitor data. The payload includes chatbot.id, visitor.id, visitor.session_id, deletion.reason: "user_request", and deletion counts. Use this event to remove matching visitor/session data in your own systems. Like other outbound webhooks, deletion events are persisted, signed, retried, and visible through the webhook event listing and replay endpoints until they expire.