# Diverge Docs
> Everything needed to integrate the Diverge chatbot, track chatbot analytics, and work against the chatbot API.
import { ApiReference } from "../components/ApiReference";
## Chatbot Analytics
This document covers the analytics events emitted by the chatbot script and how to consume them in
your own tracking setup, for example Google Tag Manager.
### Prerequisites
The chatbot script must be loaded on the page:
```html
<!-- Chatbot embed script: use the snippet provided for your chatbot -->
```
No additional scripts or configuration are needed. Analytics events are emitted automatically.
### How It Works
The chatbot dispatches `CustomEvent`s on the global `window` object whenever key interactions
occur. These events are intentionally decoupled from any specific analytics platform, so you keep
full control over how to forward and map the data.
### Available Events
| Event Name | Fired When | Extra Detail Fields |
| ------------------------------ | ----------------------------------------------- | --------------------- |
| `chat_open` | The user opens the chat window | None |
| `chat_response` | The chatbot returns a response | `response_latency_ms` |
| `chat_recommendation` | The chatbot returns product recommendations | None |
| `chat_product_click` | The user clicks a product link in the chat | None |
| `chat_contact_form_shown` | The chatbot shows the built-in contact form | None |
| `chat_conversation_classified` | The conversation is classified after a response | `classification` |
All events include `chat_type: "shopping_assistant"` in their `detail` payload.
### Listening to Events
```js
window.addEventListener("chat_open", (e) => {
console.log("Chat opened", e.detail);
});
window.addEventListener("chat_response", (e) => {
console.log("Response latency:", e.detail.response_latency_ms, "ms");
});
window.addEventListener("chat_recommendation", (e) => {
console.log("Recommendation shown", e.detail);
});
window.addEventListener("chat_product_click", (e) => {
console.log("Product clicked", e.detail);
});
window.addEventListener("chat_contact_form_shown", (e) => {
console.log("Contact form shown", e.detail);
});
window.addEventListener("chat_conversation_classified", (e) => {
console.log("Classification:", e.detail.classification);
});
```
### Google Tag Manager Integration
Push events into the GTM `dataLayer` by listening to the chatbot events and mapping them:
```html
<script>
  window.dataLayer = window.dataLayer || [];
  [
    "chat_open",
    "chat_response",
    "chat_recommendation",
    "chat_product_click",
    "chat_contact_form_shown",
    "chat_conversation_classified",
  ].forEach(function (name) {
    window.addEventListener(name, function (e) {
      // Forward the event name plus the detail payload (chat_type, latency, classification, ...)
      window.dataLayer.push(Object.assign({ event: name }, e.detail));
    });
  });
</script>
```
Then create corresponding triggers in GTM using **Custom Event** with the event names above.
### Event Detail Payloads
Every event carries a `detail` object accessible via `e.detail`:
**`chat_open`**
```json
{ "chat_type": "shopping_assistant" }
```
**`chat_response`**
```json
{ "chat_type": "shopping_assistant", "response_latency_ms": 1234 }
```
**`chat_recommendation`**
```json
{ "chat_type": "shopping_assistant" }
```
**`chat_product_click`**
```json
{ "chat_type": "shopping_assistant" }
```
**`chat_contact_form_shown`**
```json
{ "chat_type": "shopping_assistant" }
```
**`chat_conversation_classified`**
```json
{ "chat_type": "shopping_assistant", "classification": "Produktinformation" }
```
### postMessage from Chatbot Iframe
The chatbot iframe sends `postMessage` events to the parent window. You can listen to these
instead of, or in addition to, the `CustomEvent`s above. Use this when you need raw access to
iframe messages or when the chatbot script is not loaded on your page.
**Origin:** `https://chatbot.dialogintelligens.dk` (or `http://localhost:3002` in development)
**Payload format:** Each message has an `action` field. Some actions include extra fields.
| Action | Extra Fields | When |
| ------------------------ | ------------------------------------- | ------------------------------------- |
| `productClick` | None | User clicks a product in the chat |
| `navigate` | `url` | Parent should navigate to product URL |
| `userMessageSubmitted` | None | User submitted a message |
| `firstMessageSent` | `chatbotID` | User sent their first message |
| `assistantFirstToken` | None | First token of AI response arrived |
| `productRecommendation` | Optional `urls` | Product recommendations shown |
| `contactFormShown` | None | Built-in contact form shown |
| `conversationClassified` | `emne` | Conversation classified by AI |
| `purchaseReported` | `chatbotID`, `totalPrice`, `currency` | User reported a purchase |
| `expandChat` | None | Chat expanded |
| `collapseChat` | None | Chat collapsed |
| `closeChat` | None | Chat closed |
| `toggleSize` | None | User toggled chat size |
**Example listener:**
```js
const CHATBOT_ORIGIN = "https://chatbot.dialogintelligens.dk";
window.addEventListener("message", (event) => {
if (event.origin !== CHATBOT_ORIGIN) return;
const data = event.data ?? {};
if (data.action !== "productClick") return;
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
event: "chat_product_click",
chat_type: "shopping_assistant",
});
});
```
**Mapping to CustomEvents:** The chatbot script on the parent translates some postMessage actions
into CustomEvents. If the script is loaded, you get both the raw `postMessage` and the
`CustomEvent`. Actions that map to CustomEvents:
`productClick` -> `chat_product_click`,
`assistantFirstToken` -> `chat_response`,
`productRecommendation` -> `chat_recommendation`,
`contactFormShown` -> `chat_contact_form_shown`,
`conversationClassified` -> `chat_conversation_classified`.
### Notes
* Events are dispatched automatically. No initialization or opt-in is required.
* The `response_latency_ms` value is the round-trip time in milliseconds from when the user sends
a message until the chatbot response arrives.
* The `chat_conversation_classified` event fires a few seconds after each response, once the
backend finishes classifying the conversation. The `classification` value is the topic or
category assigned by the AI, for example `"Ordre"`, `"Produktinformation"`, or `"Reklamation"`,
or `null` if classification failed.
* The implementation is decoupled from GTM on purpose. You are responsible for listening, mapping,
and pushing data to your analytics platform.
* Events are standard `CustomEvent`s, so they work with any analytics tool that can listen to DOM
events.
## Chatbot API Errors
The chatbot API has two error channels:
* Non-2xx HTTP responses use the standard `ApiError` envelope.
* `POST /api/v1/chat/messages` can also return a `200` SSE stream that later ends with a terminal
`error` event.
Handle these separately. `ApiError.error.code` and `StreamErrorEvent.code` are different enums.
### HTTP API errors
Before a stream starts, endpoints return non-2xx HTTP responses with this shape:
```json
{
"error": {
"code": "validation",
"message": "Validation failed",
"params": [{ "field": "message.parts.0.text", "message": "Text is required" }]
}
}
```
`params` is only present when the API can point to specific invalid fields.
| Code | Typical status | Meaning | Client handling |
| -------------- | -------------- | ------------------------------------------------------------- | ------------------------------------------------------------------- |
| `unauthorized` | `401` | Missing, expired, or invalid authentication credentials. | Request a new visitor token or ask the user to restart the session. |
| `forbidden` | `403` | Authenticated, but not allowed to access the resource. | Stop the action and show a permission error. |
| `not_found` | `404` | The target resource does not exist or is unavailable. | Show a not-available state; do not retry automatically. |
| `rate_limited` | `429` | Too many requests in a short period. | Back off before retrying. |
| `validation` | `422` | The request body or query parameters failed validation. | Fix the highlighted fields from `params` before retrying. |
| `internal` | `500` | Unexpected server failure before the response stream started. | Show a generic error and allow retry if the action is safe. |
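A small dispatch helper can centralize the table above. This is a sketch: the error codes come from the table, while the helper name and policy shape are ours.

```javascript
// Sketch: map ApiError codes from the table above to a client-side policy.
// The code list mirrors the docs; the { retry, action } shape is our own.
function apiErrorPolicy(code) {
  switch (code) {
    case "unauthorized":
      return { retry: false, action: "reauthenticate" }; // request a new visitor token
    case "forbidden":
      return { retry: false, action: "show_permission_error" };
    case "not_found":
      return { retry: false, action: "show_not_available" }; // do not retry automatically
    case "rate_limited":
      return { retry: true, action: "backoff" }; // back off before retrying
    case "validation":
      return { retry: false, action: "fix_fields" }; // use error.params to highlight fields
    case "internal":
      return { retry: true, action: "show_generic_error" };
    default:
      return { retry: false, action: "show_generic_error" }; // unknown future codes
  }
}
```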
### Stream errors
`POST /api/v1/chat/messages` is different because it streams the assistant response. Once the HTTP
response has started, the server cannot switch to a non-2xx HTTP error. Instead, the stream ends with
an SSE `error` event:
```text
event: error
data: {"code":"generation_failed","message":"Failed to generate a response. Please try again.","retryable":true}
```
The parsed event data has this shape:
```json
{
"code": "generation_failed",
"message": "Failed to generate a response. Please try again.",
"retryable": true
}
```
| Code | When it happens | Client handling |
| ------------------- | ---------------------------------------------------- | ------------------------------------------------------------ |
| `generation_failed` | Response generation failed after the stream started. | Show `message`; offer retry only when `retryable` is `true`. |
After receiving a stream `error`, treat the stream as complete and disconnect. It will not be
followed by a `done` event.
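If you parse the SSE frames yourself, a terminal `error` event can be decoded like this. A minimal sketch: it handles a single event block, not the full SSE grammar, and the helper name is ours.

```javascript
// Sketch: parse one SSE event block ("event:" / "data:" lines) into { event, data }.
function parseSseEvent(raw) {
  let eventName = "message"; // SSE default when no event: line is present
  const dataLines = [];
  for (const line of raw.split("\n")) {
    if (line.startsWith("event:")) eventName = line.slice(6).trim();
    else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
  }
  return { event: eventName, data: JSON.parse(dataLines.join("\n")) };
}

const terminal = parseSseEvent(
  'event: error\ndata: {"code":"generation_failed","message":"Failed to generate a response. Please try again.","retryable":true}'
);
// terminal.event === "error", terminal.data.retryable === true
```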
### Handling both channels
Check the HTTP response first. Only start reading stream events after `response.ok` is true.
```ts
const response = await fetch("https://api.dialogintelligens.dk/api/v1/chat/messages", {
method: "POST",
headers: {
Authorization: `Bearer ${token}`,
"Content-Type": "application/json",
},
body: JSON.stringify(requestBody),
});
if (!response.ok) {
const apiError = await response.json();
handleApiError(apiError.error.code, apiError.error.message, apiError.error.params);
return;
}
await readSse(response.body, (event) => {
if (event.event === "error") {
handleStreamError(event.data.code, event.data.message, event.data.retryable);
return;
}
handleStreamEvent(event);
});
```
Keep a generic fallback for unknown future error codes, but do not treat stream error codes as
`ApiError` codes.
## Chatbot API Flow
This guide explains how the public chatbot API fits together from an integrator's point of view.
It focuses on the flow and data model so you can build a client without guessing how responses
should be interpreted.
For the full schema, field-level validation, and complete endpoint reference, see the
[API reference](/api).
### Flow at a glance
1. Your backend calls `POST /api/v1/chat/auth` with the chatbot API key and a `chatbot_id`.
2. The API returns a visitor Bearer token.
3. Your client calls `GET /api/v1/chat/config` to load branding and startup configuration.
4. Your client sends user input to `POST /api/v1/chat/messages` and receives a streamed response.
5. The stream emits structured content as `part_delta` and `part` events, then ends with `done`
or `error`.
6. If the assistant returns an actionable marker, your client collects the required data and sends
it to `POST /api/v1/chat/actions`.
7. You can optionally read message history, save a conversation rating, or delete the visitor's
data.
### Auth and config
The API has two authentication layers:
| Step | Who calls it | Purpose |
| ------------------------ | ------------------------------ | ------------------------------------------------------------------------------------------ |
| `POST /api/v1/chat/auth` | Your backend | Exchange the API key for a visitor token |
| Visitor-scoped endpoints | Your client or trusted backend | Use the returned Bearer token for config, messages, actions, history, rating, and deletion |
Keep the API key on your server. Do not expose it in browser code, mobile apps, or public
frontend bundles.
`/auth` uses HTTP Basic Auth with:
* the API key as the username
* an empty password
When the token expires, call `/auth` again to create a new visitor token. There is no refresh
token flow.
#### Example: create a visitor token
```bash
curl -X POST "https://api.dialogintelligens.dk/api/v1/chat/auth" \
-u "$CHATBOT_API_KEY:" \
-H "Content-Type: application/json" \
-d '{
"chatbot_id": "shop-bot"
}'
```
The response contains a JWT token:
```json
{
"token": "eyJhbGciOiJIUzI1NiIs..."
}
```
After that, your client can load config with the Bearer token. `GET /api/v1/chat/config`
returns the public UI configuration for the chatbot, including:
* name
* avatar
* welcome message
* theme colors
* popup messages
### How message data is structured
Every conversation item is a `message`. A message contains ordered `parts`.
For user input, `message.parts` is how you send text and attachments in one request.
For assistant output, `message.parts` is also how you render the response. Parts are already
structured for you, so the client should render them directly instead of trying to parse raw text.
#### Mental model
| Term | Meaning |
| --------- | -------------------------------------------------------- |
| `message` | One user, assistant, or system entry in the conversation |
| `part` | A top-level renderable unit inside a message |
| `block` | A section inside a `rich_text` part or table cell |
| `span` | Inline formatted text inside a paragraph or bullet item |
```text
message
`-- parts[]
|-- rich_text
| `-- blocks[]
| |-- paragraph
| | `-- spans[] -> text | bold | strike | link
| `-- bullet_list
|-- image
|-- table
|-- products
`-- show_contact_form
```
#### Parts
Common assistant part types are:
* `rich_text` for formatted text
* `image` for image content
* `table` for structured tables
* `products` for product cards
* marker parts such as `show_contact_form` or `request_image_upload`
The important detail is that all of these are siblings in the same `parts` array. That means a
single assistant message can look like:
1. Some text
2. A marker telling the client to open a form
3. More text after the marker
#### Blocks and spans
Inside a `rich_text` part:
* `blocks` describe larger sections such as paragraphs and bullet lists
* `spans` describe inline formatting inside those blocks, such as plain text, bold text, struck
text, and links
This lets a client render formatted content without having to parse markdown or custom marker
syntax.
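A renderer can walk spans with a simple switch. A sketch: the span types match the list above, but the exact field names for `bold`, `strike`, and `link` spans (here `text` and `url`) are assumptions; check the API reference for the real shapes.

```javascript
// Sketch: render rich_text spans to a markdown-ish string.
// Span type names follow the docs; the field names beyond "text" spans are assumed.
function renderSpan(span) {
  switch (span.type) {
    case "text":
      return span.text;
    case "bold":
      return `**${span.text}**`;
    case "strike":
      return `~~${span.text}~~`;
    case "link":
      return `[${span.text}](${span.url})`;
    default:
      return span.text ?? ""; // unknown future span types degrade to plain text
  }
}

function renderParagraph(block) {
  return block.spans.map(renderSpan).join("");
}
```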
#### Example: a message with text and an action
```json
{
"message_id": "msg_123",
"role": "assistant",
"parts": [
{
"type": "rich_text",
"part_id": "part_1",
"blocks": [
{
"type": "paragraph",
"spans": [{ "type": "text", "text": "Need help with your order?" }]
}
]
},
{
"type": "show_contact_form",
"part_id": "part_2",
"fields": [
{ "key": "name", "label": "Your name", "required": true },
{ "key": "email", "label": "Email address", "type": "email", "required": true }
]
},
{
"type": "rich_text",
"part_id": "part_3",
"blocks": [
{
"type": "paragraph",
"spans": [{ "type": "text", "text": "Fill in the form and we will follow up." }]
}
]
}
]
}
```
In this example, the form is not separate from the message. It is one ordered part of the message.
### How streaming works
`POST /api/v1/chat/messages` returns a Server-Sent Events stream. Because this is a `POST`
endpoint, the browser's built-in `EventSource` API is not a good fit. Use `fetch()` directly or a
helper such as `@microsoft/fetch-event-source`.
The stream uses five public event types:
| Event | Meaning |
| ------------ | ------------------------------------------------- |
| `status` | Lifecycle updates such as connected or processing |
| `part_delta` | Incremental updates for a streamed part |
| `part` | A finalized structured part |
| `done` | The final assembled assistant message |
| `error` | A terminal stream error |
Typical flow:
```text
status(connected)
status(processing)
part_delta / part ...
done or error
```
`part_delta` is useful when you want progressive rendering for content such as:
* rich text
* products
* tables
`part` gives you a finalized structured part. Marker parts typically arrive this way because they
do not need progressive updates.
`done.message` is the final source of truth for the assistant response. If you rendered draft
content while streaming, replace or reconcile it with the message from `done`.
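One way to implement that reconciliation is a draft store keyed by `part_id`. A sketch: it assumes `part_delta` deltas are appendable text, which may not hold for every part type.

```javascript
// Sketch: accumulate streamed drafts per part_id, then let done.message win.
function createDraftStore() {
  const drafts = new Map();
  return {
    applyDelta(partId, delta) {
      // Append incremental content for progressive rendering.
      drafts.set(partId, (drafts.get(partId) ?? "") + delta);
    },
    finalize(doneMessage) {
      // done.message is the source of truth; discard drafts.
      drafts.clear();
      return doneMessage;
    },
    get(partId) {
      return drafts.get(partId) ?? "";
    },
  };
}
```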
#### Example: consume the message stream
```ts
import { fetchEventSource } from "@microsoft/fetch-event-source";
await fetchEventSource("https://api.dialogintelligens.dk/api/v1/chat/messages", {
method: "POST",
headers: {
Authorization: `Bearer ${token}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
message: {
parts: [{ type: "text", text: "What is your return policy?" }],
},
context: {
page: window.location.href,
},
}),
onmessage(event) {
const data = JSON.parse(event.data);
switch (event.event) {
case "status":
updateStatus(data.status);
break;
case "part_delta":
applyPartDelta(data.part_id, data.delta);
break;
case "part":
renderFinalPart(data.part);
break;
case "done":
replaceDraftMessage(data.message);
break;
case "error":
showError(data.message, {
code: data.code,
retryable: data.retryable,
});
break;
}
},
});
```
### Error handling
`POST /api/v1/chat/messages` has two different error channels, depending on whether the SSE
stream has started. Non-2xx HTTP responses use `ApiError`; terminal SSE `error` events use
`StreamErrorEvent`.
For the full error-code catalog and handling examples, see
[Chatbot API Errors](/guides/chatbot-api-errors).
### How markers and actions work together
Markers are assistant parts that tell the client to do something beyond rendering text.
The key link is `part_id`:
* the assistant sends a marker part with a `part_id`
* your client renders the related UI
* when the user completes the action, your client sends that same `part_id` to `/actions`
#### Marker behavior
| Marker part | What the client does | Follow-up |
| ---------------------- | ---------------------------------------------------------------------- | ----------------------------------------------------- |
| `show_contact_form` | Render the fields from the marker | Submit `action.type = "contact_form"` to `/actions` |
| `show_support_ticket` | Render the ticket form and attachment rules from the marker | Submit `action.type = "support_ticket"` to `/actions` |
| `request_image_upload` | Prompt the user to upload an image that matches the marker constraints | Send a later `/messages` request with an `image` part |
| `request_human_agent` | Show a handoff option in your UI | Request livechat handover |
| `custom` | Handle your own `key` and optional `payload` | Client-defined flow |
Only `show_contact_form` and `show_support_ticket` are submitted to `POST /api/v1/chat/actions`.
#### Example: submit an action
```json
{
"part_id": "part_2",
"action": {
"type": "contact_form",
"fields": {
"name": "Jane Doe",
"email": "jane@example.com",
"message": "I need help with my order"
}
}
}
```
The same pattern applies to `support_ticket`, but with the support ticket payload shape defined in
the API reference.
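Building the request body can be factored into a tiny helper. A sketch: the payload shape mirrors the example above; the helper name is ours.

```javascript
// Sketch: build the /actions request body from a marker part_id and collected fields.
function buildContactFormAction(partId, fields) {
  return {
    part_id: partId, // must be the part_id from the show_contact_form marker
    action: { type: "contact_form", fields },
  };
}
```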
### Livechat end-to-end
Livechat is the handover path from the AI conversation to a human agent. The public API keeps the
visitor experience, your backend setup, outbound webhooks, and agent-system writes separated by
authentication type.
There are three livechat actors:
| Actor | Authentication | What it does |
| ------------------ | --------------------------------------------- | ---------------------------------------------------------------------------- |
| Setup backend | Chatbot API key with HTTP Basic Auth | Configures livechat, webhooks, agent profiles, and queue/list views |
| Visitor client | Visitor Bearer token from `/api/v1/chat/auth` | Requests handover, polls state, sends customer messages, closes, and rates |
| External agent app | Livechat session Bearer token from webhook | Reads context and sends agent messages, typing updates, and agent-side close |
The livechat endpoints in this guide are the public chatbot API endpoints under `/api/v1/chat`.
The full request and response schemas are in the [API reference](/api).
#### Setup before handover
Before visitors can request a human agent, configure livechat and webhook delivery from a trusted
backend using the chatbot API key. Keep this key server-side.
At minimum:
* Enable livechat for the chatbot.
* Configure webhook delivery with an HTTPS endpoint and signing secret.
* Subscribe to the required livechat webhook events.
* Create or update each agent profile with `PUT /api/v1/chat/livechat/agents/{agent_id}`.
* Set attachment retention and availability rules for the chatbot.
Availability is what the visitor sees in `GET /api/v1/chat/config` under `livechat`.
| Config field | How the client should use it |
| --------------------------- | ------------------------------------------------------------------------------------- |
| `enabled` | Show the handover UI only when this is true. |
| `configured` | Indicates livechat can be made available for this chatbot. |
| `availability_status` | `live` means the visitor can request handover now; `offline` means do not start it. |
| `availability_reason` | Explains why livechat is live or offline, such as working hours or a manual override. |
| `attachments_enabled` | Whether visitor and agent livechat messages can include attachments. |
| `max_attachment_size_bytes` | Per-attachment size limit for livechat uploads. |
If working hours are empty, livechat is treated as always on while enabled. If working hours exist,
availability is calculated in the configured timezone. Manual overrides can force livechat live or
offline without changing the weekly schedule.
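The visibility decision can then be reduced to a small guard. A sketch using the config fields from the table above; the helper name is ours.

```javascript
// Sketch: decide whether to show the handover UI from config.livechat.
function canRequestHandover(livechat) {
  if (!livechat || !livechat.enabled) return false; // show handover UI only when enabled
  return livechat.availability_status === "live"; // "offline" means do not start handover
}
```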
If you use waiting-room forms, `GET /api/v1/chat/config` returns form references in `forms`.
Fetch the referenced form, submit values before or during the waiting state, and the materialized
session values are included in livechat webhooks and agent context responses.
#### Visitor handover flow
A visitor can request a human handover after your client has a visitor Bearer token.
```bash
curl -X POST "https://api.dialogintelligens.dk/api/v1/chat/livechat/handover" \
-H "Authorization: Bearer $VISITOR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"platform": "website",
"source": "manual_button",
"part_id": "part_human_1"
}'
```
`part_id` is optional. Include it when the handover came from a `request_human_agent` marker so you
can connect the livechat session back to the assistant part that prompted it.
On success, the visitor enters `waiting`:
```json
{
"livechat_session_id": "livechat_session_42",
"status": "waiting",
"platform": "website",
"language": "english",
"requested_at": "2025-06-15T14:30:10Z",
"closed_at": null,
"closed_by": null,
"closed_by_agent_id": null,
"close_reason": null
}
```
At the same time, Dialoge creates a `livechat.handover.requested` webhook event for your
configured endpoint. That event contains the visitor, session values, detected language, prior
conversation context, livechat session id, and a short-lived livechat session token for the agent
system.
While the visitor is waiting, your client should:
* Poll `GET /api/v1/chat/livechat/state` to detect assignment, agent typing, closure, and feedback
availability.
* Poll `GET /api/v1/chat/livechat/messages` to merge new livechat messages into the transcript.
* Continue using the normal AI message stream if you want the assistant to keep helping while the
visitor waits.
Once state becomes `active`, stop sending customer input to `POST /api/v1/chat/messages`. Send
customer livechat messages to `POST /api/v1/chat/livechat/messages` instead:
```json
{
"message": {
"parts": [
{ "type": "text", "text": "Here is the invoice you asked for." },
{
"type": "file",
"data": "JVBERi0xLjQKJeLjz9MKMSAwIG9iago8PC...",
"mime": "application/pdf",
"filename": "invoice.pdf"
}
]
},
"context": {
"page": "https://shop.example.com/orders/123"
}
}
```
Customer typing indicators use `POST /api/v1/chat/livechat/typing` with `{ "is_typing": true }`
and later `{ "is_typing": false }`. Send typing events only while the livechat state is `active`.
The visitor can close the livechat session with `POST /api/v1/chat/livechat/close`:
```json
{
"reason": "Conversation completed"
}
```
Closing is idempotent. After closure, livechat message and typing writes are rejected. If the state
response shows `feedback.status: "pending"`, submit the visitor's post-session rating once with
`POST /api/v1/chat/livechat/feedback`.
#### External agent flow
The handover webhook is the bridge into your agent system. Handle `livechat.handover.requested`,
verify its signature, then store the `event_id` as your idempotency key. Automatic retries and
manual replays use the same `event_id`, so processing the same event twice should not create a
second ticket or duplicate assignment.
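A sketch of that idempotency guard, with an in-memory `Set` standing in for the durable store a real integration would use:

```javascript
// Sketch: process each webhook event_id at most once, so retries and replays are no-ops.
function createEventDeduper() {
  const seen = new Set();
  return function handleOnce(eventId, handler) {
    if (seen.has(eventId)) return false; // retry or manual replay: already processed
    seen.add(eventId);
    handler();
    return true;
  };
}
```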
The webhook payload includes `livechat_session.token`. Use that token as:
```text
Authorization: Bearer {livechat_session.token}
```
for agent-side calls:
| Endpoint | Purpose |
| ------------------------------------------- | -------------------------------------------------------------- |
| `GET /api/v1/chat/livechat/agent/context` | Fetch latest session values and full transcript for the agent. |
| `POST /api/v1/chat/livechat/agent/messages` | Send a human-agent message to the visitor. |
| `POST /api/v1/chat/livechat/agent/typing` | Show or clear the agent typing indicator for the visitor. |
| `POST /api/v1/chat/livechat/agent/close` | Close the session as the agent. |
The livechat session token is not the chatbot API key and not the visitor token. It is scoped to
one livechat session and is short-lived. Treat it as a secret for the active assignment only.
Agent messages include your configured `agent_id`:
```json
{
"agent_id": "00000000-0000-4000-8000-0000000000a1",
"message": {
"parts": [{ "type": "text", "text": "Hi, I'm Alice. I can help from here." }]
}
}
```
The first successful agent message or typing update assigns that agent to the session when no agent
is already active. If another agent is already assigned, the API returns a conflict. Agent-authored
messages appear in history with `role: "agent"` and include the configured display name and avatar.
External systems that maintain their own queue can also list sessions with the public livechat
session endpoints using the chatbot API key. Use the `turn` filter to find sessions where the
agent system should act next, and use the notification count endpoint for lightweight badge counts.
#### Webhook events and signing
Livechat webhooks are signed JSON `POST` requests. Verify the signature against the raw request
body before trusting the payload:
```ts
import { createHmac, timingSafeEqual } from "node:crypto";
function verifyDialogeWebhook({
rawBody,
timestamp,
signature,
signingSecret,
}: {
rawBody: string;
timestamp: string;
signature: string;
signingSecret: string;
}) {
const expected = createHmac("sha256", signingSecret)
.update(`${timestamp}.${rawBody}`)
.digest("hex");
const expectedBuffer = Buffer.from(expected, "hex");
const signatureBuffer = Buffer.from(signature, "hex");
return (
expectedBuffer.length === signatureBuffer.length &&
timingSafeEqual(expectedBuffer, signatureBuffer)
);
}
```
The relevant headers are:
| Header | Meaning |
| ------------------ | -------------------------------------------------- |
| `x-di-event` | Event type, such as `livechat.handover.requested`. |
| `x-di-timestamp` | Timestamp used in the HMAC input. |
| `x-di-signature` | Lowercase hex HMAC-SHA256 digest. |
| `x-di-delivery-id` | Unique HTTP delivery attempt id. |
| `x-di-attempt` | 1-based delivery attempt number. |
| `x-di-environment` | API key environment, `live` or `test`. |
Important livechat events:
| Event type | When it is emitted |
| -------------------------------------------- | ------------------------------------------------------- |
| `livechat.handover.requested` | Visitor requests a human agent. |
| `livechat.waiting.customer.message.created` | Visitor sends a message while waiting for an agent. |
| `livechat.waiting.assistant.message.created` | Assistant sends a message while the visitor is waiting. |
| `livechat.customer.message.created` | Visitor sends a message during active livechat. |
| `livechat.customer.typing.started` | Visitor starts typing during active livechat. |
| `livechat.customer.typing.stopped` | Visitor stops typing during active livechat. |
| `livechat.session.closed` | Visitor or agent closes the livechat session. |
| `livechat.feedback.submitted` | Visitor submits post-session livechat feedback. |
| `visitor.data.deleted` | Visitor requests deletion of their chatbot data. |
Not every event is an optional subscription: required livechat events are always kept in the
subscription set while webhook delivery is enabled, so the downstream system can maintain a
coherent session state.
#### Attachments
Livechat message parts can include `image` and `file` inputs when attachments are enabled. The API
stores the attachment for the configured retention period and returns message parts with signed
download URLs:
```json
{
"type": "file",
"attachment_id": "attachment_123",
"filename": "invoice.pdf",
"mime_type": "application/pdf",
"size_bytes": 48231,
"url": "https://api.dialogintelligens.dk/api/v1/chat/livechat/attachments/attachment_123?token=...",
"url_expires_at": "2025-06-15T14:46:00Z"
}
```
Do not store signed attachment URLs as permanent file links. Store `attachment_id` and refetch the
message or agent context when you need a fresh URL. Expired or deleted attachments return an API
error instead of file bytes.
#### Livechat state model
Livechat sessions move through this lifecycle:
```text
inactive -> waiting -> active -> closed
```
`inactive` means the visitor has no current livechat session. `waiting` means handover was
requested and no agent has joined yet. `active` means a human agent is assigned and customer input
belongs on the livechat message endpoint. `closed` is terminal for that session.
Use `state_version` and `updated_at` from state/session responses to reconcile polling updates.
Use `closed_by`, `closed_by_agent_id`, and `close_reason` to explain why a session ended. Use
`feedback.status` to decide whether the visitor can rate the session.
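The lifecycle can be encoded as a transition table. A sketch of our reading of the state model; for example, closing directly from `waiting` is assumed to be allowed since the visitor can close at any time.

```javascript
// Sketch: allowed livechat transitions derived from the lifecycle above.
const LIVECHAT_TRANSITIONS = {
  inactive: ["waiting"],          // handover requested
  waiting: ["active", "closed"],  // agent joins, or session is closed while waiting
  active: ["closed"],             // visitor or agent closes
  closed: [],                     // terminal for that session
};

function canTransition(from, to) {
  return (LIVECHAT_TRANSITIONS[from] ?? []).includes(to);
}
```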
### Webhook event replay
Outbound webhook events are persisted before delivery. API-key callers can list delivery state,
inspect attempts, and replay unexpired events through the `/api/v1/chat/events` endpoints.
Stored webhook events expire 30 days after creation. After expiry, events are no longer listable,
fetchable, or replayable. Expired events that were never delivered are marked `dead` before cleanup.
### History, feedback, and deletion
Once a visitor is authenticated, you can also use:
* `GET /api/v1/chat/messages` to load paginated message history
* `POST /api/v1/chat/rate` to save a `1`-`5` conversation rating and optional feedback
* `DELETE /api/v1/chat` to remove all data for the current visitor
These endpoints use the same Bearer token as `config`, `messages`, and `actions`.
When webhook delivery is configured, `DELETE /api/v1/chat` also emits a signed
`visitor.data.deleted` webhook after Dialoge has removed the visitor data. The payload
includes `chatbot.id`, `visitor.id`, `visitor.session_id`, `deletion.reason: "user_request"`, and
deletion counts. Use this event to remove matching visitor/session data in your own systems. Like
other outbound webhooks, deletion events are persisted, signed, retried, and visible through the
webhook event listing and replay endpoints until they expire.
## Chatbot Integration
This document covers how to open and interact with the chatbot from your own custom elements,
buttons, links, search bars, or any other UI.
### Prerequisites
The chatbot script must be loaded on the page:
```html
<!-- Chatbot embed script: use the snippet provided for your chatbot -->
```
### Option 1: JavaScript API (`window.DialogIntelligens`)
The chatbot script exposes a global `window.DialogIntelligens` object with the following methods:
| Method | Description |
| --------------- | ---------------------------------------------------- |
| `open()` | Opens the chat window |
| `open(message)` | Opens the chat window and sends a pre-filled message |
| `hide()` | Hides the chat button and iframe completely |
| `show()` | Shows the chat button (closed state) |
| `destroy()` | Removes the chatbot from the page entirely |
#### Examples
**Custom button:**
```html
<button type="button" onclick="window.DialogIntelligens.open()">Chat with us</button>
```
**Link/anchor:**
```html
<a href="#" onclick="window.DialogIntelligens.open(); return false;">Need help?</a>
```
**Open with a pre-filled message:**
```html
<button type="button" onclick="window.DialogIntelligens.open('I need help with my order')">
  Get order help
</button>
```
### Option 2: URL Parameters
Append `?chat=open` to any page URL where the chatbot is loaded. The chatbot will automatically
open on page load and the parameters are stripped from the URL.
```text
https://example.com/page?chat=open
```
You can also include a `chatbot_message` parameter to send a pre-filled message when the chat opens:
```text
https://example.com/page?chat=open&chatbot_message=I+need+help+with+my+order
```
Using `chatbot_message` alone (without `chat=open`) also works. The chat will open automatically:
```text
https://example.com/page?chatbot_message=What+are+your+opening+hours%3F
```
This is useful for email campaigns, QR codes, FAQ links, or any link where you want the chat to
open with a specific question.
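Such links can be generated with a small helper. A sketch: the query parameter names come from this section; the helper name is ours.

```javascript
// Sketch: build a deep link that opens the chat with a pre-filled message.
function chatDeepLink(pageUrl, message) {
  const url = new URL(pageUrl);
  url.searchParams.set("chat", "open");
  if (message) url.searchParams.set("chatbot_message", message); // encoded automatically
  return url.toString();
}
```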
### Option 3: Inline Search Bar Widget
An inline search bar can be embedded anywhere on the page. When the user types a question and
presses Enter or clicks send, the chatbot opens and receives the message.
#### Setup
1. Load the search bar script **after** the chatbot script:
```html
<!-- Search bar script tag: load after the chatbot script (see your embed snippet) -->
```
2. Add a container element where you want the search bar to appear:
```html
<!-- Container element with the id expected by the search bar script -->
```
Or use a class for multiple instances:
```html
<!-- One container element per search bar instance, using the expected class -->
```
#### Manual initialization with custom config
```html
<!-- Script that calls the search bar's init function with a custom config object -->
```
### Option 4: Send a Message via `postMessage`
You can open the chatbot and send a pre-filled message by posting a message directly to the
chatbot iframe. This is how the inline search bar works internally.
```html
<!-- Open the chat, wait ~1000ms for the iframe to initialize (see Notes),
     then postMessage the pre-filled message to the chatbot iframe -->
```
### Option 5: Inline Chatbot Embed
Use the inline embed when the chatbot should render directly in the page instead of as a floating widget.
#### Setup
```html
<div id="chatbot-placeholder"></div>
<!-- chatbotinline.js script tag from your embed snippet -->
```
If `#chatbot-placeholder` is missing, the script appends the chatbot next to the script tag.
#### Inline Options
Set `window.__CHATBOT_INLINE_OVERRIDES__` before loading `chatbotinline.js` to override inline-only behavior for that embed instance.
```html
<script>
  window.__CHATBOT_INLINE_OVERRIDES__ = {
    inlineWelcomeMessage: "Hi! Ask a question to get started."
  };
</script>
<!-- Load chatbotinline.js after setting the overrides -->
```
Supported inline override keys:
* `inlineWelcomeMessage`: text shown above the input before the first question in inline/minimal mode
### Notes
* `window.DialogIntelligens` is available as soon as the chatbot script loads.
* The `open()` method is safe to call even if the chatbot is already open. It will not toggle it
closed.
* `open(message)` opens the chat and sends the message. If the chat is already open, the message
is still delivered.
* The `?chat=open` and `?chatbot_message=` URL parameters only trigger once per page load and are
removed from the URL bar.
* When using `postMessage` to send a message, use a timeout of roughly `1000ms` to let the iframe
initialize after opening.