Conversations are the foundation of interactive AI experiences in ChatBotKit, providing a structured way to manage ongoing dialogues between users and AI bots. Each conversation maintains its own context, history, and state, allowing for natural, context-aware interactions that can span multiple messages and sessions.
A conversation serves as a container for messages, maintaining the dialogue history and configuration that determines how the AI responds. Conversations can be associated with bots, contacts, tasks, and spaces, providing flexible organization and management capabilities for different use cases.
Creating Conversations
Creating a conversation initializes a new interactive session with specific configuration options that control the AI's behavior. You can create a conversation by referencing an existing bot (which provides the backstory, model, and other settings) or by providing the configuration directly in the request.
To create a conversation, send a POST request to the conversation creation endpoint with your desired configuration:
```http
POST /api/v1/conversation/create
Content-Type: application/json

{
  "name": "Customer Support Session",
  "description": "Support conversation for user inquiry",
  "botId": "bot_abc123",
  "contactId": "contact_xyz789"
}
```
When creating a conversation, you can specify several key parameters:
- name: A descriptive name for the conversation (optional)
- description: Additional context about the conversation's purpose (optional)
- botId: Reference to an existing bot that provides configuration (optional)
- contactId: Link to a contact record for tracking user interactions (optional)
- taskId: Associate the conversation with a specific task (optional)
- spaceId: Organize the conversation within a space (optional)
- messages: Include initial messages to start the conversation (optional)
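As a sketch of how a client might assemble this payload, the helper below (hypothetical — not part of any official SDK) builds the request body from the parameters above, dropping optional fields that were not provided:

```python
# Hypothetical helper: build a conversation-creation payload,
# omitting optional fields that were not set.
def build_conversation_payload(name=None, description=None, bot_id=None,
                               contact_id=None, task_id=None, space_id=None,
                               messages=None):
    fields = {
        "name": name,
        "description": description,
        "botId": bot_id,
        "contactId": contact_id,
        "taskId": task_id,
        "spaceId": space_id,
        "messages": messages,
    }
    # Every parameter is optional, so only include the ones that were set.
    return {key: value for key, value in fields.items() if value is not None}

payload = build_conversation_payload(
    name="Customer Support Session",
    bot_id="bot_abc123",
)
```

The resulting dictionary can be serialized to JSON and sent as the body of the POST request shown above.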
Configuration Options
If you don't reference a bot, you can provide configuration directly:
- backstory: Instructions that define the AI's personality and behavior
- model: The language model to use (e.g., "gpt-4", "claude-3-5-sonnet")
- datasetId: Reference to a dataset for knowledge retrieval
- skillsetId: Reference to a skillset for extended capabilities
- privacy: Enable privacy mode to prevent data retention
- moderation: Enable content moderation for safety
Including Initial Messages
You can initialize a conversation with messages by including a messages array:
```http
POST /api/v1/conversation/create
Content-Type: application/json

{
  "botId": "bot_abc123",
  "messages": [
    {
      "type": "user",
      "text": "Hello, I need help with my order"
    }
  ]
}
```
The API will return the created conversation ID and any processed messages, allowing you to immediately continue the interaction.
Important Notes:
- Conversations inherit configuration from their associated bot if a botId is provided, but you can override specific settings by providing them directly
- Each conversation maintains its own message history and context
- Conversations can be organized using contacts, tasks, and spaces for different tracking and filtering needs
- Privacy mode prevents message content from being stored, useful for sensitive conversations
Listing Conversations
Retrieving a list of conversations allows you to access and manage all conversations associated with your account. The list endpoint provides powerful filtering, pagination, and ordering capabilities to help you find and organize conversations efficiently.
To list conversations, send a GET request to the list endpoint:
```http
GET /api/v1/conversation/list
```
This returns all conversations for the authenticated user, ordered by creation date (most recent first) by default.
Pagination and Ordering
The list endpoint supports cursor-based pagination for efficient retrieval of large conversation sets:
```http
GET /api/v1/conversation/list?take=20&order=desc
```
Available parameters for pagination and ordering include:
- cursor: A pagination cursor from the previous response to fetch the next page of results
- take: Number of conversations to retrieve (default and maximum depend on your plan)
- order: Sort order, either "asc" (ascending) or "desc" (descending) by creation date
The response includes an items array containing conversation objects, each with their ID, name, description, configuration, and timestamps. If there are more results available, the response will include a cursor for fetching the next page.
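The cursor flow described above can be sketched as a loop. Here `fetch_page` is a placeholder for whatever HTTP client call you make against the list endpoint; the response field names (`items`, `cursor`) follow the description above:

```python
# Sketch of cursor-based pagination against the list endpoint.
# `fetch_page` stands in for your HTTP call; it should return a dict
# with an "items" array and, when more results exist, a "cursor".
def list_all_conversations(fetch_page, take=20):
    conversations = []
    cursor = None
    while True:
        page = fetch_page(cursor=cursor, take=take)
        conversations.extend(page.get("items", []))
        cursor = page.get("cursor")
        if not cursor:  # no cursor means this was the last page
            break
    return conversations

# Demo with a stubbed two-page response:
_pages = {None: {"items": ["conv_1", "conv_2"], "cursor": "abc"},
          "abc": {"items": ["conv_3"]}}
result = list_all_conversations(lambda cursor=None, take=20: _pages[cursor])
```

In a real client, `fetch_page` would issue `GET /api/v1/conversation/list` with the `cursor` and `take` query parameters.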
Filtering by Relationships
You can filter conversations by their associated resources using query parameters:
```http
GET /api/v1/conversation/list?botId=bot_abc123&contactId=contact_xyz789
```
Supported filter parameters include:
- botId: Filter by associated bot
- contactId: Filter by associated contact
- taskId: Filter by associated task
Filtering by Metadata
Conversations with custom metadata can be filtered using meta queries. This allows you to organize and retrieve conversations based on your own custom fields and values.
Response Format
Each conversation in the response includes:
- Core identifiers (id)
- Basic information (name, description)
- Resource relationships (botId, contactId, taskId, spaceId, datasetId, skillsetId)
- Configuration (backstory, model, privacy, moderation)
- Metadata (meta)
- Timestamps (createdAt, updatedAt)
Best Practices:
- Use pagination for large conversation sets to improve performance
- Apply filters to narrow results when searching for specific conversations
- Consider the order parameter based on your use case (recent conversations vs. oldest first)
- Store cursors for efficient navigation through paginated results
Fetching a Conversation
Retrieving a specific conversation provides access to its complete configuration, including all settings, relationships, and metadata. This is useful when you need to inspect a conversation's current state, verify its configuration, or retrieve details for display or modification.
To fetch a conversation, send a GET request with the conversation ID:
```http
GET /api/v1/conversation/{conversationId}/fetch
```
Replace {conversationId} with the actual ID of the conversation you want to
retrieve. The conversation ID is returned when you create a conversation or
can be obtained from the list endpoint.
Response Details
The response includes the complete conversation object with all configuration and relationship information, including references to associated resources, conversation settings, and metadata.
Use Cases
Fetching a conversation is commonly used to:
- Verify the current configuration before sending messages
- Display conversation details in a user interface
- Retrieve the conversation state for analytics or monitoring
- Check which bot, dataset, or skillset is associated
- Access custom metadata for application-specific logic
Security Note: You can only fetch conversations that belong to your account. Attempting to access another user's conversation will result in an authorization error.
Receiving AI Responses
The receive endpoint enables you to request and receive AI-generated responses within a conversation. This endpoint is essential for real-time chat interactions where you need the AI to process the conversation history and generate an appropriate response based on the context, backstory, and any configured datasets or skillsets.
Unlike the send endpoint, which adds user messages and triggers processing, the receive endpoint focuses specifically on getting the AI's response, giving you fine-grained control over the conversation flow and allowing you to customize behavior with extensions and runtime configurations.
Basic Usage
To receive an AI response, send a POST request to the receive endpoint. The endpoint returns a streaming response containing the AI-generated message:
```http
POST /api/v1/conversation/{conversationId}/receive
Content-Type: application/json

{}
```
The response is delivered as a Server-Sent Events (SSE) stream, so you can render the AI's response progressively as it is generated rather than waiting for the full message.
Extending Conversations with Runtime Configuration
One of the most powerful features of the receive endpoint is the ability to extend and customize the conversation at runtime without modifying the underlying bot or conversation configuration. You can provide extensions that temporarily augment the conversation with additional context, data sources, and capabilities:
```http
POST /api/v1/conversation/{conversationId}/receive
Content-Type: application/json

{
  "extensions": {
    "backstory": "Additional context: The user is asking about enterprise pricing.",
    "datasets": [
      {
        "name": "Pricing Information",
        "description": "Enterprise pricing and plans",
        "records": [
          {
            "text": "Enterprise plan starts at $500/month for 10 users",
            "meta": {}
          }
        ]
      }
    ],
    "skillsets": [
      {
        "name": "Sales Tools",
        "description": "Tools for sales conversations",
        "abilities": [
          {
            "name": "check_inventory",
            "description": "Check product inventory status",
            "instruction": "fetch https://api.example.com/inventory",
            "meta": {}
          }
        ]
      }
    ]
  }
}
```
Extension Capabilities
The extensions object supports multiple types of runtime customizations:
Backstory Extensions: Add or override conversation instructions temporarily without modifying the bot's base configuration. This is useful for providing conversation-specific context, handling special cases, or adapting behavior based on user attributes or session data.
Dataset Extensions: Inject additional knowledge into the conversation dynamically. This allows you to provide context-specific information without permanently adding it to your datasets, ideal for user-specific data, session-specific context, or temporary information that may change frequently.
Skillset Extensions: Temporarily grant the AI access to additional capabilities and tools for specific conversations. This enables you to provide specialized functionality based on user permissions, conversation type, or specific workflow requirements without permanently modifying the bot's skillset.
Feature Extensions: Enable or disable specific conversation features at runtime, such as tool calling, code interpretation, or image understanding, allowing fine-grained control over AI capabilities on a per-interaction basis.
Function Calling
The receive endpoint supports function calling, enabling the AI to invoke predefined functions during response generation. Define available functions in your request:
```http
POST /api/v1/conversation/{conversationId}/receive
Content-Type: application/json

{
  "functions": [
    {
      "name": "get_weather",
      "description": "Get current weather for a location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "City name"
          }
        },
        "required": ["location"]
      }
    }
  ]
}
```
When the AI determines that a function call is needed, the response stream will include function call requests that your application should handle and respond to, enabling dynamic, interactive conversations with external data sources and services.
Streaming Response Format
The receive endpoint returns responses as a Server-Sent Events (SSE) stream, allowing you to process the AI's response progressively as it is generated. This provides a better user experience than waiting for the complete response.
The stream emits tagged events that indicate different types of responses:
- result: Contains chunks of the AI-generated text response
- error: Indicates an error occurred during processing
- done: Signals the end of the response stream
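Assuming a standard SSE wire format (`event:` and `data:` lines separated by blank lines — verify this against the actual stream your client receives), a minimal parser for the three event types might look like:

```python
# Minimal SSE parser sketch for the receive stream.
# Assumes the standard "event:" / "data:" line format; the exact
# payload of each event should be checked against the live API.
def parse_sse(stream_text):
    events = []
    for block in stream_text.split("\n\n"):
        event_type, data_lines = "message", []
        for line in block.splitlines():
            if line.startswith("event:"):
                event_type = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if data_lines:
            events.append((event_type, "\n".join(data_lines)))
    return events

sample = "event: result\ndata: Hello\n\nevent: done\ndata: [DONE]\n\n"
events = parse_sse(sample)
```

A production client would typically use an SSE library and feed `result` chunks straight to the UI, stopping on `done` and surfacing `error` events to the user.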
Important Notes:
- The receive endpoint is typically used in conjunction with the send endpoint in client-side applications
- Extensions are temporary and do not modify the underlying bot or conversation configuration
- Function responses must be handled by your application and fed back into the conversation
- The endpoint supports both API key authentication and conversation session tokens
- Response streaming requires proper SSE handling in your client application
Security Considerations:
When using extensions, be mindful of the data you inject into conversations. Extensions allow powerful runtime customization but should be used carefully to avoid exposing sensitive information or granting unintended capabilities. Always validate and sanitize any user-provided data before including it in extensions.
Updating a Conversation
Modifying a conversation allows you to change its configuration, update relationships, or adjust settings after creation. This is useful for adapting the conversation's behavior, correcting information, or changing associations as your application's needs evolve.
To update a conversation, send a POST request with the conversation ID and the fields you want to modify:
```http
POST /api/v1/conversation/{conversationId}/update
Content-Type: application/json

{
  "name": "Updated Support Session",
  "backstory": "You are an expert technical support assistant...",
  "model": "gpt-4-turbo"
}
```
Replace {conversationId} with the actual ID of the conversation you want to
update. You only need to include the fields you want to change; all other
fields will remain unchanged.
Updateable Fields
You can update the following conversation properties:
Basic Information:
- name: Change the conversation's display name
- description: Update the conversation's description
Relationships:
- botId: Change the associated bot (null to remove association)
- contactId: Change the associated contact (null to remove)
- taskId: Change the associated task (null to remove)
- spaceId: Change the associated space (null to remove)
- datasetId: Change the dataset for knowledge retrieval (null to remove)
- skillsetId: Change the skillset for capabilities (null to remove)
Configuration:
- backstory: Modify the AI's instructions and behavior
- model: Switch to a different language model
- privacy: Enable or disable privacy mode
- moderation: Enable or disable content moderation
Metadata:
- meta: Update or add custom metadata fields
Example: Changing AI Behavior
You can modify the conversation's backstory to change how the AI responds:
```http
POST /api/v1/conversation/{conversationId}/update
Content-Type: application/json

{
  "backstory": "You are a specialized product expert who helps users find the perfect product. Be enthusiastic and knowledgeable about product features."
}
```
Example: Switching Models
To use a different language model for better performance or cost optimization:
```http
POST /api/v1/conversation/{conversationId}/update
Content-Type: application/json

{
  "model": "claude-3-5-sonnet"
}
```
Example: Updating Relationships
Associate the conversation with a different bot or dataset:
```http
POST /api/v1/conversation/{conversationId}/update
Content-Type: application/json

{
  "botId": "bot_new123",
  "datasetId": "dataset_xyz789"
}
```
Metadata Management
The update operation intelligently merges metadata. If you provide a meta object, it will merge with existing metadata rather than replacing it entirely, preserving fields you don't explicitly update.
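A sketch of the merge semantics described above (assumed to be a shallow, top-level merge — verify nested behavior against the live API):

```python
# Sketch of the metadata merge semantics: keys in the update are merged
# into the existing meta object, and keys you do not mention survive.
# (Assumed shallow merge; verify nested-object behavior against the API.)
def merge_meta(existing, update):
    merged = dict(existing)
    merged.update(update)
    return merged

existing = {"tier": "free", "region": "us-east"}
merged = merge_meta(existing, {"tier": "premium"})
# "region" is preserved even though the update did not mention it
```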
Important Considerations:
- Updating a conversation does not affect its existing message history
- Configuration changes apply to future messages in the conversation
- Changing the model or backstory will change how the AI responds going forward
- Updates are applied immediately and affect the next interaction
- You can only update conversations that belong to your account
Deleting a Conversation
Deleting a conversation permanently removes it along with all associated messages and data. This operation is irreversible and should be used carefully, typically for cleanup, privacy compliance, or when a conversation is no longer needed.
To delete a conversation, send a POST request with the conversation ID:
```http
POST /api/v1/conversation/{conversationId}/delete
Content-Type: application/json

{}
```
Replace {conversationId} with the actual ID of the conversation you want to
delete. The request body should be an empty JSON object.
What Gets Deleted
When you delete a conversation, the following data is permanently removed:
- The conversation record itself
- All messages within the conversation
- Any associated metadata and configuration
- Message history and context
- File attachments and other associated data
- Related usage statistics for that conversation
Response
Upon successful deletion, the API returns the ID of the deleted conversation:
{ "id": "conv_abc123" }json
This confirms which conversation was deleted and can be used for logging or auditing purposes.
Data Relationships
Deleting a conversation does not affect:
- The bot referenced by the conversation (if any)
- The contact associated with the conversation (if any)
- The task linked to the conversation (if any)
- Any datasets or skillsets referenced by the conversation
- Other conversations or resources in your account
Only the conversation itself and its direct contents (messages) are removed.
Use Cases
Common scenarios for deleting conversations include:
- Privacy Compliance: Removing user data upon request (GDPR, CCPA)
- Cleanup: Removing test or obsolete conversations
- Data Management: Pruning old conversations to manage storage
- Error Correction: Removing conversations created by mistake
- User-Initiated Deletion: Allowing users to delete their conversation history
Bulk Deletion
To delete multiple conversations, you'll need to call the delete endpoint for each conversation individually. Consider implementing rate limiting and error handling when performing bulk deletions to avoid overwhelming the API.
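A bulk-deletion loop with simple pacing and per-item error handling could be sketched as follows; `delete_conversation` is a stand-in for your HTTP call to the delete endpoint:

```python
import time

# Bulk-deletion sketch: delete conversations one at a time, pacing
# requests and collecting failures instead of aborting the whole run.
# `delete_conversation` is a placeholder for your HTTP delete call.
def bulk_delete(conversation_ids, delete_conversation, delay_seconds=0.0):
    deleted, failed = [], []
    for conversation_id in conversation_ids:
        try:
            delete_conversation(conversation_id)
            deleted.append(conversation_id)
        except Exception as error:  # keep going; report failures at the end
            failed.append((conversation_id, str(error)))
        time.sleep(delay_seconds)  # crude rate limiting between calls
    return deleted, failed

# Demo with a stubbed delete call that fails for one ID:
def fake_delete(conversation_id):
    if conversation_id == "conv_bad":
        raise RuntimeError("not found")

deleted, failed = bulk_delete(["conv_1", "conv_bad", "conv_2"], fake_delete)
```

In production you would also want retries with backoff for transient errors and a confirmation step before the loop runs.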
Warning: This operation is permanent and cannot be undone. Ensure you have proper authorization checks and confirmation flows in your application before allowing conversation deletion. Consider implementing a soft-delete pattern in your application if you need the ability to recover deleted conversations.
Security Note: You can only delete conversations that belong to your account. Attempting to delete another user's conversation will result in an authorization error.
Sending Messages to a Conversation
The send endpoint allows you to send a user message to a conversation and add it to the conversation history. The message is processed and events may be generated, but this endpoint does not produce an AI response. To receive the AI's response, you need to call the receive route separately. This design provides flexibility in controlling conversation flow and separating message sending from response generation.
To send a message to a conversation, use a POST request with streaming support:
```http
POST /api/v1/conversation/{conversationId}/send
Content-Type: application/json

{
  "text": "What are your business hours?"
}
```
Replace {conversationId} with the actual ID of the conversation. The text
field is required and contains the user's message.
How Send Works
The send endpoint adds your message to the conversation and processes it, but does not generate an AI response. Message-related events may be emitted during processing; to get the AI agent's reply, call the receive route afterwards. This separation gives you explicit control over each phase of the conversation flow.
The response is delivered as a stream of JSON lines (JSONL), where each line represents an event related to message processing.
Advanced Features
The send endpoint supports several advanced features for enhanced functionality:
Function Calling:
You can enable the AI to call functions during the conversation:
```http
POST /api/v1/conversation/{conversationId}/send
Content-Type: application/json

{
  "text": "What's the weather in San Francisco?",
  "functions": [
    {
      "name": "get_weather",
      "description": "Get current weather for a location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "City name"
          }
        }
      },
      "result": {}
    }
  ]
}
```
When the AI determines a function call is appropriate, it will include function call information in the streaming response. The result object is used to return the function execution results.
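Your application's side of this handshake might be sketched as follows. The event shape used here (`function` name plus JSON-encoded `arguments`) is an assumption to illustrate the flow, not the exact wire format — check the events your stream actually emits:

```python
import json

# Sketch of handling a function-call event from the stream and building
# the follow-up payload that returns the execution result. The event
# shape ("function", "arguments") is illustrative, not the exact wire
# format; HANDLERS maps function names to local implementations.
HANDLERS = {
    "get_weather": lambda args: {"location": args["location"], "temp_c": 18},
}

def handle_function_event(event):
    name = event["function"]
    arguments = json.loads(event["arguments"])
    result = HANDLERS[name](arguments)
    # Return the payload you would send back so the AI can use the result.
    return {"functions": [{"name": name, "result": result}]}

followup = handle_function_event(
    {"function": "get_weather", "arguments": '{"location": "San Francisco"}'}
)
```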
Extensions (Trusted Sessions Only):
For trusted API sessions, you can temporarily extend the conversation's capabilities:
- extensions.backstory: Add additional instructions for this message only
- extensions.datasets: Provide inline dataset records for context
- extensions.skillsets: Add temporary abilities for this interaction
- extensions.features: Enable specific features for this message
Response Structure
The final result event includes the ID of the created message and usage statistics for the operation.
Message Flow
When you send a message:
- Your message is added to the conversation history
- The message is processed and events may be generated
- The message ID is returned in the result event
- No AI response is generated (use the receive route to get the AI response)
- The conversation is ready for further interactions
Best Practices
- Handle Streaming Properly: Implement proper streaming parsing in your client to handle JSONL responses
- Handle Errors Gracefully: Watch for error events in the stream and display appropriate messages
- Respect Rate Limits: Be aware of message and token rate limits for your account
Important Notes:
- The conversation maintains full message history for context
- The send operation adds your message to the conversation but does not generate an AI response
- To receive an AI response, call the receive route after sending
- Token usage is tracked and counted against your account limits
- Streaming responses can be interrupted if the connection is lost
Exporting Conversations
The export endpoint enables you to retrieve conversations and their complete message histories in bulk, supporting multiple output formats for different use cases. This capability is essential for data analysis, backup purposes, training data preparation, compliance requirements, and migrating conversations between systems.
Unlike the standard list endpoint that returns basic conversation metadata, the export endpoint provides comprehensive conversation data including full message histories, making it ideal for scenarios where you need complete conversation records rather than just metadata summaries.
Supported Export Formats
The export endpoint supports three output formats, each optimized for different use cases:
JSON Format (application/json): Returns conversations as a structured
JSON array, ideal for programmatic processing, API integrations, and when you
need to work with conversation data in JavaScript or other modern applications.
This format provides the most structured and easily parseable output.
JSONL Format (application/jsonl): Delivers conversations as JSON Lines
(newline-delimited JSON), where each line represents a single conversation.
This format is optimized for streaming large datasets, processing data
line-by-line, and integration with data pipeline tools that expect JSONL input.
It's particularly useful for large-scale exports that might exceed memory limits
if loaded entirely at once.
CSV Format (text/csv): Exports conversations in comma-separated values
format, ideal for spreadsheet applications, data analysis tools, and situations
where human readability and Excel compatibility are priorities. This format
flattens the conversation structure for easier tabular analysis.
Basic Export Request
To export conversations, send a GET request with the desired format specified in the Accept header:
```http
GET /api/v1/conversation/export
Accept: application/json
```
For JSONL format:
```http
GET /api/v1/conversation/export
Accept: application/jsonl
```
For CSV format:
```http
GET /api/v1/conversation/export
Accept: text/csv
```
Pagination and Filtering
The export endpoint supports pagination and filtering to manage large datasets efficiently:
Cursor-based Pagination: Use the cursor parameter to paginate through
large result sets. The response includes a cursor that you can use to fetch
the next page of results:
```http
GET /api/v1/conversation/export?cursor=eyJpZCI6ImNvbnZfYWJjMTIzIn0&take=100
Accept: application/json
```
Ordering: Control the sort order of exported conversations using the order
parameter. Use desc for most recent first (default) or asc for oldest first:
```http
GET /api/v1/conversation/export?order=asc&take=50
Accept: application/json
```
Record Limits: Specify the number of conversations to retrieve per request
using the take parameter. This helps manage export size and processing time:
```http
GET /api/v1/conversation/export?take=25
Accept: application/json
```
Metadata Filtering: Filter conversations by metadata using the meta
parameter with deep object notation. This allows you to export only conversations
matching specific criteria:
```http
GET /api/v1/conversation/export?meta[tier]=premium&meta[region]=us-east
Accept: application/json
```
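Deep-object keys like `meta[tier]` can be produced with a standard query-string encoder by spelling the bracketed keys out literally. A sketch (the helper is hypothetical; note that `urlencode` percent-encodes the brackets, which servers normally decode transparently):

```python
from urllib.parse import urlencode

# Build an export URL with deep-object metadata filters. The bracketed
# keys are written literally; urlencode percent-encodes "[" and "]".
def build_export_url(meta_filters, take=None, order=None):
    params = {f"meta[{key}]": value for key, value in meta_filters.items()}
    if take is not None:
        params["take"] = take
    if order is not None:
        params["order"] = order
    return "/api/v1/conversation/export?" + urlencode(params)

url = build_export_url({"tier": "premium", "region": "us-east"}, take=50)
```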
Export Data Structure
Each exported conversation includes comprehensive information:
- Basic Information: ID, name, description, creation and update timestamps
- Configuration: Bot settings, model configuration, privacy and moderation settings
- Associations: Contact ID, task ID, space ID for organizational relationships
- Message History: Complete conversation messages with type, text, entities, and metadata
- Metadata: Custom metadata fields for tracking and categorization
The exact structure depends on the output format, but all formats include complete conversation data suitable for archival, analysis, or migration purposes.
Use Cases
Data Backup and Archival: Regularly export conversations for backup purposes, ensuring you have offline copies of important conversation data.
Compliance and Audit: Export conversation records for compliance reviews, legal discovery, or audit requirements where complete conversation histories are needed.
Training Data Preparation: Extract conversations to create training datasets for fine-tuning language models or improving AI performance.
Analytics and Reporting: Export conversation data for analysis in business intelligence tools, spreadsheets, or custom analytics platforms.
System Migration: Transfer conversations between systems, environments, or accounts using the export functionality.
Quality Assurance: Review conversation quality by exporting samples for manual review, analysis, or testing purposes.
Performance Considerations
Exporting large volumes of conversations can be resource-intensive. Follow these best practices:
Use Pagination: Don't attempt to export all conversations at once. Use the
take parameter to limit export size and process data in manageable chunks.
Implement Incremental Exports: Export conversations periodically (e.g., daily) rather than attempting to export your entire history at once.
Choose Appropriate Formats: Use JSONL for large exports to enable streaming and line-by-line processing rather than loading everything into memory.
Schedule Off-Peak Exports: Run large export operations during off-peak hours to minimize impact on system performance.
Filter Effectively: Use metadata filters to export only the conversations you actually need rather than exporting everything and filtering locally.
Important Notes:
- Exports include all messages in each conversation, which can result in large data volumes for conversations with extensive histories
- The default sort order is newest conversations first (desc), which is typically most useful for incremental exports
- CSV format may not preserve all data structures perfectly due to flattening; use JSON or JSONL for complete data fidelity
- Exported data includes sensitive information; ensure proper security measures when storing or transmitting exports
- Large exports may take significant time to complete; implement appropriate timeout handling in your client code
Complete Conversation Interaction
The complete endpoint provides a full round-trip conversation interaction, sending a user message and receiving the AI's complete response through a streaming connection. Unlike the send endpoint, which only sends the message, complete handles both sending and receiving in a single operation, making it ideal for traditional request-response chat patterns.
To complete a conversation interaction, send a POST request. The API supports
both streaming and non-streaming responses. For streaming, include the
Accept: application/jsonl header; otherwise, the response defaults to
non-streaming JSON:
```http
POST /api/v1/conversation/{conversationId}/complete
Content-Type: application/json
Accept: application/jsonl

{
  "text": "Can you explain how your pricing works?"
}
```
Replace {conversationId} with the actual conversation ID. The text field
contains the user's message and is required.
How Complete Works
The complete endpoint orchestrates a full conversation turn:
- Send Phase: Your message is added to the conversation and processed
- Processing: The AI analyzes the message with full conversation context
- Receive Phase: The AI generates and streams its response
- Result: Both messages are saved to the conversation history
This two-phase approach ensures that both the user's message and the AI's response are properly recorded and contribute to the ongoing conversation context.
Streaming Response Events
The complete endpoint delivers a stream of events as JSONL (JSON Lines), with three main event types:
send_result Event:
Emitted after the user's message is processed, containing:
- id: The ID of the user's message
- text: The user's message text
- entities: Extracted entities from the user's message
- usage: Token usage for processing the user's message
receive_result Event:
Emitted after the AI's response is complete, containing:
- id: The ID of the AI's response message
- text: The AI's complete response text
- usage: Cumulative token usage for the entire interaction
Streaming Tokens:
Between send_result and receive_result, the AI's response is streamed as individual tokens (word pieces), allowing you to display the response incrementally as it's generated.
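Putting the three event types together, a client-side dispatcher over the JSONL stream might look like this. The field used to tag each event (here assumed to be `type`) and the shape of the token events are assumptions for illustration — confirm them against the live stream:

```python
import json

# Sketch of consuming the complete endpoint's JSONL stream. Assumes
# each line is a JSON object tagged with a "type" field for
# send_result / receive_result, and that untagged lines carry streamed
# tokens under a "token" key; verify both against the live API.
def consume_complete_stream(lines):
    state = {"tokens": [], "send": None, "receive": None}
    for line in lines:
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") == "send_result":
            state["send"] = event
        elif event.get("type") == "receive_result":
            state["receive"] = event
        else:
            state["tokens"].append(event.get("token", ""))
    return state

stream = [
    '{"type": "send_result", "id": "msg_1"}',
    '{"token": "Hello"}',
    '{"token": " there"}',
    '{"type": "receive_result", "id": "msg_2", "text": "Hello there"}',
]
state = consume_complete_stream(stream)
```

In a real client the token events would be rendered incrementally as they arrive, with `receive_result` used to finalize the displayed message and record usage.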
Advanced Features
The complete endpoint supports advanced features for enhanced functionality:
Function Calling:
Enable the AI to call functions during the interaction:
```http
POST /api/v1/conversation/{conversationId}/complete
Content-Type: application/json

{
  "text": "What's my account balance?",
  "functions": [
    {
      "name": "get_account_balance",
      "description": "Retrieve current account balance",
      "parameters": {
        "type": "object",
        "properties": {
          "account_id": {
            "type": "string",
            "description": "Account identifier"
          }
        }
      }
    }
  ]
}
```
Extensions (Trusted Sessions Only):
For API sessions with trusted status, you can extend conversation capabilities for a single interaction:
- extensions.backstory: Additional instructions for this specific interaction
- extensions.datasets: Inline dataset records to provide context
- extensions.skillsets: Temporary abilities for this message
- extensions.features: Enable specific features for this interaction
When to Use Complete vs Send
Use Complete When:
- You want a traditional request-response chat pattern
- You need both messages saved in a single operation
- You want separated send and receive events in the stream
- Your application requires explicit confirmation of both phases
Use Send When:
- You only need to send a message without waiting for a response
- You're implementing a fire-and-forget pattern
- You have a different mechanism for receiving responses
Error Handling
The complete endpoint includes comprehensive error handling. If an error occurs during either the send or receive phase, an error event will be included in the stream with details about what went wrong. Your client should handle these error events gracefully and provide appropriate feedback to users.
Performance Considerations
- The complete operation can take up to 800 seconds for long-running generations
- Token streaming provides immediate feedback while generation continues
- Both send and receive phases count toward token usage limits
- Rate limits apply to both message count and token usage
Best Practices:
- Implement proper JSONL streaming parsing in your client
- Handle all three event types (send_result, receive_result, and tokens)
- Display tokens incrementally for better user experience
- Watch for error events and handle them appropriately
- Store message IDs for reference and conversation management
- Monitor usage data to track conversation costs
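The best practices above can be sketched as a small JSONL event handler. The event names (`send_result`, `receive_result`, `token`, `error`) follow the description in this section, but the exact payload shape (`type` and `data` keys) is an assumption for illustration:

```python
import json

def handle_event_line(line, on_token=None):
    """Parse one JSONL line from the complete stream and dispatch it.

    Assumes each line is a JSON object with "type" and "data" keys;
    verify the exact shape against the live stream.
    """
    event = json.loads(line)
    etype = event.get("type")
    if etype == "token" and on_token:
        # Surface partial text immediately for incremental display.
        on_token(event.get("data", {}).get("token", ""))
    elif etype == "error":
        # Propagate stream errors so the caller can show feedback.
        raise RuntimeError(event.get("data", {}).get("message", "stream error"))
    return event
```

A client would call this once per line of the streamed response body, collecting `send_result` and `receive_result` events to record message IDs and usage data.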
Initiating Bot Messages
The conversation initiate endpoint lets you programmatically generate bot responses from provided text, enabling automation scenarios where the AI responds to extracted information, processed data, or system-generated content rather than direct user input. This is particularly useful when you process messages through external systems before presenting them to the AI, or when you want the bot to respond to structured data that has been formatted into natural language.
Unlike the standard send endpoint which expects user messages, the initiate endpoint treats the provided text as context that should trigger a bot response. This allows you to create sophisticated workflows where data from various sources (forms, APIs, databases, sensors) is transformed into conversational context that the bot can meaningfully respond to, maintaining the natural dialogue flow while working with programmatically generated content.
The endpoint supports entity extraction, allowing you to annotate specific portions of the input text with entity information that the AI can leverage for more accurate and contextually appropriate responses. This is especially valuable when dealing with structured data that contains important entities like dates, names, locations, or custom business-specific entities that should be preserved and understood by the conversational AI.
```http
POST /api/v1/conversation/{conversationId}/initiate
Content-Type: application/json

{
  "text": "The customer's order #12345 shipped yesterday to New York and is expected to arrive on Friday",
  "entities": [
    { "begin": 18, "end": 23 },
    { "begin": 50, "end": 58 }
  ]
}
```
The response streams back the bot's generated reply in real-time, maintaining the same streaming format as other conversation endpoints. The bot will process the provided text as context and generate an appropriate response based on its configuration, backstory, and any connected knowledge sources or tools.
Use Cases:
- Processing form submissions through AI before presenting to users
- Converting structured data into conversational responses
- Integrating with external systems that generate context for bot responses
- Creating automated customer service workflows with data enrichment
- Building intelligent notification systems with contextual AI responses
Important Notes:
- The provided text is treated as contextual information for bot response generation
- Entity annotations help the AI understand and preserve important information
- The endpoint maintains conversation context and history like standard message endpoints
- Responses are streamed in real-time for optimal user experience
- This endpoint is designed for integration scenarios, not direct user messaging
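Entity annotations are character offsets into the input text. A minimal sketch for computing them, assuming `end` is exclusive (`begin + len(phrase)`), which should be verified against the API's actual offset semantics:

```python
def annotate_entity(text, phrase):
    """Compute begin/end character offsets for an entity phrase.

    Returns a dict matching the begin/end fields shown in the initiate
    example, or None if the phrase is not found. Treating "end" as
    exclusive is an assumption here.
    """
    begin = text.find(phrase)
    if begin == -1:
        return None
    return {"begin": begin, "end": begin + len(phrase)}
```

Computing offsets from the final text (rather than hardcoding them) keeps annotations correct when the surrounding template changes.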
Batch Creating Messages
The batch message creation endpoint adds multiple messages to a conversation in a single atomic operation, providing significant performance advantages and transactional guarantees. It is well suited to conversation history imports, data migrations, chat transcript ingestion, and programmatic conversation initialization, where integrations need to efficiently populate conversations with historical context or pre-existing dialogue history.
Unlike creating messages individually through repeated single-message API calls, the batch endpoint processes all messages together, reducing network overhead, minimizing database round trips, and ensuring consistent ordering of messages within the conversation. This approach is particularly valuable when dealing with data import scenarios where maintaining message sequence and timestamp accuracy is critical for conversation coherence and historical accuracy.
The batch creation process supports up to 100 messages per request, allowing you to efficiently populate conversations with substantial dialogue history while maintaining system performance and reliability. Each message in the batch can have its own type (see Message Structure below), text content, entities, metadata, and an optional identifier for maintaining references to source systems during migration or import operations.
```http
POST /api/v1/conversation/{conversationId}/message/batch/create
Content-Type: application/json

{
  "items": [
    {
      "type": "user",
      "text": "Hello, I need help with my order",
      "meta": {
        "originalTimestamp": "2024-01-15T10:30:00Z",
        "sourceSystem": "zendesk"
      }
    },
    {
      "type": "bot",
      "text": "I'd be happy to help you with your order. Can you provide your order number?",
      "meta": {
        "originalTimestamp": "2024-01-15T10:30:15Z",
        "sourceSystem": "zendesk"
      }
    },
    {
      "type": "user",
      "text": "Sure, it's order #12345",
      "meta": {
        "originalTimestamp": "2024-01-15T10:30:45Z",
        "sourceSystem": "zendesk"
      }
    }
  ]
}
```
Message Structure
Each message in the batch must include:
- type: Message type (user, bot, context, system, instruction, tool_request, tool_response)
- text: The message content (required, supports markdown and rich text)
- entities: Optional array of detected entities (names, dates, locations, etc.)
- meta: Custom metadata for storing additional context or source system references
- id: Optional external identifier for maintaining references during migrations
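A sketch of mapping imported transcript records onto this message structure. The batch item fields (`type`, `text`, `meta`) match the list above; the source record shape (`role`, `body`, `ts`) is a hypothetical legacy export format:

```python
def to_batch_item(record):
    """Map a hypothetical legacy transcript record to a batch item.

    The output keys follow the batch message structure; the input keys
    ("role", "body", "ts") are an assumed export format.
    """
    return {
        "type": "user" if record["role"] == "customer" else "bot",
        "text": record["body"],
        "meta": {
            # Preserve the original timestamp and source for traceability.
            "originalTimestamp": record["ts"],
            "sourceSystem": "legacy",
        },
    }
```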
Use Cases
Chat History Import: Migrate existing chat transcripts from legacy systems, customer support platforms, or other conversational AI solutions into ChatBotKit conversations while preserving message order, timestamps, and contextual information.
Conversation Initialization: Set up conversations with pre-existing context by adding historical messages that provide the AI with necessary background information before the user's current inquiry, enabling more contextually aware and relevant responses.
Testing and Development: Quickly create test conversations with specific dialogue patterns for testing bot behavior, training conversational flows, or demonstrating capabilities in development and staging environments.
Data Migration: Transfer conversation data between systems, accounts, or organizational units while maintaining message integrity, sequence, and associated metadata for compliance and historical accuracy.
Bulk Operations: Process large volumes of messages efficiently when importing transcripts, synchronizing external systems, or performing data transformations that generate multiple related messages.
Performance and Limitations
- Maximum of 100 messages per batch request
- Messages are processed atomically: all succeed or all fail
- Message ordering within the batch is preserved in the conversation
- Each message counts toward account usage limits
- PII detection and content moderation apply to all messages in the batch
- Timestamps are automatically assigned based on creation time unless preserved in metadata
Best Practices:
- Use batch creation for 10+ messages to realize performance benefits
- Include source system identifiers in metadata for traceability
- Preserve original timestamps in metadata when importing historical data
- Order messages chronologically in the batch array for natural conversation flow
- Consider breaking very large imports into multiple batch requests
- Validate message content and structure before batch submission to avoid rollback
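Breaking a large import into multiple requests, as recommended above, reduces to splitting the message list into chunks within the 100-item cap. A minimal sketch:

```python
def chunk_messages(items, limit=100):
    """Split a message list into batches no larger than the API's
    100-item-per-request limit. Preserves order across batches."""
    return [items[i:i + limit] for i in range(0, len(items), limit)]
```

Each chunk would then be posted to the batch endpoint in sequence, since atomicity applies per request, not across the whole import.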
Important Notes:
- Batch creation is a write operation that cannot be undone
- Messages appear immediately in the conversation after successful creation
- The endpoint does not trigger automated bot responses (use send endpoint for that)
- Entity detection runs on all message text during batch processing
- Messages are subject to content moderation if enabled on the conversation