Event Metrics

Event metrics represent aggregated analytical data derived from the detailed event logs captured throughout the ChatBotKit platform. While event logs provide granular, individual records of every action and interaction, event metrics transform this raw data into meaningful time-series aggregations that reveal patterns, trends, and usage characteristics over time. This aggregated perspective is essential for understanding system behavior at scale, monitoring resource consumption, analyzing performance trends, and making data-driven decisions about capacity planning and optimization.

The metrics system continuously processes event data to generate daily aggregates across multiple dimensions including message volume, token consumption, API call frequency, integration activity, and resource utilization. These aggregations provide a high-level view of platform usage that complements the detailed event logs, enabling efficient analysis of long-term trends without requiring exhaustive processing of millions of individual event records.

Event metrics are particularly valuable for several critical use cases. For operational monitoring, metrics reveal usage patterns and load characteristics that inform scaling decisions and capacity planning. For billing and cost management, token consumption and API usage metrics support accurate usage tracking and cost attribution. For performance optimization, metrics identify anomalies, bottlenecks, and opportunities for improvement. For business intelligence, usage trends and adoption patterns guide product development and user engagement strategies.

Fetching Event Metric Series

The event metric series endpoint retrieves time-series data showing how specific metrics evolve over time, providing a historical view of platform activity that enables trend analysis, anomaly detection, and forecasting. The series data spans the last 90 days by default, offering sufficient history for identifying patterns while maintaining query performance.

Each data point in the series includes a timestamp (as Unix epoch milliseconds) and the aggregated total for that time period, typically representing daily summations of the requested metric. This time-series format is ideal for visualization in dashboards, integration with monitoring systems, and analysis using time-series analytics tools.

```http
GET /api/v1/event/metric/series/fetch?type=message_count
Accept: application/json
```

The metric series functionality supports various metric types that track different aspects of platform activity:

  • message_count: Total number of messages exchanged in conversations
  • token_usage: Aggregate token consumption across all AI model interactions
  • conversation_count: Number of conversation sessions created
  • integration_calls: Frequency of integration endpoint invocations
  • api_requests: Volume of API requests across all endpoints

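As a minimal illustration, the series endpoint can be called with any HTTP client. The TypeScript sketch below assumes a bearer token and API origin supplied through hypothetical CHATBOTKIT_API_TOKEN and CHATBOTKIT_API_URL environment variables, and types the metric names listed above:

```typescript
// Illustrative sketch: fetch a metric series with the built-in fetch API (Node 18+).
// The CHATBOTKIT_API_URL / CHATBOTKIT_API_TOKEN variables and the bearer-token
// auth scheme are assumptions; adjust them to your own setup.

type MetricType =
  | 'message_count'
  | 'token_usage'
  | 'conversation_count'
  | 'integration_calls'
  | 'api_requests'

interface MetricSeriesPoint {
  date: number  // Unix epoch milliseconds for the day
  total: number // aggregated value for that day
}

const BASE_URL = process.env.CHATBOTKIT_API_URL ?? '' // assumed: your API origin

async function fetchMetricSeries(type: MetricType): Promise<MetricSeriesPoint[]> {
  const response = await fetch(
    `${BASE_URL}/api/v1/event/metric/series/fetch?type=${type}`,
    {
      headers: {
        Authorization: `Bearer ${process.env.CHATBOTKIT_API_TOKEN}`,
        Accept: 'application/json',
      },
    }
  )

  if (!response.ok) {
    throw new Error(`Series fetch failed with status ${response.status}`)
  }

  const { values } = (await response.json()) as { values: MetricSeriesPoint[] }
  return values
}
```
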
The response provides an array of time-series data points, each representing a specific day's aggregated metric value. This format enables straightforward visualization and analysis, allowing you to identify usage patterns such as daily peaks, weekly cycles, growth trends, and seasonal variations. The time-series data is particularly useful for creating monitoring dashboards, generating usage reports, detecting anomalies, and forecasting future resource requirements based on historical patterns.

Response Structure:

{ "values": [ { "date": 1732060800000, "total": 1247 }, { "date": 1732147200000, "total": 1589 } ] }

json

The date field uses Unix epoch timestamps (milliseconds since January 1, 1970 UTC), which can be easily converted to standard date formats in any programming language or data analysis tool. The total field represents the aggregated sum of the metric for that specific day, providing a clear measure of daily activity levels.
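For example, converting a data point into a human-readable daily label takes a single line in TypeScript:

```typescript
// Convert a series data point's epoch-millisecond timestamp to an ISO date string.
const point = { date: 1732060800000, total: 1247 }

const day = new Date(point.date).toISOString().slice(0, 10) // "2024-11-20"
console.log(`${day}: ${point.total} messages`)
```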

Use Cases for Metric Series Data:

  • Capacity Planning: Identify growth trends to predict future infrastructure needs and plan resource scaling
  • Cost Analysis: Track token consumption and API usage patterns to understand and optimize operational costs
  • Performance Monitoring: Detect unusual spikes or drops in activity that might indicate issues or opportunities (see the sketch after this list)
  • Business Intelligence: Analyze user engagement patterns and adoption trends to inform product strategy
  • Billing Validation: Verify usage charges against historical consumption patterns and identify unexpected changes
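As a simple illustration of the performance-monitoring use case, the sketch below flags days whose total exceeds a trailing average by an arbitrary factor; the window size and threshold are illustrative choices, not recommendations:

```typescript
// Illustrative sketch: flag days whose total deviates sharply from the trailing average.
// The window size and spike factor are arbitrary values chosen for demonstration.

interface MetricSeriesPoint {
  date: number
  total: number
}

function findSpikes(
  series: MetricSeriesPoint[],
  window = 7,
  factor = 2
): MetricSeriesPoint[] {
  const spikes: MetricSeriesPoint[] = []

  for (let i = window; i < series.length; i++) {
    const trailing = series.slice(i - window, i)
    const average = trailing.reduce((sum, p) => sum + p.total, 0) / window

    // Flag the day if it exceeds the trailing average by the given factor.
    if (average > 0 && series[i].total > average * factor) {
      spikes.push(series[i])
    }
  }

  return spikes
}
```
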

Listing Event Metrics

The event metrics listing endpoint provides comprehensive access to individual metric records with powerful filtering, pagination, and querying capabilities. Unlike the time-series endpoint which aggregates metrics into daily summaries, the list endpoint returns detailed metric records that include all associated metadata, resource relationships, and specific metric values, enabling granular analysis and detailed auditing of platform activity.

This endpoint is particularly useful when you need to investigate specific metric records, understand the detailed composition of aggregated values, or perform complex filtering based on resource associations, metric types, or custom metadata fields. The flexible filtering system allows you to narrow results to specific conversations, integrations, bots, datasets, or any combination of platform resources.

```http
GET /api/v1/event/metric/list?order=desc&take=50&type=token_usage
Accept: application/json
```

The listing endpoint supports extensive filtering by resource associations, allowing you to retrieve metrics for specific platform components:

```http
GET /api/v1/event/metric/list?botId=bot_abc123&conversationId=conv_xyz789&order=desc&take=100
```

Each metric record in the response includes comprehensive information:

  • Identifiers: Unique metric ID, name, and description
  • Metric Type: Category of metric (message_count, token_usage, conversation_count, etc.)
  • Metric Value: Numerical value representing the measured quantity
  • Resource Associations: References to related conversations, bots, datasets, integrations, and other platform resources
  • Timestamps: Creation and update timestamps for temporal analysis
  • Custom Metadata: Additional context and attributes captured with the metric
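
The exact field names and types are defined by the API schema; purely as an illustration of how such a record might be modeled, the following TypeScript interface uses assumed field names based on the categories above:

```typescript
// Rough, assumed shape of a metric record based on the fields described above.
// Field names are illustrative; consult the API reference for the authoritative schema.
interface EventMetricRecord {
  id: string                     // unique metric identifier
  name?: string
  description?: string
  type: string                   // e.g. 'message_count', 'token_usage'
  value: number                  // numerical value of the measured quantity
  botId?: string                 // resource associations
  conversationId?: string
  datasetId?: string
  integrationId?: string
  createdAt: number              // creation timestamp (epoch milliseconds)
  updatedAt: number              // update timestamp (epoch milliseconds)
  meta?: Record<string, unknown> // custom metadata
}
```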

The endpoint uses cursor-based pagination for efficient traversal of large result sets, with configurable page sizes and ordering (ascending or descending by creation date). This pagination approach ensures consistent results even when new metrics are being created concurrently, preventing duplicate or missing records in paginated queries.
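
A minimal pagination loop might look like the sketch below. The documented order and take parameters are used as in the earlier examples, while the cursor parameter name and the response shape (items, nextCursor) are assumptions for illustration:

```typescript
// Illustrative pagination loop. The `cursor` parameter name and the shape of the
// paginated response (`items`, `nextCursor`) are assumptions, not the documented schema.

async function listAllMetrics(type: string): Promise<unknown[]> {
  const BASE_URL = process.env.CHATBOTKIT_API_URL ?? '' // assumed: your API origin
  const records: unknown[] = []
  let cursor: string | undefined

  do {
    const params = new URLSearchParams({ type, order: 'desc', take: '100' })
    if (cursor) params.set('cursor', cursor) // assumed cursor parameter name

    const response = await fetch(`${BASE_URL}/api/v1/event/metric/list?${params}`, {
      headers: { Authorization: `Bearer ${process.env.CHATBOTKIT_API_TOKEN}` },
    })

    const page = (await response.json()) as { items: unknown[]; nextCursor?: string }
    records.push(...page.items)
    cursor = page.nextCursor
  } while (cursor)

  return records
}
```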

Filtering Capabilities:

The list endpoint supports filtering by multiple dimensions simultaneously, enabling precise queries for specific analysis scenarios:

  • By Resource Type: Filter metrics associated with specific bots, conversations, datasets, integrations, or other platform resources
  • By Metric Type: Focus on specific metric categories like token usage, message counts, or custom metric types
  • By Time Range: Use cursor-based pagination with ordering to retrieve metrics within specific time windows
  • By Metadata: Filter using custom metadata fields to query metrics with specific attributes or characteristics

Common Use Cases:

  • Detailed Usage Analysis: Investigate specific usage patterns or anomalies by examining individual metric records with full context
  • Resource Attribution: Track metrics associated with specific bots, conversations, or integrations to understand component-level performance
  • Cost Reconciliation: Verify billing calculations by auditing detailed token usage and API call metrics
  • Debugging: Investigate unexpected behavior by examining metric records related to specific resources or time periods
  • Compliance Auditing: Generate detailed reports showing exactly what was consumed, by which resources, and when

The list endpoint complements the time-series endpoint by providing drill-down capabilities. While time-series data reveals high-level trends and patterns, the list endpoint enables investigation of the underlying detail, helping you understand what drives those trends and validate aggregate calculations against source data.
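
As a final illustration of this drill-down workflow, the sketch below reconciles one day's aggregated token_usage total against the sum of its individual records, reusing the fetchMetricSeries and listAllMetrics helpers sketched earlier; the value and createdAt field names are assumptions:

```typescript
// Illustrative drill-down: reconcile one day's aggregated token_usage total against
// the sum of its individual metric records. The `value` and `createdAt` field names
// are assumptions for illustration.

async function reconcileDay(dayStartMs: number): Promise<void> {
  const series = await fetchMetricSeries('token_usage') // from the earlier sketch
  const seriesTotal = series.find((p) => p.date === dayStartMs)?.total ?? 0

  const records = (await listAllMetrics('token_usage')) as Array<{
    value: number
    createdAt: number
  }>

  const dayEndMs = dayStartMs + 24 * 60 * 60 * 1000
  const detailTotal = records
    .filter((r) => r.createdAt >= dayStartMs && r.createdAt < dayEndMs)
    .reduce((sum, r) => sum + r.value, 0)

  console.log(`series total: ${seriesTotal}, summed records: ${detailTotal}`)
}
```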