Creating Metrics
Learn how to create metrics for your agents
Create Metric API
Create a new metric to evaluate agent performance.
API Endpoint
Method | Endpoint |
---|---|
POST | https://new-prod.vocera.ai/test_framework/v1/metrics-external/ |
Authentication
Include your API key in the request headers:
Header | Description |
---|---|
X-VOCERA-API-KEY | Your API key obtained from the dashboard |
Request Parameters
Parameter | Type | Required | Description |
---|---|---|---|
name | string | Yes | Name of the metric |
description | string | Yes | Description of what the metric evaluates |
audio_enabled | boolean | No | Whether audio analysis is enabled. Defaults to true |
prompt_enabled | boolean | No | Whether to use a custom prompt. Defaults to true |
prompt | string | No | Custom evaluation prompt template when prompt_enabled is true |
agent | integer | Yes* | ID of the agent to evaluate |
assistant_id | string | Yes* | Assistant ID associated with the agent to evaluate |
eval_type | string | Yes | Type of metric (binary_workflow_adherence, binary_qualitative, continuous_qualitative, numeric, enum) |
enum_values | array | No | List of possible values when eval_type is "enum" |
display_order | integer | No | Order for displaying the metric. Defaults to next available order |
* Exactly one of agent or assistant_id is required.
Example Request Body
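The sketch below shows one way to assemble the request body from the parameters above and send it with the X-VOCERA-API-KEY header. It is a minimal illustration using Python's standard library; the metric name, description, and agent ID are placeholder values, and the payload-building helper (`build_metric_payload`) is our own convenience wrapper, not part of the API.

```python
import json
import urllib.request

API_BASE = "https://new-prod.vocera.ai/test_framework/v1"

def build_metric_payload(name, description, eval_type,
                         agent=None, assistant_id=None, **optional):
    """Build the JSON body for POST /metrics-external/.

    Exactly one of `agent` or `assistant_id` must be provided; any
    optional fields (audio_enabled, prompt_enabled, prompt,
    enum_values, display_order) can be passed as keyword arguments.
    """
    if (agent is None) == (assistant_id is None):
        raise ValueError("Provide exactly one of 'agent' or 'assistant_id'")
    payload = {"name": name, "description": description, "eval_type": eval_type}
    if agent is not None:
        payload["agent"] = agent
    else:
        payload["assistant_id"] = assistant_id
    payload.update(optional)
    return payload

def create_metric(api_key, payload):
    """POST the payload to the Create Metric endpoint and return the parsed response."""
    req = urllib.request.Request(
        f"{API_BASE}/metrics-external/",
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-VOCERA-API-KEY": api_key,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Illustrative values only — substitute your own metric and agent ID.
    body = build_metric_payload(
        name="Greeting Check",
        description="Did the agent greet the caller by name?",
        eval_type="binary_qualitative",
        agent=42,
        audio_enabled=False,
    )
    print(json.dumps(body, indent=2))
```

Note that the boolean fields default to true on the server side, so they only need to be sent when disabling them.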
Response
A successful request returns the created metric details.
Response Fields
Field | Type | Description |
---|---|---|
id | integer | Unique identifier for the metric |
agent | integer | ID of the associated agent |
name | string | Name of the metric |
description | string | Description of the metric |
eval_type | string | Type of metric (binary_workflow_adherence, binary_qualitative, continuous_qualitative, numeric, enum) |
enum_values | array | List of possible enum values when eval_type is "enum" |
audio_enabled | boolean | Whether audio analysis is enabled |
prompt_enabled | boolean | Whether custom prompt is enabled |
prompt | string | Custom evaluation prompt template when prompt_enabled is true |
display_order | integer | Order for displaying the metric |
overall_score | number | Number of test sets in which the metric passed (null if the metric has never been run in a test set) |
total_score | number | Total number of test sets in which the metric has been reviewed |
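Since overall_score can be null for a metric that has never been run, response handlers should guard before dividing. A small sketch, assuming overall_score counts test sets passed and total_score counts test sets reviewed (the helper name and example values are illustrative):

```python
def metric_pass_rate(metric: dict):
    """Return the fraction of reviewed test sets this metric passed,
    or None if the metric has never been run in a test set."""
    passed = metric.get("overall_score")
    total = metric.get("total_score")
    if passed is None or not total:
        return None
    return passed / total

# Illustrative response fragment — not real API output.
created = {"id": 7, "name": "Greeting Check",
           "overall_score": 3, "total_score": 4}
print(metric_pass_rate(created))  # 0.75
```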