Create Metric API

Create a new metric to evaluate agent performance.

API Endpoint

| Method | Endpoint |
| --- | --- |
| POST | https://new-prod.vocera.ai/test_framework/v1/metrics-external/ |

Authentication

Include your API key in the request headers:

| Header | Description |
| --- | --- |
| X-VOCERA-API-KEY | Your API key obtained from the dashboard |
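
For example, a minimal sketch of building these headers in Python (the key value is a placeholder; Content-Type applies when sending a JSON body):

```python
# Placeholder: copy your actual key from the dashboard.
API_KEY = "<your-api-key-here>"

# Send these headers with every request, e.g.
# requests.post(url, headers=headers, json=payload)
headers = {
    "X-VOCERA-API-KEY": API_KEY,
    "Content-Type": "application/json",
}
```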

Request Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Name of the metric |
| description | string | Yes | Description of what the metric evaluates |
| audio_enabled | boolean | No | Whether audio analysis is enabled. Defaults to true |
| prompt_enabled | boolean | No | Whether to use a custom prompt. Defaults to true |
| prompt | string | No | Custom evaluation prompt template, used when prompt_enabled is true |
| agent | integer | Yes* | ID of the agent to evaluate |
| assistant_id | string | Yes* | The assistant ID associated with the agent to evaluate |
| eval_type | string | Yes | Type of metric (binary_workflow_adherence, binary_qualitative, continuous_qualitative, numeric, enum) |
| enum_values | array | No | List of possible values when eval_type is "enum" |
| display_order | integer | No | Order for displaying the metric. Defaults to the next available order |

* Provide either agent or assistant_id; only one of the two is required.

Example Request Body

```json
{
    "name": "Appointment Booked",
    "description": "Was the appointment booked successfully?",
    "audio_enabled": true,
    "prompt_enabled": true,
    "prompt": "Go through this transcript and check if appointment was booked successfully:\n\n\n{transcript}\n",
    "agent": 1,
    "eval_type": "binary_qualitative",
    "enum_values": [],
    "display_order": 10
}
```
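
When eval_type is "enum", populate enum_values with the allowed outcomes. A hypothetical body for such a metric, identified by assistant_id instead of agent (the name, values, and assistant ID below are illustrative placeholders):

```json
{
    "name": "Call Outcome",
    "description": "How did the call end?",
    "eval_type": "enum",
    "enum_values": ["booked", "callback_requested", "declined"],
    "assistant_id": "<your-assistant-id>"
}
```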

Response

A successful request returns the created metric details.

Response Fields

| Field | Type | Description |
| --- | --- | --- |
| id | integer | Unique identifier for the metric |
| agent | integer | ID of the associated agent |
| name | string | Name of the metric |
| description | string | Description of the metric |
| eval_type | string | Type of metric (binary_workflow_adherence, binary_qualitative, continuous_qualitative, numeric, enum) |
| enum_values | array | List of possible enum values when eval_type is "enum" |
| audio_enabled | boolean | Whether audio analysis is enabled |
| prompt_enabled | boolean | Whether a custom prompt is enabled |
| prompt | string | Custom evaluation prompt template, used when prompt_enabled is true |
| display_order | integer | Order for displaying the metric |
| overall_score | number | Number of test sets passed (null if the metric has never been run in a test set) |
| total_score | number | Total number of test sets in which the metric was reviewed |

Example Response

```json
{
    "id": 38,
    "agent": 1,
    "name": "Appointment Booked",
    "description": "Was the appointment booked successfully?",
    "eval_type": "binary_qualitative",
    "enum_values": [],
    "audio_enabled": true,
    "prompt_enabled": true,
    "prompt": "Go through this transcript and check if appointment was booked successfully:\n\n\n{transcript}\n",
    "display_order": 10,
    "overall_score": null,
    "total_score": 0
}
```

Code Examples

```bash
curl -X POST https://new-prod.vocera.ai/test_framework/v1/metrics-external/ \
  -H "X-VOCERA-API-KEY: <your-api-key-here>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Example Metric Name",
    "description": "Description of what this metric evaluates",
    "audio_enabled": true,
    "prompt_enabled": true,
    "prompt": "Custom prompt template for evaluation:\n\n{transcript}\n",
    "agent": 123,
    "eval_type": "binary_qualitative",
    "enum_values": [],
    "display_order": 1
  }'
```
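
The same request as a sketch in Python with the requests library (the API key and agent ID are placeholders):

```python
import requests

URL = "https://new-prod.vocera.ai/test_framework/v1/metrics-external/"

headers = {
    "X-VOCERA-API-KEY": "<your-api-key-here>",  # placeholder key from the dashboard
    "Content-Type": "application/json",
}

payload = {
    "name": "Example Metric Name",
    "description": "Description of what this metric evaluates",
    "audio_enabled": True,
    "prompt_enabled": True,
    "prompt": "Custom prompt template for evaluation:\n\n{transcript}\n",
    "agent": 123,  # placeholder agent ID; pass "assistant_id" instead if preferred
    "eval_type": "binary_qualitative",
    "enum_values": [],
    "display_order": 1,
}

response = requests.post(URL, headers=headers, json=payload)
response.raise_for_status()  # surface HTTP errors
metric = response.json()
print(metric["id"], metric["name"])  # fields from the created metric
```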