Create Bulk Metrics API

Create multiple metrics simultaneously for an agent.

API Endpoint

| Method | Endpoint |
|--------|----------|
| POST | `https://new-prod.vocera.ai/test_framework/v1/metrics-external/bulk_create/` |

Authentication

Include your API key in the request headers:

| Header | Description |
|--------|-------------|
| `X-VOCERA-API-KEY` | Your API key, obtained from the dashboard |

Request Body

The request body should be an array of metric objects. Each metric object has the following parameters:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Name of the metric |
| `description` | string | Yes | Description of what the metric evaluates |
| `agent` | integer | Yes | ID of the agent to evaluate |
| `eval_type` | string | Yes | Type of metric: `binary_workflow_adherence`, `binary_qualitative`, `continuous_qualitative`, `numeric`, or `enum` |
| `enum_values` | array | No | List of possible values when `eval_type` is `enum` |
| `audio_enabled` | boolean | No | Whether audio analysis is enabled. Defaults to `false` |
| `prompt_enabled` | boolean | No | Whether to use a custom prompt. Defaults to `false` |
| `prompt` | string | No | Custom evaluation prompt template, used when `prompt_enabled` is `true` |
| `evaluation_trigger` | string | No | Trigger type for evaluation: `always`, `automatic`, or `custom` |
| `evaluation_trigger_prompt` | string | No | Custom trigger prompt, used when `evaluation_trigger` is `custom` |

Example Request Body

```json
[
    {
        "name": "Introduction Clarity",
        "description": "This metric evaluates whether the AI voice agent clearly introduces itself...",
        "agent": 1,
        "eval_type": "binary_workflow_adherence"
    },
    {
        "name": "User Response Handling",
        "description": "This metric assesses whether the AI voice agent waits for user responses...",
        "agent": 1,
        "eval_type": "binary_workflow_adherence"
    }
]
```
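
Before sending a bulk request, the payload can be checked client-side against the parameter table above. The following is a minimal sketch: the set of `eval_type` values comes from this page, and treating `enum_values` as effectively required for `enum` metrics is an assumption (the table marks it optional).

```python
# Client-side sanity checks for bulk-create metric objects.
# The valid eval_type values are taken from the parameter table on this page.
VALID_EVAL_TYPES = {
    "binary_workflow_adherence",
    "binary_qualitative",
    "continuous_qualitative",
    "numeric",
    "enum",
}
REQUIRED_FIELDS = ("name", "description", "agent", "eval_type")


def validate_metric(metric):
    """Return a list of problems with one metric object (empty if it looks valid)."""
    problems = [f"missing required field: {f}" for f in REQUIRED_FIELDS if f not in metric]
    eval_type = metric.get("eval_type")
    if eval_type is not None and eval_type not in VALID_EVAL_TYPES:
        problems.append(f"unknown eval_type: {eval_type}")
    # Assumption: an enum metric is only meaningful with candidate values,
    # even though the API marks enum_values as optional.
    if eval_type == "enum" and not metric.get("enum_values"):
        problems.append('eval_type "enum" usually needs a non-empty enum_values list')
    return problems
```

Running `validate_metric` over every element of the request array before posting catches malformed entries without spending an API call.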

Response

The API returns an array of created metric objects.

Response Fields

| Field | Type | Description |
|-------|------|-------------|
| `id` | integer | Unique identifier for the metric |
| `agent` | integer | ID of the associated agent |
| `name` | string | Name of the metric |
| `description` | string | Description of the metric |
| `eval_type` | string | Type of metric |
| `enum_values` | array | List of possible enum values |
| `audio_enabled` | boolean | Whether audio analysis is enabled |
| `prompt_enabled` | boolean | Whether a custom prompt is enabled |
| `prompt` | string | Custom evaluation prompt template |
| `display_order` | integer | Order in which the metric is displayed |
| `overall_score` | number | Number of test sets passed |
| `total_score` | number | Total number of test sets reviewed |

Example Response

```json
[
    {
        "id": 49,
        "agent": 1,
        "name": "Introduction Clarity",
        "description": "This metric evaluates whether the AI voice agent clearly introduces itself...",
        "eval_type": "binary_workflow_adherence",
        "enum_values": [],
        "audio_enabled": false,
        "prompt_enabled": false,
        "prompt": "",
        "display_order": 20,
        "overall_score": null,
        "total_score": 35
    },
    {
        "id": 50,
        "agent": 1,
        "name": "User Response Handling",
        "description": "This metric assesses whether the AI voice agent waits for user responses...",
        "eval_type": "binary_workflow_adherence",
        "enum_values": [],
        "audio_enabled": false,
        "prompt_enabled": false,
        "prompt": "",
        "display_order": 21,
        "overall_score": null,
        "total_score": 35
    }
]
```

Code Examples

```bash
curl -X POST https://new-prod.vocera.ai/test_framework/v1/metrics-external/bulk_create/ \
  -H "X-VOCERA-API-KEY: <your-api-key-here>" \
  -H "Content-Type: application/json" \
  -d '[
    {
      "name": "Example Metric",
      "description": "Example description",
      "agent": 123,
      "eval_type": "binary_workflow_adherence"
    }
  ]'
```
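
The same request can be made from Python with only the standard library. This is a sketch built from the endpoint and headers documented above; `build_bulk_create_request` and `bulk_create_metrics` are illustrative helper names, not part of any official SDK.

```python
import json
import urllib.request

BULK_CREATE_URL = (
    "https://new-prod.vocera.ai/test_framework/v1/metrics-external/bulk_create/"
)


def build_bulk_create_request(api_key, metrics):
    """Build a POST request carrying the metric array as a JSON body."""
    return urllib.request.Request(
        BULK_CREATE_URL,
        data=json.dumps(metrics).encode("utf-8"),
        method="POST",
        headers={
            "X-VOCERA-API-KEY": api_key,
            "Content-Type": "application/json",
        },
    )


def bulk_create_metrics(api_key, metrics):
    """Send the request and return the parsed array of created metric objects."""
    with urllib.request.urlopen(build_bulk_create_request(api_key, metrics)) as resp:
        return json.load(resp)
```

For example, `bulk_create_metrics(api_key, [{"name": "Example Metric", "description": "Example description", "agent": 123, "eval_type": "binary_workflow_adherence"}])` mirrors the curl call above and returns the created metrics as a Python list.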