Manual Evaluation

Overview

Manual Evaluation lets teams assess interaction quality by hand, directly inside Birdie, for criteria that cannot be reliably monitored by AI.

The feature is designed to centralize all manual evaluation workflows, ensuring governance, traceability, and consistency, while integrating manual results with AI-driven metrics. All manual evaluations start at the Area level, providing a scalable and standardized entry point that represents the full operational scope.

Manual Evaluation helps Birdie remain the single source of truth for quality monitoring by unifying configuration, execution, and analytics in one place.


Viewing & Searching

Manual evaluation results are available throughout Birdie analytics and dashboards, alongside AI-driven evaluations.

Filtering & Exploration

Users can filter and explore evaluation data by:

  • Area

  • Reason

  • Agent

  • Time range

  • Evaluation source (Manual or AI)

All views support drill-down from high-level metrics to individual evaluations, ensuring complete transparency.


Creating New Data

Manual Evaluation involves two different actions: configuring manual criteria and performing a manual evaluation.

This article focuses on how to use Manual Evaluation. Detailed configuration of criteria is covered in a separate article.


1. Creating Manual Criteria

Before performing manual evaluations, manual criteria must be configured by an Admin. If you need help creating a criterion, see Criteria.


2. Performing a Manual Evaluation

Once criteria are configured, you can perform manual evaluations directly from an Area.

Step-by-Step: Performing a Manual Evaluation

  1. Go to the desired Area in Birdie

  2. Click Start Manual Evaluation

  3. Select one or more Reasons to include in the evaluation

  4. Confirm to start

  5. Birdie automatically opens an interaction for evaluation

  6. Complete the evaluation form

  7. Add comments or reasoning (optional)

  8. Submit the evaluation

After submission, the evaluation is automatically linked to the Area, Reason, agent, and interaction, and becomes available in dashboards and reports.


Modifying Existing Data

What Can Be Modified

  • Criteria definitions

  • Associations between criteria and Reasons

  • Reason participation within Areas

  • Statistical proportions used for sampling

Permissions

  • Only authorized Admins / Innerloop Specialists can create or modify criteria and configuration

All changes apply only to future evaluations and do not affect historical data.


Removing Data

Manual evaluations cannot be deleted once submitted, ensuring data integrity and compliance.

Supported Actions

  • Disable criteria so they are no longer used in future evaluations

  • Remove Reasons from an Area

  • Deactivate Areas that are no longer in use

Historical evaluation data remains available for analytics and auditing purposes.


Troubleshooting & FAQs

Can I choose a specific evaluation form?

No. Forms are dynamically generated based on the criteria associated with the selected Reason to ensure governance and consistency.

Why was a different Reason selected than expected?

Birdie applies statistical sampling rules to ensure representative distribution of evaluations across Reasons.
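The idea behind this sampling rule can be pictured as weighted random selection. The sketch below is purely illustrative: the Reason names and proportions are hypothetical, and Birdie's actual sampling logic is internal to the product.

```python
import random

random.seed(0)  # for a reproducible demonstration

# Hypothetical Reasons and their configured proportions (must sum to 1.0 here,
# though random.choices only requires relative weights).
reason_proportions = {
    "Billing": 0.5,
    "Cancellation": 0.3,
    "Technical Support": 0.2,
}

def pick_reason(proportions):
    """Pick one Reason at random, weighted by its configured proportion."""
    reasons = list(proportions)
    weights = [proportions[r] for r in reasons]
    return random.choices(reasons, weights=weights, k=1)[0]

# Over many draws, each Reason is selected roughly in proportion to its weight,
# so the overall pool of evaluations stays representative.
counts = {r: 0 for r in reason_proportions}
for _ in range(10_000):
    counts[pick_reason(reason_proportions)] += 1
```

This is why an individual evaluator may be handed a different Reason than they expected: the selection is driven by the configured proportions across the whole Area, not by any single user's preference.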

Who can perform and view manual evaluations?

  • Supervisors can perform manual evaluations

  • Viewers have view-only access to results

  • Admins manage criteria and configuration
