Criteria
What are Criteria and what are they used for?
Criteria in Birdie are evaluation rules used to identify specific errors made by agents during a customer interaction, based exclusively on what can be verified from the conversation (transcript). They act as objective Quality Assurance (QA) mechanisms, enabling consistent and scalable monitoring, classification, and measurement of agent performance.
A criterion always describes an agent action (or lack of action) that can be clearly identified in the dialogue, such as failing to inform the customer of a delivery deadline, not confirming mandatory data, or not following an expected procedure. To be viable, a criterion must be clear, objective, non-subjective, and verifiable solely through the interaction content. Filters or external information may be used only to define the analysis scope, never to directly identify the failure.
Criteria can be classified into two main types:
General criteria: applied to all interactions, regardless of the contact reason. These typically cover mandatory best practices or standard procedures.
Specific criteria: applied only to particular processes or service flows. These criteria must always be linked to a Reason (contact reason) to ensure the evaluation happens within the correct service context.
Specific criteria can be further categorized by how they are evaluated:
AI criteria: automatically evaluated by Birdie’s models based on the transcript.
Manual criteria: used when a process is too specific, complex, or contextual to be reliably evaluated by AI, requiring human review instead.
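The taxonomy above (general vs. specific criteria, AI vs. manual evaluation, and the rule that specific criteria must be linked to a Reason) can be sketched as a small data model. This is an illustrative sketch only; the class, field, and enum names are assumptions, not Birdie's actual schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Evaluation(Enum):
    AI = "ai"          # evaluated automatically from the transcript
    MANUAL = "manual"  # too specific/contextual for AI; needs human review

@dataclass
class Criterion:
    name: str
    general: bool                  # True: applies to all interactions
    reason: Optional[str] = None   # contact Reason; required when general is False
    evaluation: Evaluation = Evaluation.AI

    def __post_init__(self):
        # Specific criteria must always be linked to a Reason so the
        # evaluation happens within the correct service context.
        if not self.general and self.reason is None:
            raise ValueError("A specific criterion must be linked to a Reason")

# A general criterion needs no Reason; a specific one does.
greeting = Criterion("Greeted the customer", general=True)
refund = Criterion("Confirmed refund policy", general=False, reason="Refund request")
```

The `__post_init__` check mirrors the linkage rule in the text: constructing a specific criterion without a Reason fails immediately rather than producing an out-of-context evaluation.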
When a criterion does not meet minimum performance thresholds or when the customer requests support, Birdie’s internal team performs an internal calibration, reviewing and adjusting the criterion’s title, instructions, and logic until it is viable and effective.
Criteria are essential to:
Evaluate service quality in a standardized way
Identify concrete operational failures by agents
Support performance analysis with reliable metrics
Enable targeted improvements in processes and training
Important: Criteria are directly related to service quality evaluation (QA) and always focus on observable agent behavior, unlike VoC-oriented analyses, which capture customer perception and do not describe objectively verifiable actions.
Creating a New Criterion
Fill in the form

Fields in the form
Name: the primary label for the criterion; it can be changed at any time.
Critical?: failing a critical criterion means the entire interaction is considered a failure, regardless of how well the agent performed on the other, less critical criteria. Critical criteria don't have a Weight.
Manual?: marks a criterion that cannot be evaluated by AI and must be evaluated manually.
Select or create a collection: the collection this criterion belongs to.
Fields: select fields that give context when evaluating the criterion.
Weight: a numerical value used to calculate the Quality Score at various levels (Workspace, Area, Reason, and Agent).
Description: a place to describe your criterion.
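The interplay between Weight and Critical? can be made concrete with a small sketch. The scoring rule here is an assumption for illustration (a failed critical criterion zeroes the interaction; otherwise the score is the passed weight over the total weight); Birdie's actual Quality Score formula may differ, and all names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CriterionResult:
    name: str
    weight: float   # ignored for critical criteria, which carry no Weight
    critical: bool
    passed: bool

def interaction_quality_score(results: list[CriterionResult]) -> float:
    """Illustrative weighted score for a single interaction (0-100)."""
    # A failed critical criterion fails the whole interaction,
    # regardless of performance on the other criteria.
    if any(r.critical and not r.passed for r in results):
        return 0.0
    weighted = [r for r in results if not r.critical]
    total = sum(r.weight for r in weighted)
    if total == 0:
        return 100.0  # only critical criteria, all passed
    earned = sum(r.weight for r in weighted if r.passed)
    return 100.0 * earned / total

results = [
    CriterionResult("Confirmed mandatory data", weight=2.0, critical=False, passed=True),
    CriterionResult("Informed delivery deadline", weight=1.0, critical=False, passed=False),
    CriterionResult("Followed security procedure", weight=0.0, critical=True, passed=True),
]
print(round(interaction_quality_score(results), 1))  # → 66.7
```

Aggregating these per-interaction scores across a Workspace, Area, Reason, or Agent would then give the corresponding Quality Score at each level.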

