# Criteria

## Overview

Criteria are the QA rules Birdie uses to evaluate observable agent behavior in customer interactions.

They turn service quality into something consistent, measurable, and scalable.

A strong criterion describes one observable behavior that can be verified from the interaction itself.

<figure><img src="https://2659701720-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FN7EEIMhL4xEKWM6BDyEL%2Fuploads%2F1GmgTWXlvSbkwcxseueO%2Fimage.png?alt=media&#x26;token=640046a6-d325-4f3a-949f-9cc02b8f2556" alt="" width="563"><figcaption></figcaption></figure>

## Why criteria matter

Criteria help teams:

* evaluate quality with the same standard across agents
* identify specific service failures
* understand which behaviors affect quality scores most
* guide coaching, calibration, and process improvement

## How criteria work in QA

Criteria can be:

* **General** — applied across all interactions
* **Specific** — linked to a specific [Reason](https://ask.birdie.ai/agent-quality-assurance/reasons)

Criteria can also be evaluated in two ways:

* **AI** — Birdie evaluates the criterion automatically from the transcript
* **Manual** — a person evaluates the criterion in [Manual Evaluation](https://ask.birdie.ai/agent-quality-assurance/manual-evaluation)

## How criteria affect scoring

* **Weight** contributes to the Quality Score at Workspace, Area, Reason, and Agent levels
* **Critical** criteria do not use weight
* Failing one **Critical** criterion fails the interaction for that Reason
* **Manual** criteria still contribute to reporting once the evaluation is submitted
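The scoring rules above can be sketched in code. The exact formula Birdie uses is not documented here, so this is only an illustrative model under two stated assumptions: the weighted score is the weighted share of passed non-critical criteria, and a single failed critical criterion zeroes the score for that Reason. All names in the snippet are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CriterionResult:
    name: str
    passed: bool
    weight: float = 1.0   # ignored when critical=True
    critical: bool = False

def quality_score(results: list[CriterionResult]) -> float:
    """Illustrative Quality Score for one interaction/Reason.

    Assumption 1: any failed critical criterion fails the whole
    interaction for that Reason (score 0, weights ignored).
    Assumption 2: otherwise the score is the weighted pass rate
    of the non-critical criteria, on a 0-100 scale.
    """
    if any(r.critical and not r.passed for r in results):
        return 0.0
    weighted = [r for r in results if not r.critical]
    total = sum(r.weight for r in weighted)
    if total == 0:
        return 100.0  # nothing weighted to evaluate
    earned = sum(r.weight for r in weighted if r.passed)
    return round(100 * earned / total, 1)

results = [
    CriterionResult("Greeted the customer", passed=True, weight=1),
    CriterionResult("Confirmed resolution", passed=False, weight=2),
    CriterionResult("No confidential data shared", passed=True, critical=True),
]
print(quality_score(results))  # 33.3 — only 1 of 3 weight units earned
```

If the critical criterion above had failed instead, the score would drop to 0 regardless of the weighted results, which is why the section recommends reviewing critical criteria carefully.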

## What makes a good criterion

Good criteria are:

* objective
* specific
* tied to observable agent behavior
* independent from customer opinion or assumptions

{% hint style="info" %}
Manage the full criteria catalog in [Taxonomy → Criteria](https://ask.birdie.ai/admin-and-settings/taxonomy/criteria).
{% endhint %}

## Criteria in analysis

In analysis, criteria help you understand which behaviors pass, fail, or require manual review across teams, reasons, and agents.

The **Criteria** analysis page follows the same layout described in [Analysis Page Structure](https://ask.birdie.ai/getting-started/platform-overview).

## Best practices

* Write one observable rule per criterion
* Use manual criteria when transcript-only evaluation is not reliable
* Review critical and weighted criteria carefully, because they affect scoring

## Related articles

* [Taxonomy → Criteria](https://ask.birdie.ai/admin-and-settings/taxonomy/criteria)
* [Manual Evaluation](https://ask.birdie.ai/agent-quality-assurance/manual-evaluation)
* [Reasons](https://ask.birdie.ai/agent-quality-assurance/reasons)
