Traditional QA samples a handful of calls per agent per week. AI quality scoring reviews every recorded call against your defined rubric, flags outliers for human review, and trends signals over time.
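The exact flagging rule isn't documented here, but a simple outlier check would compare each new call score against the agent's recent history. A minimal sketch, assuming a z-score-style threshold (the function name, minimum-history rule, and cutoff are all illustrative):

```python
from statistics import mean, stdev

def is_outlier(history: list[float], latest: float, k: float = 2.0) -> bool:
    """Flag a call whose score falls more than k standard deviations
    below the agent's recent average. One plausible rule; the product's
    actual flagging logic isn't documented here."""
    if len(history) < 5:          # not enough history to judge
        return False
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and latest < mu - k * sigma

print(is_outlier([88, 91, 85, 90, 87], latest=52))  # True
```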
Setting up a rubric
Go to Admin → QC → Rubrics. Define the criteria your QA team cares about: opening script adherence, required disclosures, prohibited phrases, and outcome-specific criteria. Weight each criterion, then save.
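The saved format isn't exposed in the UI, but conceptually a rubric is a named set of weighted criteria. A sketch of that shape, with every field name hypothetical:

```python
# Hypothetical rubric shape -- field names are illustrative, not the
# product's actual schema.
rubric = {
    "name": "Outbound sales v1",
    "criteria": [
        {"id": "opening_script",     "weight": 0.30},  # script adherence
        {"id": "recording_consent",  "weight": 0.25},  # required disclosure
        {"id": "tcpa_disclosure",    "weight": 0.25},  # required disclosure
        {"id": "prohibited_phrases", "weight": 0.10},  # competitor mentions, etc.
        {"id": "disposition_match",  "weight": 0.10},  # outcome alignment
    ],
}

# Keeping the weights summed to 1.0 puts scores on a clean 0-100 scale.
assert abs(sum(c["weight"] for c in rubric["criteria"]) - 1.0) < 1e-9
```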
What gets scored
- Script adherence: did the agent deliver the required opening?
- Required disclosures: TCPA, recording consent, etc.
- Prohibited phrases: regional compliance terms, competitor mentions
- Sentiment trajectory: did the call improve or degrade?
- Outcome alignment: was the disposition appropriate given what was said?
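Put together, a call's overall score is plausibly a weighted average of these per-criterion results. A minimal sketch using the illustrative weights from the rubric above; representing results as pass fractions in [0.0, 1.0] is an assumption:

```python
# Same illustrative weights as the rubric sketch above.
WEIGHTS = {
    "opening_script": 0.30,
    "recording_consent": 0.25,
    "tcpa_disclosure": 0.25,
    "prohibited_phrases": 0.10,
    "disposition_match": 0.10,
}

def rubric_score(results: dict[str, float]) -> int:
    """Weighted average of per-criterion results (1.0 = fully met)."""
    return round(100 * sum(w * results.get(c, 0.0) for c, w in WEIGHTS.items()))

# Example: agent nailed the opening and disclosures but mentioned a competitor.
print(rubric_score({
    "opening_script": 1.0,
    "recording_consent": 1.0,
    "tcpa_disclosure": 1.0,
    "prohibited_phrases": 0.0,   # a prohibited phrase was detected
    "disposition_match": 1.0,
}))  # 90
```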
Where results show up
Go to Reports → AI Quality. You see per-agent rubric scores, team averages, trend charts, and a flagged-calls queue for manual review. Supervisors can drill into any call to see each rubric line item and its score.
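If the platform exposes the flagged-calls queue over an API (not confirmed here), pulling it into a review workflow might look like the sketch below. The endpoint, parameters, and response fields are all hypothetical; check your platform's actual API documentation.

```python
import requests

# Entirely hypothetical endpoint and payload shape.
resp = requests.get(
    "https://api.example.com/v1/quality/flagged-calls",
    params={"team": "outbound-sales", "since": "2024-01-01"},
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    timeout=30,
)
resp.raise_for_status()

for call in resp.json()["calls"]:
    # Mirrors the drill-down view: each flagged call carries its
    # per-criterion line items.
    print(call["call_id"], call["score"], call["failed_criteria"])
```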
Start with a 5-item rubric. Add more once you see what's useful. A rubric with 30 line items drowns signal in noise.
