Why we built AI diagnostics into every ticket
We build estimates 30+ times a day. The pattern is always the same: the customer describes symptoms, the tech googles them, narrows to a likely cause, looks up parts cost, and types the estimate. Twenty minutes of pattern-matching per ticket.
The expensive middle step
That middle step — matching symptoms to causes — is exactly what LLMs are good at. So we wired it in. On every ticket, the tech clicks Run AI diagnosis and gets three things back: likely causes (with probability), parts needed (with cost estimates), and warnings for common pitfalls.
The response isn't binding. The tech reviews it, ticks the parts they want to add, and moves on. No hallucinated repair flows, no "AI approved it" liability.
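The shape of that response is the part worth pinning down: the prompt asks the model for strict JSON, and the app parses it into the three fields the tech reviews. A minimal sketch of that parsing step, with a schema and field names that are our illustration here rather than the production contract:

```python
import json
from dataclasses import dataclass

@dataclass
class Diagnosis:
    causes: list    # (cause, probability) pairs
    parts: list     # (part name, estimated cost) pairs
    warnings: list  # common-pitfall strings

def parse_diagnosis(raw: str) -> Diagnosis:
    """Turn the model's JSON reply into the three reviewable fields.

    Field names (likely_causes, parts_needed, warnings) are illustrative;
    the real prompt would pin down whatever schema the app expects.
    """
    data = json.loads(raw)
    return Diagnosis(
        causes=[(c["cause"], c["probability"]) for c in data["likely_causes"]],
        parts=[(p["part"], p["cost_estimate"]) for p in data["parts_needed"]],
        warnings=list(data["warnings"]),
    )

# A reply shaped the way the prompt asks for (hypothetical values):
sample = json.dumps({
    "likely_causes": [{"cause": "battery swelling", "probability": 0.65}],
    "parts_needed": [{"part": "replacement battery", "cost_estimate": 45.00}],
    "warnings": ["check adhesive condition before quoting screen removal"],
})
diag = parse_diagnosis(sample)
```

Keeping the parse step this dumb is deliberate: if the model returns anything that isn't valid JSON in this shape, the call fails loudly and the tech falls back to the manual workflow instead of acting on a half-parsed answer.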
Why Claude, not GPT
Three reasons. First, Claude's confidence calibration is noticeably better: it's more willing to say "65% confident" than GPT, which tends to cluster at 90%+ on anything it hasn't flagged. Second, the API is half the price for our token profile. Third, Anthropic's policy on training on customer data is friendlier for shops that store device IMEIs in prompts.
What it replaced
- 20 minutes per ticket on symptom-to-cause research.
- 5–10 minutes of parts-cost lookups.
- Three separate browser tabs the tech kept open all day.
The feature shipped in our first release, and it's now rolling out to every shop we onboard.