Short answer: AI ROI for SMBs in 2026 should be measured against four primary metrics: time saved (operational hours), deflection rate (work the AI handled instead of a human), conversion lift (revenue impact), and error reduction (cost-of-mistake avoided). Pick one or two as your primary metric per use case, ignore the rest of the AI marketing dashboard, and measure honestly against a defined baseline. If you can't define a baseline, you can't claim ROI — and most projects skip this.
This article gives you the framework, four worked examples (one per metric), the per-month math, and the common mistakes that destroy ROI claims after the project ships. It's the same framework we use to scope and review AI projects with clients at Palmidos — and the same framework we use to tell clients honestly when an AI project isn't worth doing.
Why most AI ROI claims are wrong
Three patterns we see consistently.
No baseline. "The AI handled 5,000 conversations" is not ROI. ROI requires comparison to what would have happened without the AI. If you don't measure baseline cost, baseline conversion, or baseline error rate before the project, every after-claim is unfalsifiable marketing.
Confusing volume with value. An AI that processes 10,000 documents has done volume. An AI that processes 10,000 documents that previously cost $X to handle has produced ROI. The dollar figure attached matters; the count does not.
Ignoring ongoing cost. A shipped AI is not free to run. Token costs, infrastructure, maintenance, and human review all eat into ROI. The honest formula is (value created) − (build cost amortized + run cost) = ROI, and most teams stop measuring after the build cost.
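The honest formula above can be written down directly. A minimal sketch in Python (the function name is ours, and all figures are annual):

```python
# Sketch of the honest ROI formula: (value created) − (build + run cost),
# expressed as a fraction of the build cost. Names are illustrative.
def year_one_roi(value_created, build_cost, run_cost):
    """(value created − build cost − run cost) / build cost."""
    return (value_created - build_cost - run_cost) / build_cost

# e.g. $16,650 of annual value against a $14,000 build and $960/yr run cost:
print(f"{year_one_roi(16_650, 14_000, 960):+.0%}")  # → +12%
```

Nothing exotic; the point is that all three terms must be measured, not just the first.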
The framework below fixes all three.
The four-metric framework
| Metric | What it measures | Best for | How to compute |
|---|---|---|---|
| Time saved | Hours of human labor avoided | Internal productivity, document review, drafting, classification | (baseline hours per task − current hours per task) × volume × loaded labor cost |
| Deflection rate | % of work the AI handled without a human | Customer support, FAQ chatbots, intake | rate = (volume handled fully by AI / total volume); savings = (volume handled fully by AI) × (cost per human-handled unit − cost per AI-handled unit) |
| Conversion lift | Revenue impact on a funnel step | Sales chat, lead qualification, recommendations, abandoned-cart | (rate with AI − baseline rate) × volume × average revenue per conversion |
| Error reduction | Cost-of-mistakes avoided | Compliance, document extraction, data entry, fraud detection | (baseline error rate − current error rate) × volume × cost per error |
Pick one metric per use case as your primary success measure. Track the others if it's cheap, but don't dilute your decision-making by mixing metrics. "It saved time AND lifted conversion AND reduced errors" is a sign the team isn't actually measuring anything.
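The table's "How to compute" column translates to four one-line functions. A sketch with illustrative names (signatures are ours; all inputs are per-period figures):

```python
# Sketches of the four ROI formulas from the framework table.
# Function and parameter names are ours, not standard terminology.

def time_saved(baseline_hours, current_hours, volume, loaded_rate):
    # (baseline hours/task − current hours/task) × volume × loaded labor cost
    return (baseline_hours - current_hours) * volume * loaded_rate

def deflection_savings(ai_volume, human_cost, ai_cost):
    # every fully-AI-handled unit saves the cost gap vs. a human-handled unit
    return ai_volume * (human_cost - ai_cost)

def conversion_lift(rate_with_ai, baseline_rate, volume, revenue_per_conv):
    # (rate with AI − baseline rate) × volume × avg revenue per conversion
    return (rate_with_ai - baseline_rate) * volume * revenue_per_conv

def error_reduction(baseline_err, current_err, volume, cost_per_error):
    # (baseline error rate − current error rate) × volume × cost per error
    return (baseline_err - current_err) * volume * cost_per_error
```

Each function returns dollars per period, so results are directly comparable against build and run costs measured over the same period.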
Worked example 1: Time saved (internal AI helpdesk for HR)
Setup: 200-employee company, ~150 HR-related questions per month routed through a tier-1 HR person. Each question takes ~12 minutes to research and answer (average over simple and complex). Loaded HR cost: ~$50/hour ($0.83/minute).
Baseline cost: 150 questions × 12 min × $0.83/min = ~$1,500/month of HR time on tier-1 questions.
After AI deployment: AI handles 70% of questions end-to-end (the policy-answer kind). The remaining 30% still go to HR but the AI provides a draft, cutting human time per question from 12 min to ~3 min. Volume unchanged.
Math after: 105 questions handled fully by AI (HR time = 0) + 45 questions with HR review at 3 min each = 45 × 3 × $0.83 = ~$112/month of HR time.
Time saved: $1,500 − $112 = $1,388/month, or ~$16,650/year. Build cost was $14,000, run cost is ~$80/month in tokens. Year-1 ROI: ($16,650 − $14,000 − $960) / $14,000 = +12% in year 1. Year 2 onward: ($16,650 − $960) / $14,000 ≈ +112% per year against the original build cost.
Note that pure dollar ROI is modest in year 1. The real return is freeing ~28 hours/month of HR capacity ($1,388 at $50/hour) for higher-value work, plus dramatically faster employee response times. SMBs that report only the time-saved number often underestimate the true business impact, but it's the cleanest metric to start with.
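The per-month arithmetic above can be replayed directly (we use the exact rate 50/60 rather than the rounded $0.83/minute):

```python
rate = 50 / 60                    # loaded HR cost per minute (~$0.83)
baseline = 150 * 12 * rate        # all 150 questions at 12 min → $1,500/month
after = 45 * 3 * rate             # the 30% still reviewed, 3 min each → ~$112
print(round(baseline - after))    # → 1388 (monthly saving, $)
```

The 105 fully AI-handled questions contribute zero HR minutes, which is why they drop out of the "after" line entirely.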
Worked example 2: Deflection rate (AI customer support chatbot)
Setup: e-commerce store, ~3,000 support tickets/month at average $4 cost per human-handled ticket (loaded support agent + tooling).
Baseline cost: 3,000 × $4 = $12,000/month.
After AI deployment: AI deflects 55% of tickets (tier-1 questions about shipping, returns, sizing, order status). Cost per AI-handled ticket: $0.04 (mostly tokens + a small share of platform fees). The 45% that escalate cost $4 + a small overhead from the AI's failed attempt = $4.20.
Math after: 1,650 tickets × $0.04 (AI) + 1,350 × $4.20 (human) = $66 + $5,670 = $5,736/month.
Savings: $12,000 − $5,736 = $6,264/month, or ~$75,000/year. Build cost was $25,000, ongoing run cost is in the $100–$300/month range plus occasional prompt iteration. Year-1 ROI: ($75,000 − $25,000 − $3,000) / $25,000 = +188%.
Deflection rate is the cleanest metric for support automation specifically because the unit economics are clear: every deflected ticket is a directly attributable dollar saved. The trap is over-claiming deflection — make sure your "deflection" metric requires the customer to actually finish the conversation without a human, not just "the AI replied first."
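Under the example's numbers, the deflection math checks out as follows (the $0.20 escalation overhead from the AI's failed attempt is included):

```python
total = 3000
deflected = round(total * 0.55)                 # 1,650 tickets fully AI-handled
after = deflected * 0.04 + (total - deflected) * 4.20   # AI cost + escalations
baseline = total * 4.00                         # $12,000/month all-human
print(round(baseline - after))                  # → 6264 (monthly saving, $)
```

Note the escalated tickets cost slightly *more* than baseline ($4.20 vs $4.00); an honest model charges the failed AI attempt against the savings rather than ignoring it.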
Worked example 3: Conversion lift (AI lead qualification)
Setup: real-estate agency receiving 800 web leads/month. Baseline: 25% of leads convert to a booked viewing (200 viewings/month). Average closed-deal value: $40,000 in commission. Baseline conversion rate from booked viewing to closed deal: 5% (10 deals/month).
After AI deployment: AI calls every lead within 60 seconds. Booked-viewing rate rises to 35% (280 viewings/month, +80 viewings). Conversion from viewing to deal stays at 5% (14 deals/month, +4 deals).
Revenue impact: 4 additional deals × $40,000 = $160,000/month, or $1.92M/year.
Cost: Build cost was $40,000, run cost is ~$500/month in voice-agent runtime plus $300/month in observability and CRM integration.
Year-1 ROI: ($1,920,000 − $40,000 − $9,600) / $40,000 = +4,676%.
Conversion-lift use cases are where AI ROI gets eye-watering, because the conversion metric multiplies through high-value transactions. The trap: be honest about whether the lift is real. Run a holdout group — half the leads get the AI, half get the baseline process — for at least four weeks before claiming the lift number is causal.
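A sketch of the lift math, framed as the holdout comparison recommended above (rates are from the example; in practice the control rate would come from the non-AI half of the leads, not from a pre-launch baseline):

```python
leads = 800
ai_rate, control_rate = 0.35, 0.25   # booked-viewing rate: AI half vs holdout
view_to_deal = 0.05                  # unchanged by the AI in this example
commission = 40_000

extra_deals = (ai_rate - control_rate) * leads * view_to_deal   # ≈ 4/month
print(round(extra_deals * commission))   # → 160000 (monthly revenue impact, $)
```

With only ~4 extra deals a month, the deal count itself is noisy; the four-week minimum for the holdout exists because small absolute counts swing the claimed lift wildly from week to week.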
Worked example 4: Error reduction (AI document extraction for finance)
Setup: accounting firm processing 2,000 invoices/month. Baseline manual data-entry error rate: 4% (80 errors/month). Average cost per error caught downstream (reconciliation, customer-facing correction, time): $80. Average cost per error caught after a customer raises it: $400.
Baseline cost of errors: 80 errors × ~$120 weighted average cost per error (roughly 7 in 8 errors caught internally at $80, 1 in 8 reaching a customer at $400) = $9,600/month.
After AI deployment: AI-assisted extraction with mandatory human spot-check reduces error rate to 0.8% (16 errors/month). Cost per error is unchanged.
After cost of errors: 16 × $120 = $1,920/month.
Savings: $9,600 − $1,920 = $7,680/month, or ~$92,000/year. Build cost: $35,000. Run cost: ~$200/month. Year-1 ROI: ($92,000 − $35,000 − $2,400) / $35,000 = +156%.
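The error-reduction math, including the mix of error costs behind the $120 weighted average (the 7-to-1 internal-to-customer split is our inference from the two stated costs; substitute your own observed split):

```python
# 7 of 8 errors caught internally ($80), 1 of 8 raised by a customer ($400):
weighted_cost = (7 * 80 + 1 * 400) / 8          # = $120 per error
baseline_errors = 2000 * 0.04                   # 80 errors/month
after_errors = 2000 * 0.008                     # 16 errors/month
print(round((baseline_errors - after_errors) * weighted_cost))  # → 7680
```

The weighted cost is worth computing explicitly, because the customer-facing share dominates the average: shifting the split from 1-in-8 to 1-in-4 pushes the weighted cost from $120 to $160 and changes the ROI claim materially.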