
If Your AI Cannot Prove ROI, Kill It

Most companies do not have an AI adoption problem. They have an accountability problem dressed up as innovation. If the agent cannot show the number, shut it down.

May 11, 2026 · 8 min read · By Jesse Alton

Kill the AI project.

Not pause it. Not rebrand it. Not move it into a center of excellence so it can die slowly in a Confluence page.

Kill it.

If your agent saves time, reduces tickets, closes revenue, cuts risk, improves throughput, or prevents loss, show me the number. If it cannot, it is not strategy. It is a very expensive screensaver.

I am tired of AI deployments that survive because nobody has the guts to ask for the scoreboard.

The AI Problem Is Not Adoption

The board wants AI.

The CEO wants AI.

The CIO wants a governance deck.

The VP wants a demo.

The team wants a budget.

Everybody wants the story. Very few want the scoreboard.

That is the disease.

Newsweek covered the exact problem in “AI Impact: Companies Are Deploying AI. Few Can Prove It Works.” The core point is simple: enterprises are deploying AI, but a significant share of IT leaders say their organizations are not effectively measuring the return on those investments.

That should scare you more than model hallucinations.

A hallucination is a defect.

Unmeasured investment is a management failure.

Your AI Pilot Is Not Sacred

I have shipped software into real organizations for more than 15 years. Commercial. Government. Modernization work with nine-figure stakes. Weird legacy systems. Procurement traps. Compliance landmines. The works.

I have also watched teams turn technical curiosity into budget theater.

It always sounds reasonable:

  • “I need time to evaluate.”
  • “The organization is still learning.”
  • “The value is qualitative.”
  • “The model will improve.”
  • “The culture is not ready.”

Cool.

What changed?

Did handle time drop?

Did tickets close faster?

Did revenue move?

Did cycle time shrink?

Did risk go down?

Did the human team reclaim hours?

If the answer is vague, the project is vague. If the project is vague, the budget is exposed.

Everything is a business.

Even internal tools. Especially internal tools.

The Scoreboard Comes First

Most teams deploy AI backward.

They start with a model. Then a use case. Then a demo. Then a stakeholder showcase. Then a launch. Then six months later someone asks if it worked.

That is a shit recipe for success.

I start with the scoreboard.

Before I build an agent, I want the baseline:

  • Current volume
  • Current cost
  • Current error rate
  • Current cycle time
  • Current escalation rate
  • Current revenue leakage
  • Current compliance exposure
  • Current human time burned

Then I want the target.

Not “better.”

Better is fake.

I want:

  • 30% fewer tier-one tickets
  • 15 minutes saved per intake
  • 20% faster quote turnaround
  • 50% fewer manual routing errors
  • $10,000 per month in avoided cost
  • ROI inside 60 days

Now I have a game.

Now I can instrument it.

Now I can kill it if it fails.
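The baseline-then-target discipline above can be sketched as a simple check. Everything here is illustrative — the field names, the numbers, and the ticket example are invented for the sketch, not taken from any real deployment:

```python
from dataclasses import dataclass

@dataclass
class Scoreboard:
    """Baseline vs. target for one AI deployment (hypothetical fields)."""
    metric: str
    baseline: float       # measured before the agent shipped
    current: float        # measured at the review date
    target: float         # the committed number, not "better"
    monthly_cost: float   # what the agent costs to run

    def improvement(self) -> float:
        """Fractional change from baseline (positive = metric went down)."""
        return (self.baseline - self.current) / self.baseline

    def verdict(self) -> str:
        """Lower is better for this metric: hit the target or die."""
        return "scale it" if self.current <= self.target else "kill it"

# Hypothetical example: tier-one tickets, 30% reduction target
tickets = Scoreboard(metric="tier-one tickets/month",
                     baseline=1000, current=640, target=700,
                     monthly_cost=4000)
print(tickets.improvement())  # 0.36
print(tickets.verdict())      # scale it
```

The point of writing it down this way is that the verdict is mechanical: the baseline and target were committed before launch, so nobody gets to renegotiate what “worked” means at the review.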

GoTo Gets the Point

The Newsweek piece points to GoTo as an example of an operator talking about AI in business terms instead of magic terms.

In auto dealerships, incoming calls pull staff away from customers standing in front of them. That is not abstract. That is revenue friction. Through GoToConnect, AI agents handle a large portion of those interactions: scheduling appointments, answering routine questions, and routing complex issues.

That matters because the employee stays with the customer on the floor.

Rich Veldran, GoTo’s CEO, put it cleanly in Newsweek: “It’s not a cool tech demo. It’s a real operational shift.”

That is the unlock.

AI is not impressive because it talks.

AI is impressive when it removes drag from a business process people already care about.

The MIT Number Should Embarrass People

The best recent gut punch came from MIT’s Project NANDA report, The GenAI Divide: State of AI in Business 2025. Fortune reported that the study was based on 150 leader interviews, a survey of 350 employees, and analysis of 300 public AI deployments. The finding: about 5% of AI pilot programs achieved rapid revenue acceleration, while the vast majority delivered little to no measurable P&L impact (Fortune).

Do not turn that into a meme and move on.

Sit with it.

The models are powerful. The tooling is everywhere. The budget exists. The executive attention exists. The hype machine never sleeps.

Still, most programs do not move the business.

That tells me the bottleneck is not the model.

The bottleneck is management.

Bad process in. Bad process out.

No ownership in. No ROI out.

Adoption Metrics Are Coward Metrics

I do not care how many employees have logged into the AI tool.

I do not care how many prompts ran.

I do not care how many workshops happened.

I do not care that the team is “excited.”

Adoption is not impact.

Usage is not value.

Activity is not throughput.

If 700 people use a chatbot to produce worse emails faster, congratulations. You scaled mediocrity.

Real metrics look different:

  • Cost per resolved ticket
  • Revenue per rep
  • Claims processed per hour
  • Time from request to fulfillment
  • Defects per release
  • Manual touches per workflow
  • Escalations per customer segment
  • Analyst hours per investigation
  • Lead response time
  • Renewal risk detected before churn

Measure the job.

Not the toy.

Agents Make Accountability More Important

I build agents. I believe in agents. I am building Cadderly because intent recognition, MCP integration, and agent-to-agent coordination are real primitives for the next software stack.

That does not mean I give agents a free pass.

The more autonomous the system, the more aggressive the measurement has to be.

Agentic AI introduces new risk. CISA and international partners warned that agentic systems can expand attack surfaces, create privilege creep, introduce behavioral misalignment, and obscure event records in their guide on the secure adoption of agentic AI.

That is not a reason to stop.

It is a reason to instrument the hell out of it.

Every serious agent needs:

  • A named owner
  • A business KPI
  • A kill switch
  • Audit logs
  • Permission boundaries
  • Escalation paths
  • Cost tracking
  • Error tracking
  • Human override
  • Regular review

If an agent can act, it can cause damage.

If an agent can cause damage, it needs governance.

If governance cannot explain the ROI, it is theater.
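The governance checklist above can be made machine-checkable, so an agent with gaps never ships in the first place. This is a sketch with invented field names (`AgentContract`, `deployable`), not any particular governance framework:

```python
from dataclasses import dataclass

@dataclass
class AgentContract:
    """Minimum governance record for one agent (hypothetical fields)."""
    name: str
    owner: str            # a named human, not a team alias
    business_kpi: str     # the metric this agent must move
    kill_switch: bool     # can it be shut off immediately?
    audit_logging: bool   # are its actions recorded?
    review_date: str      # when the scoreboard gets checked

    def deployable(self) -> list[str]:
        """Return blocking issues; an empty list means cleared to ship."""
        issues = []
        if not self.owner:
            issues.append("no named owner")
        if not self.business_kpi:
            issues.append("no business KPI")
        if not self.kill_switch:
            issues.append("no kill switch")
        if not self.audit_logging:
            issues.append("no audit logs")
        return issues

# Hypothetical agent missing an owner and audit logs
bot = AgentContract(name="intake-router", owner="",
                    business_kpi="minutes per intake",
                    kill_switch=True, audit_logging=False,
                    review_date="2026-07-01")
print(bot.deployable())  # ['no named owner', 'no audit logs']
```

The contract does not make the agent safe by itself; it makes the gaps impossible to ignore, which is the whole argument.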

The Real Question Is Brutal

Here is the question I ask clients:

What would happen if I turned this AI system off tomorrow?

If nobody screams, it is not important.

If nobody’s numbers move, it is not important.

If nobody can tell whether it is gone, it is not important.

That is the cleanest test.

Real systems create dependency because they remove pain.

A support agent that deflects thousands of repetitive tickets creates dependency.

A documentation agent that cuts engineer onboarding from weeks to days creates dependency.

A routing agent that prevents high-value leads from rotting in a CRM creates dependency.

A compliance agent that catches missing evidence before an audit creates dependency.

A slide generator does not.

My Rule: No Metric, No Mandate

I want AI projects to earn the right to exist.

That means every AI deployment gets a simple contract:

  • What job does it do?
  • Who owns the outcome?
  • What metric moves?
  • What is the baseline?
  • What is the target?
  • What is the review date?
  • What happens if it misses?

The last question matters most.

Most organizations never define consequences.

That is why zombie pilots survive.

Nobody wants to admit the thing failed. Nobody wants to offend the executive sponsor. Nobody wants to unwind the vendor relationship. Nobody wants to say the quiet part out loud.

So the screensaver stays on.

The invoice keeps coming.

The team pretends the transformation is still transforming.

No.

Do your job or I will replace you.

That applies to people. It applies to vendors. It applies to agents. It applies to strategy.

Build Less. Measure More.

The next wave of AI winners will not be the companies with the most pilots.

They will be the companies with the tightest feedback loops.

They will know what each system costs. They will know what each system saves. They will know which workflows deserve automation and which ones deserve deletion. They will put AI where operational pain already has a price tag.

That is how this gets real.

Not with innovation theater.

Not with another internal prompt library.

Not with a chatbot named after a mascot.

With numbers.

With ownership.

With kill criteria.

With the courage to shut down what does not work.

Put the Scoreboard on the Wall

If you are running AI inside a company right now, do this today:

Pick one deployment.

Write down the metric it is supposed to move.

Find the baseline.

Find the current number.

Calculate the cost.

Name the owner.

Set the kill date.

If that feels uncomfortable, good. That discomfort is accountability entering the room.

If your AI cannot prove ROI, kill it.

If it can, scale it.

If you want help separating agents that move the business from expensive screensavers, reach out. Bring the workflow. Bring the numbers. I will bring the knife.

📍 Posted directly to jessealton.com
Jesse Alton

Founder of Virgent AI and AltonTech. Building the future of AI implementation, one project at a time.

@mrmetaverse
