The Fireworks Phase

Why agentic AI is still an explosion looking for an engine

By Jim Ford · Chief Architect, GAIN Credit · Founder, Equilateral AI
24 years managing workforce systems at ADP for 15 million employees

The Fireworks Phase: From Explosive Potential to Directed Work — Black Powder 1242, Internal Combustion 1876, Agentic AI 2024

Before the Engine

In 1242, Roger Bacon documented a formula for black powder. By 1300, Europeans were producing gunpowder in quantity. For the next four centuries, they used it almost exclusively to make things explode spectacularly—in cannons, in fireworks, in mining blasts. The explosion was the product.

The internal combustion engine didn't become practical until 1876, when Nikolaus Otto built the four-stroke. Six hundred years after the explosion was mastered, someone finally figured out how to put the explosion inside a controlled chamber, harness it directionally, and connect it to work.

The engine wasn't a better explosion. It was a governance layer around the explosion.

Same arc, different century: the first cars appeared on roads built for horses, governed by no traffic laws, with no licensing requirements, no insurance mandates, no liability frameworks. The vehicle technology preceded the control infrastructure by decades. Cars got faster. People died. Governance caught up slowly, painfully, and only after enough incidents made the cost of non-governance undeniable.

We are in the fireworks phase of agentic AI. Everyone is impressed by the explosion. Nobody has built the chamber yet.

What the Fireworks Phase Looks Like

The pattern is consistent across transformative technologies:

  1. Spectacle. The technology produces an impressive output. Crowds gather. Investors notice.
  2. Capability. The technology is refined for increasingly sophisticated outputs. Better fireworks. Bigger explosions.
  3. Controlled harnessing. Someone builds the chamber. The explosion becomes directed work.
  4. Governance infrastructure. Roads get paved. Traffic laws get written. Licenses get issued. Liability gets assigned.
  5. Regulatory normalization. The governance layer gets codified into law. Auditors arrive. Compliance becomes the floor.

We are deep in step 2. The AI industry is producing spectacular outputs and refining them at extraordinary velocity. GPT-4 to GPT-5. Claude 2 to Claude 4. Models that can write, reason, code, and now act autonomously on your behalf.

Step 3—the controlled chamber—is where almost nobody is building.

The Explosion Is Impressive. The Chamber Is the Product.

Here is the distinction that the engineering community keeps missing:

Models execute. Systems govern. Authority must live outside the model.

An AI model that can autonomously browse the web, write code, send emails, update records, and trigger financial transactions is not a governance system. It is an explosion. It is black powder in an open field. Impressive. Directionally unpredictable. Ungoverned.

The governance layer—the chamber around the explosion—is what determines whether the energy goes into work or into chaos. And that layer has almost nothing to do with the quality of the model inside it.

This is the insight that keeps getting lost in the capability benchmark cycle:

  • A bigger model does not eliminate the need for dispatch authority.
  • A smarter model does not generate its own audit trail.
  • A more capable model does not scope its own permissions.
  • Governance cannot depend on attention. It must depend on external constraint and recorded authority.

That last point is worth dwelling on. Every attempt to govern AI agents through better prompting, larger context windows, or more sophisticated instructions is betting that the model will pay attention to the right thing at the right moment. That is a probabilistic bet. Governance by attention fails in the exceptional case—which is exactly when governance matters most.

The Context vs. Attention Paradox

Larger context windows do not produce more reliable reasoning—they introduce more surface area for drift. Governance cannot depend on the model noticing the right constraint in a 200,000-token context window. The constraint must exist outside the model, enforced at the execution layer, independent of what the model chose to attend to.
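What "enforced at the execution layer" means can be made concrete. The sketch below is a minimal, hypothetical illustration, not a reference to any real product: a `ToolGateway` holds the authorized tool set outside the model, so a requested action is checked before it runs, regardless of what the model attended to in its context window. All names (`ToolGateway`, `ToolCall`, `read_record`) are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch: the allowed-tool set lives in the gateway,
# not in the prompt. Enforcement happens before execution and does
# not depend on the model "noticing" a constraint in its context.

@dataclass(frozen=True)
class ToolCall:
    tool: str
    args: dict

class ToolGateway:
    def __init__(self, allowed_tools: set):
        self.allowed_tools = allowed_tools  # external constraint

    def execute(self, call: ToolCall):
        # Checked at the execution layer, for every call,
        # whether or not the model attended to the rule.
        if call.tool not in self.allowed_tools:
            raise PermissionError(
                f"tool '{call.tool}' is outside the authorized scope")
        return self._dispatch(call)

    def _dispatch(self, call: ToolCall):
        # Stand-in for real tool handlers.
        handlers = {"read_record": lambda args: {"status": "ok", **args}}
        return handlers[call.tool](call.args)
```

The point of the sketch is where the check lives: a model that hallucinates, drifts, or ignores its instructions still cannot reach a tool the gateway never authorized.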

Roads and Traffic Laws Are Infrastructure, Not Features

When the automobile industry was in its fireworks phase, the argument against traffic laws was essentially: “Why regulate cars when the cars keep getting safer?” Better brakes. Better steering. Better visibility. Just make the cars better, and the safety problem solves itself.

That argument was wrong, and it was wrong for a structural reason: the safety problem was not a car problem. It was a system problem. Cars interacting with each other, with pedestrians, with road conditions, with driver behavior—no amount of improvement to the individual vehicle addresses the systemic coordination failure.

Traffic laws are not a constraint on good cars. They are the infrastructure that makes good cars useful at scale.

The same logic applies to AI governance. The argument against governance infrastructure is: “Why add governance overhead when the models keep getting better?” Better reasoning. Better refusal behavior. Better alignment.

That argument is wrong for the same structural reason: the governance problem is not a model problem. It is a system problem.

  • Which agent took which action, under whose authorization, at what time?
  • Did that agent's permissions expire when the task ended, or did they accumulate?
  • Can you prove to an auditor that the agent could not have acted outside its authorized scope?
  • When two agents hand off a task, where is the chain of custody?

No model improvement answers any of those questions. They require roads and traffic laws. They require infrastructure that exists outside the model, independent of the model, enforced before and after the model executes.
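To make "recorded by the infrastructure, not reported by the agent" concrete, here is a minimal hypothetical sketch of an independent audit trail. The mediation layer writes every entry itself, and each entry hashes the previous one, so an agent cannot omit or rewrite its own history and a handoff between agents leaves a verifiable chain. The class and field names are invented for illustration.

```python
import hashlib
import time
from dataclasses import dataclass, field

# Hypothetical sketch: an append-only log written by the mediation
# layer. Hash-chaining each entry to its predecessor gives a chain
# of custody an auditor can verify independently of the agents.

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, agent_id: str, action: str, authorized_by: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "agent_id": agent_id,          # which agent acted
            "action": action,              # what it did
            "authorized_by": authorized_by,  # under whose authority
            "ts": time.time(),             # when
            "prev": prev,                  # link to prior entry
        }
        entry["hash"] = hashlib.sha256(
            repr(sorted(entry.items())).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        # Any gap or rewrite breaks the link to the previous entry.
        prev = "genesis"
        for e in self.entries:
            if e["prev"] != prev:
                return False
            prev = e["hash"]
        return True
```

The record exists whether or not the agent chose to report it, which is exactly the property the four questions above demand.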

What the Chamber Looks Like

I spent 24 years at ADP managing workforce systems for 15 million employees. Every contractor who walked through the door got an identity badge, scoped permissions, and an audit trail. When the engagement ended, we revoked access. The contractor's skill level was irrelevant to the governance requirement—a highly capable contractor who inherited unconstrained access was more dangerous than an average contractor who couldn't act outside their authorization.

AI agents are digital contractors. They need the same chamber:

  1. Verified identity. Each agent instance has a unique, authenticated identity. Not the user who launched it—the agent itself.
  2. Scoped authorization. Capability tokens issued for specific tasks, with explicit boundaries that the security layer enforces before execution begins.
  3. Time-limited access. Permissions expire when the task ends. They do not accumulate across sessions.
  4. Independent audit trails. Every action recorded by the database mediation layer, not reported by the agent. The record exists whether or not the agent chose to report it.
  5. Instantaneous revocation. When the engagement ends, access ends. No residual permissions. No inherited credentials.
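The five requirements above can be sketched as a single token lifecycle. This is a hypothetical illustration in Python, not a real API: each "digital contractor" gets a unique identity, an explicit scope, a hard expiry, and an entry in a revocation set that the execution layer checks on every call. All names (`CapabilityToken`, `AuthorityService`) are invented for this sketch.

```python
import time
import uuid
from dataclasses import dataclass, field

# Hypothetical sketch of the "chamber": verified identity, scoped
# authorization, time-limited access, and instantaneous revocation.

@dataclass
class CapabilityToken:
    # Identity belongs to the agent instance, not the launching user.
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    scope: frozenset = frozenset()   # explicit task boundaries
    expires_at: float = 0.0          # hard expiry, no accumulation

class AuthorityService:
    def __init__(self):
        self.revoked = set()

    def issue(self, scope, ttl_seconds: float) -> CapabilityToken:
        return CapabilityToken(scope=frozenset(scope),
                               expires_at=time.time() + ttl_seconds)

    def revoke(self, token: CapabilityToken) -> None:
        # Instantaneous: no residual permissions, no inherited credentials.
        self.revoked.add(token.agent_id)

    def authorize(self, token: CapabilityToken, action: str) -> bool:
        if token.agent_id in self.revoked:
            return False
        if time.time() >= token.expires_at:
            return False  # permissions expired with the task
        return action in token.scope
```

Note that `authorize` is called by the infrastructure before every action; the agent's capability level never enters the decision, which is the contractor lesson in code form.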

This is not a constraint on capable AI. This is the infrastructure that makes capable AI safe to deploy at scale in regulated industries.

Where We Are in the Arc

The fireworks phase ends when the first major agentic AI incident creates undeniable regulatory and legal pressure. This is not speculation—it is the same arc that played out with automobiles, with industrial chemicals, with financial instruments, with internet data handling. Every technology that touches consequential systems eventually gets governed.

The question is not whether AI governance infrastructure gets built. The question is whether it gets built before or after the incidents that make it mandatory.

The organizations building governance infrastructure now are not pessimists about AI. They are architects who have read the history of every previous transformative technology. The explosion is real. The engine is coming. The roads and traffic laws are the work of this moment—not because the models aren't impressive, but because impressive explosions without chambers are fireworks, not engines.

You are not anti-AI if you build governance infrastructure. You are the person who invented the cylinder.