Let me take you back to 1996.
Your boss has just seen a demo of OLE—Object Linking and Embedding. Microsoft's vision of the future. You could embed a spreadsheet inside a Word document. The spreadsheet ran inside the document. You could double-click it and Excel would open right there, inside the page. It was genuinely impressive. It felt like the future.
So the industry did what it always does with impressive new technology: it promoted it from tool to platform. OLE became ActiveX. ActiveX became the automation container for an entire generation of enterprise software. Governance logic, business rules, workflow automation—all of it went inside the container because the container was where the capability was.
You know how that ended. Security nightmares. Brittle integrations. An attack surface so wide that “disable ActiveX” became standard IT advice for a decade. And when the container died, everything built inside it died with it.
Here's the thing: we learned nothing.
The Pattern Nobody Talks About
Every generation of technology produces one capability that is so impressive, so genuinely powerful, that the industry makes the same mistake: it promotes the capability from tool to container. It stops asking “what should this thing do?” and starts asking “what can we build inside this thing?”
This is not a technology problem. It's a cognitive problem. When something is sufficiently capable, it becomes very easy to believe it's capable of running itself. That the automation medium is also the right automation container. That the thing doing the work should also be the thing governing the work.
It never is. It has never been. And the cost of learning that lesson, per generation, is measured in billions of dollars and years of lost productivity.
Let me walk you through the full timeline, because the pattern is more consistent than you might think.
1991–2000: OLE, ActiveX, and the Document as Platform
The original sin. Microsoft's Component Object Model was genuinely revolutionary—the idea that software components could talk to each other, expose interfaces, be embedded and reused across applications. COM was a real architectural insight.
The mistake was deciding that the document was the right container for that capability. Automation logic went into the document because that's where users were. Business rules lived inside spreadsheets because Excel was where the business users spent their time. ActiveX controls ran inside Internet Explorer because the browser was where people were looking.
The container was chosen based on where the user was sitting, not on what the automation actually needed.
When the security implications became impossible to ignore—and they were catastrophic, arbitrary code execution from a web page, the stuff of IT nightmares—the container collapsed. Everything built inside it either died or became the undead: Excel macros that nobody can touch, Access databases running critical business processes that haven't been updated since 2003, VBA automation that the company has forgotten how to maintain but is terrified to replace.
That's the first lesson: when you choose the wrong container, you don't get a clean failure. You get a slow, expensive, ungovernable accumulation of technical debt that outlives everyone who made the original decision.
1996–2010: Flash and the Plugin as Platform
Flash was magic. Genuinely. In a world of static HTML and blinking <marquee> tags, Flash could animate, interact, stream video, build full applications. It was the most capable thing in the browser ecosystem by a country mile.
So the industry put everything inside it.
Not just animations. Entire applications. Business logic. User flows. Navigation. Accessibility—well, not accessibility, that was one of the first things to go. But everything else. If you wanted it to be interactive and impressive, you built it in Flash, because Flash was where the capability was.
The problem, again, was that the capability and the container got conflated. Flash was an extraordinary execution medium. It was a terrible governance container. There was no standard way to audit what Flash was doing. No way to make it accessible. No way to make it work when the plugin wasn't available. No way to make Google index it. No way to make it work on mobile.
When Steve Jobs refused to support Flash on iPhone in 2010, the writing was on the wall. When Adobe killed Flash in 2020, everything built inside Flash died. Not the companies that used Flash as a component inside a larger architecture—they had already migrated. The ones that died were the ones that had made Flash the container.
The container you choose will be deprecated. It is a question of when, not if. And if your business logic, your governance rules, your automation workflows live inside that container, they get deprecated with it.
2000–Present: Macros, RPA, and the Interface as Platform
These two belong together because they share the same root mistake, just at different levels of the stack.
Macros—VBA, Excel automation, Access—happened because the spreadsheet was where business users lived. Finance teams, operations teams, HR teams. The capability (automation) went where the users were (the spreadsheet). Result: thirty years later, critical business processes run on Excel files that nobody fully understands, that break silently, that have no audit trail, and that the business is terrified to touch. Ungoverned. Unmaintainable. Unkillable.
RPA—Robotic Process Automation—happened because the UI was where the work happened. If you need to move data between systems and you don't have API access, you automate the UI. Bots that click through screens like a very fast, very literal user. It was impressive. It automated real work.
The mistake: the UI became the container. Governance lived in the bot's screen-scraping logic. Business rules were encoded in coordinates and element selectors. When the UI changed—when the button moved three pixels, when the vendor updated their interface, when the company migrated to a new system—the automation broke. Catastrophically and silently, often at 2am on a Monday.
RPA vendors will tell you this has been solved. It hasn't. The fundamental problem is that you've made the interface the container, and interfaces are not stable enough to govern anything.
Both of these failed for the same reason as OLE and Flash: the automation container was chosen based on where the capability was most visible, not on where the governance needed to live.
2023–Present: The Intelligent Agent as Container
And here we are.
LLMs are genuinely extraordinary. The capability leap from GPT-2 to GPT-4 to Claude 3 to what exists today is real and significant and not fully appreciated even by people who use these systems daily. The things a well-prompted model can do would have seemed like science fiction fifteen years ago.
So the industry is doing what it always does.
It's promoting the capability to container.
The pitch goes like this: “The model is smart enough to understand the goal. Give it tools. Give it memory. Let it decide what to do next. The agent is the automation container.” Autonomous agents that plan their own execution. Multi-agent systems where agents negotiate with each other and decide between themselves what actions to take. Governance baked into the system prompt. Business rules in the context window. Authority wherever the model infers it should be.
This is OLE with a chat interface.
The mistakes are structurally identical:
The container is chosen because it's impressive, not because it's right. LLMs are the most impressive technology in the room. Therefore they become the container. The question “should the thing doing the work also be the thing governing the work?” never gets asked.
Governance gets embedded inside the container rather than sitting above it. System prompts are governance. CLAUDE.md files are governance. Instruction sets loaded into context windows are governance. All of it lives inside the execution environment of the thing being governed. You have put the rules inside the entity the rules are supposed to constrain.
The capability is articulate, which makes the mistake harder to see. This is the new wrinkle that makes this generation's mistake more dangerous than the prior ones. OLE objects didn't tell you they understood the goal. Flash didn't explain its reasoning. RPA bots didn't describe their intent. LLMs do all of these things, fluently, confidently, and convincingly. The model says “I understand the authorization requirements” and it sounds like governance. It isn't. It's the model predicting what governance sounds like.
When the container is compromised, so is the governance. A prompt injection attack doesn't just hijack the agent's action—it hijacks the governance layer, because the governance layer and the agent are the same thing. A signed payload from a compromised agent is still a signed payload. Cryptographic attestation of a bad decision is still a bad decision, just with better receipts.
The Fuel Is Not the Engine
Here's the frame that clarifies everything:
The model is the fuel. Not the engine. Not the chassis. The fuel.
Fuel is high-energy. Useful. Essential. Without it nothing moves. But fuel doesn't decide where the car goes. Fuel doesn't govern the route. Fuel doesn't check whether the driver is authorized to be behind the wheel. Fuel burns in the engine. The engine runs inside the chassis. The chassis operates within a system of rules—traffic laws, road design, the physical constraints of the vehicle—that exist entirely outside the fuel.
Nobody argues that because gasoline is the energy source, the governance should live in the gasoline.
But that's exactly what “the intelligent agent is the right automation container” argues. The energy source—the model—becomes the thing that decides, routes, and governs. The governance moves inside the fuel tank.
The right architecture is the same one that eventually worked for every prior era of this mistake:
Separate the execution medium from the orchestration layer.
Put authority outside the container.
Make the container replaceable.
The orchestration layer decides what runs and when. The governance layer—consequence tiers, authority hierarchies, intent capsules, audit trails—lives outside the model's execution context. The model does the work. It doesn't govern the work. Those are different jobs and they require different architectural layers.
The Flash Survivor Test
Here's the question to ask about any system you're building with AI agents today:
If the model gets deprecated tomorrow—if Anthropic discontinues this version of Claude, if OpenAI releases a breaking change, if your vendor gets acquired—does your governance survive?
Flash died in 2020. The companies that built their governance inside Flash lost it when Flash died. The companies that used Flash as a component inside a larger architecture had already migrated, because the architecture survived even when the component didn't.
Claude 4 will be replaced by Claude 5. GPT-4o will be deprecated. The model you're using today will not be the model you're using in three years. If your business rules, your authority hierarchy, your audit trail, your compliance posture live in the context window of the current model, they get deprecated with it.
If they live in an orchestration layer that sits above the model—that calls models as workers rather than trusting them as governors—they survive every model rotation and get stronger with each one, because each new model is a more capable worker inside a system that already knows how to govern it.
Why the Magic Beans Crowd Won't See It Coming
Every generation of this mistake had true believers who were certain this time was different. ActiveX developers who argued the security concerns were overblown. Flash developers who insisted the plugin model was the future of the web. RPA practitioners who said the brittleness was a training problem, not an architecture problem.
They weren't stupid. They were smart people who were very close to genuinely impressive technology and mistook impressiveness for architectural correctness. The capability was real. The container was wrong.
The LLM agent crowd is smart too. The capability is real—more real than any of the prior examples. The models are genuinely impressive in ways that OLE objects never were.
That's what makes the mistake harder to see. The more capable the technology, the more plausible it seems that it could be its own governor. The more articulate the model, the more convincingly it describes its own governance. The more autonomous the agent, the more it looks like the container is working.
Until it doesn't.
The failure mode for LLM-native governance isn't going to look like ActiveX failures. It's not going to be obvious. It's going to look like a model that was confidently wrong, that had all the right-sounding governance language in its context window, that produced a perfectly coherent audit trail of decisions that led systematically to a bad outcome. It's going to look like an agent that passed all the internal checks because the internal checks were written by the same system being checked.
It's going to look like a compliance event at 2am on a Monday. And someone is going to have to explain to the regulator why the governance lived inside the thing that failed.
The Resolution Is Older Than the Problem
The good news is that every prior era eventually found the right answer. Not by abandoning the impressive technology—we still use JavaScript, spreadsheets, even some macros. But by finding the right abstraction level for governance.
The resolution was always the same: pull the governance out of the container. Make the container a worker, not a decider. Put authority in a layer that the container cannot modify, cannot prompt-inject, cannot hallucinate away.
COM objects still exist. They're used as components inside architectures that govern them from the outside. Flash is gone but video and animation are everywhere, as components inside governed systems. Macros still exist in Excel but the companies that have gotten this right run them inside RPA-to-API migration layers with external audit trails, not as freestanding governance containers.
The model is not different. It's more impressive than all the prior examples. It's also, architecturally, a worker. A very capable, very articulate worker that does extraordinary things when it's given clear instructions within a system that governs it from the outside.
The automation container for the AI era is not the agent.
It's the governance layer the agent runs inside.
This is the architecture problem Equilateral was built to solve—governed multi-agent orchestration where authority lives outside the model, consequence tiers gate every action before execution, and the audit trail is immutable regardless of what the model does or says. The standards library is open at glidecoding.org. More on the architecture, the governance model, and the open standards at equilateral.ai.