Andrej Karpathy coined the term "vibe coding" to describe the new paradigm of natural language programming. You describe what you want; the AI generates it. Magical.
Except when it isn't.
In December 2024, Cursor published research showing that AI coding agents often get stuck in what they called a "tornado loop"—endlessly generating, validating, fixing, and resetting without making progress. The more complex the task, the tighter the spiral.
We saw the same patterns across hundreds of projects. So we built something different.
The Mechanics of Drift
Traditional AI coding follows a generate-then-validate workflow:
- Developer describes intent
- AI generates code
- Validation catches issues
- AI fixes issues
- Repeat until "done"
The problem? Each iteration can introduce drift. The AI makes assumptions. Context gets lost. By iteration five, you're validating code that's already far from your original architecture.
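The drift dynamic above can be sketched as a toy model. The function name and the per-fix increment are illustrative, not measurements:

```python
# Toy model of the generate-then-validate loop. "drift" stands in for how
# far the working code has wandered from the original architecture; the
# per-fix increment is an illustrative constant, not measured data.

def vibe_loop(iterations: int, drift_per_fix: float = 0.15) -> float:
    """Each fix resolves the reported issue but quietly adds drift."""
    drift = 0.0
    for _ in range(iterations):
        # generate -> validate -> fix: assumptions creep in, context is lost
        drift += drift_per_fix
    return drift

# By iteration five, accumulated drift is already substantial.
print(round(vibe_loop(5), 2))  # → 0.75
```

The point of the model: the loop converges on "passes validation," not on "matches the architecture," and those are different targets.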
This is governance chasing AI.
The tornado pattern: Generate → Validate → Fix → Reset → Generate → Validate...
Each loop moves further from correct. Eventually, you need a fresh start.
Glide Coding: Governance Leads AI
Glide Coding inverts the sequence. Instead of validating after generation, we inject architectural standards before the first token is produced.
| Vibe Coding | Glide Coding |
|---|---|
| ✕ Generate → Validate | ✓ Inject standards → Generate |
| ✕ Large correction cycles | ✓ Smaller correction cycles |
| ✕ Hope agents comply | ✓ 808 rules enforced in context |
| ✕ Governance chases AI | ✓ Governance leads AI |
| ✕ Periodic resets | ✓ Ship with confidence |
The result: the AI's first output is already close to correct. Iterations refine rather than repair.
First Output Distance
We measure "first output distance"—how far the AI's initial generation is from architecturally correct code.
Vibe coding: First output is often far from correct. Multiple correction cycles required.
Glide coding: First output is close to correct. Standards were present during generation, not applied after.
This single metric explains why governed development feels faster even though it requires more upfront configuration. You're not faster per iteration—you need fewer iterations.
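A back-of-envelope illustration of why first output distance dominates: if each iteration closes a fixed fraction of the remaining distance, a smaller starting distance means far fewer iterations to reach an acceptance threshold. All numbers here are assumed for illustration, not measured:

```python
def iterations_needed(first_output_distance: float,
                      improvement_per_iter: float = 0.5,
                      acceptable: float = 0.07) -> int:
    """Count iterations until the remaining distance from correct drops
    below `acceptable`, assuming each iteration halves (by default)
    whatever distance remains. Toy numbers, not measurements."""
    distance, n = first_output_distance, 0
    while distance > acceptable:
        distance *= (1 - improvement_per_iter)
        n += 1
    return n

print(iterations_needed(0.8))  # far from correct (vibe):  4 iterations
print(iterations_needed(0.1))  # close to correct (glide): 1 iteration
```

Same per-iteration speed in both cases; the starting point does all the work.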
Where Governance Lives
In vibe coding, governance happens at the end: linters, tests, code review, PR feedback. By then, the code exists. Fixing means rewriting.
In glide coding, governance happens at the start: standards are injected into the AI's context before generation. The code is born compliant.
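Concretely, "injected into the AI's context" just means the standards text is placed in the prompt ahead of the developer's request. A minimal sketch of that assembly step (the file layout and prompt wording are hypothetical stand-ins, not GlideCoding's actual API):

```python
from pathlib import Path

def build_context(intent: str, standards_dir: str) -> str:
    """Prepend standards to the developer's intent so the model sees
    the rules before it generates a single token."""
    # Concatenate every standards file found in the directory.
    standards = "\n\n".join(
        p.read_text() for p in sorted(Path(standards_dir).glob("*.yaml"))
    )
    return (
        "Follow these architectural standards when generating code:\n"
        f"{standards}\n\n"
        f"Task:\n{intent}"
    )

# The assembled prompt then goes to whatever assistant you use
# (Claude Code, Cursor, a raw API call). The key point is the order:
# standards first, generation second.
```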
Vibe Coding Flow:
Human intent → AI generation → Validation & governance (too late)
Glide Coding Flow:
Human intent → Governance + standards → AI generation (already correct)
Drift Accumulation Over Time
Codebase alignment degrades differently in each model:
Vibe coding: Starts high, drops steadily. Each session introduces drift. Eventually requires "fresh start" to recover alignment.
Glide coding: Maintains consistent alignment. Standards persist across sessions. No periodic resets needed.
The longer your project runs, the more glide coding's advantage compounds.
The Open Source Stack
GlideCoding is built entirely on open-source components. No vendor lock-in. No magic boxes.
The Four Repositories
EquilateralAgents-Open-Standards
The Fuel. 62 YAML standards across 11 categories. 808 rules covering serverless, security, frontend, multi-agent orchestration, and more. Fork and customize for your organization.
project-object
The Injector. Scans your project structure and injects only the relevant standards into AI context. No manual configuration—it detects what matters.
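Conceptually, the injector maps observed project artifacts to standards categories. A toy sketch of that detection step (the marker files and category names below are illustrative, not project-object's real rule set):

```python
from pathlib import Path

# Illustrative marker-file -> standards-category mapping;
# project-object's actual detection is richer than this.
MARKERS = {
    "package.json": "frontend",
    "serverless.yml": "serverless",
    "Dockerfile": "containers",
    "requirements.txt": "python",
}

def relevant_categories(project_root: str) -> set[str]:
    """Return only the standards categories matching what the project
    actually contains, keeping the injected AI context small."""
    root = Path(project_root)
    return {cat for marker, cat in MARKERS.items() if (root / marker).exists()}
```

Selecting standards this way, rather than dumping all 808 rules into every prompt, is what keeps the context relevant instead of noisy.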
equilateral-agents-open-core
The Engine. 22 specialized agents, hooks, and governance infrastructure. Claude Code compatible. Run locally or extend with your own agents.
EquilateralAgents-Community-Standards
Community Fuel. Additional standards contributed by the community. Specialized domains, framework-specific patterns, and niche use cases.
How They Work Together
- Clone the repositories into your project (or use the symlink pattern)
- project-object scans your codebase and identifies relevant standards
- Standards are injected into your AI assistant's context (Claude Code, Cursor, etc.)
- AI generates code that's already aligned with your architecture
- Hooks validate at commit time (optional but recommended)
Total setup: clone, symlink, code. The governance happens automatically.
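The symlink pattern mentioned above can be as simple as pointing a directory inside each project at one shared clone of the standards repo. A sketch, with illustrative paths (the `.standards` link name is an assumption, not a GlideCoding convention):

```python
from pathlib import Path

def link_standards(project_root: str, standards_clone: str) -> Path:
    """Symlink a shared standards clone into the project, so every
    project picks up standards updates from a single place."""
    link = Path(project_root) / ".standards"  # illustrative link name
    if not link.exists():
        link.symlink_to(Path(standards_clone).resolve(),
                        target_is_directory=True)
    return link
```

One clone, many projects: updating the standards repo updates every project that links to it.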
Getting Started
The fastest path to governed AI development:
- Star the repos — support the open-source ecosystem
- Clone Open-Standards — browse 62 production-tested standards
- Try project-object — see automatic context injection in action
- Read the methodology — glidecoding.org has the full manifesto
Or jump straight to glidecoding.com to understand the philosophy.
The tornado loop is optional. You can choose to glide instead.