
Scenario: Text is the new Code (2026.4)

The fourth write-up on my 2026 predictions arrives fashionably late — though the phenomenon itself has been right on schedule. The previous predictions can be found via this link.

I haven’t written even a single “for loop” in 2025.

In the past, building a new feature for Pelilauta[1] or any of my many projects meant opening VS Code and eventually searching online for loop syntax in TypeScript, Svelte[2], or Astro—each with its own quirks and conventions for even the most basic programming patterns.

These days, I handle everything in Markdown. Even better, I simply describe what’s needed and refine the Markdown wherever the language model misses the mark.

I outline the architecture as a list, define data structures in YAML, and map out logic flows in Mermaid diagrams. The pipeline described in link-to-asdlc-post — an orchestrated chain of AI agents that turns specifications into running software — handles the implementation.
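To make this concrete, here is the kind of YAML data-structure sketch I mean: a hypothetical fragment, with every entity and field name invented for illustration rather than taken from a real project.

```yaml
# Hypothetical spec fragment: a data structure described in YAML
# instead of TypeScript. All names are illustrative only.
entities:
  Thread:
    fields:
      id: string          # unique identifier
      title: string
      createdAt: timestamp
      replyCount: integer
    constraints:
      - title must be between 1 and 120 characters
      - replyCount defaults to 0
```

A fragment like this costs a handful of tokens, yet pins down names, types, and invariants that prose would leave ambiguous.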

For decades, we told people that “learning to code” was all about mastering syntax—semicolons, brackets, and memory management. But that was never the real challenge. Syntax was simply the price of admission to communicate with machines. Now that machines understand[3] human language, that barrier has finally disappeared.

Yet there’s a twist: while the machine now speaks English, it still thinks in structure.

My fourth prediction for 2026: The primary “programming languages” of the AI era won’t be last decade’s scripting languages, but instead the structural formats we once considered mere documentation—Markdown, YAML, Gherkin, and Mermaid among them.

The Economics of Tokens

Why use structured formats rather than plain English? It comes down to determinism and context efficiency. While LLMs have impressive context windows, both their memory and their “attention” are fundamentally limited. Feed an AI ten pages of wandering prose to describe a software system, and it’s likely to hallucinate or miss edge cases[4]. But present it with a precise YAML definition and a Mermaid diagram, and you give it the constraints it needs to reason reliably.

Markdown and YAML, in particular, strike a unique balance. They’re fast to write—even on a napkin or in a phone note—often easier than writing Python. They offer just enough structure to keep the AI anchored while remaining flexible enough to capture even complex, abstract ideas. At the same time, they are highly token-efficient, stripping away the unnecessary fluff of conversational language and maximizing the value of every word.
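To illustrate the token argument with a deliberately tiny, made-up requirement: “users can upload an avatar of at most 2 MB, as PNG or JPEG, resized to 256 pixels” becomes a few unambiguous lines (the key names below are invented for this sketch):

```yaml
# Hypothetical requirement expressed as YAML (names invented)
avatar_upload:
  max_size_mb: 2
  allowed_formats: [png, jpeg]
  resize_to_px: 256
```

The same information, in fewer tokens, with no room for the model to misread which number applies to which limit.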

Now, a reasonable objection: isn’t plain English prompting getting good enough to make structured formats unnecessary? After all, models improve every quarter[5]. But this misses the point. The issue isn’t whether the model can interpret prose — it’s whether it will do so consistently. Structure isn’t a crutch for weak models; it’s a constraint that makes strong models reliable. The better the model gets, the more you want to feed it precise specifications rather than ambiguous paragraphs.

The Road to Spec-Driven Development

Gherkin’s[6] resurgence marked a turning point for me. My “a-ha” moment came when I used a Gherkin-style example to clarify an edge case for a model that kept going off track. The improvement was so dramatic that I soon found myself adopting full behavior-driven development (BDD) definitions wherever possible.

The Given-When-Then syntax at the heart of BDD isn’t just for testing. It’s a highly token-efficient and precise way to instruct AI on how a scenario should work.

For example, if you’re orchestrating an Agent to analyze a CSV and email outliers, you don’t want to spell out the logic in long paragraphs. Instead, you define the behavior:

```gherkin
Given the sales data from Q3
  When a transaction exceeds €5,000
  Then flag it as an anomaly and generate a summary report
```

This isn’t just a test case anymore—it’s the source code. The Agent interprets these logical constraints and implements the necessary code behind the scenes. In this new paradigm, we’re shifting from imperative programming — telling the computer exactly what steps to execute — to a declarative approach that expresses what we want, leaving the how to the machine. Structural languages are the bridge between human intent and machine execution.
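Staying with the hypothetical CSV scenario, the orchestration around that Gherkin can remain declarative as well. A sketch, with every file name and key invented for illustration:

```yaml
# Hypothetical agent pipeline definition (all names illustrative)
pipeline:
  input:
    source: sales_q3.csv
  behavior: specs/anomaly_flagging.feature  # the Given-When-Then scenario above
  outputs:
    - summary_report.md
    - email_to: finance-team                # alias resolved by the agent harness
```

The spec says what flows where; the agent decides how to parse, filter, and send.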

The Playbook

This shift may seem subtle, but it fundamentally redefines the skill set needed to build software or manage workflows. Mastering Rust is no longer essential; what matters now is the ability to structure logic clearly and effectively.

The key insight is where these specifications live. Not in Confluence. Not in a .docx emailed around for comments. In the repository, version-controlled alongside the code they describe. The Markdown spec is the source — it’s what the agent reads, what gets reviewed in pull requests, and what stays in sync with the implementation because it is the implementation’s input.

Projects now include files like agents.md — dedicated directive files that tell the AI how to work with the codebase. And these directives work best when they use the same structural formats: YAML for configuration, Gherkin for behavior rules, Mermaid for flow. Schema definitions, behavior specs, architecture documents — all in the repo, all part of the same version history as the code they produce.
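As a sketch of what such directives might contain — an invented example, not a real agents.md excerpt, and every path and tool name below is an assumption:

```yaml
# Hypothetical directives to accompany agents.md (illustrative only)
conventions:
  language: TypeScript
  tests: run before every commit
rules:
  - specs under /specs are the source of truth
  - every behavior change updates the matching .feature file
  - architecture diagrams live in /docs as Mermaid
```

The point is not the exact keys but the habit: directives in the same structural formats, versioned next to the code they govern.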

Documentation is turning into source code, and that source code is what gets built and deployed.

Footnotes

  1. https://pelilauta.social is a social platform for tabletop role-playing games.

  2. To be fair, I’m a pretty recent Svelte convert. I still have some Vue, Solid, and Lit components I need to touch from time to time, so the situation is even worse than described above.

  3. Yeah, a stochastic semblance of understanding is not really understanding. But from a human perspective, it’s often close enough to get the job done.

  4. Curiously, with current frontier models, generating extra, unrequested features seems to be a bigger issue than in-feature drift or missing details.

  5. Ilmari Koskinen raised a good point on this: will the model/agent harness providers optimize the tooling to the point where plain English is enough? I tend to disagree, as models, however pre-prompted, “regress toward the mean” — so relying on the provider to optimize the tooling for you means deciding that excellence is not worth chasing.

  6. In all honesty, I do think that traditional, pre-LLM-era BDD was largely a futile exercise in over-engineering. Obviously, there are cases where it’s valuable enough to be worth the effort, but for most teams it was more of a fashion statement than a practical approach.