Steve Reaser
May 11, 2026

Design OS: The Step I Was Skipping With AI

I built a real app with it. The specs were rewritten three times before a line of code was touched, the design never went stale, and scope stayed tight. Here's what actually happened.

Design OS — The missing design process between your idea and your codebase

I’ve been building with AI coding agents for a few months now. Claude Code, Cursor, the whole ecosystem. And I kept feeling like something was missing.

I’d describe what I wanted. The agent would build something. It was functional, but it wasn’t quite right. Features were half-baked. I’d spend the next session redirecting and patching — and then sometimes it would seem to forget decisions we’d made, or even the whole initial prompt.

Turns out something was missing.

The actual problem

Here’s what I finally realized: when you hand a coding agent a partially developed idea, you’re asking it to do two things at once. Figure out what to build and build it. Simultaneously. It’ll try, but it’s less than ideal.

Design decisions get made on the fly, buried in code, impossible to untangle later. There’s no spec. No shared picture of what “success” looks like.

Design OS, a free open-source tool from Brian Casel at Builder Methods, solves this. The tagline: “The missing design process between your idea and your codebase.”

What it actually does

Design OS is a structured process — not just a prompt template, a real workflow — that runs before you write a single line of code. It walks you through four stages:

  1. Product Planning — define your vision, map your roadmap, model your data shape
  2. Design System — pick your colors, typography, overall shell
  3. Section Design — for each feature area, specify requirements, generate sample data, sketch the screens
  4. Export — package everything into a handoff document your coding agent can actually use

Each stage is a conversation. The AI (it works with Claude, Cursor, Copilot — whatever you’re using) asks you questions. You answer. Together you make decisions before implementation begins. By the end you have a spec, sample data, TypeScript types, and a design direction. Your coding agent has something concrete to build against.
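To make that handoff concrete, here's a hypothetical sketch of the kind of TypeScript types such an export might pin down for a task-tracking feature. The names and shape are my own illustration, not Design OS's actual output format:

```typescript
// Illustrative only: a data shape a spec might lock in up front, so the
// coding agent builds against agreed-upon fields instead of inventing
// them mid-implementation.
type TaskStatus = "todo" | "in_progress" | "done" | "deferred";

interface Task {
  id: string;
  title: string;
  status: TaskStatus;
  dueDate?: string; // ISO 8601 date; optional — some tasks are undated
}

// A rule the spec might make explicit: a task is overdue only when it
// has a due date in the past and isn't finished.
function isOverdue(task: Task, now: Date = new Date()): boolean {
  if (!task.dueDate || task.status === "done") return false;
  return new Date(task.dueDate) < now;
}
```

The point isn't the code itself — it's that decisions like "what counts as overdue" get made in the spec conversation, not improvised deep inside a component.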

Who this is for

I’d say two groups:

If you’re a developer using AI to build faster, this is the layer between your product ideas and your codebase that you’ve probably been trying to improvise on the fly.

If you’re a non-technical founder who knows what you want to build but can’t get AI to build it correctly — this is the answer. You’re not bad at prompting. You’re just missing the planning stage.

Design OS gives you a structured way to articulate your vision so the coding agent is executing against your decisions, not making them for you.

What it looked like in practice

I ran a full app through this process — a personal productivity tool I’ve been building called Adulting. Here’s what I actually noticed.

Specs were cheap to change. Code was expensive. Before writing a line of code, I tweaked the spec for one of the core features three times. We debated the flow, what data gets saved, how the UI should handle different states. All of that recorded by Claude.

The specs became a living source of truth. Throughout the whole build, Claude kept opening the spec files to confirm intent. When we changed our minds, Claude updated the spec first, then the code. The usual problem with documentation is that it goes stale instantly. With Design OS, the specs live in the same repo as the code, right next to each other. Drift is visible and fast to catch.

Sample data made the designs honest. Each feature section came with realistic sample data — long titles, partial states, edge cases, things that were overdue or half-finished. You can’t accidentally design only for the happy path when the data shows you the messy paths automatically. This was subtle but it made a real difference in how things felt once we started building.
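To show what I mean, here's a hypothetical slice of that kind of sample data — deliberately messy, with the names and fields invented for illustration rather than taken from the tool's output:

```typescript
// Illustrative sample data in the Design OS spirit: edge cases baked in,
// so screens get designed against the messy paths, not just the happy one.
interface SampleTask {
  title: string;
  status: "todo" | "in_progress" | "done";
  dueDate?: string; // ISO 8601 date; omitted for undated tasks
}

const sampleTasks: SampleTask[] = [
  // Happy path: short title, dated, in progress
  { title: "Pay water bill", status: "todo", dueDate: "2025-01-03" },
  // Long title, no due date — does the card truncate gracefully?
  {
    title:
      "Schedule HVAC maintenance and compare at least three quotes before fall",
    status: "in_progress",
  },
  // Completed item — how does a done state render in the list?
  { title: "File taxes", status: "done", dueDate: "2025-04-15" },
  // Empty title — what does the UI show for missing data?
  { title: "", status: "todo" },
];
```

Designing against a list like this forces the truncation, empty-state, and done-state questions to surface before implementation, which is exactly the effect I saw.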

“Out of scope” was a feature, not a footnote. Every section spec ended with a deliberate list of what we were not building. Multi-select, calendar sync, certain automations — all explicitly deferred. Naming what you’re not building turns out to be one of the better scope-control tools I’ve found.

The design tokens paid compound interest. Picking the color palette and typography on day one meant everything built afterward felt cohesive without anyone enforcing it. No “reconcile the palette” session at the end.

It made picking up mid-project seamless. Adulting is a side project — I work on it in pockets of time. Without the specs, every session would start with “okay, where was I?” Instead Claude always knows where we are in the process and can help me jump right back in.

One small moment that made me literally laugh out loud: weeks earlier, when I first dictated the project brief, I said something like “TurboTax-style interviews — ‘Do you own a home? How old is your roof?’” as an example of how I wanted one major section to feel. Weeks later, building the landing page, Claude plucked that exact phrase out of the initial brief and dropped it into the copy, word for word.

(Got a weird look from the wife.)

That’s what good structured context does. Nothing gets lost.

The tool is free and open source on GitHub. You clone it, run it locally, and work through the process conversationally. No SaaS signup. No paywall.

Worth checking out if…

  • You’ve felt frustrated that AI builds the wrong thing even when you described it well
  • You’ve spent more time redirecting AI output than building on it
  • You want to maintain creative control over what gets built — not just approve what the agent decided

GitHub: buildermethods/design-os


Steve Reaser helps small businesses put AI to work. He writes about what’s actually working, and what isn’t. Watch me learn and build in public at SteveReaser.com