
I'm a Solo Dev Shipping a SaaS With an AI Agent. Here's My Exact Workflow.

No co-founder. No team. No budget for contractors. Just me, Claude Code, and a structured workflow that turned chaotic AI sessions into a predictable shipping machine. This is how I build features from scratch, maintain code quality, and avoid the rework trap.

The Solo Developer's Dilemma

When you're a solo developer building a SaaS, you wear every hat: product manager, designer, architect, developer, QA, DevOps. AI coding agents promised to be the force multiplier — the thing that lets one person ship like a team of five.

And they are. Sort of. The problem is that without structure, AI agents amplify your bad decisions as fast as your good ones. When there's no tech lead to question your architecture, no code reviewer to catch scope creep, and no senior engineer to say “that approach doesn't scale” — the AI happily builds exactly what you asked for, whether it was the right thing or not.

I wasted three full weekends on rework before I figured out a process that works. Here it is.

The Stack

My SaaS is a project management tool for small agencies. The stack is Next.js 15 (App Router), tRPC, Drizzle ORM, Postgres, and Tailwind. Deployed on Vercel with a Neon database. Nothing exotic — just a standard production stack.

The AI agent is Claude Code. The workflow framework is Archie. But the principles here apply regardless of your stack or your agent.

Scenario 1: Building a Feature From Scratch

Last week, I needed to add time tracking to my project management app. Users should be able to log hours against tasks, and project owners should see a summary dashboard. Here's how it went.

Step 1: Design (5 minutes)

I run /architect and describe the feature: “Users need to log time against tasks. Entries have a duration, optional description, and date. Project owners see a dashboard with total hours per team member and per task.”

The agent — in Architect mode — doesn't write code. Instead, it produces a design. It identifies the affected components: a new time_entries table, three new tRPC procedures, two new UI components, and a dashboard view. It flags a risk: the summary query could be slow on large projects and recommends a materialized view or cached aggregation for v2. It explicitly scopes out: no billing integration, no approval workflow, no Toggl/Clockify import.

I read the design. It's solid. I would have forgotten to scope out billing integration, which means the agent would probably have tried to build it. I say “approved.”

Step 2: Task Breakdown (2 minutes)

I run /tech-lead. The agent reads the approved design and produces 4 tasks:

T-012: Database migration + Drizzle schema for time_entries

T-013: tRPC procedures (create, list, delete time entries)

T-014: Time entry UI — log hours modal + task-level list

T-015: Project time dashboard — hours by member and task

Each task lists the exact files it will touch, the libraries it needs, and a clear “done when” statement like “user can create a time entry from the task detail page and see it in the task's time log.” I approve, and the tasks land in my backlog.

Step 3: Implement One Task at a Time (8-15 min each)

I run /dev-agent. The agent picks up T-012, creates a feature branch, writes the migration and schema, commits with “feat(T-012): add time_entries table and Drizzle schema”, and opens a PR. I review: 4 files touched, clean migration, follows my existing Drizzle patterns. Merge.
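For flavor, here's roughly what a T-012 schema could look like in Drizzle's pg-core dialect. This is a hypothetical sketch, not the app's real code: the column names follow the design (duration, optional description, date), while the foreign-key targets and helper choices are my assumptions.

```typescript
import { pgTable, uuid, integer, text, date, timestamp } from "drizzle-orm/pg-core";

// Hypothetical time_entries schema. Foreign keys to tasks/users are
// assumed; in the real app they'd reference the existing tables.
export const timeEntries = pgTable("time_entries", {
  id: uuid("id").primaryKey().defaultRandom(),
  taskId: uuid("task_id").notNull(),               // FK to tasks in practice
  userId: uuid("user_id").notNull(),               // FK to users in practice
  durationMinutes: integer("duration_minutes").notNull(),
  description: text("description"),                // optional per the design
  entryDate: date("entry_date").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```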

I run /dev-agent again. It picks up T-013. Same process: branch, implement, commit, PR. This one takes a bit longer because it writes validation logic and error handling. But the PR is 7 files, all focused on the API layer. Merge.
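The validation in T-013 is the kind of thing that's easy to review when it's its own focused diff. A minimal self-contained sketch of what that layer might check — the field names come from the design, but the function and error shape are illustrative, not the actual PR:

```typescript
// Hypothetical input validation for creating a time entry.
type TimeEntryInput = {
  taskId: string;
  durationMinutes: number;
  description?: string;
  entryDate: string; // ISO date, e.g. "2024-06-01"
};

function validateTimeEntry(input: TimeEntryInput): string[] {
  const errors: string[] = [];
  if (!input.taskId) errors.push("taskId is required");
  if (!Number.isInteger(input.durationMinutes) || input.durationMinutes <= 0) {
    errors.push("durationMinutes must be a positive integer");
  }
  if (Number.isNaN(Date.parse(input.entryDate))) {
    errors.push("entryDate must be a valid date");
  }
  return errors; // empty array means the input is valid
}
```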

By lunch, all 4 tasks are merged. Total time: about 50 minutes, including my review time. And every PR was small enough to actually review — not a 400-line monster I'd rubber-stamp out of exhaustion.


Scenario 2: The Quick Fix That Isn't Quick

A user reports that project names with special characters break the URL slug generation. Sounds like a 5-minute fix, right?

Without structure: I would have told Claude “fix the slug generation for special characters.” It would have fixed the slug function, but also restructured the routing logic, added URL encoding to three other places that didn't need it, and changed the slug column type in the database. I'd spend 30 minutes figuring out what it changed and why.

With structure: I use /quick — the fast-track workflow for small features. It does design and task breakdown in one pass. The agent identifies: the generateSlug() function in lib/utils.ts doesn't handle unicode. Fix: replace the regex with a proper slugify approach. One task. Three files (util, test, one call site that needs the updated behavior). I approve, the agent implements, opens a PR with 3 file changes. Done in 8 minutes.
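To make the bug concrete: a naive slug regex strips anything non-ASCII, so “Café Über” collapses to “caf-ber”. A minimal sketch of the kind of fix the agent proposed — normalizing accents before slugifying. This is my illustration of the approach, not the actual diff from lib/utils.ts:

```typescript
// Hypothetical fixed generateSlug(): handle accented characters by
// decomposing them (NFKD) and dropping the combining marks, instead
// of deleting the whole character.
export function generateSlug(name: string): string {
  return name
    .normalize("NFKD")                 // "é" -> "e" + combining accent
    .replace(/[\u0300-\u036f]/g, "")   // strip the combining marks
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")       // collapse non-alphanumerics to "-"
    .replace(/^-+|-+$/g, "");          // trim leading/trailing dashes
}
```

So “Café Über!” becomes “cafe-uber” instead of “caf-ber”.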

The key difference: the agent told me its plan before it started. It didn't silently restructure the routing. It didn't touch the database. It stayed in scope because the scope was defined before any code was written.

Scenario 3: Picking Up Where I Left Off on Monday

Friday afternoon, I was halfway through the notification epic. Two tasks done, three remaining. I close my laptop and go live my life.

Monday morning, I open Claude Code. The agent reads my memory files and immediately has the full context: the notification epic design (approved), the task breakdown (5 tasks, 2 completed, 3 remaining), my architecture and conventions, and the decision log that says “we chose Resend over SendGrid because of the simpler API.”

I run /status to get a quick read on where things stand. Then I run /dev-agent and it picks up the next task. No re-explaining. No “can you remind me what we were working on?” No 15-minute context reload. Just “what's next?”

This is the part that changed my daily experience the most. The persistent memory means Monday morning feels like Friday afternoon plus a weekend of rest, not like starting a new project from scratch.

Scenario 4: When the Agent Finds Something Wrong

During implementation of a dashboard widget, the agent notices that the existing analytics helper has a potential N+1 query. In the old workflow, it would have silently refactored the helper — adding 50 lines and 3 files to a PR that was supposed to be about a dashboard widget.

With the structured workflow, the agent can't do that. The task spec says “implement the dashboard widget.” The N+1 query is out of scope. So instead, the agent logs a new task to the backlog: fix: N+1 query in analytics helper (discovered during T-018). The dashboard widget PR stays focused. The N+1 fix gets its own task, its own branch, its own PR.
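If the N+1 pattern is unfamiliar: it means issuing one query for a list, then one additional query per item in that list. A self-contained illustration using in-memory data — the types and totals are invented for the example; each `filter` call stands in for a database round trip:

```typescript
type Project = { id: string };
type TimeEntry = { projectId: string; minutes: number };

const entries: TimeEntry[] = [
  { projectId: "p1", minutes: 30 },
  { projectId: "p1", minutes: 45 },
  { projectId: "p2", minutes: 60 },
];

// N+1 shape: one extra "query" (filter) per project.
function totalsNPlusOne(projects: Project[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const p of projects) {
    const rows = entries.filter((e) => e.projectId === p.id); // round trip per project
    totals.set(p.id, rows.reduce((sum, e) => sum + e.minutes, 0));
  }
  return totals;
}

// Batched shape: one pass over all entries (a single GROUP BY in SQL).
function totalsBatched(): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of entries) {
    totals.set(e.projectId, (totals.get(e.projectId) ?? 0) + e.minutes);
  }
  return totals;
}
```

Both produce the same totals; the difference is query count, which is exactly why it deserves its own task rather than a drive-by refactor.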

This is how senior engineers work. They don't fix every problem they find in the middle of an unrelated task. They note it, file it, and address it in the right context.


What My Shipping Week Looks Like Now

Here's a typical week after adopting this workflow:

Monday: Review backlog. Pick up remaining tasks from last week. Ship 2-3 PRs before lunch.

Tuesday: Design a new feature with /architect. Break it into tasks. Start implementing.

Wednesday: Continue implementation. Ship the remaining tasks from Tuesday's feature. Handle a quick bug fix with /quick.

Thursday: Design the next feature. Break it into tasks. Start implementing.

Friday: Ship what I can. Log remaining tasks for Monday. Deploy.

I'm shipping 2-3 features per week. Each feature goes through a proper design review (by me, reviewing the agent's design) and produces small, clean PRs. The git history is readable. The codebase is consistent. I sleep well knowing the architecture is something I approved, not something the AI decided at 2am.

The Solo Dev Advantage

Here's something people don't talk about enough: solo developers with structured AI workflows are faster than small teams. No standups. No merge conflicts from parallel work. No waiting for code reviews from teammates in a different timezone. No alignment meetings. No Jira ticket grooming.

You think, the agent designs, you approve, the agent builds, you review, you merge. The entire feedback loop is measured in minutes, not days. The overhead of a structured workflow is 5-10 minutes per feature — that's your design review and task approval. In exchange, you get clean architecture, focused PRs, and zero rework.

One person. One AI agent. A structured workflow. That's the new startup team.