How I Ship React Native Features 3x Faster With a Structured AI Workflow
Six months ago, I built features alone. I'd spec out a screen in Figma, spend a week writing the code, deal with platform-specific bugs, test on physical devices, and ship. It was slow and exhausting.
Three months ago, I started using AI agents. I gave them unstructured prompts. They generated code that kinda worked. I spent half as much time fixing their mistakes as I would have spent writing the code myself.
Two months ago, I built a structured workflow. Architect → Tech Lead → Dev Agent, with a proper memory system, CLAUDE.md, and mobile-specific skills. The change was transformative. Features that used to take a week now take 2-3 days. Quality is actually better. Platform-specific issues are caught earlier.
Here's the workflow, the tooling, and the patterns that made it work.
My Setup
Stack
- Expo (managed) for quick iteration, no native build complexity
- React Navigation (native stack + bottom tabs) for multi-screen nav
- Zustand for state (simpler than Redux, AI-friendly)
- TanStack Query for server state + caching
- TypeScript strict mode (catches AI mistakes early)
- Claude Code + Archie Mobile for AI-assisted development
Structure
src/
├── app/ # Root nav, entry point
├── screens/ # Screen components (1 per file)
├── components/ # Shared UI components
├── hooks/ # Custom hooks (useAuth, useApi, etc)
├── services/ # API clients, platform services
├── lib/ # Utilities, helpers
├── types/ # TypeScript (navigation, models)
├── stores/ # Zustand stores
└── theme/ # Colors, spacing (not Tailwind)
I maintain a detailed CLAUDE.md at the root with examples for each section. I also keep a memory/ directory with architecture decisions, API patterns, and mobile constraints.
The Workflow: /architect → /tech-lead → /dev-agent
The three-step workflow forces clarity. Each step has a purpose. Each step gates into the next.
Step 1: /architect
I write a feature spec and run /architect. Archie reads my CLAUDE.md, understands my stack, and designs the feature.
Design means:
- Wireframe of screens and navigation flow
- Data models (what server returns, what client stores)
- API endpoints needed (or changes to existing ones)
- State management plan (which store, which actions)
- Platform considerations (does this work on iOS and Android?)
- Permissions needed (if any)
- Native modules required (if any)
- Security considerations
Approval gate: I review the design. If it's wrong, I correct it and re-run. If it's right, I approve.
Get all 16 free CLAUDE.md templates + cheat sheets
Enterprise-grade conventions for every major stack, plus Claude Code and prompt engineering guides. No account needed.
Step 2: /tech-lead
I point /tech-lead at the approved design. Archie breaks it into concrete tasks.
Task breakdown respects these constraints:
- Each task touches ≤15 files
- Each task targets one concern (not "auth + notifications")
- Platform-specific work is separate tasks
- API integration is separate from UI
- One task per agent (so agents can work in parallel)
For a mid-size feature, this usually yields 3-5 tasks. Tasks are sequenced in the backlog so dependencies are resolved before dependents.
Approval gate: I review the task breakdown. Are the dependencies correct? Are the scopes reasonable? Approved or iterate.
Step 3: /dev-agent
I run /dev-agent (or multiple in parallel). Each agent claims a task from the backlog, creates a git worktree, and implements in isolation.
An agent's work cycle:
- Lock the task in the backlog (prevents other agents picking it up)
- Create a git worktree at .worktrees/T-XXX/ with a feature branch
- Read CLAUDE.md and relevant memory files
- Implement the task (writing tests, following patterns)
- Commit with conventional commits
- Create a Merge Request on GitLab
- Unlock the task and move it to done
The worktree isolation means agents never interfere. Each has its own branch, its own node_modules, its own build output.
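The isolation mechanics are plain git. A sketch of the per-task lifecycle (task ID and branch name are illustrative; the demo runs in a throwaway repo so it is safe to execute anywhere):

```shell
set -e
# Throwaway demo repo (stand-in for your project checkout)
demo=$(mktemp -d) && cd "$demo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# One isolated worktree per task, each on its own feature branch:
git worktree add -q .worktrees/T-001 -b feat/T-001-auth-store

# The agent runs npm install, implements, and commits inside .worktrees/T-001,
# while other agents do the same in their own worktrees.

# After the MR merges, tear it down:
git worktree remove .worktrees/T-001
git branch -q -d feat/T-001-auth-store
```

Because each worktree is a full checkout on its own branch, parallel agents can't clobber each other's files or index state.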
Approval gate: Code review. I review the MR, request changes if needed, or merge.
Worked Example: Biometric Auth With PIN Fallback
Real feature. Real workflow. Here's how it went down.
The Spec
Feature: Biometric Authentication
Users should be able to:
- Authenticate using Face ID (iOS) or BiometricPrompt (Android)
- Fall back to a 4-digit PIN if biometric fails or is unavailable
- Receive helpful error messages if auth fails
- Skip biometric setup and use PIN-only on first launch
Constraints:
- Must work offline (read cached user on first launch)
- Must securely store PIN (never in AsyncStorage)
- Must handle permission edge cases (user denies biometric)
- Must not block app launch if biometric fails
/architect Output
Archie designed it like this:
## Design: Biometric Auth With PIN Fallback
### Screens
1. LoginScreen
- Shows biometric prompt on load (if device supports)
- Falls back to PIN entry after 5 sec or if biometric fails
- Input: 4-digit PIN
- Output: JWT token → AuthStore
2. BiometricSetupScreen (optional, first launch only)
- "Enable fingerprint?" prompt
- Toggle to skip and use PIN-only
- Explains why we ask
### Data Models
User {
id: string
email: string
pinHash: string (hashed, stored server-side)
}
AuthState {
token: string | null
user: User | null
isLoading: boolean
error: string | null
biometricSupported: boolean
}
### API Endpoints
POST /auth/login
body: { pin: string }
returns: { token: string, user: User }
POST /auth/verify-biometric
body: { deviceId: string }
returns: { token: string, user: User }
### State Management
AuthStore (Zustand)
- login(pin): call POST /auth/login
- verifyBiometric(): call POST /auth/verify-biometric
- logout()
- setError()
- setPinInputValue()
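The state shape and actions above translate almost mechanically into code. A framework-free sketch of the store core (in the real app this body sits inside Zustand's create(); the injected api object stands in for the authApi service, and all names here are illustrative):

```typescript
interface User {
  id: string;
  email: string;
}

interface AuthState {
  token: string | null;
  user: User | null;
  isLoading: boolean;
  error: string | null;
  biometricSupported: boolean;
}

// Minimal store core. In the app, set() comes from Zustand and the token
// would also be persisted via expo-secure-store (never AsyncStorage).
function createAuthStore(api: {
  login(pin: string): Promise<{ token: string; user: User }>;
}) {
  let state: AuthState = {
    token: null,
    user: null,
    isLoading: false,
    error: null,
    biometricSupported: false,
  };
  const set = (patch: Partial<AuthState>) => {
    state = { ...state, ...patch };
  };
  return {
    getState: () => state,
    async login(pin: string) {
      set({ isLoading: true, error: null });
      try {
        const { token, user } = await api.login(pin);
        set({ token, user, isLoading: false });
      } catch (e) {
        // Surface a message the LoginScreen can render; never throw to the UI.
        set({ error: e instanceof Error ? e.message : "Login failed", isLoading: false });
      }
    },
    logout() {
      set({ token: null, user: null });
    },
    setError(error: string | null) {
      set({ error });
    },
  };
}
```

The verifyBiometric() action follows the same shape, just hitting the other endpoint.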
### Platform Considerations
iOS:
- Use LocalAuthentication.authenticateAsync()
- Face ID is primary on recent devices (no fingerprint on Face ID hardware; older devices use Touch ID)
- Show "Use passcode" button if Face ID fails
Android:
- Use BiometricPrompt (via expo-local-authentication)
- May include fingerprint, face, iris, whatever device has
- Handle legacy devices without biometric
### Native Modules Required
- expo-local-authentication (already installed)
- expo-secure-store (for PIN or session token)
### Permissions
- iOS: no special permissions needed
- Android: <uses-permission android:name="android.permission.USE_BIOMETRIC" />
(handled by expo-local-authentication)
### Security
- PIN never stored client-side (hash sent to server)
- Token stored in SecureStore (not AsyncStorage)
- Biometric fails gracefully (falls back to PIN, doesn't crash app)
- MFA not required (PIN alone is sufficient, biometric is convenience)
I approved this design. Nothing wrong with it.
/tech-lead Output
Archie broke it down like this:
## Task Breakdown
### Task T-001: Create AuthStore + API Integration
Scope:
- Define AuthState type
- Create AuthStore (Zustand) with login() and verifyBiometric() actions
- Create authApi service with /auth/login and /auth/verify-biometric
- Implement offline fallback (read cached user from SecureStore)
- Add error handling and setError() action
Files touched: ~8
Complexity: M
Depends on: None
Sequencing: First (others depend on this)
### Task T-002: Implement LoginScreen + PIN Entry
Scope:
- Create LoginScreen component with navigation type
- Render PIN input (4 digits, numeric keyboard, dots)
- Handle PIN submission via AuthStore.login()
- Show loading state during auth
- Display error messages from AuthStore
- Handle offline case (skip to cached user)
Files touched: ~5
Complexity: M
Depends on: T-001
Sequencing: After T-001
### Task T-003: Implement Biometric Prompt (iOS + Android)
Scope:
- Create useBiometric hook that wraps LocalAuthentication
- Handle platform differences (iOS Face ID vs Android BiometricPrompt)
- Check device capability and user permission
- Implement 5-second timeout (falls back to PIN)
- Call AuthStore.verifyBiometric() on success
- Handle errors gracefully (show PIN input, don't crash)
Files touched: ~4
Complexity: M
Depends on: T-001
Sequencing: After T-001 (parallel with T-002)
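The 5-second fallback in T-003 is essentially a race between the biometric prompt and a timer. A framework-free sketch (the authenticate parameter stands in for expo-local-authentication's LocalAuthentication.authenticateAsync; names and the outcome type are illustrative):

```typescript
type BiometricOutcome = "success" | "failed" | "timeout";

// Race the biometric prompt against a timeout. On anything but success,
// the caller shows the PIN input instead of crashing or hanging.
async function promptWithTimeout(
  authenticate: () => Promise<boolean>,
  timeoutMs = 5000,
): Promise<BiometricOutcome> {
  const timer = new Promise<BiometricOutcome>((resolve) =>
    setTimeout(() => resolve("timeout"), timeoutMs),
  );
  const attempt = authenticate()
    .then((ok): BiometricOutcome => (ok ? "success" : "failed"))
    .catch((): BiometricOutcome => "failed"); // errors fall back, never throw
  return Promise.race([attempt, timer]);
}
```

The hook then maps "failed" and "timeout" to the same UI state: show the PIN pad.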
### Task T-004: BiometricSetupScreen (First Launch)
Scope:
- Create BiometricSetupScreen with toggle
- Persist choice to SecureStore
- Navigate to LoginScreen after choice
- Handle skip (PIN-only mode)
Files touched: ~4
Complexity: S
Depends on: T-001, T-003
Sequencing: After T-003
### Task T-005: Integration Tests + App.json Permissions
Scope:
- Add jest tests for AuthStore (login, error handling)
- Add integration tests for LoginScreen
- Update app.json with biometric permissions
- Document security practices in comments
Files touched: ~6
Complexity: M
Depends on: T-002, T-003, T-004
Sequencing: Last (all others done)
I approved this breakdown. Good scoping, clear dependencies, tasks are parallelizable.
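The app.json change in T-005 is small. A sketch using expo-local-authentication's config plugin (the Face ID message string is illustrative; Android's USE_BIOMETRIC permission is added by the plugin automatically):

```json
{
  "expo": {
    "plugins": [
      [
        "expo-local-authentication",
        { "faceIDPermission": "Allow $(PRODUCT_NAME) to use Face ID to sign you in." }
      ]
    ]
  }
}
```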
CLAUDE.md sets the rules. Archie runs the workflow.
Persistent memory, role-based skills, and approval gates. From idea to merged PR.
/dev-agent Execution
I ran /dev-agent 3 times (I could have done 5 in parallel, but I wanted to validate T-001 first).
Agent 1 (Session A): Implemented T-001 (AuthStore + API). 2 hours. MR created, reviewed, merged.
Agent 2 (Session B): Implemented T-002 (LoginScreen). Ran in parallel with Agent 3. 1.5 hours. Merged after T-001.
Agent 3 (Session C): Implemented T-003 (Biometric hook). 2 hours. Ran in parallel with Agent 2. Merged after T-001.
Agent 4 (Session D): T-004 (SetupScreen) + T-005 (tests). After all others. 2 hours.
Total wall-clock time: ~5 hours (T-001 serial, T-002 + T-003 parallel, T-004 + T-005 serial). Agents never stepped on each other. Code quality was excellent. Platform-specific issues caught in T-003.
What the Memory System Remembers
Between tasks, between agents, between sessions — my memory files keep context alive.
CLAUDE.md: Stack choice, folder structure, patterns (how to write hooks, how to style, how to name files, API conventions).
memory/architecture.md: Navigation structure, state model, API versioning.
memory/navigation-types.md: RootStackParamList, all screens, all params. Every agent can see the full nav tree.
memory/api-patterns.md: How we fetch, retry, cache, handle errors. Agents copy this pattern for every new endpoint.
memory/known-issues.md: Bugs I've fixed, platform gotchas, workarounds. Agents avoid repeating my mistakes.
memory/backlog/tasks.md: Every task (done and pending) with dependencies, complexity, status.
This memory system is the difference between agents writing code that fits your project and agents writing generic code.
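To make that concrete, here is the kind of pattern memory/api-patterns.md pins down so every agent reuses it verbatim (a hypothetical retry wrapper; your real file would document your actual conventions):

```typescript
// Retry an async request with exponential backoff: 200ms, 400ms, 800ms, ...
// Documented once in memory, copied by every agent for every new endpoint.
async function fetchWithRetry<T>(
  request: () => Promise<T>,
  retries = 3,
  baseDelayMs = 200,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await request();
    } catch (e) {
      lastError = e;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```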
Security Review: Catching the Gotchas
After all tasks merged, I ran /security-review (Archie's security skill). It audited the code and caught:
- PIN hash was being sent over HTTP in dev (missing env var check)
- Token was cached in AsyncStorage, should be SecureStore only
- Biometric failure silently fell back instead of showing error
- No cert pinning for API requests
I assigned these as follow-up tasks. All fixed within a day. Result: production-ready auth feature.
Results
Speed: This feature would have taken me 1 week solo. With the structured workflow, it took 5 hours of agent time + 2 hours of my review/approval time. 3x faster.
Quality: Cleaner code than I would have written (more consistent, follows my patterns). Platform-specific issues were caught earlier.
Parallelization: Three agents worked simultaneously without conflicts. Git worktrees made isolation trivial.
Security: The security review caught real issues. Without it, I might have shipped insecure code.
Scalability: I can now handle 3x more features with the same team size. Or write higher-quality code with the same speed.
How to Start
1. Write a CLAUDE.md for your project. Document your stack, folder structure, component patterns, hook conventions, API layer, styling rules. Be detailed. Be specific. Examples for each section. This is the instruction set AI agents follow.
2. Create a memory/ directory. Document architecture, navigation types, API patterns, known issues. Keep it under 300 lines per file (split if bigger).
3. Run /architect on your next feature. Approve or iterate on the design.
4. Run /tech-lead on the approved design. Get a task breakdown.
5. Run /dev-agent on each task. Watch agents implement. Review MRs.
6. Update memory files as you go. Found a bug? Document it in known-issues. Discovered a pattern? Add it to patterns. These updates compound over time.
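For step 1, a CLAUDE.md section can be as simple as this (contents illustrative; the point is concrete rules plus one example per section):

```markdown
## Hooks

- One hook per file in src/hooks/, named use<Thing>.ts
- Hooks return { data, isLoading, error } — never throw to the component

Example:

    export function useProfile(userId: string) {
      return useQuery({
        queryKey: ["profile", userId],
        queryFn: () => api.getProfile(userId),
      });
    }
```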
Final Thought
Three months ago, I thought AI agents were a gimmick for solo web developers. I was wrong. Structured AI-assisted development with proper memory, architecture, and workflows is a game-changer for mobile teams.
The key isn't just the AI. It's the structure. /architect forces clarity. /tech-lead ensures scoping. /dev-agent brings it home. Memory files keep context alive across sessions. Security review catches edge cases.
Start small. Pick one feature. Run the workflow. You'll ship 3x faster and wonder how you ever coded without it.