How we built brrain.io

From brainstorming to production in 14 hours. 319 files. Zero errors. This was only possible because we used bRRAIn to build bRRAIn. Our proof of concept is our own existence.

14 hrs Total build time
319 Files created
150+ Stories executed
8 Subdomains live
0 Compilation errors

Every interaction in this process — every brainstorm, every standard, every design file, every build decision — was stored in bRRAIn's persistent AI memory. When we opened Claude Code in VS Code to execute the build, it loaded this complete institutional context automatically. The AI didn't need onboarding. It didn't need documentation walkthroughs. It inherited everything.

01
April 10, 2026

The brainstorm

Every project starts with a conversation.

We sat down with AI (Claude) and captured the entire scope — 46KB of requirements, 59 design decisions. Instead of weeks of stakeholder meetings, we had a comprehensive scope document in one session.

bRRAIn memory in action

Every requirement, every design decision, every architectural choice from this session was stored in our AI Memory system — a structured file-based memory that persists across conversations. When we returned the next day, the AI didn't start from zero. It inherited the complete scope.

Scope Capture Session

$ claude "Let's scope the entire bRRAIn platform"
Capturing requirements...
  Authentication: Argon2id, sessions, 2FA, RBAC
  Architecture: 8-zone zero-trust design
  Marketing: 22 pages, 8 use cases, 8 blog posts
  Certification: 8 roles, 3 disciplines, exam system
  LMS: enrollment, progress, certificates
Scope complete: 46KB requirements | 59 design decisions | 1 session
Artifact: Scope Document Excerpt
scope-document.md

Architecture Decisions:
├── Decision #5: Go selected as implementation language
├── Decision #11: 8-zone zero-trust architecture
├── Decision #15: Hetzner CPX32 for production
├── Decision #22: Argon2id for password hashing
└── Decision #40: DeepSeek R1 Distill as handler model

Revenue Streams (6):
1. Self-Service SaaS License
2. Managed Install
3. Marketplace (30% commission)
4. Vendor Network
5. Certification Program (8 roles)
6. Smart Contract Transactions
02
April 11-12, 2026

Setting standards

Before writing a single line of code, we built the standards.

The Vector-First Design Standard — every color, font, spacing value defined before implementation. The SaaS Website Content Standard — 13-section content architecture. The Lead Generation Optimization Standard — research-backed conversion patterns.

These standards became the instructions. When we told the AI to build, it didn't guess — it followed the blueprint.

bRRAIn memory in action

Each standard document was saved to our persistent memory store and tagged with POPE metadata. When the AI later built templates, it didn't need to be reminded of the design tokens — it loaded the Vector-First standard from memory and applied every color, font, and spacing value automatically.

Artifact: Vector-First Design Standard (excerpt)
vector-first-design-standard.css

:root {
  /* Colors */
  --color-bg: #000000;
  --color-surface-1: rgba(255,255,255,0.02);
  --color-text-primary: rgba(255,255,255,1.0);
  --color-accent-cyan: #00ebff;

  /* Typography */
  --font-ui: 'Inter', sans-serif;
  --font-serif: 'IBM Plex Serif', serif;
  --font-mono: 'JetBrains Mono', monospace;

  /* Spacing */
  --space-xs: 4px;
  --space-sm: 8px;
  --space-md: 16px;
  --space-lg: 32px;
}

/* Buttons: primary = white bg, 4px radius */
/* NO shadows. NO blur. NO raster images. */

Vector-First Design Standard

Colors, fonts, spacing, components — every visual decision pre-made

SaaS Content Standard

13-section content architecture for every page type

Lead Gen Optimization Standard

Research-backed conversion patterns and CTA placement

3 standards documents, 0 ambiguity
03
April 14, morning

Research & design

27 customer journeys mapped before the first template was created.

We mapped every persona's path through the site: Enterprise Buyers, Certification Seekers, Partners, Existing Customers, Developers. The B2B SaaS Content Architecture Analysis studied Stripe, Twilio, HubSpot, Salesforce, Notion, Vercel, Cloudflare. 14 new design files created in one session.

The website was fully designed before a single line of Go was written.

bRRAIn memory in action

27 customer journey documents, 14 design files, and the complete content architecture were stored in our AI Memory project folder. This became the institutional knowledge that informed every page the AI built. Session summaries and key decisions were logged with timestamps — creating an audit trail of every design choice.

Artifact: Customer Journey CJ-01 (excerpt)
customer-journey-CJ-01.md

Journey: CTO, Mid-Size Accounting Firm
Entry: Google search "AI memory for accounting"

Step 1: Lands on brrain.io homepage
  → Reads hero, scrolls to features
  → Clicks "Product" in nav
Step 2: Product overview page
  → Reviews 8-zone architecture
  → Clicks "Security" link
Step 3: Security & compliance page
  → Verifies SOC 2, HIPAA, encryption
  → Clicks "Request Demo"
Step 4: Demo request form
  → Fills name, email, company, team size
  → Submits → Receives confirmation email

Conversion: Marketing → Demo Request | Pages touched: 4
27 Customer journeys
14 Design files
4,000+ Lines of content spec
04
April 14, afternoon

The plan

250+ stories. 16 epics. One autonomous execution plan.

The build plan was 6,000 lines — every story tagged [AI], [Human], or [AI+Human]. Each story had acceptance criteria, dependencies, estimated complexity, and file paths.

We didn't ask AI to "build a website." We gave it a 6,000-line blueprint with every decision pre-made.

bRRAIn memory in action

The 6,000-line build plan was stored alongside all prior context. When Claude Code opened in VS Code to execute the plan, it loaded the entire project memory — standards, architecture, customer journeys, content specs — giving the AI the same institutional context that a senior developer would need weeks to accumulate.

Artifact: Build Plan Story E2-S02 (excerpt)
build-plan.md

E2-S02: Implement Argon2id Password Hashing
Tag: [AI]  Complexity: M
Description: Create password hashing utility using Argon2id
(memory=64MB, iterations=3, parallelism=4). Implement password
validation against HaveIBeenPwned k-anonymity API.
Acceptance Criteria:
✓ Hash uses Argon2id with correct parameters
✓ Validation checks HaveIBeenPwned via k-anonymity
✓ Password strength: 12+ chars, upper, lower, digit
✓ All checks are constant-time
✓ Unit tests for all functions
Files Affected:
internal/auth/password.go
internal/auth/password_test.go
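The k-anonymity check named in the acceptance criteria works by SHA-1-hashing the password and sending only the first five hex characters to the HaveIBeenPwned range API; the full hash never leaves the server. A minimal Go sketch of that split and the local suffix match (function names and the query flow are illustrative, not the actual internal/auth code):

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"strings"
)

// hashRange splits the uppercase SHA-1 hex digest of a password into the
// 5-character prefix sent to the range API and the 35-character suffix
// that is matched locally against the API's response.
func hashRange(password string) (prefix, suffix string) {
	sum := sha1.Sum([]byte(password))
	digest := strings.ToUpper(fmt.Sprintf("%x", sum))
	return digest[:5], digest[5:]
}

// isPwned scans the newline-separated "SUFFIX:COUNT" body returned by
// GET https://api.pwnedpasswords.com/range/<prefix> for our suffix.
func isPwned(rangeBody, suffix string) bool {
	for _, line := range strings.Split(rangeBody, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), suffix+":") {
			return true
		}
	}
	return false
}

func main() {
	prefix, suffix := hashRange("password")
	// Only the prefix is ever transmitted; the suffix stays local.
	fmt.Println("prefix:", prefix)
	fmt.Println("pwned:", isPwned(suffix+":3861493\r\n", suffix))
}
```

Because only a 5-character prefix crosses the wire, the service learns nothing about which of the ~800 matching hashes was being checked.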
Artifact: Build Plan Story E3-S06 (excerpt)
build-plan.md

E3-S06: Vector-First Design System as CSS
Tag: [AI]  Complexity: L
Description: Create comprehensive CSS implementing the Vector-First
design standard. Colors, typography, spacing, grid texture, surfaces,
buttons, animations.
Acceptance Criteria:
✓ CSS custom properties for all design tokens
✓ Button styles: primary, secondary, accent
✓ Component styles: cards, forms, code blocks
✓ Animation utilities: fade-in, slide-in (max 0.3s)
✓ Dark mode default, WCAG AA contrast
✓ Responsive: mobile + tablet + desktop
Files: static/css/design-system.css
Dependencies: E3-S01, E3-S02
E0 Infra · E1 Auth · E2 Web · E3 CSS · E4 Home · E5 Pages · E6 Content · E7 Blog · E8 LMS · E9 Cert · E10 Cases · E11 Docs · E12 QA · E15 Deploy

16 Epics — 250+ Stories — 6,000 Lines
05
April 14, 22:00 UTC

The build

223 files in 3 hours. This is what happens when the plan is right.

The build ran as parallel agents — infrastructure, auth, web framework, and marketing all built simultaneously.

bRRAIn memory in action

This is where bRRAIn's persistent memory proved its value. Claude Code, running inside VS Code, had access to every decision, every standard, every customer journey from prior sessions. Parallel agents didn't need briefings — they loaded the shared memory and executed autonomously. The same AI memory architecture we sell to customers is what made this 3-hour build possible.

Artifact: Generated Code -- internal/auth/password.go (excerpt)
internal/auth/password.go

func HashPassword(password string) (string, error) {
    salt := make([]byte, 16)
    if _, err := rand.Read(salt); err != nil {
        return "", fmt.Errorf("generate salt: %w", err)
    }
    hash := argon2.IDKey(
        []byte(password), salt,
        3,       // iterations
        64*1024, // 64MB memory
        4,       // parallelism
        32,      // key length
    )
    return fmt.Sprintf(
        "$argon2id$v=%d$m=%d,t=%d,p=%d$%s$%s",
        argon2.Version, 64*1024, 3, 4, ...
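Verification reverses this: the parameters are parsed back out of the stored PHC-format string, the key is re-derived with argon2.IDKey, and the two digests are compared in constant time. A sketch of the parsing half using only the standard library (the type and function names here are illustrative; the production internal/auth code is not shown on this page):

```go
package main

import (
	"fmt"
	"strings"
)

// phcParams holds the Argon2id parameters recovered from a stored hash
// in the "$argon2id$v=19$m=65536,t=3,p=4$<salt>$<hash>" format.
type phcParams struct {
	version               int
	memory, time, threads uint32
	salt, hash            string // base64-encoded fields, left undecoded here
}

// parsePHC extracts the parameter fields from a PHC-format Argon2id string.
// A real verifier would then base64-decode salt/hash, call argon2.IDKey with
// the recovered parameters, and compare via subtle.ConstantTimeCompare.
func parsePHC(encoded string) (phcParams, error) {
	parts := strings.Split(encoded, "$")
	if len(parts) != 6 || parts[1] != "argon2id" {
		return phcParams{}, fmt.Errorf("unsupported hash format")
	}
	var p phcParams
	if _, err := fmt.Sscanf(parts[2], "v=%d", &p.version); err != nil {
		return phcParams{}, err
	}
	if _, err := fmt.Sscanf(parts[3], "m=%d,t=%d,p=%d",
		&p.memory, &p.time, &p.threads); err != nil {
		return phcParams{}, err
	}
	p.salt, p.hash = parts[4], parts[5]
	return p, nil
}

func main() {
	p, err := parsePHC("$argon2id$v=19$m=65536,t=3,p=4$c2FsdA$aGFzaA")
	if err != nil {
		panic(err)
	}
	fmt.Printf("m=%d t=%d p=%d\n", p.memory, p.time, p.threads)
}
```

Storing the parameters inside the hash string is what lets the cost factors be raised later without invalidating existing credentials.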
Artifact: Generated Template -- homepage hero (excerpt)
templates/marketing/home.html

<section class="hero" id="hero">
  <div class="hero__container">
    <h1 class="hero__title">
      Give Your Organization a bRRAIn <span class="hero__amp">&</span>
      Gain <span class="hero__cycle" id="hero-cycle">Super Powers</span>
    </h1>
    <p class="hero__subtitle">
      bRRAIn gives your team persistent AI memory — so every engagement
      builds on everything that came before.
    </p>
Build Output — Parallel Agents

[22:00] Agent 1: EPIC 1 Infrastructure - 14 scripts, Caddy, PostgreSQL, Redis
[22:00] Agent 2: EPIC 2 Auth System - 17 Go files, Argon2id, sessions, 2FA
[22:00] Agent 3: EPIC 3 Web Framework - 21 Go files, templates, CSS
[22:45] Agent 4: EPIC 4-7 Marketing - 22 templates, 8 components
[23:15] Agent 5: EPIC 8 LMS Backend - enrollment, exams, certs
[23:30] All agents: BUILD PASS - 223 files, 0 errors
[23:45] Production deploy: brrain.io brrain-web + brrain-auth RUNNING

The key insight: parallel execution with a detailed plan eliminates the bottleneck. When agents don't need to ask questions, they don't need to wait.

75 files/hour build rate
06
April 14-15, 2026

QA & iteration

11 bugs found. 11 bugs fixed. Then we made it beautiful.

The QA process found issues — CSS inheritance problems, missing routes, logo font inconsistencies. But because the architecture was sound, fixes were surgical — not rewrites.

Design iterations: 4 different hero illustrations tested, cycling text added, blue orb tuned.

bRRAIn memory in action

Bug reports, design feedback, and iteration decisions were logged as session learnings. When the AI fixed a CSS inheritance bug, it remembered the fix pattern and applied it consistently across all subdomains — no repeated mistakes, no regression.

Artifact: QA Bug Report BUG-0001 (excerpt)
qa-bug-report.md

BUG-0001  Severity: Critical
Pages affected: 8+ marketing pages
Description: Content below hero is invisible. Scroll-triggered CSS
animations set opacity to 0 and never fire the transition to opacity 1.
Root cause: html/template "incomplete template" error -- the {{define}}
wrapper pattern is incompatible with html/template context tracking.
Fix: Remove the {{define}} / {{template}} wrappers. Execute base.html
directly.
Result: 28,690 bytes rendered (was 15 bytes) -- FIXED
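The shape of the fix -- render the base template itself by name instead of routing through wrapper templates -- can be shown in a few lines. This is a minimal sketch with made-up markup and names, not the production templates; it also shows html/template's contextual auto-escaping, the machinery the wrapper pattern was fighting:

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

// renderBase parses a base template and executes it directly by name,
// mirroring the BUG-0001 fix of rendering base.html itself rather than
// a {{define}}-wrapped page template. Markup here is illustrative.
func renderBase(title string) (string, error) {
	t, err := template.New("base.html").Parse(
		`<html><body><h1>{{.}}</h1></body></html>`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	// ExecuteTemplate targets the named template explicitly.
	if err := t.ExecuteTemplate(&buf, "base.html", title); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderBase("Hello <world>")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // html/template escapes the angle brackets
}
```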

We iterated on design 5 times in 2 hours. Traditional agencies charge for weeks of revision cycles.

07
April 15, 2026

Content at scale

30,000+ words of use case content. 8 blog posts. 12 certification pages. In one session.

The persistent memory approach meant the AI didn't start from zero for each page. Use case pages inherited the content standard, the design system, and the product knowledge. Blog posts referenced real frameworks (Dan Shapiro, Jensen Huang) with proper citations.

Content that would take a marketing team months was produced in hours — because the AI had the same institutional memory we're selling.

bRRAIn memory in action

30,000+ words of use case content maintained perfect consistency because every page inherited the same institutional memory — product architecture, certification program, pricing model, security framework. The AI didn't write each page in isolation; it wrote them with the full context of everything that came before.

Artifact: Blog Post -- five-levels-ai-maturity.md (excerpt)
five-levels-ai-maturity.md

---
title: "The five levels of AI maturity"
slug: five-levels-ai-maturity-dark-factory
date: "2026-04-15"
author: "bRRAIn Team"
category: "AI Governance"
tags: [ai-maturity, dark-factory, dan-shapiro]
reading_time: 10
featured: true
---

Most organizations think they're using AI. They're at Level 1. Here's
what it takes to reach Level 5 -- and why it requires professionals who
don't exist yet.

## The dangerous plateau at Level 2

This is our own product in action. bRRAIn's persistent memory means every page we built inherited everything from every previous page.

08
Final inventory

The result

96 Go source files
66 HTML templates
16 CSS files
8 Blog posts (10,000+ words)
8 Use cases (30,000+ words)
12 Certification pages
8 Zone architecture pages
12 Documentation pages
27 Test users seeded
8 Domains with TLS
Production hardened: UFW, fail2ban, auto-updates
bRRAIn memory in action

319 files, all consistent, all following the same standards, all reflecting the same institutional knowledge — because they were built from the same persistent memory. This is the compounding effect we talk about on every product page. We didn't just build a website. We proved the product.

Artifact: Git History (final commits)
git log --oneline

5f30416 "See it in action" promo + IP removal
89d8fb5 AI memory narrative in every chapter
3eee41a How we built brrain.io -- story page
bcb72eb Blog: dropdown filter from post categories
1a70776 Fix mobile hamburger menu
d492483 Use case template: visual prose + lead gen
6ebf47c 8 zone landing pages + architecture grid
...
9d084f6 Epic 0: Development environment setup
b939eed Initial commit

Total: 50+ commits across 2 days

319 files. 14 hours. Zero errors. One founder.

09

What this means for you

If we can build our own platform this way, imagine what your organization can do with persistent AI memory.

The same technology that built this website in 14 hours is the technology we're offering to your organization. Persistent memory. Compounding knowledge. Zero context loss.

Experience what's possible

The same persistent AI memory that built this platform in 14 hours — available to your organization today.