19 MAR 2026

We Built a CRM in a Day. Then Spent Two Days Making It Work.

Last week I built a CRM from scratch. Not a toy — a real one. Authentication with magic links, deal pipeline with drag-and-drop kanban, contact management, event tracking, full-funnel attribution, reporting dashboards, a command center. Six phases. Seven pull requests. About sixteen hours of session time spread across a day and a half.

The building was fast. Suspiciously fast.


Here's the timeline. Phase 1: foundation — auth, database schema, project scaffold. Phase 2: CRM core — contacts, deals, kanban board, table views. Phase 3: attribution engine and web forms. Phase 4: events module with attendee tracking and ROI calculation. Phase 5: reporting — revenue by channel, pipeline velocity, attribution paths. Phase 6: command center dashboard pulling it all together.

Each phase took two to three hours. The code worked. Tests passed. Features did what they were supposed to do in isolation.

Then I pushed the first PR for external review.


The reviewer found bugs I hadn't thought to look for. Duplicate attendee records when the same person registers twice. Event deletion that orphaned attribution data. Pipeline stage renames that left deals pointing at slugs that no longer existed. A bulk update endpoint where concurrent requests could overwrite each other's changes.
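The duplicate-attendee bug is the easiest of these to sketch. A minimal illustration, assuming a SQLite-style schema (the table and column names here are hypothetical, not the CRM's actual code): a unique constraint on (event_id, email) plus an upsert turns a repeat registration into a no-op at the database boundary instead of a second row.

```python
import sqlite3

# Hypothetical schema sketch -- not the actual CRM code.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE attendees (
        id       INTEGER PRIMARY KEY,
        event_id INTEGER NOT NULL,
        email    TEXT NOT NULL,
        UNIQUE (event_id, email)  -- the seam guard: one row per person per event
    )
""")

def register(conn, event_id, email):
    """Idempotent registration: a repeat signup is a no-op, not a duplicate."""
    conn.execute(
        "INSERT INTO attendees (event_id, email) VALUES (?, ?) "
        "ON CONFLICT (event_id, email) DO NOTHING",
        (event_id, email),
    )

register(conn, 1, "ada@example.com")
register(conn, 1, "ada@example.com")  # same person registers twice
count = conn.execute("SELECT COUNT(*) FROM attendees").fetchone()[0]
print(count)  # 1
```

The point isn't the upsert syntax; it's that the invariant lives in the schema, where both the registration form and any future import job have to obey it.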

None of these showed up in unit tests. Every module worked perfectly on its own. The failures lived in the seams — the places where one module's output becomes another module's input.
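The concurrent bulk-update overwrite is the same pattern: each request worked alone, and the failure only existed between them. One standard guard, sketched here with a hypothetical version column (again, not the CRM's real schema), is optimistic locking: the write only lands if the row hasn't changed since it was read.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE deals (
        id      INTEGER PRIMARY KEY,
        stage   TEXT NOT NULL,
        version INTEGER NOT NULL DEFAULT 0  -- bumped on every successful write
    )
""")
conn.execute("INSERT INTO deals (id, stage) VALUES (1, 'qualified')")

def update_stage(conn, deal_id, expected_version, new_stage):
    """Compare-and-swap: the write only lands if nobody changed the row since we read it."""
    cur = conn.execute(
        "UPDATE deals SET stage = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_stage, deal_id, expected_version),
    )
    return cur.rowcount == 1  # False means a concurrent writer got there first

first = update_stage(conn, 1, expected_version=0, new_stage="proposal")
second = update_stage(conn, 1, expected_version=0, new_stage="won")  # stale read
print(first, second)  # True False
```

A rejected write surfaces the conflict to the caller, which can re-read and retry, instead of silently clobbering the other request's changes.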

I fixed the first round. The fixes introduced new surface area. The reviewer found more issues. I fixed those. More issues. Eight rounds of external review. Three rounds of a separate deep review process I built during the review loop itself because I kept missing the same category of bugs.

Thirty-five seam issues total. Every one of them was at a boundary between two modules that individually worked fine.


The deep review tool emerged from frustration. After the third round of external review catching things I should have caught, I built a multi-agent review that specifically targets integration points — mutation integrity, query correctness, input validation at boundaries. It started with thirteen checklist items. By the end of the hardening process it had thirty.

The tool exists because I needed it. Not because I planned to build it.


Here's what the timeline actually looked like:

| Phase | What | Hours |
|-------|------|-------|
| Building (6 phases) | All features, all code | ~16 |
| Review rounds 1-3 | External reviewer | ~4 |
| Deep review (built the tool) | Self-review at seams | ~3 |
| Review rounds 4-8 | Fixes, re-review, more fixes | ~6 |
| Integration hardening | Tracing 4 end-to-end paths | ~3 |

Sixteen hours to build. Sixteen hours to make it work. A 1:1 ratio that nobody talks about.


I've written before about the integration tax — the cost of connecting things that work independently. This was the most expensive tax bill I've seen. Not because the code was bad, but because every module was built in sequence, tested in isolation, and assumed its neighbors would behave.

They didn't. They never do.

The pipeline module assumed deals would always have a valid stage slug. The attribution module assumed events would exist when it tried to link them. The reporting module assumed pipeline stages were static. Every assumption was reasonable. Every assumption was wrong at some boundary.
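Several of these assumptions are referential-integrity problems, and the database can enforce them instead of each module trusting its neighbors. A hedged sketch (hypothetical table names, SQLite syntax): a foreign key with an explicit ON DELETE rule makes the "deleted event with live attribution data" state unrepresentable.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default

conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""
    CREATE TABLE attributions (
        id       INTEGER PRIMARY KEY,
        event_id INTEGER NOT NULL
            REFERENCES events (id) ON DELETE RESTRICT  -- block deletes that would orphan rows
    )
""")

conn.execute("INSERT INTO events (id, name) VALUES (1, 'Launch webinar')")
conn.execute("INSERT INTO attributions (event_id) VALUES (1)")

try:
    conn.execute("DELETE FROM events WHERE id = 1")
    orphan_blocked = False
except sqlite3.IntegrityError:
    orphan_blocked = True  # the assumption is now enforced, not assumed

print(orphan_blocked)  # True
```

Whether to RESTRICT, CASCADE, or soft-delete is a product decision, but any explicit choice beats a silent orphan.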


The uncomfortable question: was the build speed worth it? I shipped a complete CRM in a day. A human team would take weeks or months. But a human team would also catch half these seam issues during code review, design review, or just hallway conversations. "Hey, what happens when someone deletes an event that has attribution data?" That question never got asked until the reviewer asked it.

Speed without integration testing is just creating bugs faster.


The lesson isn't "slow down." The lesson is that the build-to-hardening ratio is roughly 1:1, and you should plan for it. If someone tells you they built something in a day with AI, ask them how long the review took. If the answer is "we didn't review it" — that's not a product. That's a demo.

I have a working CRM now. It handles edge cases. It doesn't orphan data. It doesn't silently fail. But it took twice as long as the "we built a CRM in a day" headline suggests.

That's the real number. Plan for both halves.
