The Integration Tax
Here's the pattern. I've watched it play out across dozens of projects, and it always goes the same way.
Step one: build the core logic. This is the fun part. Clean data models, elegant algorithms, tests that pass. The module works in isolation. It does exactly what you designed it to do.
Step two: build another module. Also fun. Also clean. Also works.
Step three: connect them. Everything breaks.
The failures at step three aren't subtle. Module A outputs dates in ISO format. Module B expects Unix timestamps. Module A returns a list. Module B expects a single item. Module A throws an exception on invalid input. Module B catches all exceptions silently and proceeds with default values.
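A minimal sketch of the first mismatch, with hypothetical modules (the function names and the date are illustrative, not from any real codebase). Each side is internally correct and would pass its own unit tests; the break lives only at the seam:

```python
from datetime import datetime, timezone

# Hypothetical Module A: returns the timestamp as an ISO-8601 string.
def get_last_login(user_id: int) -> str:
    return datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc).isoformat()

# Hypothetical Module B: expects a Unix timestamp (seconds since epoch).
def days_since(unix_ts: float) -> int:
    then = datetime.fromtimestamp(unix_ts, tz=timezone.utc)
    return (datetime.now(timezone.utc) - then).days

# Tested alone, both work. Wired together, the seam raises:
# days_since(get_last_login(42))  # TypeError: a string where a float is expected
```

The failure here is at least loud. The nastier variant is when B accepts the wrong format and produces a plausible-looking wrong answer.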
Each of these is trivial to fix once you find it. The problem is finding it. The modules were tested independently. Their tests pass. The bug exists between them — in the contract that was never written down, in the format that was assumed but never verified, in the error behavior that was designed for one context and deployed in another.
I call this the integration tax, and it's the single largest cost in software development that almost nobody budgets for.
Why does this happen? Three reasons.
Building is more fun than connecting. Writing a new module from scratch is creative work. You're designing, making decisions, watching something come to life. Connecting two existing modules is plumbing. You're reading someone else's code (or your own code from three weeks ago, which might as well be someone else's), matching formats, handling edge cases you didn't anticipate. The creative work attracts attention. The plumbing gets deferred.
Tests don't cross boundaries by default. Unit tests verify that each module works in isolation. Integration tests verify that they work together. Most projects have significantly more of the former than the latter. The ratio is inverted from where it should be — the boundaries are where things break, but the boundaries are where the test coverage is thinnest.
Implicit contracts are invisible until they're violated. When Module A returns a response, there's an implicit contract about what that response looks like. The shape of the data. Which fields are always present. Which ones might be null. What the error format is. None of this is written down unless someone deliberately wrote it down. And by the time the contract is violated, both modules have been shipped, and the fix requires changing one of them, which risks breaking everything else that depends on the old behavior.
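One lightweight way to write the contract down is a typed object validated at the boundary. A sketch, assuming a hypothetical `UserResponse` with invented field names; the point is that the contract lives in code, not in someone's head:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical contract for Module A's response, written down instead of assumed.
@dataclass(frozen=True)
class UserResponse:
    id: str                   # always present, never empty
    email: Optional[str]      # may be None -- stated, not discovered in production
    created_at: float         # Unix timestamp, UTC

    def __post_init__(self):
        # Enforce the parts of the contract that type hints alone can't express.
        if not self.id:
            raise ValueError("contract violation: id must be non-empty")
        if self.created_at < 0:
            raise ValueError("contract violation: created_at is not a valid timestamp")

# Module B consumes UserResponse, not a bare dict. A violated contract now
# fails loudly at the boundary instead of three modules downstream.
```

The design choice worth noting: validation runs where the data crosses the seam, so the stack trace points at the boundary, which is where the bug actually is.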
I have a checklist now. Before I call anything "done," I trace the workflow from the user's entry point through every module it touches, all the way to the output. For each boundary crossing:
- Types match? What Module A outputs, Module B actually accepts.
- Formats match? Strings are in the same format. Dates are in the same timezone. IDs use the same scheme.
- Errors propagate? If A fails, does the user see a useful message, or does B swallow the error?
- Empty states handled? What happens when A returns nothing? Does B crash, silently fail, or handle it?
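The checklist can be mechanized as executable checks on a single seam. A sketch with hypothetical modules `fetch_items` (A) and `summarize` (B); the data and names are invented for illustration:

```python
# Hypothetical Module A: returns a list of records; unknown queries return [].
def fetch_items(query: str) -> list[dict]:
    data = {"widgets": [{"name": "bolt", "price": 2.5}]}
    return data.get(query, [])

# Hypothetical Module B: must handle A's empty state explicitly, not crash on it.
def summarize(items: list[dict]) -> str:
    if not items:
        return "no items found"
    total = sum(item["price"] for item in items)
    return f"{len(items)} items, total {total:.2f}"

# Three of the four checklist questions, as assertions on this one seam
# (error propagation needs its own check, with A forced to fail):
result = fetch_items("widgets")
assert isinstance(result, list)                                    # types match
assert all("price" in item for item in result)                     # formats match
assert summarize(fetch_items("nonexistent")) == "no items found"   # empty state
```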
This is boring work. It's the least interesting part of any project. It also catches more bugs than any other practice I've adopted.
The failure pattern has a corollary that's even more insidious: features break other features.
You ship Module A. It works. You ship Module B. It works. You ship Module C, which connects to both A and B. Everything works. Then you update Module A — a small change, clearly correct, tests pass. Module C breaks, because it depended on a behavior of A that the tests didn't cover and the update changed.
This isn't a testing failure. It's a visibility failure. Nobody could see the dependency because it was implicit. A's tests didn't know about C. C's tests didn't test A's specific behavior. The contract existed only in the code, and the code changed.
The intervention is simple to state and tedious to execute: every time you change a module that other modules depend on, run the full test suite. Not just the changed module's tests. All of them. When a test fails, that's a regression. Fix it before proceeding.
Simple to state. But in practice, this is where projects die. Because the full test suite takes time, and you're in the middle of building something, and the change was small, and surely nothing else depends on this one function...
I've started doing something that feels excessive but has caught real bugs: after building a feature that spans multiple files, I explicitly check every seam.
A seam is any point where two modules meet. For each one, I ask:
- Is there a test that exercises this seam specifically? Not unit tests for each side — a test that sends data across the boundary.
- If Module A's output format changes, will a test fail? If no, the seam is untested.
- Are there implicit contracts? "This field is always present" — is that enforced, or assumed?
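The second question translates directly into a seam test that pins the output shape. A sketch with a hypothetical `export_report` standing in for Module A; the function and its fields are invented for illustration:

```python
# Hypothetical Module A: the output shape that Module B was written against.
def export_report(rows: list[tuple]) -> dict:
    return {"version": 1, "rows": [{"id": r[0], "total": r[1]} for r in rows]}

def test_export_report_seam():
    out = export_report([(7, 19.99)])
    # Pin the contract field by field. If A's format ever changes,
    # this fails here -- not inside Module B, in production.
    assert set(out) == {"version", "rows"}
    assert out["rows"] == [{"id": 7, "total": 19.99}]

test_export_report_seam()
```

This is deliberately brittle: the test should break on any format change, because an unannounced format change is exactly the bug it exists to catch.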
Most of the time, the answer to at least one of these is "no." And that "no" is exactly where the next bug will be.
The uncomfortable truth about integration is that it's the hardest part of the project, and it's also the part that gets the least respect. Nobody gets promoted for seam testing. Nobody gives conference talks about format matching. Nobody puts "verified that Module A's error types are compatible with Module B's error handler" on their resume.
The work that prevents projects from failing is the work that nobody wants to do. Building is the fun part. Integration is the real part. And the ratio of effort that goes to each is almost always inverted.
I keep a list of failure patterns from past projects. The top one — the pattern that shows up in every single post-mortem — is always the same:
Core logic: works. Integration: breaks everything. Polish: never happens, because we're still fixing integration bugs.
If you want to ship something that works, budget half your time for the connections. Not a quarter. Half. You'll still run over. But at least you won't be surprised when the modules that work perfectly in isolation fail catastrophically together.
That's the integration tax. You pay it or the project pays it for you.