Your Backend Architecture Should Evolve, Not Be Designed Upfront
Most backend systems don't fail from picking the wrong pattern — they fail from adding complexity too early or too late. A practical guide to recognizing the signals and evolving your architecture.
Your backend is a mess. Business logic is scattered across controllers. Services call each other in circles. Every new feature feels like surgery. The temptation is strong: throw it all away, redesign it “properly” — hexagonal architecture, domain-driven design, clean separation of concerns. This time, you’ll get it right.
You won’t. Not because you’re not good enough, but because you don’t have enough information yet. Architecture decisions made upfront are guesses. Educated guesses, sure — but guesses about how the system will evolve, what the bottlenecks will be, where the boundaries should live. And those guesses are almost always wrong in ways that matter.
The systems I’ve seen survive and stay maintainable over years didn’t start with a grand architecture. They started simple and evolved — deliberately, guided by real pain signals rather than theoretical best practices. The complexity was added when it earned its place, not before.
This is not an argument for sloppy code. It’s an argument for calibrated complexity — matching your architecture to the actual forces acting on your system, not the ones you imagine. Most backend systems don’t fail because they picked the wrong pattern. They fail because they added complexity before the pain justified it, or ignored the pain until the codebase became hostile to change.
The hard part isn’t knowing what to evolve toward — the patterns are well-documented. The hard part is recognizing when.
Start Boring, Stay Boring (As Long As It Works)
The simplest backend architecture that works is a layered one: a controller handles the HTTP request, a service contains the business logic, a repository talks to the database. No ports, no adapters, no event bus. Just three layers with clear responsibilities.
This isn’t a shortcut — it’s a strategy. And it’s grounded in a fundamental problem: you don’t have enough information at the start of a project to make good architectural decisions.
Martin Fowler has argued for years that you should start with a monolith, because even experienced architects get boundaries wrong at the beginning. The same logic applies at a smaller scale. When you’re writing the first version of a feature, you don’t know which parts will change frequently, where the performance bottlenecks will appear, or which modules will need to evolve independently. Architectural patterns like hexagonal architecture, CQRS, or domain-driven design are answers — but you haven’t encountered the questions yet.
Laurentiu Iarosevici makes this case well in his Stratification series, where he traces how a backend naturally evolves through generations — from raw handlers, to controllers, to services, to CQRS, to full domain-driven design. Each generation emerges as a response to the limitations of the previous one. You don’t skip ahead, because each transition only makes sense when you’ve experienced the specific friction it solves.
This is the YAGNI principle — “You Aren’t Gonna Need It” — applied to architecture itself. We’re used to applying YAGNI to features: don’t build the admin panel until someone needs it. But we rarely apply it to structure. We add abstraction layers, repository interfaces, and event systems “because we’ll need them later.” Maybe. But every layer you add now is a tax you pay on every single change until that “later” arrives. More files to navigate, more indirection to trace, more concepts for new team members to learn. That tax compounds.
There’s a deeper principle at play here, one that Iarosevici calls the conservation of complexity: complexity doesn’t disappear when you add architecture — it moves. An abstraction layer doesn’t remove the complexity of database access; it relocates it behind an interface. That’s valuable when the complexity needs to be hidden from certain consumers. But when your service is the only consumer, and the business logic is “insert a row and return it,” the abstraction adds indirection without reducing cognitive load. You’ve made the code harder to follow without making it easier to change.
In a NestJS codebase, the “boring” starting point looks like this:
```typescript
import { Body, Controller, Injectable, Post } from "@nestjs/common";

@Controller("missions")
export class MissionsController {
  constructor(private readonly missionsService: MissionsService) {}

  @Post()
  async create(@Body() dto: CreateMissionDto) {
    return this.missionsService.create(dto);
  }
}

@Injectable()
export class MissionsService {
  constructor(private readonly db: DrizzleService) {}

  async create(dto: CreateMissionDto) {
    const [mission] = await this.db
      .insert(missions)
      .values({
        title: dto.title,
        description: dto.description,
        type: dto.type,
        dailyRate: dto.dailyRate,
        status: "draft",
      })
      .returning();

    return mission;
  }
}
```
No interface. No abstraction layer between the service and the database. The service is the business logic and the data access. Someone reviewing this code might itch to extract a repository, add an interface, separate concerns. Resist that itch. Right now, the “concern” is singular: persist a mission. Separating it into two classes doesn’t make it simpler — it makes it longer.
This is what John Ousterhout calls a deep module in A Philosophy of Software Design: a simple interface that hides enough complexity to justify its existence. When your service has three lines of logic, wrapping it behind a repository interface creates a shallow module — one that exposes nearly as much complexity as it hides. Save the depth for when there’s something worth hiding.
The question isn’t whether this will scale. It won’t — eventually. But “eventually” might be six months or two years from now, and when it arrives, you’ll know exactly where the separation needs to happen, because the pain will tell you. That’s what the next section is about.
Recognizing the Signals
Starting simple is the easy part. The hard part is knowing when simple stops being enough.
Most content about software architecture skips this entirely. You get “start with a monolith” on one side and “here’s how to implement hexagonal architecture” on the other, with nothing in between — no guidance on what the transition feels like from the inside. But that transition is where most teams get stuck. They either evolve too early (adding complexity they don’t need yet) or too late (struggling with a codebase that fights every change).
In my experience, there are four concrete signals that your current architecture is reaching its limits. Each one points to a specific kind of evolution.
Signal 1: Velocity Drops
Features that should take a day take a week. Not because they’re technically hard, but because the code is hard to navigate. You spend more time understanding the existing logic than writing new logic. A service has grown to 800 lines. A module handles five loosely related features. The mental model required to make a change safely exceeds what fits in your head.
This is the earliest signal, and the most commonly ignored — because it’s gradual. Nobody notices the frog boiling. Each feature is a little slower than the last, and the team attributes it to “increasing complexity” as if that’s an inevitability rather than a symptom.
When you feel this: it’s time for structural decomposition. Split large services into focused ones. Extract modules around cohesive features. You don’t need a new architecture — you need better boundaries within the one you have.
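As a sketch of what that decomposition looks like in plain TypeScript (NestJS decorators and all class and method names here are hypothetical, chosen only to illustrate the split): instead of one oversized service owning every mission-related concern, each service owns a single cohesive responsibility and depends only on the data it actually needs.

```typescript
// Hypothetical sketch: decomposing one oversized service into focused ones.
// Before: one MissionsService with create(), search(), apply(), notify(), ...
// After: each class owns one responsibility.

interface Mission {
  id: number;
  title: string;
  status: "draft" | "published" | "archived";
}

class MissionsService {
  private missions: Mission[] = [];
  private nextId = 1;

  // Only the core lifecycle lives here now.
  create(title: string): Mission {
    const mission: Mission = { id: this.nextId++, title, status: "draft" };
    this.missions.push(mission);
    return mission;
  }

  all(): Mission[] {
    return this.missions;
  }
}

class MissionSearchService {
  // Depends on a narrow data source, not on the whole MissionsService.
  constructor(private readonly source: () => Mission[]) {}

  byStatus(status: Mission["status"]): Mission[] {
    return this.source().filter((m) => m.status === status);
  }
}

const missionsService = new MissionsService();
missionsService.create("Build landing page");
const search = new MissionSearchService(() => missionsService.all());
console.log(search.byStatus("draft").length); // 1
```

The point is not the class count but the narrowed dependencies: the search service can now change, or be tested, without dragging the rest of the mission logic along.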
Signal 2: Coupling Ripples
You change a method in the MissionsService and tests break in the notifications module. A database migration on the applications table requires updating files across four features. A “small” refactor turns into a 30-file pull request.
This is the clearest sign that boundaries are either missing or drawn in the wrong place. Your modules share too much — types, database queries, internal methods — and changes propagate across boundaries that should be walls.
```
You change this:             But this breaks:

┌──────────────────┐         ┌──────────────────────────┐
│ MissionsService  │────────▶│ NotificationsService     │
│  - updateStatus()│────────▶│ ApplicationsController   │
│                  │────────▶│ AnalyticsService         │
└──────────────────┘         └──────────────────────────┘
```
When coupling gets this wide, the problem isn’t that your code is bad — it’s that implicit dependencies have accumulated over time. Services import each other’s internals, share database queries, or pass around types that belong to another module.
When you feel this: it’s time for explicit interfaces between modules. Define what each module exposes — its public contract — and make everything else internal. In NestJS terms, this means being deliberate about what you export from a module and refusing to import the internals of another. This is where the ideas from designing abstractions from the domain down start earning their cost — not as theoretical purity, but as a practical response to coupling that’s slowing you down.
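A minimal sketch of such a contract, in plain TypeScript (the NestJS module wiring is omitted, and names like `MissionsApi` are hypothetical): the module exposes one narrow interface, and its internals are never imported from outside.

```typescript
// The only thing other modules may depend on: a narrow, explicit contract.
interface MissionsApi {
  getTitle(missionId: number): string | undefined;
}

// Internal detail of the missions module — never exported.
class MissionsStore {
  private titles = new Map<number, string>([[1, "Build landing page"]]);

  find(id: number): string | undefined {
    return this.titles.get(id);
  }
}

// The implementation satisfies the contract; consumers never see the store.
class MissionsFacade implements MissionsApi {
  private store = new MissionsStore();

  getTitle(missionId: number): string | undefined {
    return this.store.find(missionId);
  }
}

// A consumer (e.g. the notifications module) depends only on MissionsApi,
// so internal changes to MissionsStore can no longer ripple into it.
function notifyAboutMission(api: MissionsApi, id: number): string {
  return `New activity on "${api.getTitle(id) ?? "unknown mission"}"`;
}

const missionsApi: MissionsApi = new MissionsFacade();
console.log(notifyAboutMission(missionsApi, 1));
```

In NestJS terms, `MissionsFacade` would be the only provider the module exports; `MissionsStore` stays in the module's `providers` array and out of reach.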
Signal 3: Performance Hits Structural Walls
Your listing page is slow. The query is fine — you’ve optimized it, added indexes, checked the execution plan. The problem is that the same service method assembles data for both the listing and the detail page, because they were the same feature once. Now the listing loads relations it doesn’t need, runs validations that only the write path requires, and triggers side effects that don’t belong in a read operation.
You can hack around this with flags and conditionals — if (includeRelations), if (!skipValidation). But you’re fighting the structure. The read path and the write path have different performance profiles, different data needs, and different change frequencies. They want to be separate.
When you feel this: you’re looking at CQRS — Command Query Responsibility Segregation. Not the full event-sourcing version that conference talks love to present, but the practical core: separate the code that reads from the code that writes. A dedicated query service for listing pages, optimized for reads. A command service for mutations, focused on business rules and consistency. Martin Fowler himself cautions against applying CQRS everywhere — it adds real complexity and should only be used on specific portions of a system where the read/write tension is genuine.
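Stripped to its core, the split can be sketched like this (a hypothetical example with an in-memory array standing in for the database; no events, no separate read store):

```typescript
// Practical CQRS sketch: separate read and write paths over the same data.

interface MissionRow {
  id: number;
  title: string;
  status: "draft" | "published";
}

const table: MissionRow[] = []; // stand-in for the database table

// Write path: business rules and consistency live here.
class MissionCommandService {
  create(title: string): number {
    const id = table.length + 1;
    table.push({ id, title, status: "draft" });
    return id;
  }

  publish(id: number): void {
    const row = table.find((m) => m.id === id);
    if (!row) throw new Error("mission not found");
    if (row.status !== "draft") throw new Error("only drafts can be published");
    row.status = "published";
  }
}

// Read path: a lean projection for the listing page — no validation,
// no side effects, only the fields the page needs.
class MissionQueryService {
  listing(): Array<{ id: number; title: string }> {
    return table
      .filter((m) => m.status === "published")
      .map(({ id, title }) => ({ id, title }));
  }
}

const commands = new MissionCommandService();
const queries = new MissionQueryService();
const missionId = commands.create("Build landing page");
commands.publish(missionId);
console.log(queries.listing()); // [{ id: 1, title: "Build landing page" }]
```

In a real codebase the query service would issue its own optimized SQL, free to join or denormalize however the listing page demands, while the command service keeps loading full entities for rule enforcement.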
Signal 4: Business Concepts Don’t Map to Code
Product says “mission application workflow.” Your code has updateStatusAndNotifyAndLog(). Product says “a freelancer applies to a mission.” Your code scatters that across three services, two controllers, and a cron job. The gap between how the business talks about the domain and how the code represents it grows until every product conversation requires a mental translation layer.
This isn’t just an aesthetic problem. When the domain language and the code language diverge, bugs become more likely — because the developer’s mental model doesn’t match the system’s behavior. New team members take longer to become productive. Product requirements become harder to translate into technical tasks.
When you feel this: you’re ready for domain modeling. Give your business concepts real types, real names, real behavior in the code. An Application isn’t a row in a database — it’s an entity with a lifecycle, rules about valid transitions, and behavior that belongs on the model, not scattered across services. This is where domain-driven design earns its place — not as an upfront architecture, but as a response to the growing distance between your domain and your code.
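A minimal sketch of what that looks like, assuming a hypothetical `Application` entity with a hand-rolled transition table (the statuses and rules here are illustrative, not from any real workflow):

```typescript
// Domain model sketch: an Application with an explicit lifecycle.
// Transition rules live on the entity, not scattered across services.

type ApplicationStatus = "submitted" | "accepted" | "rejected" | "withdrawn";

// Which statuses each status may move to — the lifecycle, made explicit.
const allowedTransitions: Record<ApplicationStatus, ApplicationStatus[]> = {
  submitted: ["accepted", "rejected", "withdrawn"],
  accepted: ["withdrawn"],
  rejected: [],
  withdrawn: [],
};

class Application {
  constructor(
    readonly freelancerId: number,
    readonly missionId: number,
    private _status: ApplicationStatus = "submitted",
  ) {}

  get status(): ApplicationStatus {
    return this._status;
  }

  // The entity enforces its own invariants: an invalid transition is
  // impossible no matter which service calls this.
  transitionTo(next: ApplicationStatus): void {
    if (!allowedTransitions[this._status].includes(next)) {
      throw new Error(`cannot go from ${this._status} to ${next}`);
    }
    this._status = next;
  }
}

const application = new Application(42, 7);
application.transitionTo("accepted"); // a valid move
// application.transitionTo("submitted") would throw — rejected by the model.
```

Now "a freelancer applies to a mission" is a constructor call, and the workflow Product talks about is readable in one place instead of reverse-engineered from three services and a cron job.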
The Meta-Signal
Notice the pattern: each evolution is a response, not a plan. You don’t adopt CQRS because a blog post said so. You adopt it because your read and write paths are fighting each other and you’ve felt the pain. You don’t introduce domain modeling because it’s “best practice.” You introduce it because the team keeps misunderstanding how the application works.
The right time to evolve is when the cost of the current structure exceeds the cost of changing it. Not before.
The Boyscout Rule as Architecture Strategy
So you’ve recognized a signal. Your listing endpoint is tangled with your write path. Your MissionsService is coupled to half the codebase. You know what needs to evolve. The question is how — without stopping the world.
The answer, in most cases, is not a migration project. It’s the boyscout rule: leave the code better than you found it, applied deliberately and consistently over time.
This idea is usually framed as a code quality habit — rename a variable here, extract a method there. But it works as an architecture migration strategy too. The key insight: you don’t need to refactor your entire backend to the new architecture. You refactor the parts you’re already touching, at the moment you’re touching them.
In practice, this means your codebase has two architectures coexisting. Some modules follow the original simple layered structure. Others have been evolved — clearer boundaries, explicit interfaces, maybe separated read/write paths. And that’s fine. Consistency is a virtue, but it’s not worth a dedicated migration project that competes with feature work and never finishes.
Two modes of boyscouting, depending on the scope of what you’re changing:
Minor touch, minor refactor. You’re fixing a bug in a legacy service. While you’re there, you rename a misleading method, extract a type that was inline, clarify an interface. This is a five-minute improvement, done in the same PR as the bug fix. No separate ticket, no planning, no approval needed. The code is marginally better, and the cost is nearly zero.
Bigger scope, refactor first. A new feature requires significant changes to a legacy module — the one with the 800-line service and the coupling ripples you’ve been feeling. Don’t build the feature on top of the mess. Refactor the area first: extract services, define boundaries, untangle the dependencies. Then build the feature on the clean foundation. Two separate PRs — the refactor, then the feature. This makes the refactor reviewable on its own terms and the feature easier to understand.
```
Feature request arrives
          │
          ▼
┌─────────────────────┐  Yes   ┌──────────────────────┐
│ Is the area clean   │───────▶│ Build the feature    │
│ enough to work in?  │        │ directly             │
└─────────────────────┘        └──────────────────────┘
          │ No
          ▼
┌─────────────────────┐        ┌──────────────────────┐
│ Refactor first      │───────▶│ Then build the       │
│ (separate PR)       │        │ feature on top       │
└─────────────────────┘        └──────────────────────┘
```
Over months, the codebase gradually converges toward the better architecture — without a “big rewrite” project that never ends, without a feature freeze that frustrates the business, and without the risk of introducing new bugs across the entire system at once.
This Only Works If the Culture Supports It
Here’s the part that no architecture article talks about, but that determines whether any of this actually happens: company culture must explicitly value this work.
If the team is measured purely on feature delivery — stories closed, PRs merged, velocity charts going up — nobody will spend time on a refactoring PR. It’s invisible work. It doesn’t demo well. The product manager won’t celebrate it in the sprint review.
Tech leads and engineering managers need to make the case that refactoring is feature delivery. It’s investing in the speed of future work. That refactoring PR that “adds no value” today is the reason the next three features ship on time instead of late. Fowler frames this as technical debt — and like financial debt, the interest compounds. Paying it down incrementally is cheaper than waiting for bankruptcy.
Practically, this means:
- Refactoring PRs are normal PRs. They get reviewed, they get merged, they count as work. Not as side projects, not as “cleanup when you have time.”
- Estimates include the cleanup. When a feature touches a legacy area, the estimate accounts for the refactoring. “This feature is a 3, but the area needs cleanup first, so it’s a 5.” If the team is never allowed to say this, the debt only grows.
- Tech leads protect this space. Not by asking permission for every refactor, but by building a culture where improving the code while delivering features is the default, not the exception.
The boyscout rule scales from a single developer to an entire organization — but only if the organization treats code quality as an investment, not a luxury.
Know What You’re Building
Everything above assumes your system deserves the evolution. Not all systems do.
A fintech platform processing thousands of transactions daily has different architectural needs than an internal dashboard used by ten people. Both are “backend systems.” Both can have messy code, slow features, and frustrated developers. But the forces acting on them — scale, reliability requirements, team size, rate of change — are fundamentally different. Applying the same architectural rigor to both is a waste of time on one and a necessity on the other.
Laurentiu Iarosevici frames this well in “What Are You Actually Building?”: before debating patterns, identify your software’s archetype. An internal business tool, a SaaS product, a platform API — each operates under different constraints and evolves under different pressures. He warns against what he calls “CV Driven Development” — adopting complex architectures unsuited to your actual context simply because they look impressive. The architecture discussion that makes sense for one archetype is actively harmful for another.
What Actually Differentiates Systems
Three forces determine how far your architecture needs to evolve:
How many people work on it. A system maintained by two developers doesn’t need explicit module boundaries the way a system touched by four teams does. Much of architecture is a coordination mechanism — it exists so that people can work on different parts of the system without stepping on each other. If your team fits in one room and everyone understands the whole codebase, the coordination overhead of formal boundaries may cost more than the coupling it prevents.
How fast and unpredictably it changes. An internal reporting tool that gets a new feature every quarter has different structural needs than a product where the business model is still being figured out and requirements shift every sprint. Systems that change fast need boundaries that isolate change — so that evolving one area doesn’t destabilize others. Systems that change slowly can tolerate more coupling, because the coupling rarely gets exercised.
What breaks when it breaks. If your internal HR dashboard goes down for an hour, someone sends an email and you fix it after lunch. If your payment processing pipeline goes down, you’re losing money every second and possibly violating regulatory requirements. The higher the cost of failure, the more your architecture needs to enforce correctness — through domain modeling, explicit validation boundaries, and separation of critical paths from non-critical ones.
Calibrating the Signals
This matters because the signals from the previous section can be misleading if you don’t account for context. Velocity dropping on an internal tool used by one team? Maybe the answer is simpler code — extract a few methods, rename some variables — not a new architecture. The boyscout rule at its lightest. Coupling ripples on a platform handling external integrations with financial consequences? That’s a real problem that needs real boundaries, and waiting too long will cost far more than the refactoring investment.
The question to ask before any architectural evolution: does the cost of the change justify the benefit, given what this system actually is?
A CRUD admin panel with five entities doesn’t need CQRS, even if the service file is getting long — split it into two services and move on. A multi-tenant SaaS with complex business workflows probably needs domain modeling earlier than you think, because the cost of getting the domain wrong compounds across every tenant and every feature.
A useful exercise: imagine the worst thing that happens if you don’t evolve the architecture. If the answer is “features take a bit longer to build,” you can afford to wait. If the answer is “we’ll introduce subtle bugs in financial calculations because the logic is spread across six services with no clear ownership,” evolve now.
When You Do Evolve, Build From the Domain
When the signals tell you it’s time for proper abstractions — interfaces, ports, adapters — the trap is building them around the implementation you already have. You’ve been using Stripe for two years, so you extract a PaymentService that mirrors Stripe’s API. You add an interface on top and call it “abstracted.” But it’s just Stripe with extra steps — and the moment you integrate a second provider, the abstraction falls apart.
I wrote about this in detail in Don’t Leak Implementation Details in Your Abstractions: design the interface from the domain down, not the provider up. Let your business logic define the contract — what the application needs — and push the implementation details into adapters. The abstraction should speak your domain’s language, not the provider’s.
This principle applies beyond external providers. When you extract a module boundary, define its public interface based on what its consumers need, not on what the module currently does internally. The interface is a contract owned by the consumer, not the implementation. That’s the Dependency Inversion Principle in practice — and it’s what keeps your boundaries stable as the implementation behind them evolves.
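A compressed sketch of the idea, using a hypothetical payments example (the names `PaymentGateway`, `ChargeResult`, and the fake provider client are all invented for illustration; the real Stripe SDK is not used here):

```typescript
// "Domain down" abstraction: the interface speaks the domain's language;
// the provider's vocabulary stays inside the adapter.

// The contract the business logic owns — no provider concepts leak out.
interface ChargeResult {
  ok: boolean;
  reference: string;
}

interface PaymentGateway {
  chargeForMission(missionId: number, amountCents: number): ChargeResult;
}

// A stand-in for a provider SDK with its own vocabulary ("payment intents").
class FakeStripeClient {
  createPaymentIntent(amount: number): { intentId: string; succeeded: boolean } {
    return { intentId: `pi_${amount}`, succeeded: true };
  }
}

// The adapter translates provider terms into domain terms.
class StripePaymentAdapter implements PaymentGateway {
  constructor(private readonly client: FakeStripeClient) {}

  chargeForMission(missionId: number, amountCents: number): ChargeResult {
    const intent = this.client.createPaymentIntent(amountCents);
    return { ok: intent.succeeded, reference: intent.intentId };
  }
}

// Business logic depends only on the domain contract; swapping providers
// means writing a new adapter, not rewriting the caller.
function invoiceMission(gateway: PaymentGateway, missionId: number): string {
  const result = gateway.chargeForMission(missionId, 50_000);
  return result.ok ? `paid:${result.reference}` : "failed";
}

const gateway: PaymentGateway = new StripePaymentAdapter(new FakeStripeClient());
console.log(invoiceMission(gateway, 7)); // "paid:pi_50000"
```

Note what `invoiceMission` doesn't know: payment intents, provider IDs, SDK error shapes. A second provider only requires a second adapter implementing `PaymentGateway`.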
The Evolution Path Depends on What You’re Building
```
Internal tool         SaaS product            Platform / API
      │                     │                        │
      ▼                     ▼                        ▼
Simple layers         Simple layers            Simple layers
      │                     │                        │
      ▼                     ▼                        ▼
Maybe stop here       Module boundaries        Module boundaries
                            │                        │
                            ▼                        ▼
                      Domain modeling          Domain modeling + CQRS
                                                     │
                                                     ▼
                                          Hexagonal / Ports & Adapters
```
This isn’t a universal prescription — your system might take a completely different path, and the stages might overlap or arrive in a different order. The point is that the destination depends on the forces acting on your software, not on what’s trendy or what the last conference talk recommended. Sandi Metz’s warning applies to architecture as much as it does to code: the wrong architecture is more expensive than no architecture. A system drowning in abstractions it doesn’t need is just as painful to maintain as a system with no structure at all — and arguably harder to fix, because removing architecture is politically harder than adding it.
TL;DR
- Architecture is a response, not a plan. The systems that stay maintainable aren’t the ones that started with the right pattern — they’re the ones that evolved when the pain justified it. Start with the simplest layered structure and resist the urge to add complexity before you’ve felt the cost of not having it.
- Learn to read the signals. Dropping velocity, coupling ripples across modules, performance problems you can’t solve without restructuring, business language that doesn’t match the code — these are the inflection points. Each one points to a specific evolution. Don’t guess which one you’ll need; wait until the codebase tells you.
- The boyscout rule is an architecture strategy. You don’t need a migration project. Refactor the parts you’re already touching, at the moment you’re touching them. Two architectures coexisting in the same codebase is fine — it’s better than a rewrite that never finishes.
- Culture determines whether any of this happens. If the organization treats refactoring as a luxury rather than an investment, the code will only get worse. Tech leads must protect the space for incremental improvement — not as side projects, but as the normal cost of feature delivery.
- Calibrate to what you’re actually building. An internal tool, a SaaS product, and a platform API need different levels of architectural sophistication. The forces that matter are team size, rate of change, and cost of failure — not best practices from conference talks about systems ten times your scale.