The question “should we use microservices?” has consumed more engineering hours in conference talks, blog posts, and Slack debates than almost any other architectural topic of the past decade. And most of that time has been wasted, because the question itself is wrong.
The real question is not monolith or microservices. It is: where should we draw service boundaries, and when? The answer depends on your team, your product, and the complexity you actually have today, not the complexity you imagine having in two years.
The Debate Is a Distraction
Most startups should start with a monolith. Most Fortune 500 companies should not run everything as one. Neither of these statements is controversial on its own, yet the industry keeps framing architecture as a binary choice, as if every team needs to pick a camp.
The reality is that architecture decisions are context-dependent. A five-person team building an MVP has fundamentally different constraints than a 200-person engineering organization running a platform with millions of daily active users. Applying the same architectural pattern to both is like prescribing the same medication regardless of the diagnosis.
What matters is understanding where you are today, where you are heading, and what problems you are actually trying to solve. If your biggest challenge is shipping features fast with a small team, microservices will slow you down. If your biggest challenge is that 15 teams cannot deploy independently because they share a single codebase and a single deployment pipeline, the monolith is the bottleneck.
The architecture should serve the organization. When it does not, it is time to re-evaluate, not because a conference speaker said so, but because the constraints have changed.
The Modular Monolith: The Best of Both Worlds
The modular monolith is the most underrated architectural pattern in the industry. It gives you the simplicity of a single deployable unit with the internal structure needed to evolve cleanly over time.
The idea is straightforward: organize your monolith into clearly bounded modules with explicit interfaces between them. Each module owns its domain logic and, ideally, its data. Modules communicate through well-defined APIs or events rather than reaching into each other’s internals. You deploy it as one unit, but internally it has the same separation of concerns that microservices aim to achieve.
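To make the idea concrete, here is a minimal single-file sketch of what explicit module boundaries look like in code. The `billing` and `orders` modules, and all names in them, are hypothetical; in a real codebase each module would be its own package with only its public interface exported.

```python
# Sketch of modular-monolith boundaries (illustrative; module and class
# names are hypothetical).
from dataclasses import dataclass

# --- billing module: BillingAPI is public; _LedgerStore is internal ---
class _LedgerStore:
    """Internal storage. Other modules must never touch this directly."""
    def __init__(self):
        self._entries = []

    def add(self, order_id, amount):
        self._entries.append((order_id, amount))

@dataclass
class Invoice:
    order_id: str
    amount: float

class BillingAPI:
    """The only entry point other modules may call."""
    def __init__(self):
        self._ledger = _LedgerStore()

    def create_invoice(self, order_id: str, amount: float) -> Invoice:
        self._ledger.add(order_id, amount)
        return Invoice(order_id, amount)

# --- orders module: depends on billing only through BillingAPI ---
class OrderService:
    def __init__(self, billing: BillingAPI):
        self._billing = billing

    def place_order(self, order_id: str, total: float) -> Invoice:
        # The cross-module call goes through the explicit interface,
        # never through _LedgerStore.
        return self._billing.create_invoice(order_id, total)

invoice = OrderService(BillingAPI()).place_order("o-1", 42.0)
print(invoice)
```

If `orders` later becomes its own service, `BillingAPI` becomes the shape of the network API, and `_LedgerStore` stays private to billing either way.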
Shopify is the most prominent example of this approach at scale. Rather than decomposing into hundreds of microservices, they invested heavily in componentizing their monolithic Rails application into modules with enforced boundaries. The result is a system that a large engineering organization can work in without the operational overhead of managing hundreds of independent services.
Why This Works
A modular monolith gives you several advantages that are easy to overlook:
- Refactoring is cheap. Moving code between modules is a code change, not a cross-service migration.
- Local development is simple. One repository, one process, one debugger.
- Transactions are straightforward. You do not need sagas or eventual consistency for operations that span modules.
- Splitting later is possible. If a module genuinely needs to become a service, the boundaries are already drawn. Extraction is a deliberate decision, not an emergency.
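The transactions point deserves a concrete illustration. In a monolith, an operation that spans two modules can run inside one database transaction, so both writes commit or roll back together. This sketch uses in-memory SQLite as a stand-in for the real database; the table names are hypothetical.

```python
# In-process atomicity across modules: no saga, no eventual consistency.
# Table names are hypothetical; sqlite3 stands in for the real database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT, total REAL)")
conn.execute("CREATE TABLE invoices (order_id TEXT, amount REAL)")

def place_order_with_invoice(order_id, total):
    # Both modules' writes happen in one atomic transaction.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        conn.execute("INSERT INTO invoices VALUES (?, ?)", (order_id, total))

place_order_with_invoice("o-1", 99.0)
print(conn.execute("SELECT COUNT(*) FROM invoices").fetchone()[0])
```

Once orders and billing live in separate services with separate databases, this one-liner becomes a saga or an outbox pattern, which is exactly the cost the monolith spares you.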
The key discipline is enforcing module boundaries with the same rigor you would enforce service boundaries. Without enforcement, a modular monolith degrades into a regular monolith with aspirational folder names. This is how architectural drift becomes technical debt — the boundaries blur gradually until every change is high-risk.
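Boundary enforcement does not require exotic tooling. One lightweight approach, sketched below, is a CI check that parses imports and flags any that reach past another module's public API. The module names and allow-list here are hypothetical; dedicated tools such as import-linter (Python) or ArchUnit (Java) do this more thoroughly.

```python
# Lightweight import-boundary check (illustrative; module names and
# rules are hypothetical).
import ast

# Each module may import only the public API of other modules.
ALLOWED = {
    "orders": {"billing.api", "shipping.api"},
}
KNOWN_MODULES = {"billing", "shipping"}

def boundary_violations(module: str, source: str) -> list:
    """Return imported names that break the module's declared boundaries."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        targets = []
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        for name in targets:
            top = name.split(".")[0]
            # Importing another known module outside its allow-listed
            # public API is a violation.
            if top != module and top in KNOWN_MODULES:
                if name not in ALLOWED.get(module, set()):
                    violations.append(name)
    return violations

bad = "from billing.internal.ledger import LedgerStore"
ok = "from billing.api import BillingAPI"
print(boundary_violations("orders", bad))  # flags the internal import
print(boundary_violations("orders", ok))   # clean
```

Run in CI, a check like this turns "aspirational folder names" into boundaries that fail the build when crossed.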
When Microservices Earn Their Complexity
Microservices are not free. They introduce network latency, partial failure modes, distributed tracing requirements, data consistency challenges, and significant operational overhead. These costs are real, and they need to be justified by equally real benefits. Here are the situations where they typically are.
Independent deployment is a genuine requirement. If your payment processing team needs to ship three times a day but cannot because they are coupled to a release train with twelve other teams, independent deployability has concrete value.
Polyglot requirements exist. Some problems are genuinely better solved in different languages or runtimes. A machine learning pipeline in Python, a high-throughput event processor in Go, and a CRUD API in whatever your team knows best can each use the right tool for the job.
Scale varies dramatically between components. If your search indexer needs 50 instances during peak hours but your admin panel needs two, scaling them independently saves real money.
Teams cannot coordinate releases. Amazon’s famous “two-pizza team” rule was not an arbitrary organizational preference. It was a response to the coordination overhead that made their monolith unworkable at their scale. When you have hundreds of teams, independent services become an organizational necessity, not just a technical preference.
If none of these apply to you, microservices are adding complexity without adding value. In the same spirit, choosing boring, proven technology for each component avoids spending your innovation budget on infrastructure that does not differentiate your product.
The Distributed Monolith Trap
The worst outcome is a distributed monolith: a system that has all the operational complexity of microservices with none of the benefits. This is more common than anyone likes to admit.
You have a distributed monolith if your “microservices” exhibit any of the following:
- They deploy together. If you cannot ship Service A without also shipping Service B because of shared schema changes or API contract updates, they are not independent services. They are a monolith with a network boundary in the middle.
- They share a database. Multiple services reading and writing to the same tables means you have shared mutable state with extra network hops. Changes to the schema require coordinating across teams, which is exactly the coupling microservices are supposed to eliminate.
- Failures cascade. If one service going down takes out three others because of synchronous call chains with no circuit breakers or fallbacks, you have a fragile distributed system that is harder to reason about than the monolith it replaced.
- They cannot be tested independently. If running the integration test suite requires spinning up the entire system, you have not achieved meaningful isolation.
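The cascading-failure symptom is the one most often fixable in place. The sketch below is a minimal circuit breaker, with hypothetical names throughout; production systems usually get this from a library (e.g. resilience4j) or a service mesh rather than hand-rolled code.

```python
# Minimal circuit breaker sketch (illustrative, not production-grade).
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, fallback):
        # Open circuit: short-circuit to the fallback until the timeout
        # passes, instead of hammering a failing dependency.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return fallback()
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            return fallback()
        self.failures = 0
        return result

breaker = CircuitBreaker(failure_threshold=2, reset_timeout=60)

def flaky():
    raise ConnectionError("downstream service unavailable")

def cached_default():
    return "stale-but-usable response"

for _ in range(3):
    # After two failures the circuit opens; the third call never
    # touches the flaky dependency.
    print(breaker.call(flaky, cached_default))
```

The point is not this particular implementation but the property it buys: a failing dependency degrades one feature instead of toppling the call chain.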
Amazon’s Prime Video team wrote a well-known blog post in 2023 describing how they moved a monitoring service from a microservices architecture back to a monolith, reducing costs by 90% and simplifying operations significantly. The original architecture introduced distributed systems overhead that was not justified by the problem being solved. This was not a failure of the team. It was a healthy re-evaluation of an architecture that no longer matched the constraints.
The lesson: architecture is not a one-way door. If the costs outweigh the benefits, consolidate.
A Decision Framework
Rather than debating architectural philosophies, run through these concrete inputs:
Team Size and Structure
- Fewer than 15-20 engineers: A monolith, modular if you are disciplined, is almost certainly the right choice. The coordination cost of microservices will dominate your engineering time.
- 20-50 engineers: A modular monolith is the sweet spot. Start extracting services only where deployment coupling is causing measurable pain.
- 50+ engineers across multiple teams: Selectively introduce services where teams have clear domain ownership and need independent deployment. Not everything needs to be a service.
Conway’s Law
Your architecture will mirror your communication structure whether you plan for it or not. If you have a single team, you will build a monolith regardless of what you call it. If you have five teams, you will naturally end up with something that has at least five components. Work with this tendency rather than against it.
Data Ownership
If two components must share the same data to function, they are candidates for the same service. If they operate on clearly separate data domains with well-defined integration points, they are candidates for separation. Data boundaries are the most reliable indicator of where service boundaries should be drawn — and those boundaries directly shape the API contracts that your teams and consumers depend on.
Deployment Coupling
Track how often one team’s deployment is blocked by another team’s changes. If the answer is “rarely,” you do not need service independence. If the answer is “constantly,” that is a strong signal to extract.
Migration Patterns That Work
When you do need to evolve your architecture, two patterns have a strong track record, and one popular approach fails with remarkable consistency.
The Strangler Fig Pattern
Named after the fig trees that gradually envelop their host, this approach involves building new functionality as services while leaving existing functionality in the monolith. Over time, the new services handle more traffic and the monolith shrinks. A routing layer directs requests to the appropriate destination. This is the lowest-risk migration strategy because you never need to rewrite existing working code. You simply stop adding to the old system and build new capabilities alongside it.
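The routing layer is the heart of the pattern. A sketch, with hypothetical path prefixes and handlers:

```python
# Strangler-fig routing facade: migrated paths go to the new service,
# everything else falls through to the monolith. Prefixes and handlers
# are hypothetical.

MIGRATED_PREFIXES = ["/search", "/recommendations"]

def monolith_handler(path):
    return f"monolith handled {path}"

def new_service_handler(path):
    return f"new service handled {path}"

def route(path):
    """Facade in front of both systems; MIGRATED_PREFIXES grows over time."""
    for prefix in MIGRATED_PREFIXES:
        if path.startswith(prefix):
            return new_service_handler(path)
    return monolith_handler(path)

print(route("/search?q=widgets"))  # goes to the new service
print(route("/checkout"))          # still handled by the monolith
```

In practice this facade is usually an API gateway or load-balancer rule rather than application code, but the migration mechanic is the same: add a prefix, watch the traffic, repeat.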
Branch by Abstraction
When you need to replace an existing component, introduce an abstraction layer that delegates to the old implementation. Build the new implementation behind the same abstraction. Switch traffic gradually. Remove the old implementation when it carries zero traffic. This pattern is particularly useful for replacing shared libraries or infrastructure components that multiple parts of the system depend on.
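The steps above can be sketched as code. Both implementations satisfy one abstraction, and a rollout percentage shifts traffic gradually and deterministically per user. All names and the bucketing scheme are illustrative.

```python
# Branch-by-abstraction sketch with a gradual rollout (names are
# hypothetical; real systems often drive the percentage from a
# feature-flag service).
import hashlib

class EmailSender:
    """The abstraction both implementations satisfy."""
    def send(self, user_id: str, body: str) -> str:
        raise NotImplementedError

class LegacySender(EmailSender):
    def send(self, user_id, body):
        return f"legacy sent to {user_id}"

class NewSender(EmailSender):
    def send(self, user_id, body):
        return f"new sent to {user_id}"

class RolloutSender(EmailSender):
    """Routes a stable percentage of users to the new implementation."""
    def __init__(self, old, new, percent):
        self.old, self.new, self.percent = old, new, percent

    def send(self, user_id, body):
        # Hash the user id so each user consistently gets one implementation.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        impl = self.new if bucket < self.percent else self.old
        return impl.send(user_id, body)

sender = RolloutSender(LegacySender(), NewSender(), percent=25)
print(sender.send("user-42", "hello"))
# Raise percent to 100, verify in production, then delete LegacySender.
```

Because callers only ever see `EmailSender`, the final step (deleting the old implementation and the rollout shim) is a pure cleanup with no caller changes.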
Why Big Bang Rewrites Fail
The “let’s rewrite the whole thing” approach fails with remarkable consistency. The new system needs to reach feature parity with a moving target, because the old system keeps evolving while the rewrite is underway. The rewrite takes two to three times longer than estimated. Business stakeholders lose patience. The team burns out. And the organization ends up maintaining two systems instead of one.
Incremental migration is slower on paper but faster in practice. You ship value continuously, you validate architectural decisions with real traffic, and you can course-correct without scrapping months of work.
Closing Thought
Architecture is not an identity. It is a set of tradeoffs that should be revisited as your team, product, and constraints evolve. Start simple. Draw clear boundaries. Split when the pain of coupling exceeds the pain of distribution. And do not let anyone tell you there is a single right answer, because the right answer changes as you grow. With AI reshaping which software categories remain viable, the architectural choices you make today also determine how well your product can survive the AI wave — adaptable architectures with clear boundaries are far easier to evolve than tightly coupled monoliths.
