How I Think About System Boundaries
A framework for deciding where to draw boundaries in software systems, covering service boundaries, module boundaries, and data boundaries, with the trade-offs of each approach.
Context
Every software system is a collection of boundaries. Service boundaries, module boundaries, API boundaries, data boundaries. Where you draw these boundaries determines how the system evolves, how teams collaborate, and how failures propagate.
See also: Refactoring a System Without Breaking Users.
Getting boundaries wrong is expensive. Too many boundaries and you have a distributed monolith: all the complexity of microservices with none of the autonomy. Too few and you have a monolith where every change risks breaking something unrelated.
This is how I think about where to draw the line.
The Purpose of a Boundary
A boundary exists to contain change. When something changes on one side of a boundary, the other side should not need to change. This is the fundamental test: does this boundary allow independent evolution of the things it separates?
If a boundary does not allow independent evolution, it is not a boundary. It is overhead.
Three Types of Boundaries
1. Data Boundaries
Data boundaries determine which components own which data. This is the most important boundary to get right because data outlives code. A bad data boundary creates coupling that persists even if you rewrite the code.
Principles for data boundaries:
- Each domain owns its data exclusively. No direct database access from other domains.
- Data is shared through APIs, not through shared databases.
- The schema is an internal implementation detail, not a public contract.
The litmus test: Can this service change its database schema without coordinating with any other team? If no, your data boundary is in the wrong place.
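The principle can be sketched in a few lines. This is a minimal illustration with a hypothetical OrdersService: consumers call a public API, while the storage schema stays a private detail the service can change at will.

```python
# A sketch of a data boundary. OrdersService and its field names are
# hypothetical; a dict stands in for a real database table.

class OrdersService:
    def __init__(self):
        # Internal schema: a detail this service is free to change.
        self._rows = {
            "o-1": {"id": "o-1", "total_cents": 4999, "state": "PAID"},
        }

    def get_order(self, order_id: str) -> dict:
        """Public contract: stable field names, independent of storage."""
        row = self._rows[order_id]
        return {
            "order_id": row["id"],
            "total_cents": row["total_cents"],
            "status": row["state"].lower(),
        }

# A consumer depends only on the API shape, never on the rows above.
orders = OrdersService()
order = orders.get_order("o-1")
assert order == {"order_id": "o-1", "total_cents": 4999, "status": "paid"}
```

Renaming the internal `state` column, or moving the rows to a different store entirely, would not touch any caller: that is what passing the litmus test looks like.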
2. Service Boundaries
Service boundaries determine which functionality runs in which process. They affect deployment independence, scaling independence, and failure isolation.
Draw a service boundary when:
- Two components have different scaling requirements (for example, one is CPU-bound and the other is I/O-bound)
- Two components have different availability requirements (one can tolerate downtime, the other cannot)
- Two components are owned by different teams that need to deploy independently
- A failure in one component should not cascade to the other
Do not draw a service boundary when:
- The components are tightly coupled and frequently change together
- The components share significant amounts of data that would need to cross the network
- The team is small enough to coordinate deployments easily
- The operational infrastructure for running multiple services does not exist
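The failure-isolation bullet is worth making concrete. A minimal sketch, with hypothetical function names: a call across a service boundary is wrapped so that a failure in one component degrades the response instead of cascading.

```python
# A sketch of failure isolation across a service boundary.
# fetch_recommendations and render_order_page are hypothetical names.

def fetch_recommendations(user_id: str) -> list[str]:
    # Simulates the other side of the boundary being down.
    raise TimeoutError("recommendations service is unavailable")

def render_order_page(user_id: str) -> dict:
    try:
        recs = fetch_recommendations(user_id)
    except Exception:
        recs = []  # degrade gracefully; do not cascade the failure
    return {"user": user_id, "recommendations": recs}

page = render_order_page("u-42")
assert page == {"user": "u-42", "recommendations": []}
```

If the two components lived in one process and one of them could take the whole process down, this kind of containment would not be available, which is exactly the case for drawing the boundary.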
3. Module Boundaries (Within a Service)
Module boundaries determine how code is organized within a single service. They are cheaper than service boundaries (no network, no serialization, no deployment coordination) but still important for maintainability.
Principles for module boundaries:
- Group code by domain concept, not by technical layer. A "payments" module contains the payment controller, service, repository, and model, rather than a "controllers" package that holds every controller in the system.
- Enforce boundaries with visibility rules. A module's internal types should not be accessible from other modules.
- Depend on abstractions at boundaries. Module A calls an interface defined by Module B, not Module B's implementation directly.
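The third principle can be sketched with a structural interface. Here, assuming hypothetical checkout and payment modules, the checkout code depends on a `PaymentGateway` protocol rather than on any concrete gateway implementation.

```python
# A sketch of "depend on abstractions at boundaries" using a typing.Protocol
# as the interface between two modules. All names are illustrative.
from typing import Protocol

class PaymentGateway(Protocol):
    """The boundary contract that the payments module publishes."""
    def charge(self, amount_cents: int) -> bool: ...

class FakeGateway:
    """One implementation; swappable without touching checkout code."""
    def charge(self, amount_cents: int) -> bool:
        return amount_cents > 0

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    # The checkout module knows only the interface, so the payments
    # module's internals can evolve independently.
    return "paid" if gateway.charge(amount_cents) else "declined"

assert checkout(FakeGateway(), 4999) == "paid"
```

In languages with real visibility controls (Java's package-private, Kotlin's `internal`, Go's unexported identifiers) the second principle can be enforced by the compiler; in Python it is largely convention.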
The Boundary Decision Framework
When deciding where to draw a boundary, I evaluate:
| Factor | Favors Fewer Boundaries | Favors More Boundaries |
|---|---|---|
| Team structure | Single team | Multiple teams |
| Deployment cadence | Same cadence for all components | Different cadences needed |
| Scaling profile | Uniform resource requirements | Heterogeneous resource requirements |
| Data coupling | High data sharing | Low data sharing |
| Failure isolation needs | Failures are acceptable system-wide | Failures must be contained |
| Operational maturity | Low (no service mesh, no tracing) | High (full platform capabilities) |
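The table is a judgment aid, not a formula, but a toy tally can make the shape of the evaluation explicit. The factor names and the majority-vote threshold below are illustrative only.

```python
# A toy encoding of the decision table: count which factors favor
# splitting. The inputs and threshold are illustrative, not a methodology.
FACTORS = {
    "multiple_teams": True,
    "different_deploy_cadence": False,
    "heterogeneous_scaling": True,
    "low_data_sharing": False,
    "needs_failure_isolation": True,
    "high_operational_maturity": True,
}

votes_for_more = sum(FACTORS.values())
decision = (
    "more boundaries" if votes_for_more > len(FACTORS) / 2
    else "fewer boundaries"
)
assert decision == "more boundaries"
```

In practice the factors are not equally weighted: a single hard requirement (say, two teams that must deploy independently) can outweigh everything else.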
Common Boundary Mistakes
The Entity Service Anti-Pattern
Drawing service boundaries around data entities: UserService, OrderService, ProductService. This creates services that have no business logic, just CRUD operations. Every business operation requires coordinating multiple entity services, which means every business operation is a distributed transaction.
Better: draw boundaries around business capabilities. An "Order Fulfillment" service owns the entire fulfillment workflow, including the order data, inventory reservation, and shipping initiation.
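The contrast is easiest to see in code. In this sketch, a hypothetical OrderFulfillment service owns its data, so fulfilling an order is one local operation instead of a transaction coordinated across UserService, OrderService, and ProductService.

```python
# A sketch of a capability boundary. OrderFulfillment is hypothetical;
# it owns order state, inventory reservation, and shipping initiation,
# so no cross-service transaction is needed.

class OrderFulfillment:
    def __init__(self):
        self._orders = {}                  # owned data, not a remote OrderService
        self._inventory = {"sku-1": 3}     # owned data, not a remote ProductService

    def fulfill(self, order_id: str, sku: str) -> str:
        if self._inventory.get(sku, 0) <= 0:
            return "backordered"
        self._inventory[sku] -= 1          # reserve inventory locally
        self._orders[order_id] = "shipped" # initiate shipping locally
        return "shipped"

svc = OrderFulfillment()
assert svc.fulfill("o-1", "sku-1") == "shipped"
```

With entity services, each of those three steps would be a network call to a different service, and a partial failure would leave the system inconsistent.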
The Shared Database
Two services share a database. They are "separate services" in name but are coupled at the data layer. A schema change in one service can break the other. An index added by one service can degrade performance for the other. A connection pool exhausted by one service starves the other.
This is not two services. It is one service with two deployment artifacts. Either split the database or merge the services.
The Chatty Boundary
Two services that make 20 round-trip calls to each other per request. The boundary is in the wrong place. The functionality that requires this communication should be on the same side of the boundary.
A useful heuristic: if two services always change together, always deploy together, and always fail together, they should be one service.
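The cost of a chatty boundary, and the usual fix, can be sketched with a call counter. The functions below are stand-ins for remote calls; the point is the trip count, not the pricing logic.

```python
# A sketch of fixing a chatty boundary: replace N round trips with one
# coarse-grained call. All names and the counter are illustrative.

calls = {"count": 0}

def get_price(sku: str) -> int:
    """Fine-grained: one round trip per item."""
    calls["count"] += 1
    return 100

def get_prices(skus: list[str]) -> dict:
    """Coarse-grained: one round trip for the whole batch."""
    calls["count"] += 1
    return {sku: 100 for sku in skus}

skus = [f"sku-{i}" for i in range(20)]

for sku in skus:          # chatty: 20 round trips per request
    get_price(sku)
chatty_trips = calls["count"]

calls["count"] = 0
get_prices(skus)          # coarse: one round trip for the same data
assert chatty_trips == 20 and calls["count"] == 1
```

Batching is the mild fix; the stronger fix, as above, is to move the functionality that needs this data onto the same side of the boundary so no call happens at all.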
How Boundaries Evolve
Boundaries are not permanent. They should evolve as the system and team grow.
A common and healthy evolution:
- Start as a monolith with clear module boundaries. Enforce separation through packages and visibility rules.
- Extract a service when a specific need arises. A module needs independent scaling, or a new team takes ownership.
- Split data when the service boundary is proven. Only after the service has been running independently for a while.
This progression is much safer than starting with microservices because each step is backed by evidence, not speculation.
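What makes the progression safe is that call sites depend on an interface from the start, so extraction is an adapter swap. A minimal sketch, with hypothetical inventory adapters:

```python
# A sketch of the monolith-to-service progression: callers depend on
# InventoryPort, so the module can be extracted by swapping the adapter.
# Both adapters are hypothetical stand-ins.
from typing import Protocol

class InventoryPort(Protocol):
    def reserve(self, sku: str) -> bool: ...

class InProcessInventory:
    """Step 1: a module inside the monolith."""
    def reserve(self, sku: str) -> bool:
        return True

class RemoteInventory:
    """Step 2: a client for the extracted service."""
    def reserve(self, sku: str) -> bool:
        # In a real system this would be an HTTP or gRPC call.
        return True

def place_order(inventory: InventoryPort, sku: str) -> str:
    return "ok" if inventory.reserve(sku) else "rejected"

# The caller is identical before and after extraction.
assert place_order(InProcessInventory(), "sku-1") == "ok"
assert place_order(RemoteInventory(), "sku-1") == "ok"
```

The module boundary did the hard design work up front; the service boundary only changes where the implementation runs.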
Key Takeaways
- Boundaries exist to contain change. If a boundary does not enable independent evolution, it is overhead.
- Data boundaries are the most important and the hardest to change. Get these right first.
- Draw service boundaries based on scaling needs, availability requirements, team ownership, and failure isolation, not based on entities.
- A shared database between services is not a boundary. It is coupling with extra steps.
- Start with module boundaries within a monolith and extract services when evidence justifies it.
- Chatty boundaries indicate the boundary is in the wrong place. Functionality that communicates heavily should be co-located.
Related: How I'd Design a Mobile Configuration System at Scale.
Final Thoughts
The best system boundaries are the ones you barely notice. They allow teams to work independently, components to scale independently, and failures to be contained, all without imposing significant communication overhead. Drawing these boundaries well requires understanding the domain, the team structure, and the operational capabilities. It is one of the highest-leverage design decisions you can make, and one of the hardest to change after the fact.