How I Decide Where Complexity Belongs

Dhruval Dhameliya·July 23, 2025·7 min read

A framework for placing complexity in the right layer of a system, whether that is the client, the server, the database, or the infrastructure.

Every system has essential complexity that cannot be eliminated. The question is not whether complexity exists, but where it lives. Putting complexity in the wrong layer creates friction that compounds over time. Putting it in the right layer makes the system feel simpler even when it is not.

See also: Designing a Simple Metrics Collection Service.

The Complexity Placement Framework

When I encounter a piece of complexity, I evaluate it against four criteria:

  1. Who changes it most frequently? Complexity should live close to the team that modifies it.
  2. What is the blast radius of a bug? High-risk complexity should live in layers with the strongest safety nets (type checking, testing, rollback).
  3. What are the performance constraints? Latency-sensitive complexity should live as close to the data as possible.
  4. How many consumers depend on it? Shared complexity should live in a shared layer, not duplicated across consumers.

These criteria often conflict. A piece of business logic might change frequently (favoring the application layer), require low latency (favoring the database), and be shared across multiple services (favoring a library). The framework does not give a single answer. It gives you the dimensions to reason about.
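To make the four criteria concrete, here is a minimal sketch of the framework as a checklist. The type and function names are illustrative, not part of any real system; the point is that each criterion contributes a hint, and the hints can disagree.

```python
from dataclasses import dataclass

@dataclass
class PlacementSignals:
    """Answers to the four placement questions for one piece of complexity."""
    changed_by: str         # team that modifies it most often
    blast_radius: str       # "low" | "medium" | "high"
    latency_sensitive: bool
    consumer_count: int

def suggest_layers(s: PlacementSignals) -> list[str]:
    """Return the layers each criterion argues for (they may conflict)."""
    hints = [f"near team '{s.changed_by}' (change frequency)"]
    if s.blast_radius == "high":
        hints.append("layer with strongest safety nets (blast radius)")
    if s.latency_sensitive:
        hints.append("close to the data (performance)")
    if s.consumer_count > 1:
        hints.append("shared layer, not duplicated (consumers)")
    return hints
```

Running this on the pricing example from the paragraph above produces multiple conflicting hints, which is exactly the situation the framework is meant to expose rather than resolve.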

Client vs. Server Complexity

The traditional question: should this logic live in the client or the server?

Place it in the client when:

  • Immediate user feedback is needed
  • The logic is purely presentational
  • Offline operation is required
  • Reducing server load is a priority

Place it in the server when:

  • Correctness requires authoritative data
  • Multiple client types need the same logic
  • Security validation is involved
  • The logic needs to change without client updates

The most common mistake is putting validation only in the client. Client-side validation improves UX by providing immediate feedback. But it is never a substitute for server-side validation because clients can be bypassed, outdated, or compromised.

My default: validate twice. Client for UX, server for correctness. The duplication is intentional.
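A minimal sketch of the server half of "validate twice", assuming a hypothetical signup form. The client mirrors these same rules for instant feedback; the server never trusts that the client ran them.

```python
import re

# Server-side rules. The client duplicates these checks for UX, but this
# function runs on every request because clients can be bypassed or outdated.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_signup(form: dict) -> list[str]:
    """Authoritative validation: returns a list of errors, empty if valid."""
    errors = []
    if not EMAIL_RE.match(form.get("email", "")):
        errors.append("email: invalid format")
    if len(form.get("password", "")) < 12:
        errors.append("password: must be at least 12 characters")
    return errors
```

The duplication cost is lowest when the rules are declarative (regexes, length bounds) so the client and server copies are easy to keep in sync.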

Application Layer vs. Database Layer

Moving logic into the database (stored procedures, triggers, computed columns) trades application-layer flexibility for data-layer performance and consistency.

When I move logic to the database:

  • Data integrity constraints that must be enforced regardless of which application writes to the database. Foreign keys, check constraints, and unique constraints belong in the schema.
  • Aggregate calculations over large datasets where transferring raw data to the application is prohibitively expensive.
  • Transactional consistency requirements that span multiple tables and cannot tolerate application-level coordination failures.
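The schema-level constraints above can be seen end to end in a small sketch using SQLite (table and column names are illustrative). The key property: the rejection happens in the database, so it applies no matter which application writes.

```python
import sqlite3

# Integrity constraints live in the schema, so every writer is subject to
# them. SQLite is used here only because it is self-contained to run.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # FK enforcement is per-connection
conn.executescript("""
CREATE TABLE products (
    id    INTEGER PRIMARY KEY,
    sku   TEXT NOT NULL UNIQUE,               -- unique constraint
    price REAL NOT NULL CHECK (price >= 0)    -- check constraint
);
CREATE TABLE order_items (
    id         INTEGER PRIMARY KEY,
    product_id INTEGER NOT NULL REFERENCES products(id),  -- foreign key
    quantity   INTEGER NOT NULL CHECK (quantity > 0)
);
""")

# A negative price is rejected by the database itself, not by app code.
try:
    conn.execute("INSERT INTO products (sku, price) VALUES ('A-1', -5.0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```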

When I keep logic in the application:

  • Business rules that change frequently. Deploying a database migration is slower and riskier than deploying application code.
  • Logic that requires external service calls. Database-level logic should not reach out to HTTP endpoints.
  • Complex branching logic. Stored procedures with extensive conditional logic are harder to test, version, and review than application code.

The anti-pattern I see most often: putting business logic in database triggers. Triggers are invisible to application developers, difficult to test, and create action-at-a-distance bugs that are painful to diagnose.

Service Boundary Complexity

In a microservices architecture, deciding which service owns a piece of complexity is the most consequential placement decision.

The principle I follow: complexity belongs in the service that owns the data it operates on. If the order service needs to validate that a product is in stock, it calls the inventory service rather than querying the inventory database directly. The inventory service owns the complexity of stock validation.

This seems obvious, but it breaks down in practice when:

  • A business operation spans multiple services. Order creation involves inventory, pricing, payment, and fulfillment. Where does the orchestration logic live?
  • Performance requirements demand denormalization. The order service needs product details for display. Calling the product service on every request adds latency.
  • Teams disagree on ownership. The pricing logic depends on customer tier (owned by the customer service) and product category (owned by the product service). Who owns pricing?

For orchestration, I use a dedicated orchestrator service or a saga coordinator. The business logic of each step stays in the owning service. The sequence and error handling live in the orchestrator.
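A sketch of what "sequence and error handling live in the orchestrator" means in code. The service objects and method names are hypothetical stand-ins for calls to the owning services; each step's business logic lives behind those calls, not here.

```python
# Illustrative saga coordinator: on failure, completed steps are compensated
# in reverse order, then the failure is surfaced to the caller.

def create_order_saga(order, inventory, pricing, payment):
    completed = []  # (name, compensation) pairs for rollback
    steps = [
        ("reserve_stock", lambda: inventory.reserve(order), lambda: inventory.release(order)),
        ("price_order",   lambda: pricing.price(order),     lambda: None),
        ("charge",        lambda: payment.charge(order),    lambda: payment.refund(order)),
    ]
    for name, action, compensation in steps:
        try:
            action()
            completed.append((name, compensation))
        except Exception:
            # Undo what succeeded, in reverse order, then re-raise.
            for _, undo in reversed(completed):
                undo()
            raise
    return "confirmed"
```

Note what the orchestrator does not know: how stock is reserved, how prices are computed, how charges work. It only knows the order of steps and what to do when one fails.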

Related: Designing Event Schemas That Survive Product Changes.

For denormalization, I use event-driven data replication. The product service publishes changes. The order service maintains a local read replica of the product data it needs. The complexity of synchronization is explicit and isolated.
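A sketch of the consumer side of that replication, as it might look in the order service. The event shape and field names are assumptions; the design point is that the replica stores only the fields this service actually displays.

```python
# Illustrative event-driven read replica: the product service publishes
# change events; the order service applies them to a local copy.

class ProductReplica:
    """Order service's local copy of product data, updated from events."""

    def __init__(self):
        self._products = {}

    def handle_event(self, event: dict):
        if event["type"] == "product.updated":
            p = event["payload"]
            # Keep only what the order service needs for display.
            self._products[p["id"]] = {"name": p["name"], "price": p["price"]}
        elif event["type"] == "product.deleted":
            self._products.pop(event["payload"]["id"], None)

    def get(self, product_id):
        return self._products.get(product_id)
```

The synchronization complexity is all in `handle_event`: one place to reason about staleness, ordering, and deletes, rather than scattered remote calls.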

For ownership disputes, I follow the rule: the service that would break if the logic is wrong owns the logic. If wrong pricing breaks the order service (because orders get created with wrong amounts), pricing logic belongs in (or is delegated to) the pricing domain, which the order service calls.

Infrastructure Layer Complexity

Some complexity belongs in infrastructure rather than application code:

  • Rate limiting and throttling. Better handled by an API gateway than by each service independently.
  • TLS termination and certificate management. Infrastructure concern, not application concern.
  • Service discovery and load balancing. The application should not need to know the physical addresses of its dependencies.
  • Log aggregation and metric collection. Sidecars and agents handle this more reliably than application-level code.

The rule: if every service needs the same capability and the implementation does not depend on business logic, it belongs in infrastructure.

The exception: when infrastructure-level solutions do not provide sufficient granularity. A generic rate limiter might not understand that some API calls are more expensive than others. In that case, the coarse rate limiting stays in infrastructure and fine-grained rate limiting moves to the application.
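The fine-grained half of that split might look like the following sketch: a token bucket where expensive endpoints consume more tokens. The endpoint costs and class name are assumptions, and the gateway's coarse per-client cap still sits in front of this.

```python
import time

# Illustrative cost-aware application limiter: each call is charged a cost
# proportional to how expensive the endpoint is to serve.

class CostAwareLimiter:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self, cost: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Hypothetical per-endpoint costs a generic gateway cannot know about.
ENDPOINT_COST = {"/search": 5.0, "/health": 0.1}
```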

Complexity Migration

Complexity placement is not permanent. As systems evolve, complexity that was correctly placed in one layer may need to move to another.

Signals that complexity needs to migrate:

  • A database stored procedure has grown to 500 lines with extensive business logic
  • Client-side validation logic has diverged from server-side validation
  • An infrastructure-level solution is being customized per-service to the point where it is no longer generic
  • Application code is reimplementing database features (custom caching, custom indexing, custom transaction management)

Migration follows the same phased approach as any refactor: introduce the new location, run both in parallel, verify correctness, cut over, remove the old location.
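The "run both in parallel, verify correctness" phase can be sketched as a shadow-call wrapper. The function names are hypothetical; the invariant is that the old location remains the source of truth until the new one has proven itself.

```python
import logging

log = logging.getLogger("migration")

# Illustrative parallel-run step: serve the old result, shadow-call the new
# location, and log any divergence or failure instead of surfacing it.

def compute_total(order, old_impl, new_impl):
    old_result = old_impl(order)           # still the source of truth
    try:
        new_result = new_impl(order)       # shadow call to the new location
        if new_result != old_result:
            log.warning("divergence for %s: old=%r new=%r",
                        order.get("id"), old_result, new_result)
    except Exception:
        log.exception("new implementation failed; old result still served")
    return old_result
```

Cut-over is then a one-line change (return `new_result` instead), and the divergence log is the evidence that it is safe.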

Key Takeaways

  • Complexity placement is governed by change frequency, blast radius, performance constraints, and number of consumers.
  • Client validation is for UX. Server validation is for correctness. Always do both.
  • Database-level logic is appropriate for integrity constraints and aggregations. Application-level logic is appropriate for frequently changing business rules.
  • In microservices, complexity belongs in the service that owns the data it operates on.
  • Infrastructure handles cross-cutting concerns that do not depend on business logic.
  • Complexity placement is not permanent. Migrate when the original placement no longer fits the system's evolution.

Further Reading

  • When Caching Makes Things Worse: Real scenarios where adding a cache increased complexity, introduced bugs, or degraded performance, and the decision framework I use to e...
  • How I Think About Engineering Risk: A framework for identifying, categorizing, and managing engineering risk across system design, team dynamics, and operational decisions.
  • When to Rewrite vs Refactor: A decision framework for choosing between incremental refactoring and a full rewrite, based on system state, team context, and business c...

Final Thoughts

Misplaced complexity is one of the most common sources of architectural pain. It manifests as performance problems (logic is too far from the data), maintenance burden (logic changes require coordinated deploys), and bugs (logic is duplicated inconsistently). Getting the placement right does not eliminate complexity, but it ensures the complexity is where the team can manage it effectively.
