Choosing Boring Technology on Purpose

Dhruval Dhameliya·June 17, 2025·7 min read

Why mature, well-understood technology is usually the right choice, how to evaluate when new technology is justified, and the hidden costs of novelty.

Dan McKinley's "Choose Boring Technology" essay formalized something experienced engineers already knew: new technology carries hidden costs that are not visible in feature comparison matrices. This post is a practical guide to applying that principle, including the cases where boring technology is genuinely the wrong choice.

The Innovation Token Model

Every team has a limited budget of innovation tokens. Each new, unproven technology costs a token. When you spend a token, you are committing to:

  • Learning the technology's failure modes through production experience (not documentation)
  • Building operational expertise that does not yet exist in your team or the broader community
  • Discovering edge cases that the technology's authors have not yet encountered
  • Accepting that best practices do not exist yet because the community is still figuring them out

Most teams can afford one or two innovation tokens per project. Spending three or more means the team spends more time fighting technology than building product.
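The token model above can be sketched as a tiny ledger. This is an illustrative toy, not anything from McKinley's essay; the class and the default budget of two are my own assumptions:

```python
# Hypothetical sketch of the innovation-token model described above.
# The class name and the default budget of 2 are illustrative assumptions.

class InnovationBudget:
    def __init__(self, tokens: int = 2):
        self.tokens = tokens
        self.spent_on: list[str] = []

    def can_afford(self, technology: str) -> bool:
        # A token is available regardless of which technology asks for it.
        return self.tokens > 0

    def spend(self, technology: str) -> None:
        if self.tokens == 0:
            raise RuntimeError(
                f"No tokens left: adopting {technology} means fighting "
                "technology instead of building product"
            )
        self.tokens -= 1
        self.spent_on.append(technology)

budget = InnovationBudget(tokens=2)
budget.spend("experimental stream processor")
budget.spend("new vector database")
print(budget.can_afford("unproven container runtime"))  # False
```

The point of the sketch is the hard stop: a third unproven technology is not a trade-off to negotiate, it is an error.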

What Makes Technology Boring

Boring technology has these properties:

  • Known failure modes. The community knows how it breaks, and there are established recovery procedures.
  • Operational expertise is available. You can hire people who have run it in production. Stack Overflow has answers for common problems.
  • Performance characteristics are well-documented. Benchmarks exist for realistic workloads, not just synthetic tests.
  • Upgrade paths are proven. Major version upgrades have been done by enough teams that the process and pitfalls are known.
  • Tooling ecosystem is mature. Monitoring, backup, migration, and debugging tools exist and are maintained.

PostgreSQL is boring. Redis is boring. Nginx is boring. Kafka is approaching boring. These are compliments.

The Hidden Costs of New Technology

| Cost category | Boring technology | New technology |
| --- | --- | --- |
| Debugging | Stack traces are googleable | You read source code and file issues |
| Hiring | Candidates have production experience | Candidates have tutorial experience |
| Operational runbooks | Community-maintained | You write them from scratch |
| Performance tuning | Known knobs, documented trade-offs | Experimentation required |
| Security patches | Rapid community response | Smaller team, slower patches |
| Migration tooling | Mature ecosystem | Build your own or wait |
| Integration | Libraries exist for every language | SDK quality varies, some languages unsupported |

The debugging cost alone is significant. When a boring database has a performance anomaly, you can search for the exact error message and find a blog post from someone who encountered it three years ago. When a new database has the same anomaly, you open a GitHub issue and wait.

When New Technology Is Justified

Boring technology is not always the right choice. New technology is justified when:

The problem genuinely does not have a boring solution. Real-time collaborative editing, certain machine learning inference patterns, and some streaming data processing workloads have requirements that boring technology cannot meet. But verify this claim rigorously. "PostgreSQL cannot do this" is often wrong.

The boring solution has a hard scaling ceiling that you will hit within the project's lifetime. If PostgreSQL handles your current workload but you have concrete evidence (not projections) that you will exceed its capacity within 12 months, evaluating alternatives is reasonable.

The new technology is the boring choice for its domain. A technology can be new to your team but well-established in its community. Adopting Kubernetes in 2025 does not cost an innovation token. Adopting an experimental container runtime does.

The team has deep expertise in the new technology. If your team includes engineers who have operated the technology in production at scale, the "new technology" cost model does not fully apply. Their experience substitutes for community knowledge.

The Evaluation Framework

When I consider new technology, I apply this checklist:

  1. Can a boring technology solve this problem? If yes, use it. If "yes, but with some compromise," quantify the compromise before proceeding.

  2. Has this technology been in production at a company of similar scale for more than two years? If not, you are paying early-adopter costs.

  3. Can I hire three engineers with production experience in this technology within a reasonable timeframe? If not, the team becomes dependent on the engineers who introduced it.

  4. Is there a migration path back to boring technology if this does not work out? If the adoption is irreversible, the bar for justification is much higher.

  5. Am I solving an actual problem or optimizing for a hypothetical one? "We might need this at 10x scale" is not a reason to adopt new technology today. Build for current scale with a known path to the next order of magnitude.
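The five questions above can be encoded as a simple gate. The field names and thresholds below are my own shorthand for the checklist, not an established framework:

```python
# Hypothetical encoding of the five-question checklist above.
# Field names and thresholds are illustrative shorthand, not a standard.

from dataclasses import dataclass

@dataclass
class TechProposal:
    boring_alternative_works: bool       # Q1: can boring tech solve it?
    years_in_production_at_scale: float  # Q2: proven at similar scale?
    hireable_experts: int                # Q3: engineers with prod experience
    reversible: bool                     # Q4: migration path back?
    solves_current_problem: bool         # Q5: actual, not hypothetical, problem

def justified(p: TechProposal) -> bool:
    """Return True only if every checklist question passes."""
    return (
        not p.boring_alternative_works
        and p.years_in_production_at_scale >= 2
        and p.hireable_experts >= 3
        and p.reversible
        and p.solves_current_problem
    )

print(justified(TechProposal(False, 3, 5, True, True)))  # True
print(justified(TechProposal(True, 3, 5, True, True)))   # False: boring works
```

The structure matters more than the numbers: the questions combine with AND, so a single failed answer is enough to stop the adoption.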

Managing the Innovation Budget

When a team does spend an innovation token, I apply containment strategies:

Isolate the new technology. Do not let it become a dependency for every service. If the new database is for one specific workload, keep it there until it has been proven in production.

Invest in operational expertise early. Do not wait for the first production incident to write runbooks. Run failure injection exercises. Practice recovery procedures. Understand backup and restore before you need them.

Set a review date. Evaluate the technology after 6 months in production. Has it delivered the expected benefits? Are the operational costs within budget? If not, plan the migration back to boring technology before the switching cost increases.

Document the decision. Write an Architecture Decision Record that captures why the team chose this technology, what alternatives were considered, and what the expected trade-offs are. This prevents the next team from re-litigating the same decision and provides context for the 6-month review.
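A minimal ADR skeleton for this purpose might look like the following. The headings follow one common convention (after Michael Nygard's template); the number and placeholders are illustrative:

```markdown
# ADR-NNN: Adopt <new technology> for <specific workload>

## Status
Accepted (review scheduled: <adoption date + 6 months>)

## Context
The problem being solved, and why the boring alternatives fall short,
with evidence rather than projections.

## Alternatives considered
- <boring option>: why it was rejected, and what was measured.

## Decision
What is being adopted, and the single workload it is confined to.

## Consequences
Expected benefits, operational costs, and the migration path back
if the 6-month review fails.
```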

The Boring Technology Stack

For most web applications and backend services, this stack is boring and effective:

  • Compute: Linux, containers, standard orchestration
  • Data: PostgreSQL for relational, Redis for caching, S3-compatible storage for blobs
  • Messaging: Kafka or RabbitMQ depending on ordering and throughput requirements
  • Search: Elasticsearch for full-text search
  • Networking: HTTP/REST or gRPC, Nginx or HAProxy for load balancing
  • Monitoring: Prometheus, Grafana, structured logging to an aggregation service

This stack handles the requirements of 95% of applications. It is well-understood, well-documented, and well-supported. The team's engineering energy goes into solving business problems rather than fighting infrastructure.
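As one deliberately unexciting starting point, the data layer of this stack can be stood up locally with a compose file along these lines. Image versions, ports, and credentials are placeholder assumptions, not recommendations:

```yaml
# Hypothetical local sketch of the boring data layer: PostgreSQL for
# relational data, Redis for caching, Nginx in front. Versions and
# credentials are placeholders; use real secrets management in production.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder only
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  nginx:
    image: nginx:1.27
    ports:
      - "8080:80"
volumes:
  pgdata:
```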

Key Takeaways

  • Every team has a limited budget of innovation tokens. Spend them only on problems that boring technology genuinely cannot solve.
  • Boring technology has known failure modes, available expertise, mature tooling, and proven upgrade paths.
  • The hidden costs of new technology include debugging, hiring, operational knowledge, and security patching.
  • New technology is justified when the problem has no boring solution, the boring solution has a proven scaling ceiling, or the team has deep existing expertise.
  • Isolate new technology, invest in operational expertise early, and set a 6-month review date.
  • A standard boring stack (PostgreSQL, Redis, Kafka, Linux containers) handles the requirements of most applications.

Final Thoughts

Choosing boring technology is not a sign of conservatism or lack of ambition. It is a sign of maturity. The most impactful engineering is done by teams that spend their cognitive budget on the problem domain rather than on infrastructure novelty. Boring technology frees that budget.
