How I Evaluate Technical Proposals
The specific questions and frameworks I use when reviewing technical proposals, design documents, and architecture decisions, based on patterns from hundreds of reviews.
Context
I have reviewed hundreds of technical proposals over the years: design documents, RFC-style proposals, architecture decision records, and informal "here is what I am thinking" write-ups. The quality varies enormously, but the failure modes are remarkably consistent. Most weak proposals fail in the same ways, and most strong proposals succeed for the same reasons.
This post describes the evaluation framework I use. It is not a scoring rubric. It is a set of questions I ask and patterns I look for.
The First Question: What Problem Does This Solve?
This sounds obvious, but a surprising number of proposals start with the solution and work backward to the problem. "We should use Kafka" is a solution. "We need to decouple order processing from payment processing because deploys to the payment service are blocked by the order processing deployment schedule" is a problem.
A good problem statement includes:
- Who is affected: Users, developers, operations, business stakeholders
- How they are affected: Quantified if possible (latency, error rate, developer hours, cost)
- Why now: What changed that makes this problem worth solving today rather than next quarter
If the proposal cannot articulate a clear problem, the solution is premature.
The Second Question: What Are the Alternatives?
Every proposal should present at least two alternatives to the proposed approach, including "do nothing." The do-nothing option is important because it forces the author to articulate the cost of the status quo.
What I look for in the alternatives section:
- Honest evaluation: Each alternative should be evaluated on its merits, not dismissed to make the preferred option look better.
- Same evaluation criteria: All options should be evaluated against the same set of criteria.
- Acknowledgment of trade-offs: The preferred option should have downsides. If the proposal presents the preferred option as strictly better than all alternatives, the analysis is incomplete.
A proposal without alternatives is an announcement, not a proposal.
The Third Question: What Could Go Wrong?
The risk section is where I learn the most about the author's experience level. Junior engineers tend to list superficial risks ("the project might take longer than estimated"). Senior engineers list operational risks ("during migration, both the old and new systems will run simultaneously, and we need to handle writes landing in either system").
Risks I always look for:
- Migration risk: How do we get from the current state to the proposed state without disrupting users?
- Operational risk: What new failure modes does this introduce?
- Data risk: Can this change corrupt data or cause data loss?
- Rollback risk: If the change goes wrong, how do we undo it?
- Dependency risk: Does this introduce new external dependencies?
- Team risk: Does the team have the skills and experience to build and operate this?
The Evaluation Framework
For each proposal, I evaluate along these dimensions:
| Dimension | Question | Red Flag |
|---|---|---|
| Necessity | Does this solve a real, current problem? | Solution looking for a problem |
| Simplicity | Is this the simplest approach that could work? | Over-engineering for hypothetical future needs |
| Reversibility | Can we undo this if it does not work? | Irreversible changes without strong justification |
| Operability | Can we run this in production? | No discussion of monitoring, alerting, or debugging |
| Migration | How do we get there from here? | "Big bang" migration with no incremental path |
| Scope | Is the scope proportional to the problem? | Boil-the-ocean proposals that try to solve everything at once |
| Evidence | Is the proposal backed by data or prototypes? | Pure speculation about performance or behavior |
Patterns I See in Strong Proposals
They Start Small
The best proposals define a minimal first step that delivers value and gathers data. Instead of "redesign the entire notification system," the proposal says "add a message queue between the order service and the notification service as a first step, measure the impact, then evaluate further decomposition."
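That first step can be sketched in a few lines. This is an illustrative toy, not any real proposal's design: the in-memory `queue.Queue` stands in for a real broker, and the service and event names are hypothetical.

```python
import queue

# In-memory stand-in for a real message broker; the service and event
# names here are illustrative, not from an actual proposal.
order_events = queue.Queue()

def place_order(order_id):
    # The order service no longer calls the notification service directly;
    # it enqueues an event and returns immediately.
    order_events.put({"type": "order_placed", "order_id": order_id})

def drain_notifications():
    # Stand-in for the notification consumer: drains queued events.
    sent = []
    while not order_events.empty():
        event = order_events.get()
        sent.append(f"notify order {event['order_id']}")
        order_events.task_done()
    return sent

place_order(1)
place_order(2)
print(drain_notifications())  # → ['notify order 1', 'notify order 2']
```

The point of the sketch is the shape of the change, not the technology: the producer and consumer no longer share a deploy schedule, and the queue gives you something to measure before deciding on further decomposition.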
They Define Success Criteria
Strong proposals specify how we will know the change worked. Specific, measurable criteria: "p99 latency for checkout drops below 2 seconds" or "deployment frequency for the payment service increases from weekly to daily."
Without success criteria, you cannot evaluate the outcome: the project is "done" when the code is written, not when the problem is solved.
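A criterion like "p99 latency below 2 seconds" is only useful if everyone computes it the same way. A minimal nearest-rank p99 check, with hypothetical sample data:

```python
import math

def p99(latencies_ms):
    # Nearest-rank percentile: the smallest sample with at least 99% of
    # values at or below it.
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.99 * len(ordered)) - 1
    return ordered[rank]

# 100 hypothetical checkout latency samples, in milliseconds.
samples = [100] * 97 + [1500, 1900, 5000]
target_ms = 2000  # the proposal's stated success criterion
print(p99(samples), p99(samples) < target_ms)  # → 1900 True
```

Note that the single 5000 ms outlier does not fail the criterion; a p99 target tolerates the worst 1% by design, which is exactly the kind of detail worth pinning down in the proposal.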
They Address Migration Explicitly
The hardest part of any systems change is not building the new system. It is migrating from the old system while the old system is running. Strong proposals dedicate significant space to the migration plan:
- How will traffic be shifted incrementally?
- How will data be migrated without downtime?
- How long will both systems run in parallel?
- What is the rollback plan at each stage?
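Incremental traffic shifting is often just a stable hash plus a percentage knob. A sketch, with a hypothetical routing function (a real rollout would read the percentage from a config service or feature flag):

```python
import hashlib

def routes_to_new_system(user_id: str, rollout_percent: int) -> bool:
    # Hash the user id into a stable bucket 0-99, so a given user stays
    # on the same system as the percentage ramps up. Rollback at any
    # stage is "set the percentage back to 0".
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout_percent

# Ramping 1% -> 10% -> 50% -> 100% only ever adds users to the new
# system; no user flips back and forth between systems mid-ramp.
on_new = sum(routes_to_new_system(f"user-{i}", 10) for i in range(10_000))
print(f"{on_new / 100:.1f}% of users routed to the new system")
```

The stable bucketing is what makes the ramp answerable in a proposal review: you can say exactly which cohort is exposed at each stage and how to shrink it.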
They Acknowledge What They Do Not Know
Honest proposals include uncertainty. "We believe this approach will handle the expected load, but we have not validated it with a prototype. The first milestone includes a load test." This is more trustworthy than a proposal that presents everything as certain.
Patterns I See in Weak Proposals
Resume-Driven Architecture
The proposal introduces a technology the author wants to learn rather than the technology that best solves the problem. This is easy to spot: the proposal spends more time explaining the technology than explaining how it solves the problem.
Invisible Complexity
The proposal describes the happy path in detail but glosses over error handling, edge cases, and operational concerns. "Messages are processed by the consumer" is the happy path. What happens when the consumer crashes mid-processing? What happens when a message is malformed? What happens when the queue fills up?
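The gap between the happy path and a reviewable design is small but concrete. A toy consumer loop that answers the malformed-message question (the queue and message format are hypothetical; a real broker would also redeliver messages whose consumer crashed before acknowledging):

```python
import json
import queue

inbox = queue.Queue()
dead_letter = []  # malformed messages parked for inspection, not dropped

def consume():
    processed = []
    while not inbox.empty():
        raw = inbox.get()
        try:
            msg = json.loads(raw)
            processed.append(f"processed:{msg['id']}")
        except (json.JSONDecodeError, KeyError):
            # A bad payload goes to a dead-letter list instead of
            # crashing the loop and blocking everything behind it.
            dead_letter.append(raw)
        inbox.task_done()
    return processed

inbox.put('{"id": 1}')
inbox.put("not json")
print(consume(), dead_letter)  # → ['processed:1'] ['not json']
```

A proposal that names this behavior, even at this level of detail, signals that the author has thought past "messages are processed by the consumer."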
The Missing "Compared to What?"
The proposal describes the benefits of the proposed approach without comparing them to the current approach. "The new system will handle 10,000 requests per second." Is the current system handling 9,000 or 500? The absolute number is meaningless without context.
Scope Creep in the Proposal
The proposal starts by solving one problem but expands to solve three. This usually means the author is not sure which problem is most important and is trying to justify the effort by expanding the scope. The strongest proposals solve one problem well and explicitly defer the others.
Key Takeaways
- Start with the problem, not the solution. A proposal without a clear problem statement is premature.
- Require alternatives, including "do nothing." A proposal without alternatives is an announcement.
- Evaluate risks seriously. Migration risk and operational risk are the most commonly underestimated.
- Strong proposals start small, define success criteria, address migration explicitly, and acknowledge uncertainty.
- Watch for resume-driven architecture, invisible complexity, missing baselines, and scope creep.
- The best proposals are the ones that make the trade-offs explicit and let the reader evaluate them honestly.
Further Reading
- Designing Systems for Humans, Not Just Machines
- What I Look for in System Designs
- Engineering Decisions That Reduce Pager Fatigue
Final Thoughts
Evaluating technical proposals is not about finding flaws. It is about ensuring the team makes an informed decision with eyes open to the trade-offs, risks, and alternatives. The goal is not to reject proposals but to improve them, to surface the assumptions and risks that the author may not have considered, and to ensure the investment of engineering time is directed at the right problem with the right approach.