SSR vs SSG vs ISR in Next.js: What I Measured
Concrete latency, TTFB, and cache-hit measurements across SSR, SSG, and ISR rendering strategies in Next.js under realistic traffic.
Context
Next.js supports three rendering strategies: Server-Side Rendering (SSR), Static Site Generation (SSG), and Incremental Static Regeneration (ISR). Documentation describes the trade-offs in general terms. I wanted numbers.
I deployed three identical pages on Vercel, one per strategy, behind a CDN. Each page rendered a product detail view pulling from a Postgres-backed API. I ran load tests with k6 at 50, 200, and 500 concurrent users over 10-minute windows.
Problem
Choosing a rendering strategy without measurements leads to premature optimization or, worse, production latency surprises. Teams default to SSR because it "feels dynamic" or SSG because it "feels fast" without understanding where each breaks.
See also: What Breaks First When Traffic Scales.
Constraints
- Deployment target: Vercel (serverless functions for SSR, edge for static)
- Database: Neon Postgres, single region (us-east-1)
- Content freshness requirement: product data updates every 5 minutes on average
- Page complexity: 12 API calls aggregated into a single render
- Budget: must stay within Vercel hobby tier limits for the test
Design
Each route served the same React component tree. The only difference was the data-fetching strategy:
| Strategy | Data Fetching | Cache Behavior |
|---|---|---|
| SSR | getServerSideProps | No CDN cache, fresh on every request |
| SSG | getStaticProps (build time) | Full CDN cache, stale until redeploy |
| ISR | getStaticProps + revalidate: 300 | CDN cache with background regeneration |
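The three variants can be sketched as follows. This is a sketch, not the exact test code: `fetchProduct` stands in for the 12 aggregated API calls, and the SSG/ISR functions are renamed so they can sit side by side (in a real app each lives in its own page file as `getStaticProps`):

```typescript
// Stand-in for the Postgres-backed API; the real pages aggregate 12 calls.
type Product = { id: string; name: string; priceCents: number };

async function fetchProduct(id: string): Promise<Product> {
  return { id, name: "demo", priceCents: 1999 };
}

// SSR: runs on every request, no CDN cache.
export async function getServerSideProps(ctx: { params: { id: string } }) {
  const product = await fetchProduct(ctx.params.id);
  return { props: { product } };
}

// SSG: runs once at build time; page is stale until the next deploy.
export async function getStaticPropsSSG(ctx: { params: { id: string } }) {
  const product = await fetchProduct(ctx.params.id);
  return { props: { product } };
}

// ISR: same as SSG, plus background regeneration at most every 300s.
export async function getStaticPropsISR(ctx: { params: { id: string } }) {
  const product = await fetchProduct(ctx.params.id);
  return { props: { product }, revalidate: 300 };
}
```

The only line that differs between SSG and ISR is the `revalidate` field, which is what makes the comparison clean.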
I instrumented each with custom Server-Timing headers to measure:
- Database query time
- React render time
- Total server response time (TTFB at origin)
Client-side, I captured TTFB, FCP, and LCP via the Performance API.
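The server-side instrumentation boils down to a small formatter for the Server-Timing header syntax (`name;dur=<milliseconds>`). The helper name is mine, a sketch of the approach rather than the exact test code:

```typescript
// Hypothetical helper: formats measured phase durations into a
// Server-Timing header value, e.g. { db: 41.2 } -> "db;dur=41.2".
export function serverTiming(entries: Record<string, number>): string {
  return Object.entries(entries)
    .map(([name, dur]) => `${name};dur=${dur.toFixed(1)}`)
    .join(", ");
}

// Usage inside getServerSideProps (sketch):
//   const t0 = performance.now();
//   const product = await fetchProduct(id);      // database query time
//   const tDb = performance.now() - t0;
//   ctx.res.setHeader("Server-Timing", serverTiming({ db: tDb }));
```

Because browsers expose Server-Timing values through the Performance API on the navigation entry, the same header feeds both the Grafana dashboards and the client-side measurements.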
Trade-offs
Latency Results (p50 / p95 / p99 in ms)
| Strategy | TTFB p50 | TTFB p95 | TTFB p99 | FCP p50 |
|---|---|---|---|---|
| SSR | 320 | 890 | 1,400 | 580 |
| SSG | 18 | 32 | 48 | 210 |
| ISR (cache hit) | 22 | 38 | 55 | 220 |
| ISR (regeneration) | 340 | 920 | 1,500 | 590 |
Key observations:
- SSG is roughly 18x faster at p50 TTFB than SSR (18ms vs 320ms). This held across all concurrency levels.
- ISR cache hits perform identically to SSG. The overhead of checking revalidation state is negligible.
- ISR regeneration requests are slower than SSR by about 5-8% because of the additional cache-write step after rendering.
- SSR p99 degrades badly under load. At 500 concurrent users, SSR p99 hit 2,100ms. SSG stayed at 52ms.
Freshness vs Speed
| Strategy | Max Staleness | Update Mechanism |
|---|---|---|
| SSR | 0 (always fresh) | Per-request |
| SSG | Unbounded (until redeploy) | CI/CD pipeline |
| ISR | revalidate seconds (300s here) | Background regeneration |
For product pages where 5-minute staleness is acceptable, ISR gives SSG-level performance with bounded staleness.
Cost at Scale
At 1M page views/day:
- SSR: ~1M serverless function invocations. At Vercel pricing, roughly $15-20/day in compute.
- SSG: Near-zero compute. CDN bandwidth only.
- ISR: Approximately 288 regenerations/day per page (one every 5 minutes). For 1,000 pages, that is 288,000 invocations/day, roughly $4-5/day.
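The ISR regeneration math is just the revalidation window divided into a day:

```typescript
// Back-of-envelope check for the ISR invocation estimate above.
const revalidateSeconds = 300;                // 5-minute window
const secondsPerDay = 24 * 60 * 60;           // 86,400
const regenerationsPerPagePerDay = secondsPerDay / revalidateSeconds; // 288
const pages = 1_000;
const invocationsPerDay = regenerationsPerPagePerDay * pages;         // 288,000
```

Note this is a worst case: a page only regenerates when a request arrives after the window expires, so rarely visited pages regenerate less often.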
Related: Failure Modes I Actively Design For.
Failure Modes
SSR under cold starts: Serverless SSR functions cold-start at 800-1,200ms in my tests. This makes the first request after idle extremely slow. Provisioned concurrency helps but increases cost.
ISR stale-while-error: If the background regeneration fails (database down, API timeout), ISR continues serving the last successfully generated page. This is a feature, but it means stale data can persist indefinitely if no one notices the regeneration failures. There is no built-in alerting for this.
SSG build time scaling: With 10,000 product pages, full SSG builds took 18 minutes. At 100,000 pages, builds exceeded the 45-minute Vercel limit. Build time grows with page count, so full rebuilds become the bottleneck long before traffic does.
ISR thundering herd: When a cached ISR page expires, the first request triggers regeneration. But if 200 requests arrive simultaneously for that page, only one triggers regeneration while the rest get the stale version. This is correct behavior, but the single regeneration request carries the full latency penalty.
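The single-flight behavior can be illustrated with a small in-memory model. This is my own sketch of the semantics, not Next.js internals:

```typescript
// One regeneration in flight per path; concurrent requests share it.
const inFlight = new Map<string, Promise<string>>();
const cache = new Map<string, string>();

// Stand-in for the expensive render + cache write.
async function render(path: string): Promise<string> {
  return `fresh:${path}`;
}

async function serveISR(path: string): Promise<string> {
  const stale = cache.get(path);
  if (!inFlight.has(path)) {
    // First request past expiry kicks off the single regeneration...
    const p = render(path).then((html) => {
      cache.set(path, html);
      inFlight.delete(path);
      return html;
    });
    inFlight.set(path, p);
    // ...and, if there is no stale copy yet, pays the full latency itself.
    if (stale === undefined) return p;
  }
  // Everyone else is served the stale copy immediately.
  return stale ?? inFlight.get(path)!;
}
```

The 200 simultaneous requests in the scenario above all hit the final `return stale` path; only the one that created the in-flight promise carries the regeneration cost.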
Scaling Considerations
- SSR scales horizontally through serverless function concurrency, but each request incurs full compute cost
- SSG scales through CDN, which is effectively infinite for read traffic
- ISR scales like SSG for reads but requires serverless capacity for regeneration bursts
- For sites with more than 50,000 pages, ISR with on-demand revalidation (`res.revalidate()`) is preferable to time-based revalidation to avoid regeneration storms
Observability
What I monitored during the test:
- `Server-Timing` headers parsed into Grafana dashboards
- Vercel function logs for cold start detection (look for `COLD_START` in execution metadata)
- CDN cache hit ratio via the `x-vercel-cache` header (`HIT`, `STALE`, `MISS`)
- Database connection pool utilization (critical for SSR, irrelevant for SSG)
The most useful single metric: CDN cache hit ratio. For ISR, this should be above 99% for stable pages. If it drops below 95%, your revalidation interval is too short or you have too many unique URLs.
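A minimal way to compute that ratio from sampled `x-vercel-cache` values, counting both `HIT` and `STALE` as cache-served since `STALE` responses also come straight from the CDN:

```typescript
// Fraction of sampled responses served from the CDN cache.
// Only MISS falls through to origin/regeneration.
export function cacheHitRatio(samples: string[]): number {
  if (samples.length === 0) return 0;
  const served = samples.filter((v) => v === "HIT" || v === "STALE").length;
  return served / samples.length;
}
```

Feeding this from access logs or synthetic probes is enough to alert on the "below 95%" threshold described above.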
Key Takeaways
- SSG is the fastest and cheapest option when staleness is acceptable. Nothing beats "serve a file from a CDN."
- ISR provides a strong middle ground. For content that updates every few minutes, a 5-minute revalidation window adds negligible staleness at SSG-level latency.
- SSR should be reserved for pages where data must be request-specific (user dashboards, personalized content, real-time pricing).
- Measure TTFB at p95 and p99, not just p50. SSR's tail latency is where the pain lives.
- ISR regeneration failures are silent by default. Build alerting around the `x-vercel-cache` header and function error rates.
Further Reading
- Measuring Cold Starts Across Different Architectures: Cold start latency measurements across AWS Lambda, Vercel Functions, Cloudflare Workers, and containerized deployments with concrete numbers.
- Testing Caching Strategies in Real Conditions: Comparing cache-aside, write-through, and read-through strategies with measured hit rates, latency, and consistency trade-offs under production conditions.
- Implementing Server-Side Rendering Without Overhead: Techniques for reducing SSR latency including streaming, selective hydration, component-level caching, and measured performance gains.
Final Thoughts
The data made the decision straightforward for product pages: ISR with a 300-second revalidation. It delivered 22ms p50 TTFB (matching SSG), bounded staleness at 5 minutes, and cost 75% less than SSR at projected traffic. The only pages that stayed on SSR were the user dashboard and cart, where personalization made caching impossible. Measure before choosing. The defaults are not always what you expect.