# Comparing REST vs GraphQL for Mobile Clients

Measured payload sizes, request counts, latency, and battery impact of REST and GraphQL APIs serving a mobile application under varying network conditions.
## Context
A mobile application with 30,000 DAU needed an API redesign. The existing REST API required 4-7 round trips per screen, transferring 2-3x more data than the client consumed. I built a GraphQL layer alongside the REST API and measured both under realistic mobile network conditions.
See also: Designing Mobile Systems for Poor Network Conditions.
## Problem
Mobile clients face constraints that web clients do not: variable network quality, battery limitations, and strict payload size sensitivity. REST APIs designed for web clients often over-fetch and under-batch for mobile. GraphQL promises to solve both, but adds query parsing overhead and complexity.
## Constraints
- Client: React Native application on iOS and Android
- Network conditions tested: 4G (50ms RTT, 10Mbps), 3G (100ms RTT, 1.5Mbps), and poor 3G (200ms RTT, 400Kbps)
- Screen tested: product listing (20 items) with category filters and user-specific pricing
- REST API: existing, with 5 endpoints for the screen (products, categories, prices, user preferences, promotions)
- GraphQL API: new, single endpoint, schema matching the REST resources
- Backend: Node.js, Postgres, Redis cache
## Design

### REST Implementation
The product listing screen requires:
| Endpoint | Payload Size | Purpose |
|---|---|---|
| GET /products?limit=20 | 12KB | Product list |
| GET /categories | 3KB | Filter options |
| GET /prices?product_ids=... | 4KB | User-specific pricing |
| GET /user/preferences | 1KB | Display preferences |
| GET /promotions?active=true | 2KB | Active promotions |
| Total | 22KB | 5 requests |
### GraphQL Implementation

```graphql
query ProductListing($limit: Int!, $userId: ID!) {
  products(limit: $limit) {
    id
    name
    thumbnail
    price(userId: $userId) {
      amount
      currency
      discount
    }
    category {
      id
      name
    }
  }
  categories {
    id
    name
    productCount
  }
  promotions(active: true) {
    id
    title
    discount
  }
  userPreferences(userId: $userId) {
    currency
    sortOrder
  }
}
```

Single request. Response payload: 8KB.
## Trade-offs

### Network Performance (Product Listing Screen)
| Metric | REST (5 requests) | GraphQL (1 request) |
|---|---|---|
| Total payload (down) | 22KB | 8KB |
| Total payload (up) | 1.2KB (5 request headers) | 0.8KB (1 request + query) |
| Requests | 5 | 1 |
| Latency on 4G (p50) | 180ms (sequential) | 85ms |
| Latency on 3G (p50) | 420ms | 180ms |
| Latency on poor 3G (p50) | 1,200ms | 380ms |
| Latency on 4G (parallel) | 95ms | 85ms |
On poor 3G, GraphQL is roughly 3x faster because it eliminates 4 of the 5 round trips. Each eliminated round trip costs at least the 200ms RTT, so the four together account for about 800ms of the gap; the smaller payload covers the rest. On 4G with parallel requests, the difference is marginal.
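The arithmetic can be sanity-checked with a toy latency model. This is a sketch, not the measurement methodology used above: it ignores TLS setup, TCP slow start, and server processing time, and `estimateLatencyMs` is an illustrative helper, not code from the project.

```javascript
// Toy model: sequential requests pay one RTT each; parallel requests
// (or a single GraphQL request) pay one RTT total, plus the time to
// serialize the payload onto the link.
function estimateLatencyMs({ rttMs, bandwidthKbps, payloadKB, requests, sequential }) {
  const transferMs = (payloadKB * 8 * 1000) / bandwidthKbps; // serialization time
  const roundTrips = sequential ? requests : 1;              // parallel RTTs overlap
  return roundTrips * rttMs + transferMs;
}

// Poor 3G from the test matrix: 200ms RTT, 400Kbps.
const restPoor3g = estimateLatencyMs({
  rttMs: 200, bandwidthKbps: 400, payloadKB: 22, requests: 5, sequential: true,
});
const gqlPoor3g = estimateLatencyMs({
  rttMs: 200, bandwidthKbps: 400, payloadKB: 8, requests: 1, sequential: true,
});

console.log(Math.round(restPoor3g)); // 5 * 200 + 440 = 1440
console.log(Math.round(gqlPoor3g));  // 200 + 160 = 360
```

The model lands in the same ballpark as the measured 1,200ms and 380ms; it overestimates the REST case slightly because real clients reuse connections and overlap some work.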
### Server-Side Performance
| Metric | REST | GraphQL |
|---|---|---|
| Server processing time (p50) | 25ms (total across 5) | 35ms (single, with resolver overhead) |
| Server processing time (p95) | 55ms | 65ms |
| Database queries | 5 (one per endpoint) | 5 (one per resolver, batched with DataLoader) |
| CPU per request | Lower per endpoint | Higher (query parsing + validation) |
| Cache hit rate | 85% (per endpoint) | 72% (per query, more unique cache keys) |
GraphQL is 10ms slower on the server due to query parsing and resolver orchestration. But the total end-to-end latency is lower because the network savings exceed the server overhead.
### Payload Efficiency
| Field Category | REST (bytes) | GraphQL (bytes) | Reduction |
|---|---|---|---|
| Product data | 12,000 | 5,200 | 57% |
| Category data | 3,000 | 1,800 | 40% |
| Price data | 4,000 | Included in product | N/A |
| Preferences | 1,000 | 400 | 60% |
| Promotions | 2,000 | 600 | 70% |
| Total | 22,000 | 8,000 | 64% |
GraphQL eliminates unused fields. The REST product endpoint returned 28 fields per product; the client used 6. GraphQL returned exactly the 6 requested fields.
### Battery Impact
Measured via Android Battery Historian over 1-hour sessions with 50 screen loads:
| Metric | REST | GraphQL |
|---|---|---|
| Network radio active time | 42s | 18s |
| Total data transferred | 1.1MB | 0.4MB |
| Estimated battery impact | 1.8% | 0.9% |
Fewer network requests mean fewer radio wake-ups, which directly impacts battery consumption.
Related: Failure Modes I Actively Design For.
## Failure Modes
GraphQL query complexity attacks: Without limits, a client can request deeply nested data that generates expensive database queries. Mitigation: query complexity analysis with a maximum score, and depth limiting (max depth: 5).
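A minimal sketch of the depth-limit side of that mitigation. Production code should walk the parsed AST with a validation rule (libraries such as graphql-depth-limit do this); counting brace nesting in the raw string is a rough stand-in for illustration, and it does not compute a complexity score, only depth.

```javascript
// Rough depth estimate: count brace nesting in the query string.
// Ignores strings and comments, so treat it as illustrative only.
function queryDepth(query) {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === "{") { depth += 1; max = Math.max(max, depth); }
    else if (ch === "}") depth -= 1;
  }
  return max;
}

const MAX_DEPTH = 5; // the limit used in this writeup

function assertQueryAllowed(query) {
  const depth = queryDepth(query);
  if (depth > MAX_DEPTH) {
    throw new Error(`query depth ${depth} exceeds limit ${MAX_DEPTH}`);
  }
}

assertQueryAllowed("{ products { category { name } } }"); // depth 3: allowed
```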
Single point of failure: REST spreads risk across 5 endpoints. If one fails, the screen degrades gracefully (missing prices, but products still load). GraphQL returns a single response. A resolver failure can fail the entire query. Mitigation: use @defer or partial response patterns to allow partial success.
Cache inefficiency: REST endpoints have simple cache keys (/products?limit=20). GraphQL queries are structurally diverse, making response caching difficult. Two clients requesting the same products but different fields produce different cache keys. Mitigation: cache at the resolver level (per-field), not at the response level.
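A sketch of that resolver-level mitigation, assuming an in-process TTL cache keyed by field name plus arguments. The names (`cachedResolver`, `cacheKey` shape) are illustrative, not from a library.

```javascript
// Per-resolver cache: the key depends only on this field's arguments,
// so two queries that differ in *other* fields still share the entry.
const cache = new Map(); // key -> { value, expiresAt }

function cachedResolver(fieldName, ttlMs, resolve) {
  return async (args) => {
    const key = `${fieldName}:${JSON.stringify(args)}`;
    const hit = cache.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
    const value = await resolve(args);
    cache.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
}

// Illustrative resolver; dbCalls counts how often the "database" is hit.
let dbCalls = 0;
const getCategories = cachedResolver("categories", 60_000, async () => {
  dbCalls += 1;
  return [{ id: "1", name: "Shoes" }];
});
```

This is why the response-level cache hit rate (72% in the table above) understates what resolver-level caching can recover.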
N+1 query problem in resolvers: Without DataLoader, a GraphQL query for 20 products with prices generates 20 individual price queries. DataLoader batches these into a single query, but it must be implemented explicitly for each resolver.
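A stripped-down illustration of what DataLoader does under the hood: collect the keys requested in the same tick, then issue one batched fetch. `TinyLoader` is a toy, not the real library, which also adds per-request caching, key deduplication, and scheduling controls.

```javascript
// Collects load() calls made in the same tick and resolves them
// with a single batched fetch.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // (keys) => Promise of values in the same order
    this.queue = [];
  }
  load(key) {
    return new Promise((resolve, reject) => {
      if (this.queue.length === 0) {
        // Flush after the current tick, once all resolvers have enqueued.
        process.nextTick(() => this.flush());
      }
      this.queue.push({ key, resolve, reject });
    });
  }
  async flush() {
    const batch = this.queue;
    this.queue = [];
    try {
      const values = await this.batchFn(batch.map((item) => item.key));
      batch.forEach((item, i) => item.resolve(values[i]));
    } catch (err) {
      batch.forEach((item) => item.reject(err));
    }
  }
}

// 20 per-product price lookups collapse into one batched "query".
let batches = 0;
const priceLoader = new TinyLoader(async (ids) => {
  batches += 1;
  return ids.map((id) => ({ productId: id, amount: 100 }));
});
```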
## Scaling Considerations
- REST APIs scale horizontally by endpoint. High-traffic endpoints can be scaled independently. GraphQL concentrates all traffic on a single endpoint, requiring the entire resolver graph to scale together.
- GraphQL persisted queries (pre-registered query strings) eliminate parsing overhead and reduce request payload size. For mobile clients with a known set of queries, this is strongly recommended.
- CDN caching works naturally with REST (cache by URL). GraphQL POST requests require custom cache key extraction or Automatic Persisted Queries (APQ) with GET requests.
- At 100,000+ DAU, the resolver-level caching strategy matters more than the transport protocol. Invest in DataLoader and per-resolver TTLs.
## Observability
- Track per-resolver execution time, not just total query time. Slow resolvers hide behind fast ones in aggregate metrics.
- Monitor query complexity scores and reject queries above threshold
- Log unique query patterns to identify opportunities for persisted queries
- Measure payload sizes per client version (older clients may request more fields)
- Compare REST vs GraphQL error rates during the migration period
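The per-resolver timing point above can be sketched as a wrapper that records each field's own elapsed time, so a slow resolver cannot hide inside the total query time. Metric names and storage are illustrative; in production this would feed a metrics library rather than an in-memory map.

```javascript
// fieldName -> array of elapsed-ms samples
const timings = new Map();

function timedResolver(fieldName, resolve) {
  return async (args) => {
    const start = process.hrtime.bigint();
    try {
      return await resolve(args);
    } finally {
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      if (!timings.has(fieldName)) timings.set(fieldName, []);
      timings.get(fieldName).push(elapsedMs);
    }
  };
}

// Illustrative resolver wrapped with its own timer.
const resolvePrices = timedResolver("Product.price", async () => ({ amount: 100 }));
```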
## Key Takeaways
- GraphQL reduces mobile network latency by 50-70% on slow connections by eliminating round trips and over-fetching.
- On fast connections with parallel requests, the difference between REST and GraphQL is marginal (10-15%).
- Server-side, GraphQL adds 10-30% CPU overhead due to query parsing and resolver orchestration.
- Cache hit rates are lower with GraphQL unless you implement resolver-level caching.
- The primary benefit for mobile is payload reduction (64% smaller), which impacts both latency and battery life.
- Use persisted queries in production. Parsing and validating arbitrary GraphQL queries on every request is wasteful for mobile clients with a fixed set of screens.
## Further Reading
- Designing Idempotent APIs for Mobile Clients: How to design APIs that handle duplicate requests safely, covering idempotency keys, server-side deduplication, and failure scenarios spe...
- Designing APIs With Mobile Constraints in Mind: How to design backend APIs that account for mobile-specific constraints: bandwidth, latency, battery, intermittent connectivity, and long...
- Designing Rate Limiting for Mobile APIs: Rate limiting strategies for APIs consumed by mobile clients, covering token bucket algorithms, client identification, degradation modes,...
## Final Thoughts
GraphQL was the correct choice for this mobile client. The 64% payload reduction and single round trip improved the user experience measurably on 3G networks, which represented 35% of the user base. The trade-off was increased server complexity and reduced cache efficiency. For a mobile-first product, that trade-off is worth making. For a web application on fast connections with effective parallel fetching, REST with field selection (?fields=id,name,price) often achieves 80% of the benefit with 20% of the complexity.
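The REST field-selection alternative mentioned above can be sketched in a few lines: the server projects each resource down to the fields named in the query string. The `fields` parameter name and `selectFields` helper are illustrative, not a standard.

```javascript
// Project a resource down to the fields requested via ?fields=a,b,c.
function selectFields(resource, fieldsParam) {
  if (!fieldsParam) return resource; // no filter: return everything
  const wanted = new Set(fieldsParam.split(","));
  return Object.fromEntries(
    Object.entries(resource).filter(([key]) => wanted.has(key))
  );
}

const product = { id: "p1", name: "Shoe", price: 100, sku: "X", weightGrams: 300 };
console.log(selectFields(product, "id,name,price"));
```

This keeps URL-based CDN caching and per-endpoint scaling while trimming the over-fetched fields, which is most of the payload win without the resolver machinery.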