Comparing Search Implementations: Client vs Server
Measuring latency, bundle size, and relevance quality between client-side search with Fuse.js and server-side search with Postgres full-text and Meilisearch.
Context
A content site with 2,000 articles needed search. The question was whether to ship a search index to the browser or query a server. I tested three implementations: Fuse.js on the client, Postgres tsvector full-text search, and Meilisearch as a dedicated search service.
Problem
Client-side search avoids network round trips but ships data to every visitor. Server-side search adds latency and infrastructure but handles large datasets and complex ranking. The break-even point depends on corpus size, query complexity, and user expectations.
Constraints
- Corpus: 2,000 articles, average 1,200 words each
- Search fields: title, description, body text, tags
- Acceptable search latency: under 200ms end-to-end
- Bundle budget: under 100KB additional JavaScript for search
- Must support typo tolerance and prefix matching
- No user authentication required for search
Design
Client-Side: Fuse.js
Pre-built a JSON index at build time containing title, description, and the first 300 characters of each article body. Shipped this as a static asset loaded on first search interaction.
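Querying that index on the client goes through a Fuse.js configuration. A minimal sketch (the weights and threshold here are illustrative assumptions, not the values used in the test):

```js
// Fuse.js options sketch -- weights and threshold are assumptions,
// not the tuned values from the experiment
const fuseOptions = {
  includeScore: true,
  threshold: 0.3,        // lower = stricter fuzzy matching
  ignoreLocation: true,  // match anywhere in the field, not just the start
  keys: [
    { name: 'title', weight: 0.5 },
    { name: 'description', weight: 0.3 },
    { name: 'excerpt', weight: 0.1 },
    { name: 'tags', weight: 0.1 },
  ],
};

// Usage (after `npm install fuse.js`):
// import Fuse from 'fuse.js';
// const fuse = new Fuse(index, fuseOptions);
// const results = fuse.search('how to cache').slice(0, 10); // [{ item, score }, ...]
```

Weighting title over body excerpt mirrors the Postgres setweight approach below.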
```js
// Index generation at build time
const index = articles.map(a => ({
  title: a.title,
  description: a.description,
  excerpt: a.body.slice(0, 300),
  slug: a.slug,
  tags: a.tags
}));
```

Server-Side: Postgres Full-Text Search
```sql
ALTER TABLE articles ADD COLUMN search_vector tsvector
  GENERATED ALWAYS AS (
    setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(description, '')), 'B') ||
    setweight(to_tsvector('english', coalesce(body, '')), 'C')
  ) STORED;

CREATE INDEX idx_articles_search ON articles USING GIN (search_vector);
```

Query:
```sql
SELECT title, description, slug,
       ts_rank(search_vector, websearch_to_tsquery('english', $1)) AS rank
FROM articles
WHERE search_vector @@ websearch_to_tsquery('english', $1)
ORDER BY rank DESC
LIMIT 10;
```

Server-Side: Meilisearch
Deployed as a sidecar container. Indexed all article fields with default ranking rules. Queried via REST API.
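The REST call is a single POST per query. A sketch of building that request (host, index name, and result limit are assumptions for a default local deployment; a production instance would also need an Authorization header with an API key):

```js
// Build a Meilisearch search request -- host and index name are
// illustrative assumptions for a default local deployment
function buildSearchRequest(host, indexUid, query) {
  return {
    url: `${host}/indexes/${indexUid}/search`,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ q: query, limit: 10 }),
  };
}

// Usage with fetch:
// const req = buildSearchRequest('http://localhost:7700', 'articles', 'caching');
// const res = await fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
// const { hits } = await res.json();
```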
Trade-offs
| Metric | Fuse.js (Client) | Postgres FTS | Meilisearch |
|---|---|---|---|
| Index size transferred | 420KB (gzipped: 140KB) | 0 (server-side) | 0 (server-side) |
| Search latency (p50) | 8ms (local) | 45ms (network + query) | 22ms (network + query) |
| Search latency (p95) | 35ms (local) | 80ms | 35ms |
| Typo tolerance | Yes (configurable) | No (requires trigram extension) | Yes (built-in) |
| Prefix matching | Yes | Partial (with :*) | Yes |
| Relevance quality | Acceptable for titles, poor for body | Good with weighted vectors | Excellent |
| Infrastructure cost | $0 | $0 (existing DB) | $5-15/month (container) |
| Initial load penalty | 140KB + parse time (~120ms) | None | None |
Bundle Impact Analysis
| Component | Size (gzipped) |
|---|---|
| Fuse.js library | 6KB |
| Search index (2,000 articles) | 140KB |
| Total | 146KB |
At 2,000 articles, the index exceeds the 100KB budget. At 500 articles, it was 38KB, which is acceptable. The break-even for bundle size is approximately 700 articles with the excerpt-only approach.
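The index-size numbers above scale roughly linearly with article count. A back-of-the-envelope estimator (gzip compression is not perfectly linear, so treat this as an approximation, not a measurement):

```js
// Rough linear estimate of gzipped index transfer size.
// Measured baseline: 2,000 articles -> ~140KB gzipped (~70 bytes/article).
const GZIPPED_BYTES_PER_ARTICLE = 140_000 / 2_000; // 70

function estimateIndexKB(articleCount) {
  return (articleCount * GZIPPED_BYTES_PER_ARTICLE) / 1_000;
}

console.log(estimateIndexKB(2_000));  // 140 -- matches the measured baseline
console.log(estimateIndexKB(10_000)); // 700 -- far past the 100KB budget
```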
Relevance Comparison
I tested 20 common queries and scored results manually (correct result in top 3):
| Query Type | Fuse.js | Postgres FTS | Meilisearch |
|---|---|---|---|
| Exact title match | 20/20 | 20/20 | 20/20 |
| Partial title match | 18/20 | 16/20 | 19/20 |
| Concept search (e.g., "how to cache") | 12/20 | 17/20 | 18/20 |
| Typo (e.g., "authenication") | 15/20 | 3/20 | 19/20 |
Fuse.js struggles with concept matching because it operates on fuzzy string similarity, not semantic relevance. Postgres FTS handles concept matching through stemming but lacks typo tolerance without pg_trgm.
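The Postgres typo-tolerance gap can be narrowed with the pg_trgm extension. A sketch (the index name and fallback query are illustrative, assuming the extension can be installed on the server):

```sql
-- Trigram-based fuzzy matching (assumes pg_trgm is available)
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_articles_title_trgm ON articles USING GIN (title gin_trgm_ops);

-- Fallback when full-text search returns nothing: % is the trigram
-- similarity operator, so "authenication" still finds "authentication"
SELECT title, slug, similarity(title, 'authenication') AS sim
FROM articles
WHERE title % 'authenication'
ORDER BY sim DESC
LIMIT 10;
```

Running the trigram query only as a fallback keeps the fast weighted-tsvector path as the default.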
Failure Modes
Fuse.js: stale index. The search index is built at deploy time. New articles are invisible until the next build. For daily publishing, this creates a 0-24 hour search gap.
Fuse.js: memory pressure on mobile. Parsing a 420KB JSON object on a low-end Android device took 280ms and allocated 3.2MB of heap. On devices with less than 2GB RAM, this can trigger garbage collection pauses.
Postgres FTS: connection pool exhaustion. Each search query holds a connection. Under load (100+ concurrent searches), the connection pool saturated, causing 500ms+ queue times. Mitigation: dedicated read replica or connection pooler (PgBouncer).
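A transaction-pooling sketch of the PgBouncer mitigation (every value here is illustrative, not a tuned number from the test):

```ini
; pgbouncer.ini sketch -- values are illustrative, not tuned
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction      ; return connections after each transaction
default_pool_size = 20       ; server connections per user/database pair
max_client_conn = 500        ; clients can queue well beyond the pool size
```

Transaction pooling works here because search queries are single-statement and stateless.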
Meilisearch: index corruption on crash. Meilisearch v1.x had known issues with index corruption during ungraceful shutdowns. Mitigation: scheduled snapshots and a rebuild script.
Scaling Considerations
- Client-side search does not scale with corpus size. At 10,000 articles, the index would be 700KB+ gzipped, which is unacceptable.
- Postgres FTS scales well to 100,000+ documents with proper indexing. Query time stays under 50ms with GIN indexes.
- Meilisearch handles millions of documents but requires dedicated RAM (roughly 1.5x the dataset size).
- For multi-language content, Postgres FTS requires per-language dictionaries. Meilisearch handles this natively.
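One way to validate the Postgres scaling claim on a growing corpus is to confirm the planner actually uses the GIN index:

```sql
-- The plan should show a bitmap index scan on idx_articles_search,
-- not a sequential scan over articles
EXPLAIN ANALYZE
SELECT slug
FROM articles
WHERE search_vector @@ websearch_to_tsquery('english', 'caching');
```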
Observability
- Client-side: track search latency via Performance API marks, index load time, and query-to-click conversion rate
- Postgres: monitor pg_stat_statements for search query duration, index scan counts, and sequential scan fallbacks
- Meilisearch: the built-in /stats endpoint provides index size, query count, and average response time
- All implementations: log zero-result queries to identify content gaps
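The client-side marks can be as simple as wrapping the search call (the mark names and the runSearch placeholder below are assumptions, not the instrumented code from the test):

```js
// Timing a client-side search with Performance API marks.
// runSearch is a placeholder for the real search function.
function runSearch(query) {
  return []; // placeholder: the real implementation would query the index
}

performance.mark('search:start');
const results = runSearch('how to cache');
performance.mark('search:end');
performance.measure('search', 'search:start', 'search:end');

const [measure] = performance.getEntriesByName('search');
console.log(`search took ${measure.duration.toFixed(1)}ms`);
```

The same measure entries can be shipped to an analytics endpoint to build the p50/p95 figures above.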
Key Takeaways
- Client-side search works for small corpora (under 500 articles) where typo tolerance matters and infrastructure simplicity is valued.
- Postgres FTS is the best default for server-side search when you already have Postgres. Zero additional infrastructure, good relevance, and linear scaling.
- Meilisearch wins on typo tolerance and relevance quality but adds operational overhead.
- The decision point is corpus size. Under 500 documents, client-side is viable. Over 500, server-side is necessary.
- Bundle size is the binding constraint for client-side search, not latency.
Further Reading
- Building a Simple Search Index: Designing an inverted index from scratch with tokenization, ranking, and query parsing, then comparing it against Postgres full-text search.
- Building a View Counter System With Postgres: Designing a page view counter that handles concurrent writes, avoids double-counting, and stays responsive under load using Postgres.
Final Thoughts
I shipped Postgres FTS. The corpus was at 2,000 articles and growing. The 45ms p50 latency was well within the budget, typo tolerance was not critical enough to justify Meilisearch's operational overhead, and the zero-infrastructure cost sealed the decision. Adding pg_trgm for fuzzy matching covered the remaining relevance gaps at no additional cost.