Case Study
GraphQL IDBLink
Persistent IndexedDB Caching Layer for Apollo Client
Impact
- Redundant Homepage Network Requests: ~75% reduction in network requests for returning users
- Perceived Data Load Time (slow 3G): ~10x faster for cached operations
- SSR/CSR Cache Staleness Fix: ~10 lines of code across 3 files, zero runtime cost
Background
On a high-traffic e-commerce homepage serving millions of daily users, GraphQL-powered widgets (sliders, recommendations, dynamic banners) were fetching fresh data on every page load — even when the underlying data hadn’t changed. Apollo Client’s built-in InMemoryCache provided fast in-session caching, but it was entirely volatile: every page reload, tab close, or browser restart wiped the cache. Users on slow or unstable mobile networks experienced noticeable delays as identical data was re-fetched from the server on each visit.
Challenges
- Apollo InMemoryCache is volatile — lost on page reload, tab close, or navigation away from the SPA
- Homepage widgets fetch user-specific data that changes infrequently (hourly promotions, personalized recommendations) but was re-requested on every visit
- SSR-rendered pages already had fresh data, but CSR re-renders still triggered redundant network requests to ‘validate’ the cache
- Mobile users on 3G/slow-4G experienced 200–800ms perceived delays waiting for GraphQL responses that returned identical payloads
- No per-query granularity — all-or-nothing caching strategies couldn’t differentiate between frequently-changing and stable data
- Server-side rendering created a cache coherency gap: SSR bypasses IndexedDB (no browser APIs), so stale CSR cache entries persisted across render-mode transitions
Business Context
The homepage is the highest-traffic page on the platform, serving as the primary entry point for product discovery and conversion. Every 100ms of perceived load time correlates with measurable drops in engagement and add-to-cart rates. Reducing redundant network requests directly impacts infrastructure costs (CDN egress, origin server load) and user experience metrics (FCP, LCP, TTI).
Solution
Build a transparent, persistent caching layer that sits inside the Apollo Link chain and intercepts GraphQL operations at the transport level. Instead of replacing Apollo’s InMemoryCache, this solution adds a second, persistent tier using the browser’s IndexedDB API. The system is composed of three cooperating Apollo Links (IDBManageLink, IDBPrecheckLink, IDBLink) that use Apollo’s split mechanism for conditional routing — enabling per-operation opt-in caching with configurable TTL, user-scoped keys, and non-blocking writes.
Design Principles
Link Chain Overview
Core links (auth · error · retry) run first, then the operation is routed through the IDB links:

- IDBManageLink: gate — generate key, set context flags
- IDBPrecheckLink: check has(key, ttl)
- IDBLink: serve from IndexedDB (cache hit → instant)
- HTTP Link: fetch from the network, then write to IndexedDB (cache miss → network)
- HTTP Link: standard network request (caching disabled)
Data Flow
IDBManageLink
1. Check the APOLLO_IDB_CACHE_ENABLED flag
2. Read the idb context (enabled, ttl, customKey)
3. Generate the DJB2 hash key → set __idb_key

IDBPrecheckLink
1. Call has(key, ttl) — check IndexedDB for a valid entry
2. Set __idb_cache_ready on the context

IDBLink (cache hit)
- get(key) from IndexedDB, returned with __idb_from_cache: true

HTTP Link (cache miss)
- Fetch from the server, then write the response → IndexedDB
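The key-generation step above can be sketched as follows. The DJB2 algorithm and compact base-36 output are stated in the architecture notes; the function names and the exact hash input (query text concatenated with serialized variables) are illustrative assumptions, not the actual IDBManageLink internals:

```typescript
// Sketch of DJB2-based cache-key generation (names are illustrative).
function djb2(input: string): string {
  let hash = 5381;
  for (let i = 0; i < input.length; i++) {
    // Classic DJB2 step: hash * 33 + charCode, kept in 32-bit range
    hash = ((hash << 5) + hash + input.charCodeAt(i)) | 0;
  }
  // Compact base-36 rendering for small key sizes
  return (hash >>> 0).toString(36);
}

// Hypothetical key builder matching the {operationName}#{hash} format
function makeIdbKey(operationName: string, query: string, variables: object): string {
  return `${operationName}#${djb2(query + JSON.stringify(variables))}`;
}
```

Because the hash is deterministic, the same operation with the same variables always maps to the same IndexedDB key.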
IndexedDB Storage Schema
- Database: app-idb
- Object store: apollo-cache
- Key: {operationName}#{hash} — DJB2 hash of query + variables
- Value: { value: <response>, timestamp: <epoch ms> } — the full GraphQL response plus the write timestamp
- The timestamp field enables TTL expiration checks: TTL is checked via timestamp comparison, and expired entries are auto-deleted by has()
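A minimal sketch of the has(key, ttl) check described above, using a synchronous Map in place of the real (asynchronous) IndexedDB object store; the entry shape { value, timestamp } follows the schema, everything else is illustrative:

```typescript
interface CacheEntry {
  value: unknown;    // full GraphQL response
  timestamp: number; // epoch ms at write time
}

// Sketch of the TTL check performed by has(key, ttl). A Map stands in
// for the apollo-cache object store; real IndexedDB access is async.
function has(key: string, ttlMs: number, store: Map<string, CacheEntry>): boolean {
  const entry = store.get(key);
  if (!entry) return false;
  if (Date.now() - entry.timestamp > ttlMs) {
    store.delete(key); // expired entries are auto-deleted
    return false;
  }
  return true;
}
```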
Key Architectural Decisions
- IndexedDB over localStorage: async API, no 5MB limit, structured data support, doesn’t block the main thread
- Three-link architecture over monolithic link: separation of concerns (gate, precheck, serve), each link testable in isolation
- DJB2 hashing for cache keys: deterministic, fast, compact base-36 output for small key sizes
- postTask scheduler over raw setTimeout: native browser API for priority-aware background work, graceful fallback
- Feature flag gating: APOLLO_IDB_CACHE_ENABLED allows instant kill-switch without code deployment
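The postTask decision might be implemented roughly like this. scheduler.postTask and its 'background' priority are standard browser APIs; the wrapper name and where it is invoked are assumptions:

```typescript
type Task = () => void;

// Schedule low-priority background work (e.g. the non-blocking
// IndexedDB write) via scheduler.postTask where available, with a
// graceful setTimeout fallback elsewhere.
function scheduleBackground(task: Task): void {
  const scheduler = (globalThis as any).scheduler;
  if (scheduler && typeof scheduler.postTask === 'function') {
    // 'background' priority keeps cache writes off the critical path
    scheduler.postTask(task, { priority: 'background' });
  } else {
    setTimeout(task, 0);
  }
}
```

The feature test means the same call site works in browsers without the Scheduler API, matching the "graceful fallback" noted above.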
Detailed Implementation
1. Enable the Feature Flag

The APOLLO_IDB_CACHE_ENABLED constant must be set to a truthy value in your application’s runtime configuration.

```typescript
// runtime-config.ts
export const APOLLO_IDB_CACHE_ENABLED = true;
```
2. Integrate into the Apollo Link Chain

Insert the IDBLink system at the end of the link chain, wrapping the terminating (HTTP) link using Apollo’s split mechanism.

```typescript
import { from } from '@apollo/client';
import type { Operation } from '@apollo/client';
import { IDBManageLink, IDBPrecheckLink, IDBLink } from '@/client/graphql/link/IDBLink';

// `link` is the app's existing array of Apollo links, ending in the HTTP link
const [terminatingLink] = link.slice(-1);
const coreLinks = link.slice(0, link.length - 1);

const composedLink = from([
  ...coreLinks,
  new IDBManageLink().split(
    (op: Operation) => op.getContext().__idb_enabled,
    new IDBPrecheckLink().split(
      (op: Operation) => op.getContext().__idb_cache_ready,
      new IDBLink(),
      terminatingLink,
    ),
    terminatingLink,
  ),
]);
```

3. Initialize Apollo Client
Pass the composed link to createApolloClient to finalize the integration.
```typescript
import { createApolloClient } from '@/graphql/apolloClient';

const client = createApolloClient({
  link: composedLink,
});
```

4. Call markSSRHydration on Client Mount
In your app’s root component, call markSSRHydration() once during client-side hydration to invalidate stale IndexedDB entries from previous SSR cycles.
```tsx
import { useEffect } from 'react';
import { ApolloProvider } from '@apollo/client';
import { markSSRHydration } from '@/client/graphql/link/IDBLink/index-db';

function App() {
  useEffect(() => {
    markSSRHydration();
  }, []);
  return <ApolloProvider client={client}>...</ApolloProvider>;
}
```

Impact Analysis
The IDBLink system transformed the homepage loading experience for returning users by eliminating redundant network requests for slowly-changing data. The solution operates at the transport layer with zero changes to component code, making it invisible to feature developers.
Performance Metrics
- Redundant homepage network requests: ~75% reduction for returning users
- Perceived data load time (slow 3G): ~10x faster for cached operations
- SSR/CSR cache staleness fix: ~10 lines of code across 3 files, zero runtime cost
- Main thread impact: non-blocking — async IndexedDB access plus background-scheduled writes
- Origin server load: reduced in proportion to the eliminated redundant requests
Qualitative Improvements
- Returning users see homepage content instantly from IndexedDB, even on cold browser starts
- Component developers opt-in with 3 lines of context — no architectural changes needed
- Feature flag provides instant kill-switch without code deployment
- Fail-open design means IndexedDB failures never degrade user experience below baseline
- Per-query TTL granularity allows different cache windows for different data freshness needs
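The per-query opt-in mentioned above could plausibly look like this; the idb context fields (enabled, ttl, customKey) come from the data-flow notes, while the helper name and the query in the usage comment are hypothetical:

```typescript
// Hypothetical helper producing the per-query opt-in context.
// The idb fields match those read by IDBManageLink.
const idbCacheContext = (ttlMs: number, customKey?: string) => ({
  idb: { enabled: true, ttl: ttlMs, customKey },
});

// Illustrative usage with a hypothetical homepage query:
// useQuery(HOMEPAGE_WIDGETS_QUERY, {
//   context: idbCacheContext(60 * 60 * 1000), // cache for one hour
// });
```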
Future Considerations
- LRU eviction policy: cap the number of IndexedDB entries and evict the least-recently-used when exceeded
- Stale-while-revalidate: serve from IndexedDB immediately, then background-refresh from the network and update the cache
- Cross-tab synchronization via BroadcastChannel: when one tab refreshes data, update IndexedDB for all tabs
- Migration to the Cache Storage API: if IndexedDB proves unreliable on specific browsers/devices
- Automatic TTL tuning: analyze cache hit rates per operation and adjust TTLs dynamically
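As one possible shape for the stale-while-revalidate idea, here is a sketch using a Map in place of IndexedDB; this is a hypothetical extension, not shipped code:

```typescript
interface Entry<T> {
  value: T;
  timestamp: number;
}

// Serve the cached value immediately, refresh from the network in the
// background, and update the cache for the next read.
async function staleWhileRevalidate<T>(
  key: string,
  cache: Map<string, Entry<T>>,
  fetchFresh: () => Promise<T>,
): Promise<T> {
  const cached = cache.get(key);
  const refresh = fetchFresh().then((value) => {
    cache.set(key, { value, timestamp: Date.now() });
    return value;
  });
  if (cached) {
    refresh.catch(() => {}); // a failed background refresh keeps the stale value
    return cached.value;     // cache hit: respond instantly
  }
  return refresh;            // cache miss: wait for the network
}
```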