E-commerce Platform·Software Engineer — Web Platform·2023

Case Study

GraphQL IDBLink

Persistent IndexedDB Caching Layer for Apollo Client

Apollo Client · IndexedDB · TypeScript · React · GraphQL

Impact

  • Redundant Homepage Network Requests: ~75% reduction in network requests for returning users
  • Perceived Data Load Time (slow 3G): ~10x faster for cached operations
  • SSR/CSR Cache Staleness Fix: ~10 lines of code across 3 files, zero runtime cost

01

Background

On a high-traffic e-commerce homepage serving millions of daily users, GraphQL-powered widgets (sliders, recommendations, dynamic banners) were fetching fresh data on every page load — even when the underlying data hadn’t changed. Apollo Client’s built-in InMemoryCache provided fast in-session caching, but it was entirely volatile: every page reload, tab close, or browser restart wiped the cache. Users on slow or unstable mobile networks experienced noticeable delays as identical data was re-fetched from the server on each visit.

Challenges

  • Apollo InMemoryCache is volatile — lost on page reload, tab close, or navigation away from the SPA
  • Homepage widgets fetch user-specific data that changes infrequently (hourly promotions, personalized recommendations) but was re-requested on every visit
  • SSR-rendered pages already had fresh data, but CSR re-renders still triggered redundant network requests to ‘validate’ cache
  • Mobile users on 3G/slow-4G experienced 200–800ms perceived delays waiting for GraphQL responses that returned identical payloads
  • No per-query granularity — all-or-nothing caching strategies couldn’t differentiate between frequently-changing and stable data
  • Server-side rendering created a cache coherency gap: SSR bypasses IndexedDB (no browser APIs), so stale CSR cache entries persisted across render mode transitions

Existing Alternatives & Their Limitations

  • Apollo InMemoryCache: fast but non-persistent, wiped on every page load
  • apollo-cache-persist: writes entire cache to localStorage synchronously, blocking the main thread on large caches; no per-query TTL control
  • Service Workers: complex setup, broad scope (caches entire HTTP responses, not individual GraphQL operations), difficult to configure per-query TTLs
  • HTTP Cache-Control headers: server-driven, no client-side granularity per operation; homepage aggregator queries return mixed-freshness data

Business Context

The homepage is the highest-traffic page on the platform, serving as the primary entry point for product discovery and conversion. Every 100ms of perceived load time correlates with measurable drops in engagement and add-to-cart rates. Reducing redundant network requests directly impacts infrastructure costs (CDN egress, origin server load) and user experience metrics (FCP, LCP, TTI).

02

Solution

Build a transparent, persistent caching layer that sits inside the Apollo Link chain and intercepts GraphQL operations at the transport level. Instead of replacing Apollo’s InMemoryCache, this solution adds a second, persistent tier using the browser’s IndexedDB API. The system is composed of three cooperating Apollo Links (IDBManageLink, IDBPrecheckLink, IDBLink) that use Apollo’s split mechanism for conditional routing — enabling per-operation opt-in caching with configurable TTL, user-scoped keys, and non-blocking writes.

Design Principles

Opt-in, Not Opt-out. Caching is activated per-query via the Apollo context object. No query is cached unless explicitly opted in with { idb: { enabled: true, ttl: 60 } }. This prevents accidental stale data for mutations, real-time feeds, or sensitive operations.
Transparent to Consumers. Components using useQuery don’t know whether data came from IndexedDB or the network. The link chain handles everything internally — cache checks, writes, and expiration — without changing the component API.
Fail-Open. If IndexedDB is unavailable, key generation fails, or any IndexedDB operation throws, the system silently falls through to the network link. Caching is an optimization, not a requirement.
Non-Blocking Writes. Cache writes use the postTask scheduler at background priority to avoid impacting user interactions. Reads use user-visible priority for responsiveness.
SSR-Aware Invalidation. Timestamp-based invalidation ensures IndexedDB entries written before an SSR hydration event are treated as expired, preventing stale data from persisting across render mode transitions.
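The non-blocking write principle can be sketched with the browser's scheduler.postTask API. The function name and the setTimeout fallback below are illustrative assumptions, not the actual implementation:

```typescript
// Schedule a cache write at background priority so it never competes with
// user interactions; fall back to setTimeout where postTask is unavailable.
function scheduleBackgroundWrite(task: () => void): void {
  const scheduler = (globalThis as any).scheduler;
  if (scheduler?.postTask) {
    scheduler.postTask(task, { priority: 'background' });
  } else {
    setTimeout(task, 0);
  }
}
```

Reads would use the 'user-visible' priority instead, matching the responsiveness requirement above.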

Link Chain Overview

Core Links (auth · error · retry)
  → IDBManageLink (gate: generate key, set context flags)
      idb_enabled → IDBPrecheckLink (check: has(key, ttl))
          ready → IDBLink: serve from IndexedDB (cache hit → instant)
          not ready → HTTP Link: fetch + write IndexedDB (cache miss → network)
      disabled → HTTP Link: standard network request

Data Flow

useQuery with idb context

IDBManageLink

  1. Check APOLLO_IDB_CACHE_ENABLED flag
  2. Read idb context (enabled, ttl, customKey)
  3. Generate DJB2 hash key → set __idb_key

IDBPrecheckLink

  1. Call has(key, ttl) — check IndexedDB for valid entry
  2. Set __idb_cache_ready on context

If cache_ready = true → IDBLink: get(key) from IndexedDB; response tagged __idb_from_cache: true
If cache_ready = false → HTTP Link: fetch from server, then write response → IndexedDB

Response → Component

IndexedDB Storage Schema

Database: app-idb
Object Store: apollo-cache

KEY: {operationName}#{hash} (DJB2 hash of query + variables)
VAL: { value: <response>, timestamp: <epoch ms> } (full GraphQL response + write timestamp)
IDX: "timestamp" (enables TTL expiration checks)

Example Entry

HomeSliderQuery#a1b2c3 → { data: { slides: [...] }, timestamp: 1713600000000 }

TTL checked via timestamp comparison · Expired entries auto-deleted by has()
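Under this schema, the TTL check inside has(key, ttl) reduces to one timestamp comparison. A minimal sketch (field names follow the schema above; the function name and signature are illustrative):

```typescript
// Shape of a stored entry, per the schema above.
interface CacheEntry<T = unknown> {
  value: T;          // full GraphQL response
  timestamp: number; // epoch ms at write time
}

// Freshness check behind has(key, ttl); `now` is injectable for testing.
function isFresh(entry: CacheEntry, ttlSeconds: number, now: number = Date.now()): boolean {
  return now - entry.timestamp < ttlSeconds * 1000;
}
```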

Key Architectural Decisions

  • IndexedDB over localStorage: async API, no 5MB limit, structured data support, doesn’t block the main thread
  • Three-link architecture over monolithic link: separation of concerns (gate, precheck, serve), each link testable in isolation
  • DJB2 hashing for cache keys: deterministic, fast, compact base-36 output for small key sizes
  • postTask scheduler over raw setTimeout: native browser API for priority-aware background work, graceful fallback
  • Feature flag gating: APOLLO_IDB_CACHE_ENABLED allows instant kill-switch without code deployment
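A DJB2 key generator along these lines would produce the compact base-36 keys described above. This is a sketch; the assumption that the hash input is the query text plus serialized variables follows the schema section:

```typescript
// Classic DJB2: hash = hash * 33 + charCode, kept in the unsigned 32-bit range.
function djb2(input: string): string {
  let hash = 5381;
  for (let i = 0; i < input.length; i++) {
    hash = ((hash << 5) + hash + input.charCodeAt(i)) >>> 0;
  }
  return hash.toString(36); // compact base-36 output
}

// Cache key in the {operationName}#{hash} format described above.
function cacheKey(operationName: string, query: string, variables: unknown): string {
  return `${operationName}#${djb2(query + JSON.stringify(variables))}`;
}
```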
03

Detailed Implementation

1. Enable the Feature Flag

The APOLLO_IDB_CACHE_ENABLED constant must be set to a truthy value in your application’s runtime configuration.

// runtime-config.ts
export const APOLLO_IDB_CACHE_ENABLED = true;

2. Integrate into the Apollo Link Chain

Insert the IDBLink system at the end of the link chain, wrapping the terminating (HTTP) link using Apollo’s split mechanism.

import { from } from '@apollo/client';
import type { Operation } from '@apollo/client';
import { IDBManageLink, IDBPrecheckLink, IDBLink }
  from '@/client/graphql/link/IDBLink';

// `link` is the ordered array of existing Apollo links; the last entry is
// the terminating (HTTP) link.
const [terminatingLink] = link.slice(-1);
const coreLinks = link.slice(0, -1);

const composedLink = from([
  ...coreLinks,
  // Gate: IDBManageLink decides per operation whether IDB caching applies.
  new IDBManageLink().split(
    (op: Operation) => op.getContext().__idb_enabled,
    // Precheck: look for a valid (non-expired) IndexedDB entry.
    new IDBPrecheckLink().split(
      (op: Operation) => op.getContext().__idb_cache_ready,
      new IDBLink(),   // cache hit: serve from IndexedDB
      terminatingLink, // cache miss: network, then write-through
    ),
    terminatingLink,   // caching disabled: standard network request
  ),
]);

3. Initialize Apollo Client

Pass the composed link to createApolloClient to finalize the integration.

import { createApolloClient } from '@/graphql/apolloClient';

const client = createApolloClient({
  link: composedLink,
});

4. Call markSSRHydration on Client Mount

In your app’s root component, call markSSRHydration() once during client-side hydration to invalidate stale IndexedDB entries from previous SSR cycles.

import { markSSRHydration } from '@/client/graphql/link/IDBLink/index-db';

function App() {
  useEffect(() => {
    markSSRHydration();
  }, []);

  return <ApolloProvider client={client}>...</ApolloProvider>;
}
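The invalidation behind markSSRHydration can be sketched as a single timestamp comparison. This is an assumed mechanism based on the description above; the names are illustrative:

```typescript
// Module-level record of the most recent client hydration.
let hydrationTime = 0;

// Called once on client mount (mirrors markSSRHydration above).
function recordHydration(now: number = Date.now()): void {
  hydrationTime = now;
}

// Any entry written before hydration is treated as expired, closing the
// SSR/CSR staleness gap with one comparison per cache check.
function isStaleAfterHydration(entryTimestamp: number): boolean {
  return entryTimestamp < hydrationTime;
}
```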
04

Impact Analysis

The IDBLink system transformed the homepage loading experience for returning users by eliminating redundant network requests for slowly-changing data. The solution operates at the transport layer with zero changes to component code, making it invisible to feature developers.

Performance Metrics

Redundant Homepage Network Requests

Before: 6–8 GraphQL queries per page load
After: 0–2 queries (cached operations served from IndexedDB)
Improvement: ~75% reduction in network requests for returning users

Perceived Data Load Time (slow 3G)

Before: 400–800ms per widget
After: <50ms from IndexedDB
Improvement: ~10x faster for cached operations

SSR/CSR Cache Staleness Fix

Before: Stale data displayed until manual refresh
After: Automatic invalidation on hydration
Improvement: ~10 lines of code across 3 files, zero runtime cost

Main Thread Impact

Before: N/A (no persistent cache)
After: 1 Date.now() call on hydration, 1 comparison per cache check
Improvement: Effectively zero — all heavy work at background priority

Origin Server Load

Before: Full query volume on every page load
After: Only non-cached or expired queries reach origin
Improvement: Estimated 30–40% reduction in homepage GraphQL traffic

Qualitative Improvements

  • Returning users see homepage content instantly from IndexedDB, even on cold browser starts
  • Component developers opt-in with 3 lines of context — no architectural changes needed
  • Feature flag provides instant kill-switch without code deployment
  • Fail-open design means IndexedDB failures never degrade user experience below baseline
  • Per-query TTL granularity allows different cache windows for different data freshness needs
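The three-line opt-in mentioned above might look like this at a call site (the query name and TTL value are hypothetical):

```typescript
// The only change a component needs: an idb block on the query context.
const queryOptions = {
  context: {
    idb: { enabled: true, ttl: 3600 }, // cache this operation for one hour
  },
};
// In a React component this is passed straight to useQuery, e.g.:
// const { data } = useQuery(HOME_SLIDER_QUERY, queryOptions);
```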

Future Considerations

  • LRU eviction policy: cap the number of IndexedDB entries and evict least-recently-used when exceeded
  • Stale-while-revalidate: serve from IndexedDB immediately, then background-refresh from network and update cache
  • Cross-tab synchronization via BroadcastChannel: when one tab refreshes data, update IndexedDB for all tabs
  • Migration to Cache Storage API: if IndexedDB proves unreliable on specific browsers/devices
  • Automatic TTL tuning: analyze cache hit rates per operation and adjust TTLs dynamically
