# ADR-003: Cache-Aside Pattern with KV

## Status

Accepted
2025-09-01
## Context

WorkerSQL needs a high-performance caching layer to minimize latency for frequently accessed data. The caching strategy must:
- Reduce load on authoritative storage (Durable Objects)
- Provide sub-millisecond read performance globally
- Handle cache invalidation efficiently
- Support complex cache key patterns
- Integrate seamlessly with edge architecture
Caching patterns considered:
- Cache-Aside (Lazy Loading)
- Write-Through Cache
- Write-Behind Cache
- Refresh-Ahead Cache
Cache storage options:
- Cloudflare KV (global, eventually consistent)
- In-Memory Cache (per-worker instance)
- External Cache (Redis, Memcached)
## Decision

We implemented the Cache-Aside pattern using Cloudflare KV with Stale-While-Revalidate (SWR) semantics for optimal performance.
## Rationale

### Cache-Aside Pattern Benefits

- Simplicity: Clear separation between cache and storage logic
- Flexibility: Application controls what and when to cache
- Resilience: Cache failures don’t break the application
- Consistency: Easier to reason about data consistency
- Performance: Optimal for read-heavy workloads
### Cloudflare KV Advantages

- Global Distribution: Cached data available at all edge locations
- Low Latency: Sub-millisecond read performance
- High Availability: Built-in redundancy and failover
- Cost Effective: Pay-per-operation pricing model
- Integration: Native Cloudflare Workers integration
### SWR Implementation

```ts
interface CacheEntry<T> {
  data: T;
  version: number;
  freshUntil: number; // TTL boundary
  swrUntil: number;   // SWR boundary
  shardId: string;
}
```
Cache States:

- Fresh: `now < freshUntil` → Return cached data immediately
- Stale: `freshUntil <= now < swrUntil` → Return stale data, trigger background refresh
- Expired: `now >= swrUntil` → Fetch fresh data, update cache
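The three states above form a simple classifier over the `CacheEntry` timestamps. A minimal sketch, assuming the `CacheEntry` shape from the SWR Implementation section (repeated here for self-containment); `classify` is an illustrative helper name, not a function from the codebase:

```ts
type CacheState = "fresh" | "stale" | "expired";

interface CacheEntry<T> {
  data: T;
  version: number;
  freshUntil: number; // TTL boundary (ms since epoch)
  swrUntil: number;   // SWR boundary (ms since epoch)
  shardId: string;
}

function classify(entry: CacheEntry<unknown>, now: number = Date.now()): CacheState {
  if (now < entry.freshUntil) return "fresh"; // serve immediately
  if (now < entry.swrUntil) return "stale";   // serve, refresh in background
  return "expired";                           // must fetch before serving
}
```

Keeping the boundaries as absolute timestamps (rather than remaining TTLs) means the check is a pure comparison and the same entry can be classified consistently by any worker that reads it.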
## Implementation Details

### Cache Key Strategy

```ts
// Entity cache: t:<table>:id:<pk>
createEntityKey(table: string, id: string): string

// Index cache: idx:<table>:<column>:<value>
createIndexKey(table: string, column: string, value: string): string

// Query cache: q:<table>:<hash>
createQueryKey(table: string, sql: string, params: unknown[]): Promise<string>
```
### Cache Operations

```ts
class CacheService {
  async get<T>(key: string): Promise<T | null>;
  async set<T>(key: string, value: T, options: CacheOptions): Promise<void>;
  async delete(key: string): Promise<void>;
  async deleteByPattern(pattern: string): Promise<void>;
}
```
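On top of these operations, the cache-aside read path reduces to a get-or-load wrapper. A minimal sketch; `getOrLoad`, the `loader` callback, and the `CacheOptions` field names are illustrative assumptions, not the project's actual API:

```ts
interface CacheOptions {
  freshTtlMs: number; // how long the entry is served without revalidation
  swrTtlMs: number;   // how long stale data may be served while refreshing
}

interface Cache {
  get<T>(key: string): Promise<T | null>;
  set<T>(key: string, value: T, options: CacheOptions): Promise<void>;
}

async function getOrLoad<T>(
  cache: Cache,
  key: string,
  loader: () => Promise<T>, // reads the authoritative store (Durable Object)
  options: CacheOptions,
): Promise<T> {
  const hit = await cache.get<T>(key);
  if (hit !== null) return hit;         // cache hit: serve from KV
  const value = await loader();         // cache miss: go to storage
  await cache.set(key, value, options); // populate for subsequent readers
  return value;
}
```

This is the resilience property from the Rationale in code form: if `cache.get` or `cache.set` were wrapped to swallow errors, the loader path still answers the query and a cache outage degrades latency rather than availability.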
### Invalidation Strategy

- Synchronous Invalidation: On data mutations
- Queue-Based Invalidation: For pattern-based cache clearing
- TTL-Based Expiration: Automatic cleanup of stale entries
- Version-Based Invalidation: For consistency across shards
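KV has no native pattern delete, which is why pattern-based clearing is queue-based above. For prefix-shaped patterns, one sketch is list-then-delete over the namespace; `list({ prefix, cursor })` and `delete()` are the real KV operations, but the structural type below is a stand-in for `KVNamespace`, and the per-key round trips are exactly why this belongs in a queue consumer rather than the request path:

```ts
interface KvLike {
  list(opts: { prefix: string; cursor?: string }): Promise<{
    keys: { name: string }[];
    list_complete: boolean;
    cursor?: string;
  }>;
  delete(key: string): Promise<void>;
}

// Delete every key under a prefix, paging through the namespace.
async function deleteByPrefix(kv: KvLike, prefix: string): Promise<number> {
  let deleted = 0;
  let cursor: string | undefined;
  do {
    const page = await kv.list({ prefix, cursor });
    await Promise.all(page.keys.map((k) => kv.delete(k.name)));
    deleted += page.keys.length;
    cursor = page.list_complete ? undefined : page.cursor;
  } while (cursor);
  return deleted;
}
```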
## Consequences

### Positive

- Ultra-low latency: Sub-millisecond cache hits globally
- High cache hit rates: SWR keeps data available during updates
- Improved user experience: Faster query responses
- Reduced backend load: Fewer requests to Durable Objects
- Cost optimization: Reduced compute usage on expensive operations
- Global propagation: Cache updates converge across all edge locations (eventually consistent)
### Negative

- Eventual consistency: Cache may serve stale data temporarily
- Complex invalidation: Pattern-based invalidation challenging in KV
- Memory overhead: Cache metadata increases payload size
- Cache warming: Cold cache leads to higher latency initially
- Additional complexity: Cache logic adds operational overhead
### Trade-offs Accepted

Consistency vs Performance:
- Chose eventual consistency for better performance
- SWR minimizes staleness impact
- Version tracking helps detect inconsistencies
Simplicity vs Optimization:
- Cache-aside is simpler than write-through patterns
- Manual invalidation more predictable than automatic
- Application-controlled caching strategy
## Operational Considerations

Monitoring:
- Cache hit/miss ratios
- Cache invalidation patterns
- SWR refresh frequencies
- Cache size and cost metrics
Debugging:
- Cache key debugging tools
- Cache state inspection
- Invalidation trace logging
- Performance impact analysis
Scaling:
- Cache key space management
- Invalidation pattern optimization
- Cost monitoring and alerting
- Performance threshold tuning
## Alternative Patterns Rejected

Write-Through Cache:
- ❌ Higher write latency
- ❌ Synchronous cache updates required
- ❌ More complex error handling
Write-Behind Cache:
- ❌ Risk of data loss
- ❌ Complex consistency guarantees
- ❌ Difficult debugging
In-Memory Cache:
- ❌ Not shared across workers
- ❌ Lost on worker restarts
- ❌ Limited memory available