A standards-compliant HTTP cache implementation for server-side applications.
SharedCache is an HTTP caching library that follows Web Standards and HTTP specifications. It implements a cache interface similar to the Web Cache API but optimized for server-side shared caching scenarios.
- Key Features
- Why SharedCache?
- Quick Decision Guide
- Installation
- Quick Start
- Common Examples
- Cache Status Monitoring
- Logging and Debugging
- Global Setup
- Advanced Configuration
- API Reference
- Standards Compliance
- Frequently Asked Questions
- Who's Using SharedCache
- Acknowledgments
- License
- RFC Compliance: Supports RFC 5861 directives like `stale-if-error` and `stale-while-revalidate`
- Smart Caching: Handles complex HTTP scenarios including `Vary` headers, proxy revalidation, and authenticated responses
- Flexible Storage: Pluggable storage backend supporting memory, Redis, or any custom key-value store
- Enhanced Fetch: Extends the standard `fetch` API with caching capabilities while maintaining full compatibility
- Custom Cache Keys: Cache key customization supporting device types, cookies, headers, and URL components
- Shared Cache Optimization: Prioritizes `s-maxage` over `max-age` for shared cache performance
- Universal Runtime: Compatible with WinterCG environments including Node.js, Deno, Bun, and Edge Runtime
While the Web `fetch` API has become ubiquitous in server-side JavaScript, existing browser Cache APIs are designed for single-user scenarios. Server-side applications need shared caches that serve multiple users efficiently.
SharedCache provides:
- Server-Optimized Caching: Designed for multi-user server environments
- Standards Compliance: Follows HTTP specifications and server-specific patterns
- Production Ready: Battle-tested patterns from CDN and proxy implementations
Choose SharedCache when you have:

- Node.js environments - Native `caches` API not available
- API response caching - Need to reduce backend load and improve response times
- Cross-runtime portability - Want consistent caching across Node.js, Deno, Bun
- Custom storage backends - Need Redis, database, or distributed caching solutions
- Meta-framework development - Building applications that deploy to multiple environments

Consider alternatives for:

- Edge runtimes with native caches - Cloudflare Workers, Vercel Edge already provide the `caches` API
- Browser applications - Use the native Web Cache API instead (unless you need HTTP cache control directives support)
- Simple in-memory caching - Consider lighter alternatives like `lru-cache` directly
- Single-request caching - Basic memoization might be sufficient
// Cache API responses to reduce backend load
const apiFetch = createFetch(cache, {
defaults: { cacheControlOverride: 's-maxage=300' },
});
const userData = await apiFetch('/api/user/profile'); // First: 200ms, subsequent: 2ms
// Cache rendered pages using HTTP cache control directives
export const handler = {
async GET(ctx) {
const response = await ctx.render();
// Set cache control headers for shared cache optimization
response.headers.set(
'cache-control',
's-maxage=60, ' + // Cache for 60 seconds in shared caches
'stale-if-error=604800, ' + // Serve stale content for 7 days on errors
'stale-while-revalidate=604800' // Background revalidation for 7 days
);
return response;
},
};
Integration Requirements: This pattern requires web framework integration with SharedCache middleware or custom cache implementation in your SSR pipeline.
// Same code works in Node.js, Deno, Bun, and Edge Runtime
const fetch = createFetch(cache);
// Deploy anywhere without code changes
// Redis backend for multi-instance applications
const caches = new CacheStorage(createRedisStorage());
const cache = await caches.open('distributed-cache');
# Using npm
npm install @web-widget/shared-cache
# Using yarn
yarn add @web-widget/shared-cache
# Using pnpm
pnpm add @web-widget/shared-cache
Here's a simple example to get you started with SharedCache:
import {
CacheStorage,
createFetch,
type KVStorage,
} from '@web-widget/shared-cache';
import { LRUCache } from 'lru-cache';
// Create a storage backend using LRU cache
const createLRUCache = (): KVStorage => {
const store = new LRUCache<string, any>({ max: 1024 });
return {
async get(cacheKey: string) {
return store.get(cacheKey);
},
async set(cacheKey: string, value: any, ttl?: number) {
store.set(cacheKey, value, { ttl });
},
async delete(cacheKey: string) {
return store.delete(cacheKey);
},
};
};
// Initialize cache storage
const caches = new CacheStorage(createLRUCache());
async function example() {
const cache = await caches.open('api-cache-v1');
// Create fetch with default configuration
const fetch = createFetch(cache, {
defaults: {
cacheControlOverride: 's-maxage=300', // 5 minutes default caching
ignoreRequestCacheControl: true,
},
});
// First request - will hit the network
console.time('First request');
const response1 = await fetch(
'https://httpbin.org/response-headers?cache-control=max-age%3D604800'
);
console.timeEnd('First request'); // ~400ms
// Second request - served from cache
console.time('Cached request');
const response2 = await fetch(
'https://httpbin.org/response-headers?cache-control=max-age%3D604800'
);
console.timeEnd('Cached request'); // ~2ms
// Check cache status
console.log('Cache status:', response2.headers.get('x-cache-status')); // "HIT"
}
example();
This package exports a comprehensive set of APIs for HTTP caching functionality:
import {
createFetch, // Main fetch function with caching
Cache, // SharedCache class
CacheStorage, // SharedCacheStorage class
} from '@web-widget/shared-cache';
const cache = await caches.open('api-cache-v1');
const fetch = createFetch(cache, {
defaults: {
cacheControlOverride: 's-maxage=300',
ignoreRequestCacheControl: true,
},
});
import { createFetch } from '@web-widget/shared-cache';
const cache = await caches.open('api-cache-v1');
const fetch = createFetch(cache, {
defaults: {
cacheControlOverride: 's-maxage=300', // 5 minutes default
},
});
// Simple usage - automatic caching
const userData = await fetch('/api/user/profile');
const sameData = await fetch('/api/user/profile'); // Served from cache
import Redis from 'ioredis';
import {
CacheStorage,
createFetch,
type KVStorage,
} from '@web-widget/shared-cache';
const createRedisStorage = (): KVStorage => {
const redis = new Redis(process.env.REDIS_URL);
return {
async get(key: string) {
const value = await redis.get(key);
return value ? JSON.parse(value) : undefined;
},
async set(key: string, value: any, ttl?: number) {
const serialized = JSON.stringify(value);
if (ttl) {
await redis.setex(key, Math.ceil(ttl / 1000), serialized);
} else {
await redis.set(key, serialized);
}
},
async delete(key: string) {
return (await redis.del(key)) > 0;
},
};
};
const caches = new CacheStorage(createRedisStorage());
const cache = await caches.open('distributed-cache');
const fetch = createFetch(cache, {
defaults: {
cacheControlOverride: 's-maxage=600',
cacheKeyRules: {
header: { include: ['x-tenant-id'] }, // Multi-tenant support
},
},
});
const deviceAwareFetch = createFetch(await caches.open('content-cache'), {
defaults: {
cacheControlOverride: 's-maxage=600',
cacheKeyRules: {
device: true, // Separate cache for mobile/desktop/tablet
search: { exclude: ['timestamp'] },
},
},
});
const response = await deviceAwareFetch('/api/content');
const advancedFetch = createFetch(await caches.open('advanced-cache'), {
defaults: {
cacheControlOverride: 's-maxage=300, stale-while-revalidate=3600',
cacheKeyRules: {
search: { exclude: ['timestamp', '_'] },
header: { include: ['x-api-version'] },
cookie: { include: ['session_id'] },
device: true,
},
},
});
import crypto from 'node:crypto';

// Note: crypto.createCipher/createDecipher are deprecated (and removed in
// recent Node.js versions); use createCipheriv with an explicit random IV.
const createEncryptedStorage = (
  baseStorage: KVStorage,
  secret: string
): KVStorage => {
  const key = crypto.scryptSync(secret, 'shared-cache-salt', 32);
  const encrypt = (text: string) => {
    const iv = crypto.randomBytes(16);
    const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
    const encrypted = Buffer.concat([cipher.update(text, 'utf8'), cipher.final()]);
    return `${iv.toString('hex')}:${encrypted.toString('hex')}`;
  };
  const decrypt = (payload: string) => {
    const [ivHex, dataHex] = payload.split(':');
    const decipher = crypto.createDecipheriv(
      'aes-256-cbc',
      key,
      Buffer.from(ivHex, 'hex')
    );
    const decrypted = Buffer.concat([
      decipher.update(Buffer.from(dataHex, 'hex')),
      decipher.final(),
    ]);
    return decrypted.toString('utf8');
  };
  return {
    async get(cacheKey: string) {
      const encrypted = await baseStorage.get(cacheKey);
      return encrypted ? JSON.parse(decrypt(encrypted as string)) : undefined;
    },
    async set(cacheKey: string, value: unknown, ttl?: number) {
      const encrypted = encrypt(JSON.stringify(value));
      return baseStorage.set(cacheKey, encrypted, ttl);
    },
    async delete(cacheKey: string) {
      return baseStorage.delete(cacheKey);
    },
  };
};
const secureStorage = createEncryptedStorage(baseStorage, 'my-secret-key');
const caches = new CacheStorage(secureStorage);
const tenantFetch = createFetch(await caches.open('tenant-cache'), {
defaults: {
cacheControlOverride: 's-maxage=300',
cacheKeyRules: {
header: { include: ['x-tenant-id'] },
search: true,
},
},
});
// Each tenant gets isolated cache
const response = await tenantFetch('/api/data', {
headers: { 'x-tenant-id': 'tenant-123' },
});
// Production-ready example with automatic token refresh
const createAuthenticatedFetch = (getToken) => {
return async (input, init) => {
const token = await getToken();
const headers = new Headers(init?.headers);
headers.set('Authorization', `Bearer ${token}`);
const response = await globalThis.fetch(input, {
...init,
headers,
});
// Handle token expiration
if (response.status === 401) {
// Token might be expired, retry once with fresh token
const freshToken = await getToken(true); // force refresh
headers.set('Authorization', `Bearer ${freshToken}`);
return globalThis.fetch(input, {
...init,
headers,
});
}
return response;
};
};
const authFetch = createFetch(await caches.open('authenticated-api'), {
fetch: createAuthenticatedFetch(() => getApiToken()),
defaults: {
cacheControlOverride:
'public, ' + // Required: Allow caching of authenticated requests
's-maxage=300',
cacheKeyRules: {
header: { include: ['authorization'] }, // Cache per token
},
},
});
const userData = await authFetch('/api/user/profile');
For applications that need a global cache instance, you can set up the `caches` object:
import { CacheStorage, type KVStorage } from '@web-widget/shared-cache';
import { LRUCache } from 'lru-cache';
// Extend global types for TypeScript support
declare global {
interface WindowOrWorkerGlobalScope {
caches: CacheStorage;
}
}
const createLRUCache = (): KVStorage => {
const store = new LRUCache<string, any>({
max: 1024,
ttl: 1000 * 60 * 60, // 1 hour default TTL
});
return {
async get(cacheKey: string) {
return store.get(cacheKey);
},
async set(cacheKey: string, value: any, ttl?: number) {
store.set(cacheKey, value, { ttl });
},
async delete(cacheKey: string) {
return store.delete(cacheKey);
},
};
};
// Set up global cache storage
const caches = new CacheStorage(createLRUCache());
globalThis.caches = caches;
Once the global `caches` is configured, you can also register a globally cached `fetch`:
import { createFetch } from '@web-widget/shared-cache';
// Replace global fetch with cached version
globalThis.fetch = createFetch(await caches.open('default'), {
defaults: {
cacheControlOverride: 's-maxage=60', // 1 minute default for global fetch
},
});
The `createFetch` API allows you to set default cache configuration:
import { createFetch } from '@web-widget/shared-cache';
const cache = await caches.open('api-cache');
// Create fetch with comprehensive defaults
const fetch = createFetch(cache, {
defaults: {
cacheControlOverride: 's-maxage=300',
cacheKeyRules: {
header: { include: ['x-api-version'] },
},
ignoreRequestCacheControl: true,
ignoreVary: false,
},
});
// Use with defaults applied automatically
const response1 = await fetch('/api/data');
// Override defaults for specific requests
const response2 = await fetch('/api/data', {
sharedCache: {
cacheControlOverride: 's-maxage=600', // Override default
},
});
The `createFetch` function accepts a custom fetch implementation, allowing you to integrate with existing HTTP clients or add cross-cutting concerns:
// Example: Integration with axios
import axios from 'axios';
const axiosFetch = async (input, init) => {
const response = await axios({
url: input.toString(),
method: init?.method || 'GET',
headers: init?.headers,
data: init?.body,
validateStatus: () => true, // Don't throw on 4xx/5xx
});
return new Response(response.data, {
status: response.status,
statusText: response.statusText,
headers: response.headers,
});
};
const fetch = createFetch(await caches.open('axios-cache'), {
fetch: axiosFetch,
defaults: {
cacheControlOverride: 's-maxage=300',
},
});
// Example: Custom fetch with request/response transformation
const transformFetch = async (input, init) => {
// Transform request
const url = new URL(input);
url.searchParams.set('timestamp', Date.now().toString());
const response = await globalThis.fetch(url, init);
// Transform response
if (response.headers.get('content-type')?.includes('application/json')) {
const data = await response.json();
const transformedData = {
...data,
fetchedAt: new Date().toISOString(),
};
return new Response(JSON.stringify(transformedData), {
status: response.status,
statusText: response.statusText,
headers: response.headers,
});
}
return response;
};
const transformedFetch = createFetch(await caches.open('transform-cache'), {
fetch: transformFetch,
defaults: {
cacheControlOverride: 's-maxage=300',
},
});
SharedCache extends the standard fetch API with caching options via the `sharedCache` parameter:
const cache = await caches.open('api-cache');
const fetch = createFetch(cache);
const response = await fetch('https://api.example.com/data', {
// Standard fetch options
method: 'GET',
headers: {
'x-user-id': '1024',
},
// SharedCache-specific options
sharedCache: {
cacheControlOverride: 's-maxage=120',
varyOverride: 'accept-language',
ignoreRequestCacheControl: true,
ignoreVary: false,
cacheKeyRules: {
search: false,
device: true,
header: {
include: ['x-user-id'],
},
},
},
});
Override or extend cache control directives when APIs don't provide optimal caching headers:
// Add shared cache directive
sharedCache: {
  cacheControlOverride: 's-maxage=3600',
}

// Combine multiple directives
sharedCache: {
  cacheControlOverride: 's-maxage=3600, must-revalidate',
}
Add additional Vary headers to ensure proper cache segmentation:
sharedCache: {
  varyOverride: 'accept-language, user-agent',
}
Control whether to honor cache-control directives from the request:
// ignoreRequestCacheControl defaults to true;
// set it to false to honor client cache-control headers
sharedCache: {
  ignoreRequestCacheControl: false,
}
Disable Vary header processing for simplified caching:
sharedCache: {
  ignoreVary: true, // Cache regardless of Vary headers
}
Customize how cache keys are generated to optimize cache hit rates and handle different caching scenarios:
sharedCache: {
cacheKeyRules: {
// URL components
search: true, // Include query parameters (default)
// Request context
device: false, // Classify by device type
cookie: { // Include specific cookies
include: ['session_id', 'user_pref']
},
header: { // Include specific headers
include: ['x-api-key'],
checkPresence: ['x-mobile-app']
}
}
}
Default cache key rules:
{
search: true,
}
- `search`: Control query parameter inclusion

Query Parameter Control:
// Include all query parameters (default)
search: true,

// Exclude all query parameters
search: false,

// Include specific parameters
search: {
  include: ['category', 'page'],
},

// Include all except specific parameters
search: {
  exclude: ['timestamp', 'nonce'],
},
Automatically classify requests as `mobile`, `desktop`, or `tablet` based on the User-Agent:
cacheKeyRules: {
  device: true, // Separate cache for different device types
}
Include specific cookies in the cache key:
cacheKeyRules: {
cookie: {
include: ['user_id', 'session_token'],
checkPresence: ['is_premium'] // Check existence without value
}
}
Include request headers in the cache key:
cacheKeyRules: {
header: {
include: ['x-api-version'],
checkPresence: ['x-feature-flag']
}
}
Restricted Headers: For security and performance, certain headers cannot be included:

- High-cardinality headers: `accept`, `accept-charset`, `accept-encoding`, `accept-language`, `user-agent`, `referer`
- Cache/proxy headers: `cache-control`, `if-*`, `range`, `connection`
- Authentication headers: `authorization`, `cookie` (handled separately by cookie rules)
- Headers handled by other features: `host`
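Conceptually, the rules above compose into a single key string per request. The library's real key format is internal and may differ; this hypothetical sketch just shows why stable ordering matters (the same request must always produce the same key):

```typescript
interface KeyParts {
  url: string; // URL after search-rule normalization
  device?: string; // e.g. 'mobile' when device classification is enabled
  headers?: Record<string, string>; // headers selected by the header rule
}

// Illustrative cache key composition: parts are joined in a fixed order,
// and header names are sorted and lowercased so ordering never changes
// the resulting key.
function buildCacheKey(parts: KeyParts): string {
  const segments = [parts.url];
  if (parts.device) segments.push(`device=${parts.device}`);
  if (parts.headers) {
    const names = Object.keys(parts.headers).sort();
    for (const name of names) {
      segments.push(`${name.toLowerCase()}=${parts.headers[name]}`);
    }
  }
  return segments.join('#');
}
```

Two requests that differ only in an excluded query parameter or an unlisted header collapse onto the same key and therefore share one cache entry.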
SharedCache provides comprehensive monitoring through the `x-cache-status` header for debugging and performance analysis.
| Status | Description | When It Occurs |
| --- | --- | --- |
| `HIT` | Response served from cache | The requested resource was found in cache and is still fresh |
| `MISS` | Response fetched from origin | The requested resource was not found in cache |
| `EXPIRED` | Cached response expired, fresh response fetched | The cached response exceeded its TTL |
| `STALE` | Stale response served | Served due to `stale-while-revalidate` or `stale-if-error` |
| `BYPASS` | Cache bypassed | Bypassed due to cache control directives like `no-store` |
| `REVALIDATED` | Cached response revalidated | Response validated with origin (304 Not Modified) |
| `DYNAMIC` | Response cannot be cached | Cannot be cached due to HTTP method or status code |
The `x-cache-status` header is automatically added to all responses:

- Header Values: `HIT`, `MISS`, `EXPIRED`, `STALE`, `BYPASS`, `REVALIDATED`, `DYNAMIC`
- Always Present: The header is always added for monitoring and debugging
- Non-Standard: Custom header for debugging - should not be used for application logic
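Because the header is always present, hit-rate metrics can be derived directly from responses. A small standalone tally (the class name and the choice to count `STALE` as a hit are illustrative; stale responses also avoided origin latency):

```typescript
type CacheStatus =
  | 'HIT'
  | 'MISS'
  | 'EXPIRED'
  | 'STALE'
  | 'BYPASS'
  | 'REVALIDATED'
  | 'DYNAMIC';

// Accumulates x-cache-status values and reports the share of requests
// that were answered without a full origin fetch.
class CacheStatusTally {
  private counts = new Map<CacheStatus, number>();

  record(status: CacheStatus): void {
    this.counts.set(status, (this.counts.get(status) ?? 0) + 1);
  }

  get total(): number {
    let sum = 0;
    for (const n of this.counts.values()) sum += n;
    return sum;
  }

  hitRate(): number {
    if (this.total === 0) return 0;
    const hits = (this.counts.get('HIT') ?? 0) + (this.counts.get('STALE') ?? 0);
    return hits / this.total;
  }
}
```

Typical usage after each request: `tally.record(response.headers.get('x-cache-status') as CacheStatus)`.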
SharedCache provides a comprehensive logging system with structured output for monitoring and debugging cache operations.
interface Logger {
info(message?: unknown, ...optionalParams: unknown[]): void;
warn(message?: unknown, ...optionalParams: unknown[]): void;
debug(message?: unknown, ...optionalParams: unknown[]): void;
error(message?: unknown, ...optionalParams: unknown[]): void;
}
import { createLogger, LogLevel } from '@web-widget/shared-cache';
// Create a simple console logger
const logger = {
info: console.info.bind(console),
warn: console.warn.bind(console),
debug: console.debug.bind(console),
error: console.error.bind(console),
};
// Create SharedCache with logger
const cache = new SharedCache(storage, {
logger,
});
- Purpose: Detailed operational information for development and troubleshooting
- Content: Cache lookups, key generation, policy decisions
- Example Output:
SharedCache: Cache miss { url: 'https://api.com/data', cacheKey: 'api:data', method: 'GET' }
SharedCache: Cache item found { url: 'https://api.com/data', cacheKey: 'api:data', method: 'GET' }
- Purpose: Normal operational messages about successful operations
- Content: Cache hits, revalidation results, stale responses
- Example Output:
SharedCache: Cache hit { url: 'https://api.com/data', cacheKey: 'api:data', cacheStatus: 'HIT' }
SharedCache: Serving stale response - Revalidating in background { url: 'https://api.com/data', cacheKey: 'api:data', cacheStatus: 'STALE' }
- Purpose: Potentially problematic situations that don't prevent operation
- Content: Network errors with fallback, deprecated usage
- Example Output:
SharedCache: Revalidation network error - Using fallback 500 response { url: 'https://api.com/data', cacheKey: 'api:data', error: [NetworkError] }
- Purpose: Critical issues that prevent normal operation
- Content: Storage failures, revalidation failures, validation errors
- Example Output:
SharedCache: Put operation failed { url: 'https://api.com/data', error: [StorageError] }
SharedCache: Revalidation failed - Server returned 5xx status { url: 'https://api.com/data', status: 503, cacheKey: 'api:data' }
const productionLogger = {
info: (msg, ctx) =>
console.log(JSON.stringify({ level: 'INFO', message: msg, ...ctx })),
warn: (msg, ctx) =>
console.warn(JSON.stringify({ level: 'WARN', message: msg, ...ctx })),
debug: () => {}, // No debug in production
error: (msg, ctx) =>
console.error(JSON.stringify({ level: 'ERROR', message: msg, ...ctx })),
};
const cache = new SharedCache(storage, {
logger: productionLogger,
});
const devLogger = {
info: console.info.bind(console),
warn: console.warn.bind(console),
debug: console.debug.bind(console),
error: console.error.bind(console),
};
const cache = new SharedCache(storage, {
logger: devLogger,
});
import { createLogger, LogLevel } from '@web-widget/shared-cache';
class CustomLogger {
info(message: unknown, ...params: unknown[]) {
this.log('INFO', message, ...params);
}
warn(message: unknown, ...params: unknown[]) {
this.log('WARN', message, ...params);
}
debug(message: unknown, ...params: unknown[]) {
this.log('DEBUG', message, ...params);
}
error(message: unknown, ...params: unknown[]) {
this.log('ERROR', message, ...params);
}
private log(level: string, message: unknown, ...params: unknown[]) {
const timestamp = new Date().toISOString();
const context = params[0] || {};
console.log(
JSON.stringify({
timestamp,
level,
service: 'shared-cache',
message,
...context,
})
);
}
}
const customLogger = new CustomLogger();
const structuredLogger = createLogger(customLogger, LogLevel.DEBUG);
const cache = new SharedCache(storage, {
  logger: structuredLogger, // use the level-filtered wrapper
});
All log messages include structured context data:
interface SharedCacheLogContext {
url?: string; // Request URL
cacheKey?: string; // Generated cache key
status?: number; // HTTP status code
duration?: number; // Operation duration (ms)
error?: unknown; // Error object
cacheStatus?: string; // Cache result status
ttl?: number; // Time to live (seconds)
method?: string; // HTTP method
[key: string]: unknown; // Additional context
}
- Use appropriate log levels: Don't log normal operations at ERROR level
- Include relevant context: URL, cache key, and timing information help with debugging
- Filter by environment: Use DEBUG level in development, INFO+ in production
- Monitor error logs: Set up alerts for ERROR level messages
- Structure your data: Use consistent context object structures for easier parsing
- DEBUG level: Can be verbose in high-traffic scenarios. Use sparingly in production
- Structured data: Context objects are not deeply cloned. Avoid modifying context after logging
- Async operations: Background revalidation errors are properly caught and logged without blocking responses
import { createLogger, LogLevel } from '@web-widget/shared-cache';
let hitCount = 0;
let totalCount = 0;
const monitoringLogger = {
info: (message, context) => {
if (context?.cacheStatus) {
totalCount++;
if (context.cacheStatus === 'HIT') hitCount++;
// Log hit rate every 100 requests
if (totalCount % 100 === 0) {
console.log(
`Cache hit rate: ${((hitCount / totalCount) * 100).toFixed(2)}%`
);
}
}
console.log(message, context);
},
warn: console.warn,
debug: console.debug,
error: console.error,
};
const cache = new SharedCache(storage, {
logger: createLogger(monitoringLogger, LogLevel.INFO),
});
const performanceLogger = {
info: (message, context) => {
if (context?.duration) {
console.log(`${message} - Duration: ${context.duration}ms`, context);
} else {
console.log(message, context);
}
},
warn: console.warn,
debug: console.debug,
error: console.error,
};
const alertingLogger = {
info: console.log,
warn: console.warn,
debug: console.debug,
  error: (message, context) => {
    console.error(message, context);
    // Send alerts for critical cache errors
    // (sendAlert stands in for your alerting integration, e.g. Slack/PagerDuty)
    if (context?.error && String(message).includes('Put operation failed')) {
      sendAlert(`Cache storage error: ${context.error.message}`);
    }
  },
};
Main Functions:

- `createFetch(cache?, options?)` - Create cached fetch function
- `createLogger(logger?, logLevel?, prefix?)` - Create logger with level filtering

Classes:

- `Cache` - Main cache implementation
- `CacheStorage` - Cache storage manager

Key Types:

- `KVStorage` - Storage backend interface
- `SharedCacheRequestInitProperties` - Request cache configuration
- `SharedCacheKeyRules` - Cache key generation rules
Creates a fetch function with shared cache configuration.
function createFetch(
cache?: Cache,
options?: {
fetch?: typeof fetch;
defaults?: Partial<SharedCacheRequestInitProperties>;
}
): SharedCacheFetch;
Parameters:

- `cache` - Optional SharedCache instance (auto-discovered from `globalThis.caches` if not provided)
- `options.fetch` - Custom fetch implementation to use as the underlying fetcher (defaults to `globalThis.fetch`)
- `options.defaults` - Default shared cache options to apply to all requests

Returns: `SharedCacheFetch` - A fetch function with caching capabilities
Basic Usage:
const cache = await caches.open('my-cache');
const fetch = createFetch(cache, {
defaults: { cacheControlOverride: 's-maxage=300' },
});
Request-level cache configuration:
interface SharedCacheRequestInitProperties {
cacheControlOverride?: string;
cacheKeyRules?: SharedCacheKeyRules;
ignoreRequestCacheControl?: boolean;
ignoreVary?: boolean;
varyOverride?: string;
waitUntil?: (promise: Promise<unknown>) => void;
}
Cache key generation rules:
interface SharedCacheKeyRules {
cookie?: FilterOptions | boolean;
device?: FilterOptions | boolean;
header?: FilterOptions | boolean;
search?: FilterOptions | boolean;
}
Storage backend interface:
interface KVStorage {
get: (cacheKey: string) => Promise<unknown | undefined>;
set: (cacheKey: string, value: unknown, ttl?: number) => Promise<void>;
delete: (cacheKey: string) => Promise<boolean>;
}
class Cache {
match(request: RequestInfo | URL): Promise<Response | undefined>;
put(request: RequestInfo | URL, response: Response): Promise<void>;
delete(request: RequestInfo | URL): Promise<boolean>;
}
class CacheStorage {
constructor(storage: KVStorage);
open(cacheName: string): Promise<Cache>;
}
Creates a structured logger with level filtering and optional prefix.

const logger = createLogger(console, LogLevel.INFO, 'MyApp');
type SharedCacheStatus =
| 'HIT'
| 'MISS'
| 'EXPIRED'
| 'STALE'
| 'BYPASS'
| 'REVALIDATED'
| 'DYNAMIC';
Status values are automatically added to response headers as `x-cache-status`.
Complete API documentation available in TypeScript definitions and source code.
SharedCache demonstrates exceptional HTTP standards compliance, fully adhering to established web caching specifications:
Complete Compliance Features:
- Cache Control Directives: Proper handling of `no-store`, `no-cache`, `private`, `public`, `s-maxage`, and `max-age`
- HTTP Method Support: Standards-compliant caching for GET/HEAD methods with correct rejection of non-cacheable methods
- Status Code Handling: Appropriate caching behavior for 200, 301, 404 responses and proper rejection of 5xx errors
- Vary Header Processing: Full content negotiation support with intelligent cache key generation
- Conditional Requests: Complete ETag and Last-Modified validation with 304 Not Modified handling
- stale-while-revalidate: Background revalidation with immediate stale content serving
- stale-if-error: Graceful degradation serving cached content during network failures
- Fault Tolerance: Robust error handling and recovery mechanisms
SharedCache implements a subset of the standard Web Cache API interface, focusing on core caching operations:
interface Cache {
  match(request: RequestInfo | URL): Promise<Response | undefined>; // ✅ Implemented
  put(request: RequestInfo | URL, response: Response): Promise<void>; // ✅ Implemented
  delete(request: RequestInfo | URL): Promise<boolean>; // ✅ Implemented

  // Not implemented - throw "not implemented" errors
  add(request: RequestInfo | URL): Promise<void>; // ❌ Throws error
  addAll(requests: RequestInfo[]): Promise<void>; // ❌ Throws error
  keys(): Promise<readonly Request[]>; // ❌ Throws error
  matchAll(): Promise<readonly Response[]>; // ❌ Throws error
}
Implementation Status:

- ✅ Core Methods: `match()`, `put()`, `delete()` - Fully implemented with HTTP semantics
- ❌ Convenience Methods: `add()`, `addAll()` - Use `put()` instead
- ❌ Enumeration Methods: `keys()`, `matchAll()` - Not available in server environments
Options Parameter Differences:

SharedCache's `CacheQueryOptions` interface differs from the standard Web Cache API:

interface CacheQueryOptions {
  ignoreSearch?: boolean; // ❌ Not implemented - throws error
  ignoreMethod?: boolean; // ✅ Supported
  ignoreVary?: boolean; // ❌ Not implemented - throws error
}
Supported Options:

- ✅ `ignoreMethod`: Treat request as GET regardless of actual HTTP method

Unsupported Options (throw errors):

- ❌ `ignoreSearch`: Query string handling not customizable
- ❌ `ignoreVary`: Vary header processing not bypassable
| Standard | Status | Coverage |
| --- | --- | --- |
| RFC 7234 (HTTP Caching) | ✅ Fully Compliant | 100% |
| RFC 5861 (stale-* extensions) | ✅ Fully Compliant | 100% |
| Web Cache API | ✅ Subset Implementation | Core methods |
| WinterCG Standards | ✅ Fully Supported | 100% |
- Professional HTTP Semantics: Powered by `http-cache-semantics` for RFC compliance
- Intelligent Cache Strategies: Advanced cache key generation with URL normalization
- Robust Error Handling: Comprehensive exception handling with graceful degradation
- Performance Optimized: Efficient storage backends with configurable TTL
- Privacy Compliance: Correct handling of the `private` directive for user-specific content
- Shared Cache Optimization: Priority given to `s-maxage` over `max-age` for multi-user environments
- Authorization Header Handling: Automatic compliance with the HTTP specification - responses to requests with `Authorization` headers are not cached in shared caches unless explicitly permitted by response cache control directives
- Cache Isolation: Proper separation of cached content based on user context and authentication state
- Secure Defaults: Conservative caching policies with explicit opt-in for sensitive operations
Important Security Note: SharedCache automatically enforces HTTP caching security rules. Requests containing `Authorization` headers will not be cached unless the response explicitly allows it with directives like `public`, `s-maxage`, or `must-revalidate`. This ensures compliance with shared cache security requirements.
SharedCache is production-ready and battle-tested, providing enterprise-grade HTTP caching with full standards compliance for server-side applications.
A: Absolutely! SharedCache supports any storage backend that implements the `KVStorage` interface:
// Redis example
const redisStorage: KVStorage = {
  async get(key) {
    const value = await redis.get(key);
    return value ? JSON.parse(value) : undefined;
  },
  async set(key, value, ttl) {
    const serialized = JSON.stringify(value);
    if (ttl) {
      await redis.setex(key, Math.ceil(ttl / 1000), serialized);
    } else {
      await redis.set(key, serialized);
    }
  },
  async delete(key) {
    return (await redis.del(key)) > 0;
  },
};
A: SharedCache handles concurrent requests efficiently by serving cache entries and avoiding duplicate network requests.
A: SharedCache is technically compatible with edge runtimes, but it's typically not needed there. Most edge runtimes (Cloudflare Workers, Vercel Edge Runtime, Deno Deploy) already provide a native `caches` API implementation.
Primary Use Cases for SharedCache:
- Node.js environments - Where the `caches` API is not natively available
- Development environments - For consistent caching behavior across different runtimes
- Meta-frameworks - Like Web Widget that enable seamless migration between environments
- Custom storage backends - When you need Redis, database, or other storage solutions
Migration Benefits:
When using SharedCache with meta-frameworks, you can develop with a consistent caching API and deploy to any environment - whether it has native `caches` support or not. This provides true runtime portability for your caching logic.
A: These RFC 5861 extensions provide significant performance and reliability benefits:
- stale-while-revalidate: Serves cached content immediately while updating in background, providing zero-latency responses
- stale-if-error: Serves cached content when origin servers fail, improving uptime and user experience
// Best practice: Use both directives together
const fetch = createFetch(cache, {
defaults: {
cacheControlOverride:
's-maxage=300, stale-while-revalidate=86400, stale-if-error=86400',
},
});
- Web Widget Meta Framework: Cache middleware
- InsMind.com: Page Cache
- Gaoding.com: Page Cache (Million-level URLs)
SharedCache draws inspiration from industry-leading caching implementations:
- Cloudflare Cache Key - Cache key customization patterns
- Next.js Data Cache - Server-side caching strategies
- nodejs/undici - Web Standards implementation
- http-cache-lru - HTTP cache semantics
- Cloudflare Miniflare - Edge runtime patterns
- Cloudflare Workers SDK - Worker environment optimizations
- ultrafetch - Fetch API extensions
- island.is Cache Middleware - Production caching patterns
- make-fetch-happen - HTTP caching with retry and offline support
MIT License - see LICENSE file for details.