Valkey Glide
Rate limiting with Valkey Glide. For more details about Glide, see Valkey Glide - Nodejs.
const { RateLimiterValkeyGlide } = require("rate-limiter-flexible");
const { GlideClient } = require("@valkey/valkey-glide");

// Create a Valkey Glide client
const glideClient = await GlideClient.createClient({
  addresses: [{ host: "127.0.0.1", port: 8080 }],
  useTLS: false,
  requestTimeout: 1000,
  clientName: "myApp",
});
const opts = {
  // Basic options
  storeClient: glideClient,
  points: 5, // Number of points
  duration: 5, // Per second(s)

  // Custom options
  // Add any custom options here if needed
};

const rateLimiterValkeyGlide = new RateLimiterValkeyGlide(opts);
rateLimiterValkeyGlide
  .consume(userEmail)
  .then((rateLimiterRes) => {
    // ... Some app logic here ...
  })
  .catch((rejRes) => {
    if (rejRes instanceof Error) {
      // Some Valkey error
      // Never happens if `insuranceLimiter` is set up
      // Otherwise, decide what to do with the error
    } else {
      // Can't consume
      // If there is no error, the promise is rejected with the number of ms before the next request is allowed
      const secs = Math.round(rejRes.msBeforeNext / 1000) || 1;
      res.set("Retry-After", String(secs));
      res.status(429).send("Too Many Requests");
    }
  });
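For context, here is one way to wire the limiter above into an Express app as middleware; keying by `req.ip` and the exact responses are illustrative choices, not requirements of the library:

const express = require("express");

const app = express();

// Sketch of an Express middleware using the limiter defined above;
// keying by client IP is an assumption, any stable identifier works
const rateLimiterMiddleware = (req, res, next) => {
  rateLimiterValkeyGlide
    .consume(req.ip)
    .then(() => next())
    .catch((rejRes) => {
      if (rejRes instanceof Error) {
        // Valkey error and no insuranceLimiter configured
        res.status(500).send("Internal Server Error");
      } else {
        const secs = Math.round(rejRes.msBeforeNext / 1000) || 1;
        res.set("Retry-After", String(secs));
        res.status(429).send("Too Many Requests");
      }
    });
};

app.use(rateLimiterMiddleware);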
`RateLimiterValkeyGlide` works seamlessly with Valkey clusters. Use `GlideClusterClient` to connect to your Valkey cluster:
const { RateLimiterValkeyGlide } = require("rate-limiter-flexible");
const { GlideClusterClient } = require("@valkey/valkey-glide");

// Connect to Valkey cluster
const glideClusterClient = await GlideClusterClient.createClient({
  addresses: [{ host: "127.0.0.1", port: 8081 }],
  useTLS: false,
  requestTimeout: 1000,
});

const rateLimiter = new RateLimiterValkeyGlide({
  storeClient: glideClusterClient,
  points: 2,
  duration: 5,
});
You can set up an insurance limiter to handle cases when the Valkey connection fails:
const {
  RateLimiterValkeyGlide,
  RateLimiterMemory,
} = require("rate-limiter-flexible");

const rateLimiter = new RateLimiterValkeyGlide({
  storeClient: glideClient,
  points: 5,
  duration: 5,
  insuranceLimiter: new RateLimiterMemory({
    points: 5,
    duration: 5,
  }),
});

// If the Valkey connection fails, the in-memory limiter will be used automatically
`RateLimiterValkeyGlide` supports custom Lua scripts, allowing you to customize the rate limiting logic:
// Custom Lua script that starts counting from 1 instead of 0
const customScript = `local key = KEYS[1]
local pointsToConsume = tonumber(ARGV[1])
local secDuration = tonumber(ARGV[2])

-- Start counting from 1 (instead of 0) when key doesn't exist
local exists = server.call('exists', key)
if exists == 0 then
  if secDuration > 0 then
    server.call('set', key, "1", 'EX', secDuration)
  else
    server.call('set', key, "1")
  end
  local pttl = server.call('pttl', key)
  return {1, pttl}
end

-- Handle duration case
if secDuration > 0 then
  server.call('set', key, "0", 'EX', secDuration, 'NX')
end

-- Handle increment and return result
local consumed = server.call('incrby', key, pointsToConsume)
local pttl = server.call('pttl', key)
return {consumed, pttl}`;

const rateLimiter = new RateLimiterValkeyGlide({
  storeClient: glideClient,
  points: 2,
  duration: 5,
  customFunction: customScript,
});
When providing a custom Lua script via `customFunction`, it must:

- Accept parameters:
  - `KEYS[1]`: The key being rate limited
  - `ARGV[1]`: Points to consume (as string, use `tonumber()` to convert)
  - `ARGV[2]`: Duration in seconds (as string, use `tonumber()` to convert)
- Return an array with exactly two elements:
  - `[0]`: Consumed points (number)
  - `[1]`: TTL in milliseconds (number)
- Handle scenarios:
  - New key creation: Initialize with expiry for fixed windows.
  - Key updates: Increment existing counters.
In addition to the common options, `RateLimiterValkeyGlide` supports these specific options:

- `storeClient`: Required. The Valkey Glide client instance (`GlideClient` or `GlideClusterClient`).
- `blockDuration`: Duration in seconds to block a key if points are consumed over the limit. Default is `0` (no blocking).
- `rejectIfValkeyNotReady`: When set to `true`, rejects immediately when the Valkey connection is not ready. Default is `false`.
- `execEvenly`: Distribute actions evenly over the duration. Default is `false`.
- `execEvenlyMinDelayMs`: Minimum delay in milliseconds between actions when `execEvenly` is `true`.
- `customFunction`: Custom Lua script for rate limiting logic. Overrides the default script.
- `inMemoryBlockOnConsumed`: Points threshold for triggering in-memory blocking.
- `inMemoryBlockDuration`: Duration in seconds for in-memory blocking.
- `customFunctionLibName`: Custom name for the Lua function library. Defaults to `'ratelimiter'`. Use a custom name only if you need different libraries for different rate limiters.
- `insuranceLimiter`: A backup `RateLimiterAbstract` instance to use if the Valkey client fails.
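As an illustration only, several of these options can be combined on one limiter; the values below are placeholders, not recommendations:

const {
  RateLimiterValkeyGlide,
  RateLimiterMemory,
} = require("rate-limiter-flexible");

const rateLimiter = new RateLimiterValkeyGlide({
  storeClient: glideClient, // GlideClient or GlideClusterClient
  points: 10,
  duration: 1,
  blockDuration: 30, // block a key for 30 seconds once the limit is exceeded
  rejectIfValkeyNotReady: true, // fail fast while the Valkey connection is not ready
  customFunctionLibName: "myapp_ratelimiter", // only needed if several libraries must coexist
  insuranceLimiter: new RateLimiterMemory({ points: 10, duration: 1 }),
});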
You can distribute actions evenly over the duration to smooth out traffic using the `execEvenly` option:
const rateLimiter = new RateLimiterValkeyGlide({
  storeClient: glideClient,
  points: 10,
  duration: 1,
  execEvenly: true, // Enable even distribution
  execEvenlyMinDelayMs: 20, // Minimum delay between actions
});
Avoid extra requests to Valkey with in-memory blocking:
const rateLimiter = new RateLimiterValkeyGlide({
  storeClient: glideClient,
  points: 5,
  duration: 1,
  inMemoryBlockOnConsumed: 10, // Block when 10 points are consumed
  inMemoryBlockDuration: 10, // Block for 10 seconds in memory
});
Read more about In-memory Block Strategy
`RateLimiterValkeyGlide` registers and uses Lua functions in Valkey, so it requires permission to execute Lua functions if ACL is in use.
Valkey Glide also supports Redis OSS, but since the limiter relies on functions, Redis version 7.0 or higher is required.
For high-traffic scenarios, consider the following (combined in the sketch after this list):
- Using in-memory blocking to reduce load on Valkey
- Configuring an insurance limiter for failover
- Setting up proper Valkey connection management with error handling
- Using cluster mode for enhanced scalability
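Putting these recommendations together, one possible setup looks like the sketch below; the address, limits, and thresholds are placeholders:

const {
  RateLimiterValkeyGlide,
  RateLimiterMemory,
} = require("rate-limiter-flexible");
const { GlideClusterClient } = require("@valkey/valkey-glide");

// Cluster mode for scalability (placeholder address)
const glideClusterClient = await GlideClusterClient.createClient({
  addresses: [{ host: "127.0.0.1", port: 8081 }],
  requestTimeout: 1000,
});

const rateLimiter = new RateLimiterValkeyGlide({
  storeClient: glideClusterClient,
  points: 100,
  duration: 1,
  // Reduce load on Valkey once a key is well over the limit
  inMemoryBlockOnConsumed: 120,
  inMemoryBlockDuration: 10,
  // Fall back to in-memory limiting if the Valkey client fails
  insuranceLimiter: new RateLimiterMemory({ points: 100, duration: 1 }),
});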