Roman edited this page Nov 10, 2023 · 20 revisions

RateLimiterRedis

Redis >=2.6.12

Usage

It works with ioredis by default.

const Redis = require('ioredis');
const redisClient = new Redis({ enableOfflineQueue: false });

const { RateLimiterRedis } = require('rate-limiter-flexible');

// It is recommended to handle Redis errors and set up a reconnection strategy
redisClient.on('error', (err) => {
  // Log the error and reconnect if needed; without an 'error' listener,
  // ioredis emits an unhandled 'error' event that crashes the process
});

const opts = {
  // Basic options
  storeClient: redisClient,
  points: 5, // Number of points
  duration: 5, // Per second(s)
  
  // Custom
  blockDuration: 0, // Do not block if consumed more than points
  keyPrefix: 'rlflx', // must be unique for limiters with different purposes
};

const rateLimiterRedis = new RateLimiterRedis(opts);

rateLimiterRedis.consume(remoteAddress)
    .then((rateLimiterRes) => {
      // ... Some app logic here ...
    })
    .catch((rejRes) => {
      if (rejRes instanceof Error) {
        // Some Redis error
        // Never happens if `insuranceLimiter` is set up
        // Otherwise, decide how to handle it
      } else {
        // Can't consume
        // If there is no error, the promise is rejected with a RateLimiterRes
        // whose msBeforeNext is the number of ms before the next request is allowed
        const secs = Math.round(rejRes.msBeforeNext / 1000) || 1;
        res.set('Retry-After', String(secs));
        res.status(429).send('Too Many Requests');
      }
    });
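On success, `consume` resolves with a `RateLimiterRes` (fields such as `msBeforeNext` and `remainingPoints`). A minimal sketch of turning it into response headers; `rateLimitHeaders` is a hypothetical helper, not part of the library, and the header names are a common convention rather than anything the library sets itself:

```javascript
// Hypothetical helper: build rate-limit headers from a RateLimiterRes-like
// object. `points` is the configured limit (5 in the example above).
function rateLimitHeaders(rateLimiterRes, points) {
  return {
    'Retry-After': String(Math.round(rateLimiterRes.msBeforeNext / 1000) || 1),
    'X-RateLimit-Limit': String(points),
    'X-RateLimit-Remaining': String(rateLimiterRes.remainingPoints),
    'X-RateLimit-Reset': new Date(Date.now() + rateLimiterRes.msBeforeNext).toISOString(),
  };
}
```

It can be used in both the `then` and `catch` branches, since rejected non-Error results carry the same fields.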

See all options here
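The catch branch above mentions `insuranceLimiter`: a fallback limiter that `consume` uses when Redis errors out, so the promise is not rejected with an Error. A minimal configuration sketch; the in-memory point values are assumptions for illustration, not recommendations:

```javascript
const { RateLimiterRedis, RateLimiterMemory } = require('rate-limiter-flexible');

// If Redis is unavailable, consume() falls back to this in-process limiter
// instead of rejecting with an Error.
const insuranceLimiter = new RateLimiterMemory({
  points: 1, // stricter than the Redis limiter, since every process has its own copy
  duration: 5,
});

const rateLimiterRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 5,
  duration: 5,
  insuranceLimiter,
});
```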

The Redis client should be created with the offline queue switched off when:

  • the app should not wait for a connection to Redis before processing requests, but should throw an error instead.
  • the offline queue may accumulate too many requests under high traffic and then process them all at once, if Redis goes down and comes back up.
  • the order of requests matters in a distributed environment while there is no connection to Redis. Every Node.js process has an isolated offline queue, so there is no way to control the order in which those queues are drained.
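With the offline queue disabled, `consume` rejects immediately with an Error while Redis is down, and the app must decide whether to fail open or fail closed. A sketch of that decision as a pure function; `rejectionToResponse` and the `failOpen` parameter are assumed names for illustration, not library API:

```javascript
// Hypothetical helper: map a consume() rejection to an HTTP-style response.
// rejRes is either an Error (Redis unreachable) or a RateLimiterRes-like
// object with msBeforeNext. failOpen controls whether a Redis outage lets
// traffic through (true) or blocks it (false).
function rejectionToResponse(rejRes, failOpen = false) {
  if (rejRes instanceof Error) {
    // Redis error: with the offline queue disabled this rejection is immediate
    return failOpen
      ? { status: 200, headers: {} }
      : { status: 503, headers: {} };
  }
  // Limit exceeded: tell the client when to retry
  const secs = Math.round(rejRes.msBeforeNext / 1000) || 1;
  return { status: 429, headers: { 'Retry-After': String(secs) } };
}
```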

redis package

Set the useRedisPackage flag to true if you use the redis package version 4 or later. Note that there are some bugs in rate-limiter-flexible when used with redis version 3 or lower; you can try setting useRedis3AndLowerPackage. Fixes are most welcome. If you don't want to fix them, use rate-limiter-flexible version 2.

See all options here

const { createClient } = require('redis');
// node-redis v4 disables the offline queue via `disableOfflineQueue`
const redisClient = await createClient({ disableOfflineQueue: true })
  .connect();

const { RateLimiterRedis } = require('rate-limiter-flexible');

// It is recommended to handle Redis errors and set up a reconnection strategy
redisClient.on('error', (err) => {
  // Log the error and reconnect if needed; without an 'error' listener,
  // the client emits an unhandled 'error' event that crashes the process
});

const opts = {
  // Basic options
  storeClient: redisClient,
  useRedisPackage: true, // use this flag for the latest redis package
  points: 5, // Number of points
  duration: 5, // Per second(s)
  
  // Custom
  blockDuration: 0, // Do not block if consumed more than points
};

const rateLimiterRedis = new RateLimiterRedis(opts);

rateLimiterRedis.consume(remoteAddress)
    .then((rateLimiterRes) => {
      // ... Some app logic here ...
    })
    .catch((rejRes) => {
      if (rejRes instanceof Error) {
        // Some Redis error
        // Never happens if `insuranceLimiter` is set up
        // Otherwise, decide how to handle it
      } else {
        // Can't consume
        // If there is no error, the promise is rejected with a RateLimiterRes
        // whose msBeforeNext is the number of ms before the next request is allowed
        const secs = Math.round(rejRes.msBeforeNext / 1000) || 1;
        res.set('Retry-After', String(secs));
        res.status(429).send('Too Many Requests');
      }
    });

See all options here
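The consume-and-respond pattern shown in both examples above is commonly wrapped as middleware. A minimal Express-style sketch; `rateLimiterMiddleware` is an assumed name, and any object exposing `consume(key) -> Promise` works, so it can be exercised without a Redis server:

```javascript
// Hypothetical wrapper: turn any limiter with consume(key) into middleware.
// Returns the promise so callers (and tests) can await the outcome.
function rateLimiterMiddleware(limiter) {
  return function (req, res, next) {
    return limiter.consume(req.ip)
      .then(() => next())
      .catch((rejRes) => {
        if (rejRes instanceof Error) {
          // Redis unavailable and no insuranceLimiter configured: fail closed
          res.status(500).send('Rate limiter unavailable');
        } else {
          const secs = Math.round(rejRes.msBeforeNext / 1000) || 1;
          res.set('Retry-After', String(secs));
          res.status(429).send('Too Many Requests');
        }
      });
  };
}
```

Usage would be `app.use(rateLimiterMiddleware(rateLimiterRedis))`, keying on `req.ip` as in the examples above.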

Get all keys example

// Requires the `redisscan` package: const redisScan = require('redisscan');
function scan(prefix, cb) {
  const objs = []
  redisScan({
    redis: redisClient,
    pattern: prefix + "*", // e.g. the limiter's keyPrefix
    each_callback: function (type, key, subkey, dummy, value, next) {
      objs.push({ key: key, value: JSON.parse(value) })
      next()
    },
    done_callback: function (err) {
      if (cb) {
        cb(err, objs)
      }
    },
  })
}

RateLimiterRedis benchmark

The endpoint is a pure Node.js endpoint launched in node:10.5.0-jessie and redis:4.0.10-alpine Docker containers by PM2 with 4 workers

By bombardier -c 1000 -l -d 30s -r 2000 -t 5s http://127.0.0.1:8000

i.e. 1000 concurrent connections at a maximum of 2000 requests per second for 30 seconds

Statistics        Avg      Stdev        Max
  Reqs/sec      2015.20     511.21   14570.19
  Latency        2.45ms     7.51ms   138.41ms
  Latency Distribution
     50%     1.95ms
     75%     2.16ms
     90%     2.43ms
     95%     2.77ms
     99%     5.73ms
  HTTP codes:
    1xx - 0, 2xx - 53556, 3xx - 0, 4xx - 6417, 5xx - 0

Heap snapshot statistics on high traffic

[Figure: heap snapshot of the Redis-backed limiter]