PostgreSQL

RateLimiterPostgres

PostgreSQL >= 9.5

Usage

Note: carefully test performance if your application limits more than 500 requests per second.

It works with pg, knex and sequelize.

It is recommended to use a pool of connections.

By default, RateLimiterPostgres creates a separate table per keyPrefix for every limiter.

All limits are stored in one table if the tableName option is set.

Limit data that expired more than an hour ago is removed every 5 minutes by setTimeout.

It is recommended to provide a ready callback as the second argument of new RateLimiterPostgres(opts, ready) to react to errors during table creation for rate limiters. See the example below. The ready callback can be omitted if the process is terminated on unhandled errors.

See the detailed options description here

const { RateLimiterPostgres } = require('rate-limiter-flexible');
const { Pool } = require('pg');

const client = new Pool({
  host: '127.0.0.1',
  port: 5432,
  database: 'root',
  user: 'root',
  password: 'secret',
});

const opts = {
  // Basic options
  storeClient: client,
  points: 5, // Number of points
  duration: 1, // Per second(s)

  // Custom options
  tableName: 'mytable', // if not provided, keyPrefix used as table name
  keyPrefix: 'myprefix', // must be unique for limiters with different purpose

  blockDuration: 10, // Block for 10 seconds if consumed more than `points`
};

const ready = (err) => {
  if (err) {
    // log the error and/or exit the process
  } else {
    // table checked/created
  }
};

// If the second parameter is not a function or not provided, an unhandled error may be thrown on table creation

const rateLimiter = new RateLimiterPostgres(opts, ready);

rateLimiter.consume(userIdOrIp)
  .then((rateLimiterRes) => {
    // There were enough points to consume
    // ... Some app logic here ...
  })
  .catch((rejRes) => {
    if (rejRes instanceof Error) {
      // Some Postgres error
      // This never happens if `insuranceLimiter` is set up
      // Decide what to do with it otherwise
    } else {
      // Can't consume
      // If there is no error, the promise is rejected with a RateLimiterRes object
      // containing msBeforeNext, consumedPoints and remainingPoints
      const secs = Math.round(rejRes.msBeforeNext / 1000) || 1;
      res.set('Retry-After', String(secs));
      res.status(429).send('Too Many Requests');
    }
  });

Sequelize and Knex support

RateLimiterPostgres gets an internal connection from Sequelize or Knex to make raw queries. The connection is released after any query or transaction, so the workflow stays clean.

const rateLimiter = new RateLimiterPostgres({
  storeClient: sequelizeInstance,
}, ready);

const rateLimiter = new RateLimiterPostgres({
  storeClient: knexInstance,
  storeType: 'knex', // knex requires this option
}, ready);

See the detailed options description here

Benchmark

A pure NodeJS endpoint launched in node:10.5.0-jessie and postgres:latest Docker containers with 4 workers.

The user key is a random number from 0 to 300.

The endpoint is limited by RateLimiterPostgres with this config:

new RateLimiterPostgres({
  storeClient: pgClient,
  points: 5, // Number of points
  duration: 1, // Per second(s)
});
Statistics        Avg      Stdev        Max
  Reqs/sec       995.09     303.79    2010.29
  Latency        7.48ms     5.30ms    51.60ms
  Latency Distribution
     50%     5.25ms
     75%     8.07ms
     90%    16.36ms
     95%    21.85ms
     99%    29.42ms
  HTTP codes:
    1xx - 0, 2xx - 8985, 3xx - 0, 4xx - 21024, 5xx - 0

Heap snapshot statistics under high traffic

Heap snapshot Postgres