Autoscale AWS DynamoDB using an AWS Lambda function
- 5 minute setup process
- Serverless design
- Flexible 'code over configuration' style
- Autoscale table and global secondary indexes
- Autoscale multiple tables
- Optimised performance using concurrent queries
- Statistics via 'measured'
- AWS credential configuration via 'dotenv'
- Optimised lambda package via 'webpack'
- ES7 code
Any reliance you place on dynamodb-lambda-autoscale is strictly at your own risk.
In no event will we be liable for any loss or damage including, without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this code.
- Build and package the code
- Fork the repo
- Clone your fork
- Create a new file in the root folder called 'config.env.production'
- Put your AWS credentials into the file in the following format (a short sketch of how 'dotenv' loads this file appears after these steps)
AWS_ACCESS_KEY_ID="###################"
AWS_SECRET_ACCESS_KEY="###############"
- Run 'npm install'
- Run 'npm run build'
- Verify this has created a 'dist.zip' file
- Optionally, run a local test by running 'npm run start'
- Follow the steps in 'Running locally'
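For reference, this is roughly how the 'dotenv' dependency turns 'config.env.production' into AWS credentials at runtime. This is only a minimal sketch of the mechanism, not code from this repository:

// Minimal sketch: load the credentials file created above into process.env,
// then let the aws-sdk pick the keys up from the environment.
import dotenv from 'dotenv';
import AWS from 'aws-sdk';

dotenv.config({ path: 'config.env.production' });

// The aws-sdk reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from
// process.env automatically once they are set.
const dynamoDB = new AWS.DynamoDB({ apiVersion: '2012-08-10', region: 'us-east-1' });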
- Create an AWS Policy and Role
- Create a policy called 'DynamoDBLambdaAutoscale'
- Use the following content to grant access to DynamoDB, CloudWatch and Lambda logging
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "dynamodb:ListTables",
        "dynamodb:DescribeTable",
        "dynamodb:UpdateTable",
        "cloudwatch:GetMetricStatistics",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
- Create a role called 'DynamoDBLambdaAutoscale'
- Attach the newly created policy to the role
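If you would rather script the policy and role creation than click through the console, a rough aws-sdk sketch is shown below; it assumes the 'DynamoDBLambdaAutoscale' names used above and is not part of this repository:

// Sketch only: create the policy shown above, create the role, and attach the policy.
import AWS from 'aws-sdk';

const iam = new AWS.IAM();

const policyDocument = JSON.stringify({
  Version: '2012-10-17',
  Statement: [{
    Action: [
      'dynamodb:ListTables', 'dynamodb:DescribeTable', 'dynamodb:UpdateTable',
      'cloudwatch:GetMetricStatistics',
      'logs:CreateLogGroup', 'logs:CreateLogStream', 'logs:PutLogEvents'
    ],
    Effect: 'Allow',
    Resource: '*'
  }]
});

// Trust policy so that the Lambda service can assume the role.
const assumeRolePolicy = JSON.stringify({
  Version: '2012-10-17',
  Statement: [{
    Effect: 'Allow',
    Principal: { Service: 'lambda.amazonaws.com' },
    Action: 'sts:AssumeRole'
  }]
});

async function createPolicyAndRole() {
  const policy = await iam.createPolicy({
    PolicyName: 'DynamoDBLambdaAutoscale',
    PolicyDocument: policyDocument
  }).promise();

  await iam.createRole({
    RoleName: 'DynamoDBLambdaAutoscale',
    AssumeRolePolicyDocument: assumeRolePolicy
  }).promise();

  await iam.attachRolePolicy({
    RoleName: 'DynamoDBLambdaAutoscale',
    PolicyArn: policy.Policy.Arn
  }).promise();
}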
- Create an AWS Lambda function
- Skip the pre-defined functions step
- Set the name to 'DynamoDBLambdaAutoscale'
- Set the runtime to 'Node.js 4.3'
- Select upload a zip file and select 'dist.zip' which you created earlier
- Set the handler to 'index.handler'
- Set the Role to 'DynamoDBLambdaAutoscale'
- Set the Memory to the lowest value initially, then experiment with higher values later to see how they affect performance
- Set the Timeout to approximately 5 seconds (higher or lower depending on the number of tables you have and the selected memory setting)
- Once the function is created, attach a 'scheduled event' event source and make it run every minute
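The same function and schedule can also be created with the aws-sdk. The sketch below mirrors the console steps above; the rule name is made up for this example and the role ARN must come from the role created earlier:

// Sketch only: create the function from dist.zip and trigger it every minute.
import fs from 'fs';
import AWS from 'aws-sdk';

const lambda = new AWS.Lambda({ region: 'us-east-1' });
const events = new AWS.CloudWatchEvents({ region: 'us-east-1' });

async function deploy(roleArn) {
  const fn = await lambda.createFunction({
    FunctionName: 'DynamoDBLambdaAutoscale',
    Runtime: 'nodejs4.3',
    Handler: 'index.handler',
    Role: roleArn,                      // ARN of the 'DynamoDBLambdaAutoscale' role
    MemorySize: 128,                    // lowest setting; tune later
    Timeout: 5,                         // adjust for table count and memory setting
    Code: { ZipFile: fs.readFileSync('dist.zip') }
  }).promise();

  // Scheduled event that fires every minute.
  const rule = await events.putRule({
    Name: 'DynamoDBLambdaAutoscaleEveryMinute',
    ScheduleExpression: 'rate(1 minute)'
  }).promise();

  // Allow the scheduled event to invoke the function, then point it at the function.
  await lambda.addPermission({
    FunctionName: 'DynamoDBLambdaAutoscale',
    StatementId: 'AllowScheduledEvent',
    Action: 'lambda:InvokeFunction',
    Principal: 'events.amazonaws.com',
    SourceArn: rule.RuleArn
  }).promise();

  await events.putTargets({
    Rule: 'DynamoDBLambdaAutoscaleEveryMinute',
    Targets: [{ Id: '1', Arn: fn.FunctionArn }]
  }).promise();
}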
The default configuration applies autoscaling to all tables, allowing for a quick, no-touch setup.
dynamodb-lambda-autoscale takes a different approach to autoscaling configuration compared to other community projects. Rather than making well-defined changes to a config file, it provides a callback function called 'getTableUpdate' which must be implemented.
export default {
  connection: {
    dynamoDB: { apiVersion: '2012-08-10', region: 'us-east-1' },
    cloudWatch: { apiVersion: '2010-08-01', region: 'us-east-1' }
  },
  getTableUpdate: (description, consumedCapacityDescription) => {
    // Logic goes here....
  }
};
The function is given information such as the table name, the table's current provisioned throughput and the consumed throughput for the past minute (see the AWS DescribeTable.ResponseSyntax and UpdateTable.ResponseSyntax documentation for the relevant shapes). A table update will only be sent to AWS if the returned values differ from the current ones; this approach follows the popular code-first pattern used in React.
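As an illustration only, a 'getTableUpdate' implementation might look like the sketch below. The shape of 'description' follows the DescribeTable response; the fields read from 'consumedCapacityDescription' are assumptions made for this example rather than the project's actual contract:

getTableUpdate: (description, consumedCapacityDescription) => {
  const table = description.Table;
  const provisioned = table.ProvisionedThroughput;

  // Assumed field names for the consumed throughput in this sketch.
  const consumedRead = consumedCapacityDescription.consumedThroughput.ReadCapacityUnits;

  let readUnits = provisioned.ReadCapacityUnits;

  // Example rule: double read capacity when over 90% of it is being consumed.
  if (consumedRead > provisioned.ReadCapacityUnits * 0.9) {
    readUnits = provisioned.ReadCapacityUnits * 2;
  }

  // An update is only sent to AWS when the returned values differ from the
  // current provisioned values.
  return {
    TableName: table.TableName,
    ProvisionedThroughput: {
      ReadCapacityUnits: readUnits,
      WriteCapacityUnits: provisioned.WriteCapacityUnits
    }
  };
}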
In most cases the default Config.js, which uses the supplied ConfigurableProvisioner.js, provides enough functionality out of the box that no additional coding is required. The default provisioner provides the following features (a rough sketch of these rules in code follows the list):
- Separate 'Read' and 'Write' capacity adjustment
- Separate 'Increment' and 'Decrement' capacity adjustment
- Read/Write provisioned capacity increased
  - if capacity utilisation > 90%
  - by either 100% or 3 units, whichever is greater
  - with hard min/max limits of 1 and 10 respectively
- Read/Write provisioned capacity decreased
  - if capacity utilisation < 30% AND
  - if at least 60 minutes have passed since the last increment AND
  - if at least 60 minutes have passed since the last decrement AND
  - if the adjustment will be at least 3 units AND
  - if we are allowed to utilise 1 of our 4 AWS-enforced decrements
  - to the consumed throughput value
  - with a hard min limit of 1
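The rules above can be summarised in code roughly as follows. This is a simplified sketch of the behaviour described, not the actual ConfigurableProvisioner.js implementation, and the 'state' parameter is an assumption for the example:

// Simplified sketch of the default increment/decrement rules listed above.
function adjustCapacity(provisionedUnits, consumedUnits, state) {
  const utilisation = consumedUnits / provisionedUnits;

  // Increase when utilisation > 90%, by 100% or 3 units (whichever is greater),
  // within hard min/max limits of 1 and 10.
  if (utilisation > 0.9) {
    const increase = Math.max(provisionedUnits, 3);
    return Math.min(Math.max(provisionedUnits + increase, 1), 10);
  }

  // Decrease when utilisation < 30%, at least 60 minutes have passed since the
  // last increment and the last decrement, the adjustment is at least 3 units
  // and one of the 4 daily AWS decrements is still available; drop straight
  // down to the consumed value, with a hard min limit of 1.
  const minutesSince = time => (Date.now() - time) / 60000;
  if (utilisation < 0.3 &&
      minutesSince(state.lastIncrementTime) >= 60 &&
      minutesSince(state.lastDecrementTime) >= 60 &&
      provisionedUnits - consumedUnits >= 3 &&
      state.decrementsRemainingToday > 0) {
    return Math.max(consumedUnits, 1);
  }

  // Otherwise leave the provisioned capacity unchanged.
  return provisionedUnits;
}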
As AWS only allows 4 table decrements in a calendar day, an algorithm segments the remaining time until midnight by the number of decrements left, allowing each of the 4 decrements to be used efficiently. Increments are unlimited, so the algorithm follows a 'sawtooth' profile, dropping the provisioned throughput all the way down to the consumed throughput rather than stepping down gradually. Please see RateLimitedDecrement.js for the full implementation.
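A sketch of that segmenting idea follows; the real logic is in RateLimitedDecrement.js and this version simplifies the check to a single comparison:

// Sketch only: allow a decrement when at least one 'segment' of the time
// remaining until midnight has passed since the last decrement, where the
// remaining time is divided evenly between the decrements still available.
function isDecrementAllowed(now, lastDecrementTime, decrementsUsedToday) {
  const decrementsRemaining = 4 - decrementsUsedToday;   // AWS allows 4 per calendar day
  if (decrementsRemaining <= 0) {
    return false;
  }

  // Midnight at the end of the current (UTC) day.
  const midnight = new Date(now);
  midnight.setUTCHours(24, 0, 0, 0);

  const segment = (midnight.getTime() - now.getTime()) / decrementsRemaining;
  return now.getTime() - lastDecrementTime.getTime() >= segment;
}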
This project has the following main dependencies:
- aws-sdk - Access to AWS services
- dotenv - Environment variable configuration, useful for Lambda
- measured - Statistics gathering
The source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.