Brian D. Burns edited this page May 4, 2013 · 22 revisions

Databases

Currently supported databases:

  • MySQL
  • PostgreSQL
  • MongoDB
  • Redis
  • Riak

MySQL

Backup::Model.new(:my_backup, 'My Backup') do
  database MySQL do |db|
    # To dump all databases, set `db.name = :all` (or leave blank)
    db.name               = "my_database_name"
    db.username           = "my_username"
    db.password           = "my_password"
    db.host               = "localhost"
    db.port               = 3306
    db.socket             = "/tmp/mysql.sock"
    # Note: when using `skip_tables` with the `db.name = :all` option,
    # table names must be prefixed with a database name.
    # e.g. ["db_name.table_to_skip", ...]
    db.skip_tables        = ["skip", "these", "tables"]
    db.only_tables        = ["only", "these", "tables"]
    db.additional_options = ["--quick", "--single-transaction"]
  end
end

MySQL database dumps produce a single output file created using the mysqldump utility. This dump file will be stored within your final backup package as databases/MySQL.sql.

If a Compressor has been added to the backup, the database dump will be piped through the selected compressor. So, if Gzip is the selected compressor, the output would be databases/MySQL.sql.gz.
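For example, a Gzip compressor can be added to the same model with `compress_with`. This is a minimal sketch; see the Compressors wiki page for the full set of compressor options:

```ruby
Backup::Model.new(:my_backup, 'My Backup') do
  database MySQL do |db|
    db.name     = "my_database_name"
    db.username = "my_username"
    db.password = "my_password"
  end

  # Pipes the mysqldump output through gzip, so the dump is stored
  # in the final package as databases/MySQL.sql.gz
  compress_with Gzip
end
```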

PostgreSQL

Backup::Model.new(:my_backup, 'My Backup') do
  database PostgreSQL do |db|
    # To dump all databases, set `db.name = :all` (or leave blank)
    db.name               = "my_database_name"
    db.username           = "my_username"
    db.password           = "my_password"
    db.host               = "localhost"
    db.port               = 5432
    db.socket             = "/tmp/pg.sock"
    # When dumping all databases, `skip_tables` and `only_tables` are ignored.
    db.skip_tables        = ['skip', 'these', 'tables']
    db.only_tables        = ['only', 'these', 'tables']
    db.additional_options = []
  end
end

PostgreSQL database dumps produce a single output file created using the pg_dump utility. This dump file will be stored within your final backup package as databases/PostgreSQL.sql.

If a Compressor has been added to the backup, the database dump will be piped through the selected compressor. So, if Gzip is the selected compressor, the output would be databases/PostgreSQL.sql.gz.

MongoDB

Backup::Model.new(:my_backup, 'My Backup') do
  database MongoDB do |db|
    db.name               = "my_database_name"
    db.username           = "my_username"
    db.password           = "my_password"
    db.host               = "localhost"
    db.port               = 27017
    db.ipv6               = false
    db.only_collections   = ['only', 'these', 'collections']
    db.additional_options = []
    db.lock               = false
    db.oplog              = false
  end
end

MongoDB database dumps are created using the mongodump utility, which will output several files in a folder hierarchy like <databases>/<collections>. Backup creates this hierarchy under a directory named MongoDB. If you specified a database_id (see below), that will be appended. e.g. MongoDB-my_id.

Once the dump is complete, Backup packages this folder into a single tar archive. This archive will be in your final backup package as databases/MongoDB.tar.

If a Compressor has been added to the backup, the packaging of this folder will be piped through the selected compressor. So, if Gzip is the selected compressor, the output would be databases/MongoDB.tar.gz.

db.lock

If db.lock is set to true, Backup will issue a fsyncLock() command to force mongod to flush all pending write operations to disk and lock the entire mongod instance for the duration of the dump. Note that if you have Profiling enabled on your instance, this will be disabled (and will not be re-enabled when the dump completes).

db.oplog

If db.oplog is set to true, the --oplog option will be added to the mongodump command. This creates a database dump that includes an oplog to create a point-in-time snapshot of the current state of the mongod instance.

This is available for all nodes that maintain an oplog, including all members of a replica set, as well as master nodes in master/slave replication deployments. This is preferable over using db.lock, since the node being dumped does not need to be locked.
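As a sketch, a point-in-time dump of a replica set member might be configured as follows (the host name here is a placeholder):

```ruby
database MongoDB do |db|
  db.name = "my_database_name"
  db.host = "replica-member.example.com" # hypothetical replica set member
  db.port = 27017
  # Adds --oplog to the mongodump command for a point-in-time
  # snapshot, without locking the node as db.lock would.
  db.oplog = true
end
```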

Redis

Backup::Model.new(:my_backup, 'My Backup') do
  database Redis do |db|
    ##
    # From `dbfilename` in your `redis.conf` under SNAPSHOTTING.
    # Do not include the '.rdb' extension. Defaults to 'dump'
    db.name               = 'dump'
    ##
    # From `dir` in your `redis.conf` under SNAPSHOTTING.
    db.path               = '/var/lib/redis'
    db.password           = 'my_password'
    db.host               = 'localhost'
    db.port               = 6379
    db.socket             = '/tmp/redis.sock'
    db.additional_options = []
    db.invoke_save        = true
  end
end

The Redis database dump file for the above configuration would be copied from /var/lib/redis/dump.rdb to databases/Redis.rdb.

If a Compressor has been added to the backup, the dump file will be compressed using the selected compressor as it is copied. So, if Gzip is the selected compressor, the result would be databases/Redis.rdb.gz.

db.invoke_save

If db.invoke_save is set to true, Backup will perform a SAVE command using redis-cli before backing up the dump file, so that the dump file is in its most recent state.

Riak

Backup::Model.new(:my_backup, 'My Backup') do
  database Riak do |db|
    ##
    # The node from which to perform the backup.
    # default: 'riak@127.0.0.1'
    db.node = 'riak@hostname'
    ##
    # The Erlang cookie/shared secret used to connect to the node.
    # default: 'riak'
    db.cookie = 'cookie'
    ##
    # The user for the Riak instance.
    # default: 'riak'
    db.user = 'riak'
  end
end

Riak database dumps produce a single output file created using the riak-admin backup command. This dump file will be stored within your final backup package as databases/Riak-<node>.

If a Compressor has been added, then the resulting dump file will be compressed using the selected compressor. So, if Gzip is the selected compressor, the result would be databases/Riak-<node>.gz.

Note: A backup run with a Riak database configured must be run as either the root user or a user that has password-less sudo privileges.

Database Identifiers

All Databases allow you to specify a database_id. For example:

database MySQL, :my_id do |db|
  # etc...
end

This database_id will be added to your dump filename. e.g. databases/MySQL-my_id.sql.

When only one of a specific type of Database (i.e. MySQL, PostgreSQL, etc) is added to your backup model, this database_id is optional. However, if multiple Databases of the same type are added to your model, then a database_id will be required for each. This database_id keeps the dumps from each Database separate. Therefore, if multiple Databases of a single type are detected on your model and any of these do not define a database_id, one will be auto-generated and a warning will be logged.
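For example, backing up two MySQL databases on the same model requires a distinct database_id for each (the ids and database names here are illustrative):

```ruby
Backup::Model.new(:my_backup, 'My Backup') do
  database MySQL, :app do |db|
    db.name = "app_production"  # dumped as databases/MySQL-app.sql
  end

  database MySQL, :analytics do |db|
    db.name = "analytics"       # dumped as databases/MySQL-analytics.sql
  end
end
```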

Default Configuration

If you are backing up multiple databases, you may want to specify default configuration so that you don't have to rewrite the same lines of code for each of the same database types. For example, say that the MySQL database always has the same username, password and additional_options. You could add the following in your config.rb file:

Backup::Database::MySQL.defaults do |db|
  db.username           = "my_username"
  db.password           = "my_password"
  db.additional_options = ["--single-transaction"]
end

So now for every MySQL database you wish to back up that requires the username, password and additional_options to be filled in with the defaults we just specified above, you may omit them in the actual database block, like so:

database MySQL do |db|
  db.name = "my_database_name"
  # no need to specify username
  # no need to specify password
  # no need to specify additional_options
end
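Values set within a database block take precedence over the defaults, so a default may be overridden for an individual database. A minimal sketch, assuming the defaults above are in config.rb:

```ruby
database MySQL do |db|
  db.name     = "other_database"
  # Overrides the db.username default from config.rb
  # for this database only.
  db.username = "other_username"
end
```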

You can set defaults for MongoDB by changing Database::MySQL to Database::MongoDB.

Backup::Database::MongoDB.defaults do |db|
  # ...and so forth for every supported database.
end