Scenographer is a Python script that creates a subset of a Postgres database without losing referential integrity.
The goal is to spawn data-correct databases, making it easy to create new environments for testing and/or demoing.
Use pip to install scenographer:
pip install scenographer
Scenographer requires a configuration file. An empty one, to serve as a starting point, is available by running scenographer empty-config.
After adjusting the configuration file, it's easy to start the sampling run:
scenographer sample config.json
or, if the schema doesn't need to be recreated in the target database:
scenographer sample config.json --skip-schema
The configuration file includes connection strings for the source database and the target database. Only Postgres is supported for both.
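Both take a standard Postgres connection URI. A minimal sketch, assuming the keys are named SOURCE_DATABASE_URL and TARGET_DATABASE_URL (the empty config generated above has the authoritative names):
SOURCE_DATABASE_URL = "postgresql://user:password@localhost:5432/production_db"
TARGET_DATABASE_URL = "postgresql://user:password@localhost:5432/sample_db"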
Scenographer works by traversing a DAG built from the foreign key constraints of the database. However, the foreign keys don't always form a DAG. To handle those cases, some foreign keys can be ignored by adding exceptions in this form:
IGNORE_RELATIONS = [
{"pk": "product.id", "fk": "client.favorite_product_id"}
]
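To make the traversal concrete, here's a minimal sketch of the underlying idea (not scenographer's actual code), with hypothetical table names: tables are nodes, foreign keys are edges, and a topological order guarantees parent rows are copied before the rows that reference them.
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical foreign key edges: each table maps to the tables it references.
foreign_keys = {
    "product": set(),
    "client": {"product"},            # client.favorite_product_id -> product.id
    "orders": {"client", "product"},  # orders.client_id -> client.id, etc.
}

# Parent tables come first, so every referenced row exists before it's needed.
print(list(TopologicalSorter(foreign_keys).static_order()))
# ['product', 'client', 'orders']
If two tables referenced each other, no such order would exist; that's exactly when an entry in IGNORE_RELATIONS breaks the cycle.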
On other occasions, the foreign key constraint isn't present in the database at all, even though it exists on the application side (as Rails does it, for instance). Additional relations can be added to handle those cases; they take the same format as IGNORE_RELATIONS, as in the sketch below.
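For example, if the application guarantees that orders.client_id points at client.id without a database-level constraint, the relation could be declared explicitly. The key name EXTEND_RELATIONS and the column names are assumptions, not taken from scenographer's docs:
EXTEND_RELATIONS = [
    {"pk": "client.id", "fk": "orders.client_id"}
]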
Some tables are superfluous: they may not matter, they may require a special solution, or they may belong to different components. Either way, you can ignore them.
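A sketch of what that might look like, assuming a key named IGNORE_TABLES (check the generated empty config for the exact name) and hypothetical table names:
IGNORE_TABLES = [
    "audit_log",
    "schema_migrations"
]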
In some cases, it's useful to tap into the actual queries being made. For that, you can add query modifiers. Here's an example:
QUERY_MODIFIERS = {
"_default": {"conditions": [], "limit": 300},
"users": {"conditions": ["email ilike '%@example.com'"]},
}
Each entry is a table, with the exception of _default, which is applied to all queries. Each value can have a conditions and/or limit key; conditions are written in plain SQL. With the example above, only users rows whose email matches the ilike condition are sampled, and the default limit of 300 applies to every table that doesn't override it.
At some point, the data is converted into CSV files to be imported into Postgres. This setting is the directory for those CSV files. If you don't care about it, feel free to ignore it: when it's not declared, a temporary directory is created and used instead.
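A sketch with a hypothetical key name and path; the real key may differ:
OUTPUT_DIRECTORY = "/tmp/scenographer-csv"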
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Please make sure to update tests as appropriate.