Interest in a 'psql -f' migrater? #23
Comments
I think you can achieve something equivalent by just using …

I'm not sure that this case of "I have a single migration I'd like to apply" is special enough to deserve its own migrator, but at the very least I should document it. One question for you: is there any reason you need to use psql instead of executing the file's contents yourself with db.ExecContext?
Another question for you: how are you generating that single migration / snapshot — with pg_dump?
We are indeed creating the file with pg_dump. I don't know that much about it, but I believe the pg_dump output is kind of special and can't really be run in a single db.ExecContext. To be honest, though, that is just something I have in my head and I can't find a reliable source right now to back it up.
Got it. As an experiment that would help me learn some stuff, could you try applying your snapshot file with db.ExecContext? A "raw sql applier" migrator could be useful to add, regardless of whether it calls psql or not.
Good idea. With our prod schema, just running pg_dump without any flags and then executing the output with db.ExecContext complains with:
This is probably because of these lines in the dump file (for loading data):
If I run pg_dump with the flags --schema-only, --no-comments, and --no-owner, it does run fine with db.ExecContext. It happens that we don't care about the data in the snapshot, so we can run it with these flags, but for others the data might be important.
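For reference, a minimal sketch of producing that kind of schema-only snapshot from Go by shelling out to pg_dump with the flags mentioned above; the helper name dumpSchema and the output path are illustrative, not from this thread:

```go
import (
	"fmt"
	"os"
	"os/exec"
)

// dumpSchema writes a schema-only snapshot of databaseURL to outFile, using
// the flags discussed above so the result can later be applied with a single
// db.ExecContext call.
func dumpSchema(databaseURL, outFile string) error {
	cmd := exec.Command("pg_dump",
		"--schema-only",
		"--no-comments",
		"--no-owner",
		"--file", outFile,
		databaseURL, // pg_dump accepts a connection URI as the dbname argument
	)
	cmd.Stderr = os.Stderr // surface pg_dump's own error messages
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("pg_dump failed: %w", err)
	}
	return nil
}
```

For example: dumpSchema(os.Getenv("DATABASE_URL"), "schema.sql").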
For what it's worth, I'm using a simple migrator in my tests to run zero or more SQL files:

```go
import (
	"context"
	"crypto/sha256"
	"database/sql"
	"encoding/hex"
	"fmt"
	"os"

	"github.com/peterldowns/pgtestdb"
)

// execSQLFromFileMigrator runs zero or more SQL files, in order, against the
// test database.
type execSQLFromFileMigrator []string

// Hash incorporates each filename and its contents, so the template database
// is rebuilt whenever a file is renamed, reordered, or edited.
func (m execSQLFromFileMigrator) Hash() (string, error) {
	hash := sha256.New()
	for _, filename := range m {
		fmt.Fprintln(hash, filename)
		data, err := os.ReadFile(filename)
		if err != nil {
			return "", err
		}
		if _, err := hash.Write(data); err != nil {
			return "", err
		}
	}
	return hex.EncodeToString(hash.Sum(nil)), nil
}

// Migrate reads each file and applies it with a single ExecContext call.
func (m execSQLFromFileMigrator) Migrate(ctx context.Context, db *sql.DB, config pgtestdb.Config) error {
	for _, filename := range m {
		sqlBytes, err := os.ReadFile(filename)
		if err != nil {
			return err
		}
		if _, err := db.ExecContext(ctx, string(sqlBytes)); err != nil {
			return err
		}
	}
	return nil
}
```

And then in a test:

```go
connURL := pgtestdb.Custom(t, pgtestdb.Config{
	// ...
}, execSQLFromFileMigrator{
	"schema.sql",
}).URL()
```
Thank you for this library, it was instantly useful to our team! The only thing that required some minimal effort for us is that instead of migrations we use a "snapshot" file: a single .sql file that describes the whole schema (not a series of migrations). We simply populate the database using psql -f. This was simple enough for us to set up by just implementing a custom migrator, as seen below.

I wondered if there is interest in having this migrator in this repo? It is specific to PostgreSQL, but so is "tern", so maybe that would be OK. The other concern is that it depends on an external binary, which makes it more fragile, but parsing the SQL file in Go is not worth it for us.
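The migrator referenced above isn't included in this excerpt. As a rough sketch only — the type name psqlFileMigrator and the psql flags are illustrative, it assumes psql is on PATH, and it assumes the Config passed to Migrate exposes the same URL() helper used in the snippet earlier in the thread — a psql -f based migrator might look like:

```go
import (
	"context"
	"crypto/sha256"
	"database/sql"
	"encoding/hex"
	"fmt"
	"os"
	"os/exec"

	"github.com/peterldowns/pgtestdb"
)

// psqlFileMigrator applies one or more SQL files by shelling out to psql -f.
// The type name and flag choices are illustrative, not from the original issue.
type psqlFileMigrator []string

// Hash mixes the filenames and file contents so the template database is
// recreated whenever any snapshot file changes.
func (m psqlFileMigrator) Hash() (string, error) {
	hash := sha256.New()
	for _, filename := range m {
		fmt.Fprintln(hash, filename)
		data, err := os.ReadFile(filename)
		if err != nil {
			return "", err
		}
		hash.Write(data)
	}
	return hex.EncodeToString(hash.Sum(nil)), nil
}

// Migrate runs `psql -f <file>` against the test database, assuming psql is on
// PATH and config.URL() yields a connection string psql accepts.
func (m psqlFileMigrator) Migrate(ctx context.Context, _ *sql.DB, config pgtestdb.Config) error {
	for _, filename := range m {
		cmd := exec.CommandContext(ctx, "psql",
			"--no-psqlrc",
			"--single-transaction",
			"-v", "ON_ERROR_STOP=1",
			"-f", filename,
			config.URL(),
		)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("psql -f %s: %w\n%s", filename, err, out)
		}
	}
	return nil
}
```

Running with --single-transaction and -v ON_ERROR_STOP=1 makes a failing snapshot abort and roll back instead of leaving a half-applied schema.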