fix(se): render constraint renames as separate sql statements #4906

Open · wants to merge 9 commits into `main`
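For context: on PostgreSQL, the `ALTER TABLE ... RENAME CONSTRAINT ... TO ...` form cannot be combined with other alterations (such as `ADD COLUMN`) in a single statement. Before this change, the renderer folded the rename into the same comma-joined `ALTER TABLE` as the other table changes, which PostgreSQL rejects. A sketch of the failing output, using the table and constraint names from the tests added in this PR:

```sql
-- Rendered by the previous logic when a column addition and a primary-key
-- constraint rename land in the same migration step: one combined ALTER TABLE.
ALTER TABLE "A"
ADD COLUMN "b" TEXT,
RENAME CONSTRAINT "A_pkey" TO "CustomId";
-- PostgreSQL rejects this: RENAME CONSTRAINT is not a combinable action.
```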
59 changes: 33 additions & 26 deletions README.md
@@ -19,23 +19,24 @@ and test them.

This repository contains four engines:

- *Query engine*, used by the client to run database queries from Prisma Client
- *Schema engine*, used to create and run migrations and introspection
- *Prisma Format*, used to format prisma files
- _Query engine_, used by the client to run database queries from Prisma Client
- _Schema engine_, used to create and run migrations and introspection
- _Prisma Format_, used to format prisma files

Additionally, the *psl* (Prisma Schema Language) is the library that defines how
Additionally, the _psl_ (Prisma Schema Language) is the library that defines how
the language looks, how it's parsed, etc.

You'll also find:
- *libs*, for various (small) libraries such as macros, user facing errors,
various connector/database-specific libraries, etc.

- _libs_, for various (small) libraries such as macros, user facing errors,
various connector/database-specific libraries, etc.
- a `docker-compose.yml` file that's helpful for running tests and bringing up
containers for various databases
containers for various databases
- a `flake.nix` file for bringing up all dependencies and making it easy to
build the code in this repository (the use of this file and `nix` is
entirely optional, but can be a good and easy way to get started)
build the code in this repository (the use of this file and `nix` is
entirely optional, but can be a good and easy way to get started)
- an `.envrc` file to make it easier to set everything up, including the `nix
shell`
shell`

## Documentation

@@ -76,7 +77,7 @@ compiled binaries inside the repository root in the `target/debug` (without

## Prisma Schema Language

The *Prisma Schema Language* is a library which defines the data structures and
The _Prisma Schema Language_ is a library which defines the data structures and
parsing rules for prisma files, including the available database connectors. For
more technical details, please check the [library README](./psl/README.md).

@@ -86,23 +87,25 @@ also used as input for the query engine.

## Query Engine

The *Query Engine* is how Prisma Client queries are executed. Here's a brief
The _Query Engine_ is how Prisma Client queries are executed. Here's a brief
description of what it does:

- takes as inputs an annotated version of the Prisma Schema file called the
DataModeL (DML),
DataModeL (DML),
- using the DML (specifically, the datasources and providers), it builds up a
[GraphQL](https://graphql.org) model for queries and responses,
[GraphQL](https://graphql.org) model for queries and responses,
- runs as a server listening for GraphQL queries,
- it translates the queries to the respective native datasource(s) and
returns GraphQL responses, and
returns GraphQL responses, and
- handles all connections and communication with the native databases.

When used through Prisma Client, there are two ways for the Query Engine to
be executed:

- as a binary, downloaded during installation, launched at runtime;
communication happens via HTTP (`./query-engine/query-engine`)
communication happens via HTTP (`./query-engine/query-engine`)
- as a native, platform-specific Node.js addon; also downloaded during
installation (`./query-engine/query-engine-node-api`)
installation (`./query-engine/query-engine-node-api`)

### Usage

@@ -115,7 +118,7 @@ Notable environment flags:

- `RUST_LOG_FORMAT=(devel|json)` sets the log format. By default outputs `json`.
- `QE_LOG_LEVEL=(info|debug|trace)` sets the log level for the Query Engine. If
you need Query Graph debugging logs, set it to "trace"
you need Query Graph debugging logs, set it to "trace"
- `FMT_SQL=1` enables logging _formatted_ SQL queries
- `PRISMA_DML_PATH=[path_to_datamodel_file]` should point to the datamodel file
location. This or `PRISMA_DML` is required for the Query Engine to run.
@@ -147,13 +150,15 @@ Navigate to `http://localhost:3000` to view the Grafana dashboard.

## Schema Engine

The *Schema Engine* does a couple of things:
The _Schema Engine_ does a couple of things:

- creates new migrations by comparing the prisma file with the current state of
the database, in order to bring the database in sync with the prisma file
the database, in order to bring the database in sync with the prisma file
- runs these migrations and keeps track of which migrations have been executed
- (re-)generates a prisma schema file starting from a live database

The engine uses:

- the prisma files, as the source of truth
- the database it connects to, for diffing and running migrations, as well as
keeping track of migrations in the `_prisma_migrations` table
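As a concrete illustration of the migration-creation step described above: when a model exists in the prisma file but not yet in the database, the generated migration contains a matching `CREATE TABLE`. A rough sketch for PostgreSQL (column types and the constraint name are illustrative; the exact SQL depends on the connector and the attributes used):

```sql
-- Hypothetical migration for a newly added model A with `id Int @id` and `a String`.
CREATE TABLE "A" (
    "id" INTEGER NOT NULL,
    "a" TEXT NOT NULL,

    CONSTRAINT "A_pkey" PRIMARY KEY ("id")
);
```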
@@ -168,13 +173,14 @@ a node package. You can read more [here](./prisma-schema-wasm/README.md).
## Debugging

When trying to debug code, here's a few things that might be useful:

- use the language server; being able to go to definition and reason about code
can make things a lot easier,
can make things a lot easier,
- add `dbg!()` statements to validate code paths, inspect variables, etc.,
- you can control the amount of logs you see, and where they come from using the
`RUST_LOG` environment variable; see [the documentation](https://docs.rs/env_logger/0.9.1/env_logger/#enabling-logging),
`RUST_LOG` environment variable; see [the documentation](https://docs.rs/env_logger/0.9.1/env_logger/#enabling-logging),
- you can use the `test-cli` to test migration and introspection without having
to go through the `prisma` npm package.
to go through the `prisma` npm package.

## Testing

@@ -186,7 +192,7 @@ integration tests.

You can find them across the whole codebase, usually in `./tests` folders at
the root of modules. These tests can be executed via `cargo test`. Note that
some of them will require the `TEST_DATABASE_URL` enviornment variable set up.
some of them will require the `TEST_DATABASE_URL` environment variable set up.

- **Integration tests**: They run GraphQL queries against isolated
instances of the Query Engine and assert that the responses are correct.
@@ -268,6 +274,7 @@ You can trigger releases from this repository to npm that can be used for testing
(Since July 2022). Any branch name starting with `integration/` will, first, run the full test suite in Buildkite `[Test] Prisma Engines` and, second, if passing, run the publish pipeline (build and upload engines to S3 & R2)

The journey through the pipeline is the same as a commit on the `main` branch.

- It will trigger [`prisma/engines-wrapper`](https://github.com/prisma/engines-wrapper) and publish a new [`@prisma/engines-version`](https://www.npmjs.com/package/@prisma/engines-version) npm package but on the `integration` tag.
- Which triggers [`prisma/prisma`](https://github.com/prisma/prisma) to create a `chore(Automated Integration PR): [...]` PR with a branch name also starting with `integration/`
- Since in `prisma/prisma` we also trigger the publish pipeline when a branch name starts with `integration/`, this will publish all `prisma/prisma` monorepo packages to npm on the `integration` tag.
@@ -276,8 +283,9 @@ The journey through the pipeline is the same as a commit on the `main` branch.
This end-to-end process takes a minimum of ~1h20 to complete, but is completely automated :robot:

Notes:

- in `prisma/prisma` repository, we do not run tests for `integration/` branches, it is much faster and also means that there is no risk of tests failing (e.g. flaky tests, snapshots) that would stop the publishing process.
- in `prisma/prisma-engines` the Buildkite test pipeline must first pass, then the engines will be built and uploaded to our storage via the Buildkite release pipeline. These 2 pipelines can fail for different reasons, it's recommended to keep an eye on them (check notifications in Slack) and restart jobs as needed. Finally, it will trigger [`prisma/engines-wrapper`](https://github.com/prisma/engines-wrapper).
- in `prisma/prisma-engines` the Buildkite test pipeline must first pass, then the engines will be built and uploaded to our storage via the Buildkite release pipeline. These 2 pipelines can fail for different reasons, it's recommended to keep an eye on them (check notifications in Slack) and restart jobs as needed. Finally, it will trigger [`prisma/engines-wrapper`](https://github.com/prisma/engines-wrapper).

#### Manual integration releases from this repository to npm

@@ -292,7 +300,6 @@ rust-analyzer. To avoid this. Open VSCode settings and search for `Check on Save
--target-dir:/tmp/rust-analyzer-check
```


## Community PRs: create a local branch for a branch coming from a fork

To trigger an [Automated integration releases from this repository to npm](#automated-integration-releases-from-this-repository-to-npm) or [Manual integration releases from this repository to npm](#manual-integration-releases-from-this-repository-to-npm) branches of forks need to be pulled into this repository so the Buildkite job is triggered. You can use these GitHub and git CLI commands to achieve that easily:
@@ -231,6 +231,7 @@ impl SqlRenderer for PostgresFlavour {
fn render_alter_table(&self, alter_table: &AlterTable, schemas: MigrationPair<&SqlSchema>) -> Vec<String> {
let AlterTable { changes, table_ids } = alter_table;
let mut lines = Vec::new();
let mut separate_lines = Vec::new();
let mut before_statements = Vec::new();
let mut after_statements = Vec::new();
let tables = schemas.walk(*table_ids);
@@ -241,7 +242,7 @@ impl SqlRenderer for PostgresFlavour {
"DROP CONSTRAINT {}",
Quoted::postgres_ident(tables.previous.primary_key().unwrap().name())
)),
TableChange::RenamePrimaryKey => lines.push(format!(
TableChange::RenamePrimaryKey => separate_lines.push(format!(
"RENAME CONSTRAINT {} TO {}",
Quoted::postgres_ident(tables.previous.primary_key().unwrap().name()),
Quoted::postgres_ident(tables.next.primary_key().unwrap().name())
@@ -304,14 +305,14 @@ impl SqlRenderer for PostgresFlavour {
};
}

if lines.is_empty() {
if lines.is_empty() && separate_lines.is_empty() {
return Vec::new();
}

if self.is_cockroachdb() {
let mut out = Vec::with_capacity(before_statements.len() + after_statements.len() + lines.len());
out.extend(before_statements);
for line in lines {
for line in lines.into_iter().chain(separate_lines) {
out.push(format!(
"ALTER TABLE {} {}",
QuotedWithPrefix::pg_from_table_walker(tables.previous),
@@ -321,12 +322,16 @@ impl SqlRenderer for PostgresFlavour {
out.extend(after_statements);
out
} else {
let alter_table = format!(
"ALTER TABLE {} {}",
QuotedWithPrefix::pg_new(tables.previous.namespace(), tables.previous.name()),
lines.join(",\n")
);
let table = QuotedWithPrefix::pg_new(tables.previous.namespace(), tables.previous.name());
for line in separate_lines {
after_statements.push(format!("ALTER TABLE {} {}", table, line))
}

if lines.is_empty() {
return before_statements.into_iter().chain(after_statements).collect();
}

let alter_table = format!("ALTER TABLE {} {}", table, lines.join(",\n"));
before_statements
.into_iter()
.chain(std::iter::once(alter_table))
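With the new `separate_lines` bucket, constraint renames are no longer joined into the main `ALTER TABLE`; instead they are appended to `after_statements` as standalone statements. In the non-CockroachDB PostgreSQL branch, the same migration step therefore renders as two statements, matching the expected scripts in the tests below:

```sql
-- Comma-joinable changes from `lines` stay in one ALTER TABLE ...
ALTER TABLE "A" ADD COLUMN "b" TEXT;
-- ... and each rename from `separate_lines` becomes its own statement.
ALTER TABLE "A" RENAME CONSTRAINT "A_pkey" TO "CustomId";
```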
@@ -1312,3 +1312,61 @@ fn alter_constraint_name(mut api: TestApi) {
migration.expect_contents(expected_script)
});
}

#[test_connector(exclude(Mysql, Sqlite, Mssql))]
fn alter_constraint_name_and_alter_columns_at_same_time(mut api: TestApi) {
let plain_dm = api.datamodel_with_provider(
r#"
model A {
id Int @id
a String
}
"#,
);

let dir = api.create_migrations_directory();
api.create_migration("plain_migration", &plain_dm, &dir).send_sync();

let custom_dm = api.datamodel_with_provider(&format!(
r#"
model A {{
id Int @id{}
a String
b String?
}}
"#,
if api.is_sqlite() || api.is_mysql() || api.is_mssql() {
""
} else {
r#"(map: "CustomId")"#
}
));

let is_postgres = api.is_postgres();
let is_postgres15 = api.is_postgres_15();
let is_postgres16 = api.is_postgres_16();
let is_cockroach = api.is_cockroach();

api.create_migration("custom_migration", &custom_dm, &dir)
.send_sync()
.assert_migration_directories_count(2)
.assert_migration("custom_migration", move |migration| {
let expected_script = if is_cockroach {
expect![[r#"
-- AlterTable
ALTER TABLE "A" ADD COLUMN "b" STRING;
ALTER TABLE "A" RENAME CONSTRAINT "A_pkey" TO "CustomId";
"#]]
} else if is_postgres || is_postgres15 || is_postgres16 {
expect![[r#"
-- AlterTable
ALTER TABLE "A" ADD COLUMN "b" TEXT;
ALTER TABLE "A" RENAME CONSTRAINT "A_pkey" TO "CustomId";
"#]]
} else {
panic!()
};

migration.expect_contents(expected_script)
});
}
35 changes: 35 additions & 0 deletions schema-engine/sql-migration-tests/tests/schema_push/mod.rs
@@ -241,6 +241,41 @@ fn alter_constraint_name_push(api: TestApi) {
});
}

#[test_connector(exclude(Sqlite, Mysql))]
fn alter_constraint_name_and_alter_columns_at_same_time_push(api: TestApi) {
let dm1 = r#"
model A {
id Int @id
name String?
}
"#;

api.schema_push_w_datasource(dm1).send().assert_green();

let id = if api.is_sqlite() || api.is_mysql() {
""
} else {
r#"(map: "CustomId")"#
};

let dm2 = format!(
r#"
model A {{
id Int @id{id}
name String?
lastName String?
}}
"#
);

api.schema_push_w_datasource(dm2).send().assert_green();

api.assert_schema().assert_table("A", |table| {
table.assert_pk(|pk| pk.assert_constraint_name("CustomId"));
table.assert_columns_count(3).assert_has_column("lastName")
});
}

#[test_connector(tags(Sqlite))]
fn sqlite_reserved_name_space_can_be_used(api: TestApi) {
let plain_dm = r#"