
v1.203.0 - Supabase CLI analytics is not healthy #2737

Open
ThingEngineer opened this issue Oct 8, 2024 · 12 comments

Comments

@ThingEngineer

ThingEngineer commented Oct 8, 2024

Describe the bug
v1.203.0 will not start without either:

disabling health checks
supabase start --ignore-health-check

or disabling analytics

[analytics]
enabled = false

To Reproduce
Steps to reproduce the behavior:

  1. Have analytics enabled and working
  2. supabase stop
  3. Update from v1.200.3 to v1.203.0
  4. supabase start

Expected behavior
Supabase passes all health checks and starts.

System information

  • Ticket ID: 5d8ce60231584e9c9f985f8ea417ca9d
  • Version of OS: macOS Ventura 13.6.9 (22G830)
  • Version of CLI: v1.203.0
  • Version of Docker: Docker Desktop 4.34.2 (167172)
  • Versions of services:
SERVICE IMAGE          LOCAL             LINKED
supabase/postgres      15.1.0.133        15.1.0.133
supabase/gotrue        v2.162.1          v2.162.1
postgrest/postgrest    v11.2.2           v11.2.2
supabase/realtime      v2.30.34          -
supabase/storage-api   v1.11.7           v1.11.7
supabase/edge-runtime  v1.58.11          -
supabase/studio        20240930-16f2b8e  -
supabase/postgres-meta v0.83.2           -
supabase/logflare      1.4.0             -
supabase/supavisor     1.1.56            -

Additional context

  • "supabase": "^1.203.0"
  • "@supabase/supabase-js": "^2.45.4"
  • I also successfully relinked with supabase link, stopped Supabase, restarted the host OS and Docker, then attempted a normal supabase start again, with no joy.
  • TODO: try supabase-beta (wish that I could downgrade with brew), re-init the Supabase CLI, give up on relying on local analytics being available. jk <3
  • The output after starting supabase cli with health checks disabled:
supabase start --ignore-health-check
WARNING: analytics requires mounting default docker socket: /var/run/docker.sock
supabase_analytics_PROJECT_NAME container logs:

18:25:28.983 [error] Postgrex.Protocol (#PID<0.162.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:28.983 [error] Postgrex.Protocol (#PID<0.161.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:30.099 [error] Postgrex.Protocol (#PID<0.161.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:31.769 [error] Postgrex.Protocol (#PID<0.162.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:32.039 [error] Postgrex.Protocol (#PID<0.161.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:35.782 [error] Postgrex.Protocol (#PID<0.161.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:38.280 [error] Postgrex.Protocol (#PID<0.162.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:39.858 [error] Could not create schema migrations table. This error usually happens due to the following:

  * The database does not exist
  * The "schema_migrations" table, which Ecto uses for managing
    migrations, was defined by another library
  * There is a deadlock while migrating (such as using concurrent
    indexes with a migration_lock)

To fix the first issue, run "mix ecto.create" for the desired MIX_ENV.

To address the second, you can run "mix ecto.drop" followed by
"mix ecto.create", both for the desired MIX_ENV. Alternatively you may
configure Ecto to use another table and/or repository for managing
migrations:

    config :logflare, Logflare.Repo,
      migration_source: "some_other_table_for_schema_migrations",
      migration_repo: AnotherRepoForSchemaMigrations

The full error report is shown below.

** (DBConnection.ConnectionError) connection not available and request was dropped from queue after 10959ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:

  1. Ensuring your database is available and that you can connect to it
  2. Tracking down slow queries and making sure they are running fast enough
  3. Increasing the pool_size (although this increases resource consumption)
  4. Allowing requests to wait longer by increasing :queue_target and :queue_interval

See DBConnection.start_link/2 for more information

    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
    (elixir 1.14.4) lib/enum.ex:1658: Enum."-map/2-lists^map/1-0-"/2
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:1005: Ecto.Adapters.SQL.execute_ddl/4
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:738: Ecto.Migrator.verbose_schema_migration/3
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:552: Ecto.Migrator.lock_for_migrations/4
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:428: Ecto.Migrator.run/4
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:170: Ecto.Migrator.with_repo/3
    nofile:1: (file)

18:25:43.412 [error] Postgrex.Protocol (#PID<0.4719.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:43.412 [error] Postgrex.Protocol (#PID<0.4728.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:43.412 [error] Postgrex.Protocol (#PID<0.4724.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:43.412 [error] Postgrex.Protocol (#PID<0.4718.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:43.412 [error] Postgrex.Protocol (#PID<0.4729.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:43.412 [error] Postgrex.Protocol (#PID<0.4722.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:43.412 [error] Postgrex.Protocol (#PID<0.4725.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:43.412 [error] Postgrex.Protocol (#PID<0.4723.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:43.413 [error] Postgrex.Protocol (#PID<0.4726.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

18:25:43.413 [error] Postgrex.Protocol (#PID<0.4727.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist
{"Kernel pid terminated",application_controller,"{application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Cainophile.Adapters.Postgres',{error,fatal,<<\"3D000\">>,invalid_catalog_name,<<\"database \\"_supabase\\" does not exist\">>,[{file,<<\"postinit.c\">>},{line,<<\"941\">>},{routine,<<\"InitPostgres\">>},{severity,<<\"FATAL\">>}]}}},{'Elixir.Logflare.Application',start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Cainophile.Adapters.Postgres',{error,fatal,<<"3D000">>,invalid_catalog_name,<<"database \"_supabase\" does not exist">>,[{file,<<"postinit.c">>},{line,<<"941">>},{routine,<<"InitPostgres">>},{severity,<<"FATAL">>}]}}},{'Elixir.Logflare.Application',start,[normal,[]]}}})

Crash dump is being written to: erl_crash.dump...done

18:25:46.279 [error] Postgrex.Protocol (#PID<0.161.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name) database "_supabase" does not exist

[... the same Postgrex connection errors and the "Could not create schema migrations table" report then repeat verbatim as the container retries ...]
supabase_analytics_PROJECT_NAME container is not ready: starting
Started supabase local development setup.

         API URL: http://127.0.0.1:54321
     GraphQL URL: http://127.0.0.1:54321/graphql/v1
  S3 Storage URL: http://127.0.0.1:54321/storage/v1/s3
          DB URL: postgresql://postgres:[email protected]:54322/postgres
      Studio URL: http://127.0.0.1:54323
    Inbucket URL: http://127.0.0.1:54324
      JWT secret: 
        anon key: 
service_role key: 
   S3 Access Key: 
   S3 Secret Key: 
       S3 Region: local
@sweatybridge
Contributor

Could you try supabase stop --no-backup followed by start? We switched to a new initialisation routine that's not compatible with local db state.

@ThingEngineer
Author

Could you try supabase stop --no-backup followed by start? We switched to a new initialization routine that's not compatible with local db state.

Tried and failed. I started with health checks disabled and tried to start normally again after stop --no-backup, but it still fails. Maybe if I had stopped that way before the upgrade?

I'd rather not re-init the CLI if it can be avoided, any other ideas to get analytics back?

@evelant

evelant commented Oct 8, 2024

@ThingEngineer I started with analytics.enabled = false then did supabase db reset and that worked for me to create the missing database and schemas. Obviously this removes all your local data.

@ThingEngineer
Author

ThingEngineer commented Oct 8, 2024

@ThingEngineer I started with analytics.enabled = false then did supabase db reset and that worked for me to create the missing database and schemas. Obviously this removes all your local data.

Good to hear, I was in the process of making sure my seed data was up to date so I could try that.

I also tried supabase-beta:

brew unlink supabase
brew install supabase/tap/supabase-beta
brew link supabase-beta
brew upgrade supabase-beta

But the issue persisted there so I switched back.

@avallete
Member

avallete commented Oct 9, 2024

Another possibility, to keep your local data, would be to connect to the database container and run the migration manually:

CREATE DATABASE _supabase WITH OWNER postgres;
-- connect to _supabase database
\c _supabase
CREATE SCHEMA IF NOT EXISTS _analytics;
ALTER SCHEMA _analytics OWNER TO postgres;

CREATE SCHEMA IF NOT EXISTS _supavisor;
ALTER SCHEMA _supavisor OWNER TO postgres;
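For anyone who'd rather run this from a terminal than pgAdmin, the same fix can be sketched via psql, using the default local DB URL that supabase start prints (postgres:postgres on port 54322; adjust to match your own output — run_analytics_fix is just a made-up helper name):

```shell
# Sketch: apply the SQL above through psql against the local database.
# Assumes the CLI's default local connection; adjust the URL if yours differs.
run_analytics_fix() {
  base="postgresql://postgres:postgres@127.0.0.1:54322"
  # Create the missing _supabase database first...
  psql "$base/postgres" -c 'CREATE DATABASE _supabase WITH OWNER postgres;'
  # ...then the schemas that analytics (logflare) and supavisor expect.
  psql "$base/_supabase" <<'SQL'
CREATE SCHEMA IF NOT EXISTS _analytics;
ALTER SCHEMA _analytics OWNER TO postgres;
CREATE SCHEMA IF NOT EXISTS _supavisor;
ALTER SCHEMA _supavisor OWNER TO postgres;
SQL
}
```

Call run_analytics_fix once, then restart with supabase start.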

@ThingEngineer
Author

ThingEngineer commented Oct 9, 2024

Another possibility, to keep your local data, would be to connect to the database container and run the migration manually:

CREATE DATABASE _supabase WITH OWNER postgres;
-- connect to _supabase database
\c _supabase
CREATE SCHEMA IF NOT EXISTS _analytics;
ALTER SCHEMA _analytics OWNER TO postgres;

CREATE SCHEMA IF NOT EXISTS _supavisor;
ALTER SCHEMA _supavisor OWNER TO postgres;

Fix for #2737, #2742:
This worked; I used the psql tool in pgAdmin to connect. Supabase starts normally and analytics is back online.

psql (16.0, server 15.1 (Ubuntu 15.1-1.pgdg20.04+1))
Type "help" for help.

postgres=> CREATE DATABASE _supabase WITH OWNER postgres;
CREATE DATABASE
postgres=> \c _supabase
psql (16.0, server 15.1 (Ubuntu 15.1-1.pgdg20.04+1))
You are now connected to database "_supabase" as user "postgres".
_supabase=> CREATE SCHEMA IF NOT EXISTS _analytics;
CREATE SCHEMA
_supabase=> ALTER SCHEMA _analytics OWNER TO postgres;
ALTER SCHEMA
_supabase=> CREATE SCHEMA IF NOT EXISTS _supavisor;
CREATE SCHEMA
_supabase=> ALTER SCHEMA _supavisor OWNER TO postgres;
ALTER SCHEMA
_supabase=> \q

On another project I fixed this by deleting all of the supabase_* Docker volumes so that supabase start reinitializes the db from /migrations and seed.sql. (Note that you'll lose your storage files doing this.)

A little breaking-change warning would have been nice, but in all fairness, following the recommended upgrade procedure in the docs would have prevented this.

@t1mmen

t1mmen commented Oct 12, 2024

This remains an issue on v1.204.3. Thanks to the comments above, I resolved it by:

  1. disable analytics in config.toml.
  2. $ supabase start
  3. $ supabase db reset
  4. $ supabase stop
  5. re-enable analytics in config.toml.
  6. $ supabase start

I did try manually creating the schema (after step #2), as described above, but that did not work for me.
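For reference, those six steps can be scripted roughly like this (a sketch only: recover_with_reset is a made-up name, the config.toml toggles are left as manual edits, and supabase db reset wipes all local data):

```shell
# Sketch of the recovery sequence above. The [analytics] enabled flag in
# supabase/config.toml is toggled by hand at the commented steps.
recover_with_reset() {
  # 1. edit supabase/config.toml: [analytics] enabled = false
  supabase start
  supabase db reset   # recreates the db from migrations + seed.sql (data loss!)
  supabase stop
  # 5. edit supabase/config.toml: [analytics] enabled = true
  supabase start
}
```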

@ThingEngineer
Author

@t1mmen Curious.
I think the initial issue is clearly upgrading the CLI without first stopping Supabase with the --no-backup flag (verified on other projects). And whether or not you have analytics enabled at that time seems to make a difference as well.

The group of various fixes all seem to work eventually, but in different combinations for different projects.

@t1mmen

t1mmen commented Oct 12, 2024

@ThingEngineer That sounds right. Our dev setup script (snippet below for reference) does stop the db, but uses supabase stop --backup behind the db:stop command.

async function ensureSupabaseInstalled() {
  const result = await checkCommand('supabase --version');

  if (result) {
    const version = result.match(/\d+\.\d+\.\d+/)?.[0] || 'unknown';

    if (semver.gte(version, '1.204.3')) {
      console.log(
        `🟢 ${chalk.bold('supabase')} is installed with an acceptable version (${version}).`
      );
    } else {
      console.log(
        `⚡ ${chalk.bold(
          'supabase'
        )} is installed but the version is too old (${version}). Updating...`
      );
      await runYarnTask('db:stop', 'stopping supabase before updating to latest version...');
      await execaCommand('brew upgrade supabase/tap/supabase', { stdio: 'inherit' });
      console.log(`✓ ${chalk.bold('supabase')} updated to a newer version.`);

      console.log(
        `⚡ ${chalk.bold('In case of issues')} you may need to wipe the database via ${chalk.bold(
          'yarn db:reset'
        )} -- sorry!`
      );
    }
  } else {
    console.log(`⚡ ${chalk.bold('supabase')} is not installed. Installing...`);
    await execaCommand('brew update', { stdio: 'inherit' });
    await execaCommand('brew install supabase/tap/supabase', { stdio: 'inherit' });
    console.log(`✓ ${chalk.bold('supabase')} installed.`);
  }
}

FWIW, it'd be nice if the "Backup and stop running containers" step were baked into the upgrade process in a nicer way (so as to not clutter up seed.sql, which might be accidentally committed).

PS: This is the first time I recall getting into a state which could not be resolved with a supabase db reset after upgrade, after almost 2 years using Supabase.

@ThingEngineer
Author

it'd be nice if "Backup and stop running containers" step was baked into upgrade process in a nicer way (so as to not clutter up seed.sql which might be accidentally committed).

Agreed, that is an issue when using branching. Just something us wildcard people will have to add to the cognitive load for the time being.

Also, same on the db reset after update. But having access to the code base makes for a pretty quick fix on most issues.

@marcelorl

I just can't believe this is an issue. OH MY.

@troyshu

troyshu commented Oct 31, 2024

Unfortunately @t1mmen's steps above didn't fix these errors for me. I had to delete the old Supabase Docker volumes, as a previous comment mentioned. I did that with the following commands:

docker volume ls --filter name=supabase_                          # list all volumes with the supabase_ prefix
docker volume rm $(docker volume ls --filter name=supabase_ -q)   # remove them
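Those two commands can be wrapped in a small guard so the volume list is printed before anything is deleted (a hypothetical helper, assuming Docker is on PATH; as noted above, this wipes local data, including storage files):

```shell
# List supabase_* volumes, show them, then remove them.
remove_supabase_volumes() {
  vols=$(docker volume ls --filter name=supabase_ -q)
  if [ -z "$vols" ]; then
    echo "no supabase_ volumes found"
    return 0
  fi
  echo "removing:"
  echo "$vols"
  # word splitting of $vols is intentional: one argument per volume name
  docker volume rm $vols >/dev/null
}
```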
