External Postgres Database Setup Issues #49304
Update: I found this video (https://www.youtube.com/watch?v=YX8wyrlLKNU) and realized that with a fresh Postgres install I needed to manually create the databases:
CREATE DATABASE airbyte WITH OWNER = postgres;
CREATE DATABASE temporal WITH OWNER = postgres;
CREATE DATABASE temporal_visibility WITH OWNER = postgres;
(where the owner is the same user set in the secret). I still get an error from Helm, but the app is reachable when I port forward the service. The Helm error is:
Error: INSTALLATION FAILED: failed post-install: 1 error occurred:
* timed out waiting for the condition
Note: the RDS instance is running Postgres version 13.15. I also updated my Helm chart to 1.3.0, but I got the same error until I created the three databases. However, this still doesn't address the issue of repointing the external database to a Postgres RDS database that was created from a restore of a Postgres database previously running in the Airbyte cluster.
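For reference, creating those three databases from the command line can be done with something like this (same RDS host placeholder as the commands further down; connecting as the postgres user to the default postgres maintenance database is an assumption from my setup):
# create the three databases on the external instance before installing the chart
psql -h <rds db name>.<rds id>.<rds region>.rds.amazonaws.com -U postgres -d postgres -c "CREATE DATABASE airbyte WITH OWNER = postgres;"
psql -h <rds db name>.<rds id>.<rds region>.rds.amazonaws.com -U postgres -d postgres -c "CREATE DATABASE temporal WITH OWNER = postgres;"
psql -h <rds db name>.<rds id>.<rds region>.rds.amazonaws.com -U postgres -d postgres -c "CREATE DATABASE temporal_visibility WITH OWNER = postgres;"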
I think part of the issue was that the Postgres versions were not matching with RDS, but the Postgres version isn't specified in the docs. Using Postgres 13.15 worked for me. I was able to back up and restore from one instance to another by copying over each database individually.
First, manually create the new databases:
CREATE DATABASE airbyte WITH OWNER = postgres;
CREATE DATABASE temporal WITH OWNER = postgres;
CREATE DATABASE temporal_visibility WITH OWNER = postgres;
Then back up from the on-prem/old database and restore into the new external database:
pg_dump -h <rds db name>.<rds id>.<rds region>.rds.amazonaws.com -U postgres airbyte > airbyte.sql
pg_dump -h <rds db name>.<rds id>.<rds region>.rds.amazonaws.com -U postgres temporal > temporal.sql
pg_dump -h <rds db name>.<rds id>.<rds region>.rds.amazonaws.com -U postgres temporal_visibility > temporal_visibility.sql
# create the dbs first
psql -h <new rds db name>.<rds id>.<rds region>.rds.amazonaws.com -U postgres -d airbyte -f airbyte.sql
psql -h <new rds db name>.<rds id>.<rds region>.rds.amazonaws.com -U postgres -d temporal -f temporal.sql
psql -h <new rds db name>.<rds id>.<rds region>.rds.amazonaws.com -U postgres -d temporal_visibility -f temporal_visibility.sql
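If the old database is the bundled in-cluster Postgres, one way to reach it for the pg_dump step is through a port forward; a rough sketch, where the service name, namespace, and airbyte user are assumptions from my install and may differ:
# forward the in-cluster Postgres to a local port (service/namespace names are assumptions)
kubectl port-forward svc/airbyte-db-svc -n <namespace> 5433:5432
# in a second terminal, dump each database through the forwarded port
pg_dump -h localhost -p 5433 -U airbyte airbyte > airbyte.sql
pg_dump -h localhost -p 5433 -U airbyte temporal > temporal.sql
pg_dump -h localhost -p 5433 -U airbyte temporal_visibility > temporal_visibility.sql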
Helm Chart Version
1.1.0
What step the error happened?
On deploy
Relevant information
I've been trying to point our Airbyte instance to an external Postgres database with no luck. To troubleshoot, I've tried restoring the external database from an existing Airbyte Postgres database, and I've also tried a new blank external database. Neither is working for me, and they result in different errors/issues.
When specifying a new blank external database in the Helm chart, the database gets populated with the Airbyte tables (state, stream_stats, jobs, etc.), but the deployment fails. The bootloader pod does not throw an error and ends up in the Completed status, but the next step of the process gets stuck somewhere and no other deployment pods get created.
When specifying an external database in the Helm chart that has been restored from a backup, the whole stack spins up, but I get an error when testing out the web app.
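To see where it stalls, checking pod status and the bootloader logs with something like the following can help (the namespace and pod names below are placeholders, not the exact ones from my release):
# check which pods actually got created and their status
kubectl get pods -n <namespace>
# tail the bootloader logs to confirm the schema setup/migrations finished
kubectl logs -n <namespace> <airbyte-bootloader-pod-name> --tail=200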
In both cases I'm following the Airbyte documentation (https://docs.airbyte.com/deploying-airbyte/integrations/database), but there isn't any information about whether anything else needs to be set up on the Postgres side.
Config changes
Secrets were created first (objects do get created in the external database, so I don't think secrets/connectivity is the issue):
Updated fields in values.yaml (the rest of the values are just the defaults):
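Roughly, the secret was created like this (the secret name and key names are illustrative placeholders standing in for the real ones referenced in my values.yaml):
# external database credentials secret referenced from values.yaml (names/keys are placeholders)
kubectl create secret generic <db-secret-name> -n <namespace> \
  --from-literal=database-user=postgres \
  --from-literal=database-password=<password>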
When I try to restore from an existing Airbyte deployment into a new database, there seems to be some sort of communication issue. All of the pods spin up, but the frontend throws an error. This is what I see when I port forward the web app service to localhost:8000 and try to access the app there:
I'm not sure why it wouldn't be able to reach the server, since I can do the same thing without the external database info in values.yaml and it works.
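The port forward itself is the usual one (the web app service name and port are from my install and may vary by chart version):
# forward the web app service, then browse to http://localhost:8000
kubectl port-forward svc/airbyte-airbyte-webapp-svc -n <namespace> 8000:80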
Other Notes:
Relevant log output