Postgres DB wiped after connection error #166

Open
coolibre opened this issue Jan 12, 2024 · 1 comment
Labels
bug (Something isn't working), question (Further information is requested)

Comments

coolibre (Contributor) commented Jan 12, 2024

Summary

We had a database connection problem that was caused by something else. After the connection problem was resolved and the freescout container had been restarted a few times for debugging, the database was re-migrated with a fresh setup and our existing data was overwritten. Fortunately this happened only on a test system.

Steps to reproduce

This is hard to reproduce, but it seems to involve a database that is not reachable while the freescout container is in its startup phase. At some point, when database connectivity is re-established, the data is wiped.
It seems to require very unlucky timing, as I could not reproduce it by hand again. Because psql in the startup script has retry behaviour, I did not manage to trigger a wipe while trying to reproduce it. The original incident may have happened in a slightly different way, but I don't know exactly how.

What is the expected correct behavior?

The database connection should simply be re-established. I can live with a manual container restart for now, but data should never be wiped if the database already contained a working freescout schema.

Relevant logs and/or screenshots


Environment

Kubernetes, Cloud Provided RDS Service, Postgres 12

Possible fixes

In the startup script I can identify the following problem.

### Check to see if DB is populated
if [ "${DB_TYPE}" = "mysql" ]; then
  mysql -u $DB_USER -p$DB_PASS -h$DB_HOST -P$DB_PORT -e "use "$DB_NAME"; SELECT * FROM users;" > /dev/null 2>&1
elif [ "${DB_TYPE,,}" = "pgsql" ] || [ "${DB_TYPE,,}" = "postgres" ]; then
  export PGPASSWORD="${DB_PASS}"
  psql --dbname=postgresql://$DB_USER@$DB_HOST:$DB_PORT/$DB_NAME -c "SELECT * FROM users;" > /dev/null 2>&1
fi
 
if [ $? -gt 0 ]; then
## migrate with fresh state

Maybe in my case there was no SQL error but a connection-related problem or even a timeout, which is handled the same way as a failed query: with a fresh migration afterwards. I don't know yet how I got past the earlier check, since you use pg_isready in the alpine startup, but it happened somehow. Maybe the later Laravel migration also has retry logic and, with unlucky timing, my database came back exactly during these queries.
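For illustration, this is my understanding of why the two situations look the same to the check above (host names are made up, and the exit codes are what I would expect from psql, not something I verified against the startup script):

# Case 1: database host unreachable (connection error / timeout)
psql --dbname=postgresql://freescout@db-unreachable:5432/freescout -c "SELECT * FROM users;" > /dev/null 2>&1
echo $?   # typically 2 (connection to the server failed)

# Case 2: database reachable but not yet populated (no users table)
psql --dbname=postgresql://freescout@db-host:5432/freescout -c "SELECT * FROM users;" > /dev/null 2>&1
echo $?   # typically 1 (SQL error: relation "users" does not exist)

# Both exit codes are greater than 0, so the "[ $? -gt 0 ]" branch above
# falls through to the fresh migration in either case, because all output
# is discarded and only the exit status is inspected.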

psql has different exit statuses which can also indicate connection problems. Maybe this SQL check is not safe enough, and it would be better to check the result count or, in case of an error, not to start the fresh migration right away. A rough sketch of what I mean is below.
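A minimal sketch of a safer check for the pgsql branch (just an idea, not the actual startup script; the retry loop, the error matching and the variable names are assumptions on my part):

### Hypothetical safer "is DB populated?" check (pgsql branch only)
export PGPASSWORD="${DB_PASS}"
DB_URI="postgresql://${DB_USER}@${DB_HOST}:${DB_PORT}/${DB_NAME}"

# Wait until the server is actually reachable instead of treating a
# connection error like an empty database.
for i in $(seq 1 30); do
  pg_isready --dbname="${DB_URI}" > /dev/null 2>&1 && break
  sleep 2
done

# Keep the error output instead of discarding it, so a missing schema can
# be told apart from any other failure.
QUERY_ERR=$(psql --dbname="${DB_URI}" -c "SELECT COUNT(*) FROM users;" 2>&1 > /dev/null)
QUERY_RC=$?

if [ ${QUERY_RC} -eq 0 ]; then
  : # schema exists, skip the fresh migration
elif echo "${QUERY_ERR}" | grep -q 'relation "users" does not exist'; then
  : # database reachable but empty, safe to run the fresh migration
else
  echo "Cannot verify database state (${QUERY_ERR}), refusing to migrate"
  exit 1
fi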

coolibre added the "bug (Something isn't working)" label on Jan 12, 2024
coolibre changed the title from "Postres DB wiped after connection error" to "Postgres DB wiped after connection error" on Jan 12, 2024
tiredofit (Owner) commented
Sorry to hear that you have experienced data loss.

I'm struggling to see where the issue could lie with a simple check for a table's existence, but I won't rule it out; perhaps a connection error occurred which allowed the actual execution of the artisan:migrate command.

Let's keep this open to see if there is any more news, or other reports of similar issues, so we can attempt to understand in detail what is happening.

tiredofit added the "question (Further information is requested)" label on Jan 12, 2024