[DRAFT] Bump auction id on staging #2913
Conversation
Why do we not just bump the `auction_id` sequence in the DBs manually?
A few reasons:
So, doing it this way was simple, fast, and easy enough to drop the other options.
While we shouldn't update the DB willy-nilly, there are ways to make this safe, and we should all be able to do it if it's ever necessary for on-call work. We can do that together if you like.
Yeah, but on the other hand we'll have to do a PR like this for every future network as well, which seems not great.
I guess if you do it like in this PR, yes. But if you atomically bump the sequence number of the auction counter, the next auction will be created with the increased value without any race conditions.
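For illustration, an atomic bump could be sketched like this, assuming a PostgreSQL sequence; `auctions_id_seq` is a hypothetical name, so check the actual schema first:

```sql
-- Sketch only: 'auctions_id_seq' is a hypothetical sequence name.
-- setval() updates the sequence in a single atomic step, so a
-- concurrent INSERT either draws an id before the bump (old range)
-- or after it (new range); the same auction can never get two ids.
SELECT setval('auctions_id_seq', 100000, false);
-- 'false' means the next nextval() call returns 100000 itself.
```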
I think this should probably even be documented in the README, because it's not obvious why the IDs are so different across environments, so I'd be fine with not having this PR. Besides, the PR is already here and we are discussing it, so there is already a trace in GitHub. 😅
I assume you are referring to doing a "transaction-like" query, and maybe even having the SQL reviewed by other team members before applying it. That's still not much better than this PR.
The same stands for manual intervention.
But how can you avoid having the same auction appear with two different auction IDs if you have two writing operations that race?
It does make sense to split this issue into two:
This PR does not tackle (2). Maybe we should create an issue for (2) and add a configuration parameter for the `auction_id` starting number per environment. Edit: after some thinking, I should probably adjust this PR and make sure both (1) and (2) are resolved by it.
There are no races when updating sequence counters because the update is atomic. And in fact, your suggestion will not prevent new auctions from being created in the original counter range once the PR gets reverted.
For example, adding this to your test will fail because the new id is actually 8 instead of 101:

```rust
let id_ = save(&mut db, &value).await.unwrap();
assert_eq!(101, id_);
```
Good catch. I've updated the query so that it restarts the sequence number. With that, the racing argument indeed doesn't stand anymore, as the id is changed only once the auction is replaced. I still think this should be resolved in code, since we need to support new networks in the future, so I'll add the configuration soon.
I still think this is nothing the business logic needs to concern itself with. The repo code does not have any control over the fact that there might be multiple environments, so IMO it should also not try to adjust the counters based on that.
The DB migration files are part of this repo, and those create the sequence in the first place. We also have other configuration parameters that are set differently per environment and that the repo code reacts to. If you really want to avoid doing this in code, then what can we do?
The fact that this code had an issue originally shows IMO that it's not as trivial, and I kind of agree that doing this hacky one-off backend change feels like a weird way to achieve the desired outcome. But I also understand the concern of doing this on 4 chains (and potentially more in the future). How about adding a DB migration which sets the sequence number in case we specify a placeholder parameter? The parameter could be unset by default (so locally and in production we don't change the sequence number), but in staging we pass it in via command line arguments or an env var. This way it would work for all networks (also new ones) automatically, we would have a GitHub trace, and still wouldn't butcher the main services logic.
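One way such a migration could be sketched, assuming PostgreSQL and a custom setting injected by the deployment; the setting name and sequence name are made up for illustration:

```sql
-- Sketch only: names are hypothetical. The deployment would pass the
-- start value as a custom setting, e.g.
--   PGOPTIONS="-c app.auction_id_start=100000"
DO $$
BEGIN
  -- current_setting(..., true) returns NULL instead of raising an
  -- error when the setting is absent, so deployments that don't set
  -- it (local, production) make this migration a no-op.
  IF NULLIF(current_setting('app.auction_id_start', true), '') IS NOT NULL THEN
    PERFORM setval('auctions_id_seq',
                   current_setting('app.auction_id_start', true)::bigint,
                   false);
  END IF;
END $$;
```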
This pull request has been marked as stale because it has been inactive for a while. Please update this pull request or it will be automatically closed.
Description
Fixes #2848
Will be reverted before Friday, as soon as I make sure all staging environments have bumped their IDs.