
[DRAFT] Bump auction id on staging #2913

Closed
wants to merge 2 commits into from

Conversation

sunce86
Contributor

@sunce86 sunce86 commented Aug 20, 2024

Description

Fixes #2848

Will be reverted before Friday, as soon as I make sure all staging environments have bumped their ids.

@sunce86 sunce86 self-assigned this Aug 20, 2024
@sunce86 sunce86 requested a review from a team as a code owner August 20, 2024 10:11
Contributor

@MartinquaXD MartinquaXD left a comment


Why do we not just bump the auction_id sequence in the DBs manually?

@sunce86
Contributor Author

sunce86 commented Aug 21, 2024

Why do we not just bump the auction_id sequence in the DBs manually?

A few reasons:

  1. It's stressful 😄.
  2. We would have to do it manually for several networks, which is error-prone.
  3. We expose a get_auction API endpoint, so calling this endpoint before and after our manual update could yield different auction_ids for the same auction.
  4. There is a GitHub trace of when we made the change if we do it via a code change.

So, doing it this way was simple, fast and easy enough to drop the other options.

@MartinquaXD
Contributor

It's stressful 😄

While we shouldn't update the DB willy nilly there are ways to make this safe and we should all be able to do it if it's ever necessary for oncall stuff. We can do that together if you like.

Would have to do it manually for several networks which is error prone

Yeah, but on the other hand we'll have to do this PR for all future networks as well which seems not great.

We expose get_auction API endpoint, so calling this endpoint before and after our manual update could yield different auction_id for the same Auction.

I guess if you do it like in this PR, yes. But if you atomically bump the sequence number of the auction counter, the next auction will be created with the increased value without any race conditions.
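
For reference, an atomic bump like that could be done with a single statement (just a sketch; the sequence name auctions_id_seq is an assumption, the real name is defined by the migration files):

    -- Blocks concurrent nextval() calls on this sequence while it runs;
    -- the next nextval() returns the new value.
    ALTER SEQUENCE auctions_id_seq RESTART WITH 4611686018427387904;  -- ~ i64::MAX / 2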

There is a github trace when we made the change if we do it via code change.

I think this stuff should probably even be documented in the readme because it's not obvious why the IDs are so different across environments so I'd be fine with not having this PR. Besides the PR is already here and we are discussing it so there is already a trace in github. 😅

@sunce86
Contributor Author

sunce86 commented Aug 21, 2024

While we shouldn't update the DB willy nilly there are ways to make this safe and we should all be able to do it if it's ever necessary for oncall stuff. We can do that together if you like.

I assume you are referring to doing a "transaction-like" query, and maybe even having the SQL query reviewed by other team members before applying it. Still not much better than this PR.

Yeah, but on the other hand we'll have to do this PR for all future networks as well which seems not great.

The same applies to manual intervention.

I guess if you do it like in this PR, yes. But if you atomically bump the sequence number of the auction counter, the next auction will be created with the increased value without any race conditions.

But how can you avoid having the same auction appear with two different auction ids if you have two racing write operations: the next_auction call and the manual intervention? This issue is just the obvious one; maybe there are others?

I think this stuff should probably even be documented in the readme because it's not obvious why the IDs are so different across environments so I'd be fine with not having this PR. Besides the PR is already here and we are discussing it so there is already a trace in github. 😅

It does make sense to split this issue into two:

  1. How to update existing networks.
  2. How to make sure new networks follow the same rule in the future.

This PR does not tackle (2). Maybe we should create an issue for (2) and add a configuration parameter for the auction_id starting number per environment.

Edit: after some thinking, I should probably adjust this PR and make sure both (1) and (2) are resolved with it.

@MartinquaXD
Contributor

But how can you avoid having the same auction appear with two different auction ids if you have two racing write operations: the next_auction call and the manual intervention? This issue is just the obvious one; maybe there are others?

There are no races when updating sequence counters because they are atomic. And in fact, your suggestion does not move the counter at all, so new auctions will keep being created in the original counter range.
What would happen is this:

  1. counter is at 123
  2. insert new auction which automatically increases the counter to 124 and stores the auction with id 123
  3. update latest auction to have id i64::MAX / 2
  4. counter is still 124
  5. insert new auction which automatically increases the counter to 125 and stores the auction with id 124
  6. update the latest auction to have id i64::MAX / 2, which is a conflict and still doesn't change the sequence counter to the desired value.
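
The steps above as a hypothetical SQL session (table and sequence names are assumptions):

    -- sequence currently at 123
    INSERT INTO auctions DEFAULT VALUES;  -- row stored with id 123, sequence advances to 124
    UPDATE auctions
       SET id = 4611686018427387903       -- i64::MAX / 2
     WHERE id = 123;
    -- the UPDATE never touches the sequence, so:
    INSERT INTO auctions DEFAULT VALUES;  -- row stored with id 124, not the bumped value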

For example, adding this to your test will fail, because the new id is actually 8 instead of 101.

        let id_ = save(&mut db, &value).await.unwrap();
        assert_eq!(101, id_);

@sunce86
Contributor Author

sunce86 commented Aug 21, 2024

Good catch. I've updated the query so that it restarts the sequence number. With that, the racing argument indeed no longer stands, as the id is changed only once the auction is replaced.

Still thinking this should be resolved with code as we need to support new networks in the future, so will add the configuration soon.

@sunce86 sunce86 changed the title Bump auction id on staging [DRAFT] Bump auction id on staging Aug 21, 2024
@MartinquaXD
Contributor

I still think this is nothing the business logic needs to concern itself with. The repo code does not have any control over the fact that there might be multiple environments so IMO it should also not try to adjust the counters based on that.
To me this strictly seems like a DB configuration thing.

@sunce86
Contributor Author

sunce86 commented Aug 21, 2024

I still think this is nothing the business logic needs to concern itself with. The repo code does not have any control over the fact that there might be multiple environments so IMO it should also not try to adjust the counters based on that.

The db migration files are part of this repo, and those create the auctions table and set the sequence number. Why shouldn't updating this number be part of the repo as well?

We also have other configuration parameters that are set differently by different environments and that the repo code reacts to.

If you really want to avoid doing code for this then what can we do?

  1. Update manually for existing networks.
  2. Add a special step when deploying a new network to manually update the sequence for the staging environment?
    This is bad if someone forgets to do (2), since the code assumes auction ids never overlap.

@fleupold
Contributor

The fact that this code had an issue originally shows, IMO, that it's not as trivial as it looks, and I kind of agree that doing this hacky one-off backend change feels like a weird way to achieve the desired outcome. But I also understand the concern of doing this on 4 chains (and potentially more in the future).

How about adding a DB migration which sets the sequence number in case we specify a placeholder parameter? The parameter could be unset by default (so locally and in production we don't change the sequence number), but in staging we pass it in via a command line argument or env var here.

This way it would work for all networks (also new ones) automatically, we would have a GitHub trace, and still wouldn't butcher the main services logic.
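
A sketch of what such a migration could look like (names and the templating mechanism are made up; DDL does not accept bind parameters, so the placeholder would have to be substituted, e.g. via a psql variable or templating, before the migration is applied):

    -- Applied only when the environment provides auction_id_start;
    -- skipped otherwise, so local and production sequences stay untouched.
    ALTER SEQUENCE auctions_id_seq RESTART WITH :auction_id_start;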


This pull request has been marked as stale because it has been inactive a while. Please update this pull request or it will be automatically closed.

@github-actions github-actions bot added the stale label Aug 30, 2024
@github-actions github-actions bot closed this Sep 7, 2024
Successfully merging this pull request may close these issues.

chore: Bump staging auction_id to avoid clashing of auction ids in different environments
3 participants