Revert proposed change state to open on merge failure #5564
Conversation
CodSpeed Performance Report: Merging #5564 will not alter performance.
Could you add a test that creates a proposed change for a branch with a data conflict and tries to merge it?
The thing is that we already have a test like that here: https://github.com/opsmill/infrahub/blob/infrahub-v1.1.4/backend/tests/integration/proposed_change/test_proposed_change_conflict.py#L149-L152. The problem in this case was that the current logic also checks for resolutions on the check object itself, and we're going to remove that, so I don't see the point of adding a test for when the resolution is solved that way. We'd need something else to cause the failure. Do you have any other thoughts as to what could cause one? Otherwise I'll merge this as is for now.
Hmm. Not really. Maybe you could patch …
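The suggestion above is truncated, but one plausible reading is to patch the merge entry point so it raises mid-merge. A minimal pytest sketch, with hypothetical names throughout: the patch target `infrahub.core.merge.BranchMerger.merge` and a `proposed_change` fixture exposing `merge()`/`refresh()` helpers are illustrations, not the actual infrahub API:

```python
from unittest.mock import patch

import pytest


async def test_merge_failure_reverts_state(proposed_change):
    # Hypothetical patch target: force the merge itself to blow up
    # partway through, independent of any data-conflict resolution.
    with patch(
        "infrahub.core.merge.BranchMerger.merge",
        side_effect=RuntimeError("simulated merge failure"),
    ):
        with pytest.raises(RuntimeError):
            await proposed_change.merge()

    await proposed_change.refresh()
    # The behaviour under review: a failed merge should leave the
    # proposed change back in "open", not stuck in "merging".
    assert proposed_change.state.value == "open"
```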
If we merged a proposed change and an error occurred, we previously didn't revert the state from "merging" back to "open"; this PR corrects that behaviour. Previously we only reverted the state if a user tried to merge a proposed change with data conflicts without providing a resolution to all of them. Fixes #5563
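The shape of the fix, as described, is to widen the revert so any merge error restores the state, not just the unresolved-conflict validation error. A minimal sketch of that control flow; the state values and the `perform_merge` helper are assumptions, not the actual backend code:

```python
async def merge_proposed_change(proposed_change, db):
    # Mark the proposed change as in-flight before starting the merge.
    proposed_change.state.value = "merging"
    await proposed_change.save(db=db)
    try:
        await perform_merge(proposed_change, db=db)  # hypothetical merge step
    except Exception:
        # Previously only the unresolved-data-conflict check landed here;
        # now any failure reverts the state instead of leaving "merging".
        proposed_change.state.value = "open"
        await proposed_change.save(db=db)
        raise
    proposed_change.state.value = "merged"
    await proposed_change.save(db=db)
```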
Force-pushed from cf15dfe to 2c6627c
I added a test, please take another look.
LGTM
```python
branch1 = await client.branch.create(branch_name="failing_branch")
steve = await Node.init(schema=TestKind.PERSON, db=db, branch=branch1.name)
await steve.new(db=db, name="Steve", height=178)
await steve.save(db=db)
```
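For context, the data conflict discussed earlier is typically produced by changing the same attribute to different values on the branch and on main after the branch was created. A hedged sketch in the style of the fixture above; it assumes a `person` node that already exists on main, and the `NodeManager.get_one` lookup is an assumption about the backend API:

```python
from infrahub.core.manager import NodeManager  # import path is an assumption

# Mutate the same attribute on both sides so the proposed change
# carries a data conflict when it is merged.
person_branch = await NodeManager.get_one(db=db, id=person.id, branch=branch1.name)
person_branch.height.value = 180
await person_branch.save(db=db)

person_main = await NodeManager.get_one(db=db, id=person.id)
person_main.height.value = 185
await person_main.save(db=db)
```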
Not sure if you still need this fixture.
I think it's mostly just following the current convention, where some of the test setup happens within a fixture; it matches what we do in the other tests. But I can move this code to the actual test instead if that's what you mean, since we typically don't reuse these fixtures?
I meant that I don't think this fixture is actually used in the test_merge_failure test, because that test seems to use the conflict_free branch and not the failing_branch branch created here. But I could be missing something.
Ah ok, I see the confusion. I named the proposed change "failing_branch" when I should really have used the failing_branch branch as its source. I think it's probably cleaner if I swap that part out so the branches don't interact with each other.
Force-pushed from 2c6627c to cd9d598
Note: Today we could reach a state where the proposed change was stuck in "merging" if there was a data conflict and the user tried to resolve it the old way, from within the Checks tab. I'm not certain how to create a scenario that ends up here, i.e. if we want a test that hits this path. Does anyone have any ideas about that?