
Store check result modified during interpretation #3540

Open · wants to merge 1 commit into main
Conversation


@psss psss commented Feb 18, 2025

When presenting results to the user, it should be clear at first sight which check did not fail as expected and which checks are informational only.

Pull Request Checklist

  • implement the feature
  • extend the test coverage
  • include a release note

@martinhoyer

This doesn't feel right to me. I'm not sure I can pinpoint why, but for one, I presume this will break when multiple checks of the same name are present. Also, there are before/after phases, and users don't specify the expected outcome per phase.

Hard to wrap my head around this, but if I understand it correctly, changing check.result like this could mean that when 1 out of 4 things fails, we convert all four to fail. As it is now, the results would stay and only the actual test result would be affected, with a note added explaining why.

I'm guessing the pain point is the disconnect between Test Result and Check Result: in Test Result we simply flip it (and create original_outcome), while in Check Result it's currently not quite possible to treat it the same way.
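The Test Result flipping mentioned here could be sketched roughly like this (a minimal illustration; the `interpret` helper, the `Result` shape and the outcome names only loosely follow the discussion, not the actual tmt code):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ResultOutcome(Enum):
    PASS = "pass"
    FAIL = "fail"
    INFO = "info"


@dataclass
class Result:
    outcome: ResultOutcome
    original_outcome: Optional[ResultOutcome] = None
    note: Optional[str] = None


def interpret(result: Result, expected: ResultOutcome) -> Result:
    """Flip the outcome based on what the user expected,
    keeping the original outcome around for reporting."""
    result.original_outcome = result.outcome
    if expected == ResultOutcome.INFO:
        # Informational only: never affect the overall result
        result.outcome = ResultOutcome.INFO
        result.note = "informational only"
    elif expected == ResultOutcome.FAIL:
        if result.outcome == ResultOutcome.FAIL:
            result.outcome = ResultOutcome.PASS
            result.note = "failed as expected"
        else:
            result.outcome = ResultOutcome.FAIL
            result.note = "did not fail as expected"
    return result
```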

iirc, there are more questions...

  • Will there be checks which do different things in the before and after phases? E.g. coredump/abrt/kdump, where before-test is a "setup" and after-test is the actual check whether something went wrong?
  • Even if not, users would still benefit from being able to set the expected outcome for each phase
  • If the expected outcome is set for the check as a whole and only one phase fails, I don't think we should flip the results of both phases; it's just... wrong? :)
    Oh, and what about the scenario where the before-test dmesg check fails, which clears the logs, and then after the test the check passes, but everything is red?

Maybe I'm missing something, but couldn't most of this be solved by adding a Check.SubResult or PhaseResult (alongside moving away from using the name as an identifier)?
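One possible shape for the Check.SubResult / PhaseResult idea could look like this (purely hypothetical; the class names and the "fail if any phase fails" aggregation rule are assumptions from this thread, not the actual tmt API):

```python
from dataclasses import dataclass, field


@dataclass
class PhaseResult:
    """Outcome of one check phase, e.g. before-test or after-test."""
    phase: str
    outcome: str  # "pass" / "fail" / ...


@dataclass
class CheckResult:
    """A check with per-phase subresults instead of one flat outcome."""
    name: str
    phases: list[PhaseResult] = field(default_factory=list)

    @property
    def outcome(self) -> str:
        # Aggregate: the whole check fails if any phase fails
        return "fail" if any(p.outcome == "fail" for p in self.phases) else "pass"
```

With per-phase subresults, flipping (or ignoring) only the before-test outcome becomes a local operation on one PhaseResult rather than a rewrite of the whole check.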

happz commented Feb 19, 2025

> This doesn't feel right to me. I'm not sure I can pinpoint why, but for one, I presume this will break when multiple checks of the same name are present. Also, there are before/after phases, and users don't specify the expected outcome per phase.

> Hard to wrap my head around this, but if I understand it correctly, changing check.result like this could mean that when 1 out of 4 things fails, we convert all four to fail. As it is now, the results would stay and only the actual test result would be affected, with a note added explaining why.

> I'm guessing the pain point is the disconnect between Test Result and Check Result: in Test Result we simply flip it (and create original_outcome), while in Check Result it's currently not quite possible to treat it the same way.

> iirc, there are more questions...

> • Will there be checks which do different things in the before and after phases? E.g. coredump/abrt/kdump, where before-test is a "setup" and after-test is the actual check whether something went wrong?

The AVC check does "setup" before the test and checks the collected events after the test. But it does not emit a "before" result, although I would like it to return a "skip" result if SELinux is not available (#3498).

> • Even if not, users would still benefit from being able to set the expected outcome for each phase

Maybe. My view: I, for one, consider the "before" and "after" stages a single check; if one of them fails, the check fails.

> • If the expected outcome is set for the check as a whole and only one phase fails, I don't think we should flip the results of both phases; it's just... wrong? :)

IMHO same as above: I would expect the check to be marked as failed, with a nice note.

> Oh, and what about the scenario where the before-test dmesg check fails, which clears the logs, and then after the test the check passes, but everything is red?

IMHO same as above: I would expect the check to be marked as failed, with a nice note.

> Maybe I'm missing something, but couldn't most of this be solved by adding a Check.SubResult or PhaseResult (alongside moving away from using the name as an identifier)?

Maybe. Having one bucket to report both "before" and "after" content would simplify the evaluation and the questions above, although it would not help with the name as an identifier, as a test could still have multiple checks of the same kind.

We really need that unit test to collect all combinations and corner cases and verify what we expect to happen. We are discussing various combinations and it's easy to get lost; with a test, we could point a finger at one combination and state "this one does not feel right".
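Such a test could start from a simple truth table enumerating the combinations (a sketch only; the `interpret` helper and the expected values in the table are assumptions drawn from this thread, not the actual tmt behaviour):

```python
def interpret(actual: str, expected: str) -> str:
    """Illustrative only: map an actual check outcome and the user's
    expected outcome to the reported outcome."""
    if expected == "info":
        return "info"
    if expected == "fail":
        return "pass" if actual == "fail" else "fail"
    return actual


# One row per combination: (actual, expected, reported)
CASES = [
    ("pass", "pass", "pass"),
    ("fail", "pass", "fail"),
    ("fail", "fail", "pass"),   # failed as expected
    ("pass", "fail", "fail"),   # did not fail as expected
    ("fail", "info", "info"),   # informational only
]

for actual, expected, reported in CASES:
    assert interpret(actual, expected) == reported
```

Any combination that "does not feel right" then becomes a single disputed row in the table rather than a thread of hypotheticals.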

@martinhoyer

> Maybe. My view: I, for one, consider the "before" and "after" stages a single check; if one of them fails, the check fails.

That would be fine, except for use cases like the feedback we got where a user wants to ignore the before-test dmesg failure as it creates false positives.

Still, without check subresults, each phase is a check? So while we have an outcome for the check as a whole, we are flipping phase.result? (For lack of better words.)

> We really need that unit test to collect all combinations and corner cases and verify what we expect to happen. We are discussing various combinations and it's easy to get lost; with a test, we could point a finger at one combination and state "this one does not feel right".

+1
