Specify criteria for assessing tests and results #7

Open
csarven opened this issue May 18, 2022 · 4 comments
csarven commented May 18, 2022

This issue is an ACTION from https://github.com/solid/test-suite-panel/blob/main/meetings/2022-05-13.md#proposal-3-specify-criteria-for-assessing-tests-and-results, due 2022-05-27, to document a mutual understanding (see #considerations) towards the Test Suite Panel Charter proposal.

Related issues:

Considerations:

  1. What's the test review policy?
  2. What's the test review checklist?
  3. ...
@michielbdejong

Is this about which tests we may or may not run/trust in the process of composing the reports on https://solidservers.org, or about which tests in specification-tests we will mark as 'approved'?

I think we can run pretty much any test in the process of composing the reports on https://solidservers.org, as long as we manually check and verify any test failures; by that I mean actually creating a "minimal steps to reproduce" document, using e.g. curl. Even if there were a bug in the test, a bug in a curl command should be easy to spot in a public discussion.
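To make the idea concrete, here is a sketch of what a *scripted* minimal reproduction might look like. Everything in it is illustrative: a local stub stands in for a real Solid server, and the resource path and expected status code are assumptions, not taken from any actual test failure. The curl equivalent of the request is shown in a comment.

```python
# Sketch of a scripted "minimal steps to reproduce" for a failing test.
# A local stub server stands in for a real Solid pod; the endpoint and
# the expected 401 response are illustrative assumptions only.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubPod(BaseHTTPRequestHandler):
    def do_PUT(self):
        # Consume the request body, then simulate a server that
        # rejects unauthenticated writes.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(401)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the reproduction output quiet

server = HTTPServer(("127.0.0.1", 0), StubPod)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Step 1 of the reproduction: attempt the write the test performs.
# curl equivalent (hypothetical URL):
#   curl -X PUT -H "Content-Type: text/turtle" \
#        -d "<> a <#Thing> ." http://127.0.0.1:PORT/test/resource.ttl
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("PUT", "/test/resource.ttl", body="<> a <#Thing> .",
             headers={"Content-Type": "text/turtle"})
status = conn.getresponse().status
print(f"PUT /test/resource.ttl -> {status}")
server.shutdown()
```

The point of such a document is that each step is a single, inspectable request, so a bug in the reproduction itself is easy to spot in public discussion.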

Regarding which tests in specification-tests to mark as 'approved', yes, let's define a procedure for that. This should ultimately involve the spec editors as approvers, IIUC.

@edwardsph (Contributor)

See proposed process for specification-tests: https://github.com/solid-contrib/specification-tests/blob/main/CONTRIBUTING.md

@csarven (Member, Author)

csarven commented May 27, 2022

Status of this comment: Draft

  1. Review policy:
  • The test review has a URI and its contents are publicly accessible when dereferenced.
  • The test reviewer can be anyone (other than the original test author) that has the required experience with the specification. TBD whether at least one reviewer must be the author of the specification.
  2. Review checklist:
  • The test has a URI and its contents are publicly accessible when dereferenced.
  • The test links to specification requirements.
  • The CI jobs on the pull request have passed. (TBD)
  • It is obvious what the test is trying to test.
  • The test passes when it’s supposed to pass.
  • The test fails when it’s supposed to fail.
  • The test is testing what it thinks it’s testing.
  • The specification backs up the expected behaviour in the test.
  • The test is automated as - TBD - unless there’s a very good reason for it not to be.
  • The test does not use external resources. (TBD)
  • The test does not use proprietary features (vendor-prefixed or otherwise).
  • The test does not contain commented-out code.
  • The test is placed in the relevant location.
  • The test has a reasonable and concise (file)name.
  • If the test needs to be run in some non-standard configuration or needs user interaction, it is a manual test.
  • The title is descriptive but not too wordy.
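As a rough illustration of several checklist items at once (links to a requirement, obvious intent, passes when it should, would fail when it should), a test skeleton could look like the following. The requirement URL and the behaviour under test are hypothetical, invented purely for this sketch; real tests in specification-tests follow the format defined in that repository.

```python
# Hypothetical skeleton for a spec test that satisfies several checklist
# items: it links to the requirement it claims to cover, states its
# intent, and asserts both the expected and the rejected behaviour.
# The requirement URL and the function under test are illustrative only.

# Requirement (assumed): https://solidproject.org/TR/protocol#example-requirement
# Intent: a resource name containing a space must be percent-encoded.

from urllib.parse import quote

def encode_resource_name(name: str) -> str:
    # Stand-in for the implementation behaviour being tested.
    return quote(name, safe="")

def test_encodes_space():
    # Passes when it's supposed to pass ...
    assert encode_resource_name("my file.ttl") == "my%20file.ttl"
    # ... and would fail if the encoding were skipped:
    assert encode_resource_name("my file.ttl") != "my file.ttl"

test_encodes_space()
print("test_encodes_space: ok")
```

A reviewer working through the checklist can then verify each point directly against the file: the requirement link, the stated intent, and the pair of assertions that pin down both the passing and the failing behaviour.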

@edwardsph (Contributor)

That looks good to me, but I realise my previous comment was related to process and this is focussed on criteria (as the issue title says). However, I think we need to capture the process too - is that a separate issue?

@csarven csarven added the documentation Improvements or additions to documentation label Jan 17, 2023
@csarven csarven self-assigned this Jan 17, 2023