Support test coverage analysis #355

Open
@hendrikvanantwerpen

Description

We should have a coverage command that tells us

  1. whether we test all TSG rules, and
  2. whether all tests are relevant.

A sketch of how we could do this:

  • Add tracing support to TSG so we can track which statements are executed. The rest of this scheme becomes easier if the tracer can also control execution, e.g., by returning a value indicating that the current statement should be skipped (see the tracer sketch after this list).

  • Run the test suite against the full set of TSG rules. All tests should succeed. Record which tests trigger which stanzas and statements.

    • If a stanza or statement is never executed, report it as untested.
  • Rerun the test suite against modified TSG rules in which a single edge or attr statement is skipped. As an optimization, run only the tests that actually hit the skipped statement (see the driver sketch after this list).

    • If none of those tests fail, report the statement as untested / irrelevant. Otherwise, record which test(s) failed.
    • If stack graph construction fails, report the skipped statement as untestable. This can happen if attributes that are required together are split across separate attr statements.
  • Report any test that never failed in any of the modified runs as irrelevant.
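
To make the tracing idea concrete, here is a minimal sketch of what a statement-level tracing hook could look like. This is an assumption about a future API, not something tree-sitter-graph exposes today; `Tracer`, `StatementId`, `TraceAction`, `CoverageTracer`, and `SkipOneTracer` are all hypothetical names.

```rust
// Hypothetical tracing hook for TSG execution. None of these names exist in
// tree-sitter-graph today; they sketch what a tracing API could look like.

use std::collections::HashSet;

/// Identifies a statement by its stanza and its position within that stanza.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub struct StatementId {
    pub stanza: usize,
    pub statement: usize,
}

/// What the tracer asks the executor to do with the current statement.
pub enum TraceAction {
    /// Execute the statement normally.
    Execute,
    /// Skip the statement (used by the mutated reruns described above).
    Skip,
}

/// Implemented by the coverage tool; the TSG executor would call this
/// before running each edge/attr/other statement.
pub trait Tracer {
    fn on_statement(&mut self, id: StatementId) -> TraceAction;
}

/// Pass 1: a tracer that only records which statements were reached.
#[derive(Default)]
pub struct CoverageTracer {
    pub executed: HashSet<StatementId>,
}

impl Tracer for CoverageTracer {
    fn on_statement(&mut self, id: StatementId) -> TraceAction {
        self.executed.insert(id);
        TraceAction::Execute
    }
}

/// Pass 2: a tracer that skips exactly one statement per run.
pub struct SkipOneTracer {
    pub skipped: StatementId,
}

impl Tracer for SkipOneTracer {
    fn on_statement(&mut self, id: StatementId) -> TraceAction {
        if id == self.skipped {
            TraceAction::Skip
        } else {
            TraceAction::Execute
        }
    }
}
```

Letting the tracer return an action covers both passes with a single hook: the first pass only records coverage, and the mutated reruns skip one statement at a time without having to generate modified TSG files.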

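Building on the hypothetical types above, the coverage driver could then be a two-pass loop over the test suite. `run_test`, `TestOutcome`, and `analyze_coverage` are assumed placeholders for however the existing test runner gets wired in; the classification mirrors the steps listed above.

```rust
// Sketch of the coverage driver, building on the hypothetical Tracer types above.
// `run_test` and `TestOutcome` stand in for the real test runner.

use std::collections::{HashMap, HashSet};

pub enum TestOutcome {
    Passed,
    Failed,
    /// Stack graph construction itself failed, e.g. because attributes that are
    /// required together were split across statements and one was skipped.
    ConstructionError,
}

/// Placeholder: runs a single test file with the given tracer installed.
fn run_test(test: &str, tracer: &mut dyn Tracer) -> TestOutcome {
    unimplemented!("wire this up to the existing test runner")
}

pub fn analyze_coverage(tests: &[String], statements: &[StatementId]) {
    // Pass 1: run against the full rules; every test is expected to pass.
    // Record which tests hit which statements.
    let mut hits: HashMap<StatementId, Vec<String>> = HashMap::new();
    for test in tests {
        let mut tracer = CoverageTracer::default();
        assert!(matches!(run_test(test, &mut tracer), TestOutcome::Passed));
        for id in tracer.executed {
            hits.entry(id).or_default().push(test.clone());
        }
    }
    for id in statements {
        if !hits.contains_key(id) {
            println!("untested (never executed): {:?}", id);
        }
    }

    // Pass 2: skip one statement at a time, rerunning only the tests that hit it.
    let mut relevant_tests: HashSet<&str> = HashSet::new();
    for (id, hitting_tests) in &hits {
        let mut any_failed = false;
        let mut untestable = false;
        for test in hitting_tests {
            let mut tracer = SkipOneTracer { skipped: *id };
            match run_test(test, &mut tracer) {
                TestOutcome::Failed => {
                    any_failed = true;
                    relevant_tests.insert(test.as_str());
                }
                TestOutcome::ConstructionError => untestable = true,
                TestOutcome::Passed => {}
            }
        }
        if untestable {
            println!("untestable (construction fails when skipped): {:?}", id);
        } else if !any_failed {
            println!("untested / irrelevant (no test fails when skipped): {:?}", id);
        }
    }

    // Any test that never failed in any mutated run is reported as irrelevant.
    for test in tests {
        if !relevant_tests.contains(test.as_str()) {
            println!("irrelevant test: {}", test);
        }
    }
}
```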