This repository has been archived by the owner on Feb 22, 2024. It is now read-only.

Reports 2.0 design document

Jiří Holuša edited this page Feb 15, 2017 · 5 revisions

This document describes in detail the purpose of each report, its use cases, capabilities, etc.

Functionality common for all reports

  • String name.
  • String description (not escaped, thus allowing HTML tags)
  • Every report has assigned permissions that control authorization. Multiple permissions can be assigned to a single report. A single permission has the following properties:
    • Defines the access type - read or write.
    • Defines the access level - public, user, or group.
      • If user level is selected, you must also select the user that will be assigned to this permission.
      • If group level is selected, you must also select the group that will be assigned to this permission.
      • If public level is selected, you don't provide anything else.
    • Public write access should not be allowed.
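
The permission rules above can be sketched as a small data model. This is a minimal sketch only; the names `Permission`, `AccessType`, `AccessLevel`, and the ID fields are hypothetical illustrations, not PerfRepo's actual API:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class AccessType(Enum):
    READ = "read"
    WRITE = "write"

class AccessLevel(Enum):
    PUBLIC = "public"
    USER = "user"
    GROUP = "group"

@dataclass
class Permission:
    access_type: AccessType
    level: AccessLevel
    user_id: Optional[int] = None    # required when level == USER
    group_id: Optional[int] = None   # required when level == GROUP

    def validate(self) -> None:
        # User/group level must name the user/group the permission applies to.
        if self.level == AccessLevel.USER and self.user_id is None:
            raise ValueError("user-level permission requires a user")
        if self.level == AccessLevel.GROUP and self.group_id is None:
            raise ValueError("group-level permission requires a group")
        # Public write should not be allowed.
        if self.level == AccessLevel.PUBLIC and self.access_type == AccessType.WRITE:
            raise ValueError("public write is not allowed")
```

A report would then simply hold a list of such permissions, one entry per assigned permission.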

Table comparison report

Description

This is the simplest report. It allows comparing several test executions with each other. The table comparison report will replace the simple comparison report known from version 1.x.

Use case

We have several versions of a project and want to know which one performs better in a given configuration. We want to generate a simple, comprehensive table report that shows the differences between test executions across various metrics.

Features

  • The core entity is a table. A table is the core logical unit of this report, representing a comparison of several test executions (they must be "compatible", meaning they need to have at least one metric in common; if they don't, an informative message is displayed instead of the table). A table has the following attributes/functions:
    • String name.
    • String description. It should also allow displaying HTML tags (hence no escaping).
    • Integer threshold level (see details below).
    • The header of the table contains the names of the test executions being compared. Each header is a clickable link pointing to the test execution detail page.
    • The first column of every row is the name of a metric that all the test executions have in common.
    • You also need to be able to select the test executions involved in the comparison; let's call each one a comparison item. A comparison item has the following properties:
      • Every item has its own string alias, which is displayed in the column header.
      • An item can be selected in 3 ways - by execution ID, by tags query, or by parameters query (= new feature).
      • For every comparison item you need to select the test from which it will be chosen. However, you can do this on a per-item basis, meaning you can compare test executions from various tests (assuming they are "compatible").
      • The item selection method (execution ID, tags query, or parameters query) should also be configurable on a per-item basis (= new feature).
      • You should be able to select one of the items as a baseline, e.g. using a radio button.
    • Once you select all the configuration parameters of the comparison report, the table should be created.
    • Each cell displays the percentage difference from the baseline. Hovering over the percentage displays a tooltip with the actual raw value.
    • A cell can have 3 colors - red, green, and neutral (gray?). If the percentage difference exceeds the threshold in the unfavorable direction (e.g. more than 5 % below the baseline) and the metric is higher-is-better, the cell is red. Analogously for green cells and for lower-is-better metrics; see the existing implementation in version 1.x for details, it's pretty simple.
    • If the test execution values for a given metric are multi-valued, no numbers are shown; instead there is e.g. a link like "show multivalue comparison" which displays a chart comparing the multi-values. (= new feature in the table comparison report, but already present in the 1.x simple comparison report.)
  • Tables can be grouped into groups. Each group has:
    • String name.
    • String description (no escaping to allow HTML tags).
    • List of tables, see above.
  • This report will be integrated with the test execution search page. The user must be able to select test executions there, then click something like "Compare", and a table comparison report is generated from them. This is very simple to implement once selection by ID is ready, since it's just passing the IDs from the search page to the report.
  • Drag-and-drop functionality for changing the order of tables/groups would be nice to have.
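
The cell-coloring rule above can be sketched as follows. This is a minimal sketch with hypothetical function names; the exact sign conventions in the 1.x implementation may differ:

```python
def percent_diff(value: float, baseline: float) -> float:
    """Percentage difference of a cell value relative to the baseline."""
    return (value - baseline) / baseline * 100.0

def cell_color(value: float, baseline: float, threshold: float,
               higher_better: bool) -> str:
    """Return 'red', 'green' or 'neutral' for one comparison cell.

    threshold is the table's integer threshold level, e.g. 5 (percent).
    """
    diff = percent_diff(value, baseline)
    if not higher_better:
        diff = -diff  # flip the sign so positive always means "better"
    if diff > threshold:
        return "green"
    if diff < -threshold:
        return "red"
    return "neutral"
```

For example, with a threshold of 5 and a higher-is-better metric, a value 10 % above the baseline colors the cell green, 10 % below colors it red, and anything within ±5 % stays neutral.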

Metric history report

Description

This report allows viewing the progress of a specific metric over time, hence the report's name. It produces a line chart.

Use case

Using continuous integration, we produce results every day. We want to see whether some commit caused a performance regression. Using the metric history report, we can immediately see a drop in performance in a simple line chart, allowing us to easily identify the breaking commit.

Features

  • The core entity is a chart. A chart has the following properties:
    • String name.
    • String description (not escaped, thus allowing HTML).
    • Maximum number of test executions used for a series (e.g. display only the last 100 results).
    • In one chart, you can specify multiple series (lines) and baselines:
      • Configuration of the various series/baselines should be visible on a per-chart basis, unlike the current 1.x implementation where all series and baselines are specified at the beginning of the report, which is completely confusing.
      • A series is a "live" line, meaning the chart changes every time a new test execution matching the search criteria is added to PerfRepo. Series properties:
        • String name. This name will appear in the chart in legend.
        • You must select a test from which the test executions will be chosen.
        • You must select the metric that will be used for drawing. The metric must be single-valued, but we can leave it to the user to guarantee that they select the correct test executions.
        • A series can be selected by tags query or parameters query (pretty much the same as in the table comparison report).
      • A baseline is a horizontal line drawn in the chart. It is simply the result of one metric of one specific test execution. Baseline properties:
        • String name. This name will appear in the chart in legend.
        • You must select a metric that will be used.
        • You must provide the test execution ID to identify the baseline. You can enter it simply as a number, but it would be truly awesome if we could do some sort of "search-in-popup", so we wouldn't have to do "Search test executions" -> "Detail" -> write down the ID -> enter the ID into the baseline.
    • Every data point in the chart (each corresponding to one test execution) must be clickable. When you click a data point, it is possible to:
      • See details about the test execution: name, date, associated test (link), tags, and the values of parameters selected as favorite parameters (typically used for parameters like git.commit-hash etc.).
      • Add the test execution to a comparison, just as if you were selecting it from the test execution search page; see the table comparison report for details.
    • Drag-and-drop functionality to change the order of charts would be nice to have.
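
The series selection described above (filter by test, tags query, and metric, then keep only the most recent N results) can be sketched as follows. This assumes a hypothetical in-memory representation of test executions; the real PerfRepo queries run against its database:

```python
# Hypothetical execution record: dict with test_id, date, tags (a set),
# and values (metric name -> single value).
EXECUTIONS = [
    {"test_id": 1, "date": 1, "tags": {"ci"}, "values": {"throughput": 100.0}},
    {"test_id": 1, "date": 2, "tags": {"ci"}, "values": {"throughput": 110.0}},
    {"test_id": 2, "date": 3, "tags": {"ci"}, "values": {"throughput": 50.0}},
    {"test_id": 1, "date": 4, "tags": {"manual"}, "values": {"throughput": 120.0}},
]

def select_series_points(executions, test_id, metric, tag_query, max_points):
    """Pick the data points for one series: filter by test, require every
    queried tag to be present, keep only executions that report the metric,
    sort by date, and keep the most recent `max_points` results."""
    matching = [
        e for e in executions
        if e["test_id"] == test_id
        and tag_query <= e["tags"]      # all queried tags present
        and metric in e["values"]
    ]
    matching.sort(key=lambda e: e["date"])
    return [(e["date"], e["values"][metric]) for e in matching[-max_points:]]
```

For example, `select_series_points(EXECUTIONS, 1, "throughput", {"ci"}, 100)` yields the two `ci`-tagged points of test 1 in date order; lowering `max_points` trims the series to the most recent results, which is how the "display only last 100 results" limit would apply.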

Test group report

TODO

Boxplot report

TODO: this still needs some polishing, but the idea holds. The boxplot report is exactly the same as the metric history report with only one difference - it targets multi-value test executions, so for every data point it draws a box plot instead of a single point.