
Introduction to PerfRepo

This page introduces the basic logical concept of entities in PerfRepo. The key is to understand what each of them represents and what it can be used for. Let's start!

Test

Test is the starting entity. It represents the test case: what is tested and how it's tested (description). A test doesn't hold any actual results from testing; it's basically just a template. A test can have the following properties (a code sketch follows the list):

  • id - automatically generated
  • name - name of the test, e.g. "Infinispan performance in Client/Server mode" or "Infinispan performance using Radargun"
  • uid - because we later need to associate actual test runs (test executions, see below) with this test, we need a reference to it, i.e. its ID. Since it's inconvenient to remember the actual database ID, we can provide a human-readable unique identifier by which the test can also be retrieved. Examples: "infinispan_testsuite_cs", "infinispan_performance_radargun"
  • description - a more detailed description of this test. Can be used for methodology, pointers to the automated tests that fill in the results, or any other needed info.
  • groupId - a string identifier (for now; this is about to change, see https://issues.jboss.org/browse/PERFREPO-129) of the group that owns this test (see also user below). Usually the primary group of the user who created the test.
  • metrics - because the test is a template that basically groups actual test runs, and we measure (and store in PerfRepo) results for specified metrics, we need to declare which metrics make sense for this test. Example: for "Infinispan stress test" it might make sense to store maximum client load, average response time, and number of requests per second, while for "Infinispan Lucene Directory test" it might make sense to store something like the number of Lucene directory searches and writes. See details on the metric entity below.
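
To make the shape of this entity concrete, here is a minimal sketch of a test modeled as a plain Java class. The class and field names simply mirror the properties above; they are illustrative only and are not PerfRepo's actual classes (the Metric class is sketched in the next section).

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of the Test entity described above, not PerfRepo's real class.
public class Test {
    private Long id;            // automatically generated
    private String name;        // e.g. "Infinispan performance using Radargun"
    private String uid;         // human-readable unique identifier, e.g. "infinispan_performance_radargun"
    private String description; // methodology, pointers to automated tests, etc.
    private String groupId;     // group that owns this test
    private List<Metric> metrics = new ArrayList<>(); // metrics that make sense for this test (see below)

    // Getters and setters omitted for brevity.
}
```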

Metric

As stated above, for every test we want to specify the set of metrics that make sense to measure, so we assign Metric entities to the test. A metric can hold the following information (again, a sketch follows the list):

  • name - name of the metric. Please note that metric names are unique across the whole of PerfRepo, since metrics are by nature the same for all tests. Example: if the metric is "throughput", it means exactly that; throughput is throughput in every test. So it makes sense to assign the "throughput" metric to "Infinispan performance in Client/Server mode" as well as to "PerfRepo REST endpoint performance".
  • description - a more detailed description of this metric. Usually what it represents, how it's measured or computed, etc.
  • comparator - two accepted values: HB (higher is better) and LB (lower is better). Metrics differ by nature: with a response time metric, the lower the time, the better; conversely, with a throughput metric, the higher, the better. This property is used by some reporters (e.g. the TestGroup report) to draw the chart correctly.
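
Continuing the illustrative sketch from above (again, these are not PerfRepo's actual classes), the metric maps naturally to a small class with an enum for the HB/LB comparator:

```java
// Illustrative model of the Metric entity, not PerfRepo's real class.
public class Metric {

    public enum Comparator {
        HB, // higher is better, e.g. throughput
        LB  // lower is better, e.g. response time
    }

    private String name;        // unique across the whole PerfRepo, e.g. "throughput"
    private String description; // what the metric represents, how it's measured or computed
    private Comparator comparator;

    // A reporter could use the comparator like this to decide whether
    // a change between two runs is an improvement or a regression.
    public boolean isImprovement(double oldValue, double newValue) {
        return comparator == Comparator.HB ? newValue > oldValue : newValue < oldValue;
    }
}
```

Keeping the direction of "better" on the metric itself means every reporter can draw charts consistently, without per-test configuration.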

Test execution

The most important entity in PerfRepo is the Test execution. It represents one single build/run of a specific test. This entity holds the actual results, so-called values, for every metric assigned to the test. Besides holding values, it can carry plenty of information about the test run. Let's look at all the properties we can assign to a test execution; a sketch putting it all together follows the list.

  • id - automatically generated
  • name - just for convenience we can give a specific test execution a name. Usually this is assigned automatically, e.g. by a Jenkins job (build ID), but sometimes it's handy to give the test execution a specific name. Example: we run a performance test of the last stable release or of some older version, so it's nice to see right in the search table that this is a test of a specific version, not just some periodical build.
  • started date - timestamp when the test execution was created.
  • tags - every test execution can be tagged with any number of tags. Tags are just strings giving us some information about the test execution configuration. Tags are extremely effective for searching, so designing a sophisticated tagging strategy is the key to good result organization in PerfRepo. Example: Infinispan provides transactional and non-transactional functionality and we want to compare these two configurations. We run the same test and measure the same metric, so we need to tell these test executions apart somehow. That's where tags come in: we can add 'tx' and 'non-tx' tags, which distinguish them efficiently. Tags cannot contain spaces; a tag with a space is treated as two separate tags, so use e.g. an underscore instead.
  • comment - an additional comment for the test execution, much like a description. This is especially useful when there is a regression and we know what caused it: we can add a comment to the broken test execution explaining the results. Note that this is not the right place to store configuration and the like; test execution parameters are the right place for that.
  • parameters - when hunting a performance regression, we usually need a lot of information about the environment, configuration, etc. A parameter is just a (key, value) pair, and we can store an arbitrary number of them with every test execution. Note that we can also search by these parameters.
  • attachments - to every test execution we can also add attachments, which can be any binary files. An attachment is stored as-is; PerfRepo knows nothing about its internal structure. One possible use case is to store the configuration used, for later debugging purposes.
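
Finally, here is how a test execution for the tx/non-tx example above could be assembled in the same illustrative model. This is only a sketch of the data shapes; the real PerfRepo client and REST API have their own classes, which may differ. The build number and parameter values below are hypothetical.

```java
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model of the Test execution entity, not PerfRepo's real class.
public class TestExecution {
    private Long id;                   // automatically generated
    private String name;               // e.g. a Jenkins build ID or "7.0.0 stable release baseline"
    private Date started = new Date(); // timestamp when the execution was created
    private Set<String> tags = new HashSet<>();                // no spaces allowed in a tag
    private String comment;                                    // e.g. explanation of a known regression
    private Map<String, String> parameters = new HashMap<>();  // environment/configuration info, searchable
    private Map<String, Double> values = new HashMap<>();      // one value per assigned metric, keyed by metric name
    private Map<String, byte[]> attachments = new HashMap<>(); // arbitrary binary files, stored as-is

    public static void main(String[] args) {
        // Transactional run of the Client/Server test suite (hypothetical build).
        TestExecution execution = new TestExecution();
        execution.name = "infinispan_testsuite_cs build #1234";
        execution.tags.add("tx");
        execution.parameters.put("jvm.version", "1.8.0_45"); // hypothetical environment detail
        execution.values.put("throughput", 1520.0);          // requests per second
        execution.values.put("response_time", 12.3);         // milliseconds
        System.out.println(execution.name + " tags=" + execution.tags);
    }
}
```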

User

TBD
