This document should help you with writing and running tests for your pull requests. It is recommended to add a new test whenever new functionality is added or when code that is not covered by any test is updated. Another recommendation is to add your test in a separate commit.
All the tests reside in the tests directory. It has multiple subdirectories which represent various parts of the OpenSCAP library and its utilities. When you contribute to some part of the OpenSCAP project, you should put a test for your contribution into the corresponding subdirectory in the tests directory. Use your best judgement when deciding where to put the test for your pull request and if you are not sure, don’t be afraid to ask in the pull request; someone will definitely help you with that.
Note: The OpenSCAP project uses GNU automake which has built-in support for test suites.
To run a specific test or all tests you first need to compile the OpenSCAP library and then install additional packages required for testing. See the Compilation section in the README.md for more details.
In this guide we will use an example to describe the process of writing a test
for the OpenSCAP project. Let’s suppose you want to write a new test for
the Script Check Engine (SCE) to test its basic functionality. SCE allows you
to define your own scripts (usually written in Bash or Python) to extend XCCDF
rule checking capabilities. Custom check scripts can be referenced from
an XCCDF rule using the <check-content-ref> element, for example:
<check system="http://open-scap.org/page/SCE">
  <check-content-ref href="YOUR_BASH_SCRIPT.sh"/>
</check>
In our example, we are testing the SCE module, therefore we will look for its subdirectory in the tests directory and we will find it at tests/sce. We will add our new test into this subdirectory.
Most of the time, the corresponding subdirectory will already exist. As stated above, in our example we will place our test into the tests/sce directory.
If your test covers a part of OpenSCAP which has no tests at all, you will need to add a new directory with a suitable name into the tests directory structure and create/update the Makefiles.
To have an example also for this scenario, let’s suppose we want to add the foo subdirectory into the tests directory. We need to:
- Create the foo subdirectory in the tests directory.
- Add a new entry for the foo subdirectory into the SUBDIRS variable in the tests/Makefile.am.
- Create Makefile.am inside the tests/foo directory and list all your test scripts inside that directory in the TESTS variable.
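A minimal sketch of what those two Makefile changes could look like, assuming the new directory contains a single test script named test_foo.sh (the directory and script names are placeholders for this example only):

# tests/Makefile.am -- add the new subdirectory to the SUBDIRS variable
SUBDIRS = \
    ... \
    foo

# tests/foo/Makefile.am -- a minimal sketch
TESTS = test_foo.sh
EXTRA_DIST = test_foo.sh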
Please see the GNU automake documentation about test suites for more details.
When writing tests for OpenSCAP you should use the common test library which is located at tests/test_common.sh.in.
Note: The configure script will generate the tests/test_common.sh file from the tests/test_common.sh.in, adding configuration-specific settings.
You will need to source the tests/test_common.sh
in your test scripts to use
the functions which it provides. When sourcing the file, you need to specify
a relative path to it from the directory with your test script. For example,
when your test script is in the tests/foo directory:
#!/usr/bin/env bash
set -o pipefail
. ../test_common.sh
test1
test2
...
Always use the $OSCAP variable instead of the plain oscap in your tests when calling the oscap command line tool. This is because tests might be run with the CUSTOM_OSCAP variable set.
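For example, an evaluation step inside a test might look like this (the file names are placeholders; --results is a standard oscap option):

$OSCAP xccdf eval --results results.xml $srcdir/foo_xccdf.xml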
You can use the $XMLDIFF variable in your tests, which will call the tests/xmldiff.pl script.
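For instance, assuming xmldiff.pl takes the two XML documents to compare as its arguments (a hedged sketch, file names are placeholders):

$XMLDIFF expected-results.xml results.xml || return 1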
It is also possible to do XPath queries using the $XPATH variable; the usage is:
$XPATH FILE 'QUERY'
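For instance, to list all rule results in a generated results file (the file name and query are only illustrative):

$XPATH results.xml '//rule-result/result'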
It is always good to set the pipefail option in all your test scripts so that you don’t accidentally lose non-zero exit statuses of commands in a pipeline:
set -o pipefail
The next option you can consider is errexit
which exits your script
immediately if a command exits with a non-zero status. You might want to use
this option if tests depend on each other and it doesn’t make sense to continue
testing after one test fails.
set -e
The test_init function expects that you provide it with the name of a test log file.
All tests which will be run after test_init
using the test_run function
will report results into this log file:
test_init test_foo.log
Note: The specified log file will contain output of both stdout and stderr of every test executed by the test_run function.
The test_run function is responsible for executing a test script file or a function and
logging its result into the log file created by the test_init
function.
test_run "DESCRIPTION" TEST_FUNCTION|$srcdir/TEST_SCRIPT_FILE ARG [ARG...]
The $srcdir
variable contains the path to the directory with the test script.
The test_run
function reports the following results into the log file:
- PASS when the script/function returns 0,
- FAIL when the script/function returns 1,
- SKIP when the script/function returns 255,
- WARN when the script/function returns none of the above exit statuses.
The result of every test executed by the test_run
function will be reported
in the log file in the following way:
TEST: DESCRIPTION
<test stdout + stderr output>
RESULT: PASS/FAIL/SKIP/WARN
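As an illustration, both forms of test_run could look like this (the function name, descriptions, script name and argument are placeholders, not part of the library):

function test_foo_basic {
    $OSCAP oval eval $srcdir/foo_oval.xml || return 1
}

test_run "foo: basic OVAL evaluation" test_foo_basic
test_run "foo: standalone script" $srcdir/test_foo_helper.sh some_argument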
Note: This function should be called after the test_init function.
The test_exit function is responsible for cleaning up the testing environment. You can call it without arguments or with one argument, a script/function which will perform additional clean-up tasks.
test_exit [CLEAN_SCRIPT|CLEAN_FUNCTION]
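For example, a clean-up function that removes files generated during the test run can be passed to it (the function name and the removed file are placeholders):

function cleanup_foo {
    # remove files generated during the test run
    rm -f results.xml
}

test_exit cleanup_foo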
The require function checks if required programs are in the $PATH; use it as follows:
require 'program' || return 255
The verify_results function verifies that there is the COUNT number of results of the selected OVAL TYPE in a RESULTS_FILE:
verify_results TYPE CONTENT_FILE RESULTS_FILE COUNT
Note: This function expects that the OVAL TYPE is numbered from 1 to COUNT in the RESULTS_FILE.
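For example, a check that a results file contains three OVAL definition results might look like this (the "def" type string and the file names are assumptions made for this sketch, based on how such checks are typically written):

verify_results "def" $srcdir/foo_oval.xml results.xml 3 || return 1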
The assert_exists function does an XPath query on the file specified in the $result variable and checks if the number of results matches the expected number specified as an argument:
result="relative_path_to_file"
assert_exists EXPECTED_NUMBER_OF_RESULTS XPATH_QUERY_STRING
For example, let’s say you want to check that in the results.xml
file the
result of the rule xccdf_com.example.www_rule_test
is fail:
result="./results.xml"
my_rule="xccdf_com.example.www_rule_test"
assert_exists 1 "//rule-result[@idref=\"$my_rule\"]/result[text()=\"fail\"]"
Now, as we know where a new test should go and what functions and capabilities are provided by the common test library, we can add test files which will contain test scripts and content required for testing.
To sum up, we are adding a test to check the basic functionality of the Script Check Engine (SCE) and we have decided that the test will go into the tests/sce directory.
We will add the tests/sce/test_sce.sh
script which will contain our test and
tests/sce/sce_xccdf.xml, an XML file with
XCCDF rules which are referencing various check scripts (grep the
check-content-ref
element to see the referenced files). All the referenced
check script files are set to always pass. The tests/sce/test_sce.sh script will evaluate the tests/sce/sce_xccdf.xml XCCDF document and check that all rule results are pass.
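A rough sketch of how such a test script could use the library functions described above (the real tests/sce/test_sce.sh may differ; the expected count of 1 passing rule-result below is only a placeholder):

#!/usr/bin/env bash
set -o pipefail
. ../test_common.sh

function test_sce_all_rules_pass {
    result="results.xml"
    # evaluate the XCCDF document; the oscap exit status is not checked here
    # because a failed rule also changes it, the assertion below does the check
    $OSCAP xccdf eval --results "$result" "$srcdir/sce_xccdf.xml"
    assert_exists 1 '//rule-result/result[text()="pass"]'
}

test_init test_sce.log
test_run "SCE basic functionality" test_sce_all_rules_pass
test_exit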
You need to plug your test into the test suite so it will be run automatically every time make check
is run. To do this, you need to add all the test files
into the Makefile.am
. The Makefile.am
which you need to modify is located
in the same directory as your test files.
We will demonstrate this on our example with the SCE test. We have prepared our
test script, the XML document file with custom rules and various check scripts
for testing. We placed all our test files into the
tests/sce directory. Now we will modify the
tests/sce/Makefile.am and we will add all our
test files into the EXTRA_DIST
variable and our test script into the TESTS
variable, which will make sure that our test will be executed by make check:
...
TESTS = test_sce.sh \
    ...

...
EXTRA_DIST = test_sce.sh \
    sce_xccdf.xml \
    bash_passer.sh \
    python_passer.py \
    python_is16.py \
    lua_passer.lua \
    ...
To run your new test you first need to compile the OpenSCAP library. See the
Compilation section in the README.md for more details.
Also, you don’t need to run all the tests, only the tests in the directory where you
have added your new test. To do so, first change directory to the directory
with your test and then run make check
from there, for example:
$ cd tests/sce
$ make check
The log file with your test results can be found under the name which is set
using the test_init
function in your test script. In our example,
in the tests/sce/test_sce.sh test script,
the log file is set as test_sce.log
.
Note: Our Jenkins runs make distcheck instead of make check so please make sure that your test works for both of these modes. More details about the GNU Build System build tree and source tree can be found in the GNU automake documentation.