``teuthology`` is an automation framework for Ceph, written in Python. It is
used to run the vast majority of Ceph's tests, and was developed because the
unique requirements of testing such a highly distributed system with active
kernel development meant that no existing framework could do the job.
The name 'teuthology' refers to the study of cephalopods.
Teuthology's general mode of operation is to orchestrate
operations on remote hosts over SSH, as implemented by Paramiko. A typical job consists of multiple nested
tasks, each of which performs operations on a remote host over the network.
When testing, it is common to group many jobs together to form a test run.
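For a sense of what a job looks like, here is a minimal sketch of a job
description and one way it might be run with the ``teuthology`` command (the
roles, tasks, and flags shown are illustrative assumptions, not a known-good
recipe)::

    # Sketch of a tiny job: one host playing several roles, with the
    # classic install -> ceph -> workload task nesting.
    cat > job.yaml <<'EOF'
    roles:
    - [mon.a, osd.0, osd.1, client.0]
    tasks:
    - install:
    - ceph:
    - workunit:
        clients:
          client.0: [suites/fsstress.sh]
    EOF
    teuthology --lock --machine-type mira job.yaml

Tasks are set up in order and torn down in reverse order, which is what makes
the nesting possible.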
If you are new to teuthology and simply want to run existing tests, check out :ref:`intro_testers`.
- :ref:`teuthology` - Run individual jobs
- :ref:`teuthology-kill` - Kill running jobs or entire runs
- :ref:`teuthology-lock` - Lock, unlock, and update status of machines
- :ref:`teuthology-ls` - List job results by examining an archive directory
- :ref:`teuthology-openstack` - Use the OpenStack backend (wrapper around teuthology-suite)
- :ref:`teuthology-queue` - List, or delete, jobs in the queue
- :ref:`teuthology-report` - Submit test results to a web service (we use paddles)
- :ref:`teuthology-results` - Examine a finished run and email results
- :ref:`teuthology-schedule` - Schedule a single job
- :ref:`teuthology-suite` - Schedule a full run based on a suite (see suites in ceph-qa-suite)
- :ref:`teuthology-updatekeys` - Update SSH host keys for a machine
- :ref:`teuthology-worker` - Worker daemon to monitor the queue and execute jobs
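As a quick orientation, a couple of these might be invoked like so (the flags
and paths are assumptions; consult each command's ``--help`` for authoritative
usage)::

    teuthology-lock --list          # show machines you have locked
    teuthology-ls /a/<run-name>     # summarize job results for one run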
For a description of the distinct services that these utilities interact with, see :ref:`components`.
See :ref:`installation_and_setup`.
The examples in this document are based on the lab machine configuration used by the Red Hat Ceph development and quality assurance teams (see :ref:`lab_setup`). Other Ceph lab instances used for development or testing may differ from these examples.
See :ref:`detailed_test_config`.
For OpenStack support, see :ref:`openstack-backend`.
For 'vps' support using downburst, see :ref:`downburst_vms`.
Each suite name is determined by the name of the directory in ceph-qa-suite
that contains that suite. The directory contains subdirectories and yaml files,
which, when assembled, produce valid tests that can be run. The test suite
application generates combinations of these files and thus ends up running a
set of tests based on the data in the suite's directory.
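As an illustration, a suite directory might be laid out as follows (the names
are hypothetical; a file named ``%`` marks a directory whose subdirectories
are combined as a cross-product)::

    suites/rbd/basic/
    ├── %                        # cross-product: one fragment per subdirectory
    ├── clusters/
    │   └── fixed-3.yaml         # role-to-host layout
    └── workloads/
        ├── rbd_api_tests.yaml
        └── rbd_cli_tests.yaml

Scheduling this suite would generate one job per combination - here, two jobs,
pairing the cluster fragment with each workload.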
To run a suite, enter::

    teuthology-suite -s <suite> [-c <ceph>] [-k <kernel>] [-e email] [-f flavor] [-t <teuth>] [-m <mtype>]
where:

suite
    the name of the suite (the directory in ceph-qa-suite).
ceph
    the ceph branch to be used.
kernel
    the version of the kernel to be used.
email
    the email address to send the results to.
flavor
    the ceph packages shaman flavor to run against.
teuth
    the version of teuthology to run.
mtype
    the machine type of the run.
templates
    a template file used for further modifying the suite (optional).
For example, consider::

    teuthology-suite -s rbd -c wip-fix -k distro -e [email protected] -f default -t jewel -m mira
The above command runs the rbd suite using the wip-fix branch of ceph and the
distro kernel, with the 'default' shaman build flavor of the ceph packages and
the jewel branch of teuthology. It will run on mira machines and send an email
to [email protected] when it completes. For more details on
``teuthology-suite``, please consult the output of ``teuthology-suite --help``.
Read more about running integration tests using teuthology at docs.ceph.com.
In order for a queued task to be run, a teuthworker thread on
teuthology.front.sepia.ceph.com needs to remove the task from the queue.

On teuthology.front.sepia.ceph.com, run::

    ps aux | grep teuthology-worker

to view currently running tasks. If no processes are reading from the queue
for the version that you are running, additional teuthworker tasks need to be
started.
To start these tasks:

- Copy your build tree to /home/teuthworker on teuthology.front.sepia.ceph.com
  and give it a unique name (in this example, xxx).
- Start up some number of worker threads (as many as the number of machines
  you are testing with; there are 60 running for the default queue)::

    /home/virtualenv/bin/python /var/lib/teuthworker/xxx/virtualenv/bin/teuthworker /var/lib/teuthworker/archive --tube xxx --log-dir /var/lib/teuthworker/archive/worker_logs

Note: The threads on teuthology.front.sepia.ceph.com are started via
~/teuthworker/start.sh. You can use that file as a model for your own threads,
or add to it if you want your threads to be more permanent.
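For example, a start script along the lines of start.sh might look like the
following sketch (the worker count and backgrounding details are assumptions
modeled on the command above, not the actual contents of start.sh)::

    #!/bin/sh
    # Launch several teuthworker processes reading from the "xxx" tube.
    NUM_WORKERS=4
    for i in $(seq 1 "$NUM_WORKERS"); do
        nohup /home/virtualenv/bin/python \
            /var/lib/teuthworker/xxx/virtualenv/bin/teuthworker \
            /var/lib/teuthworker/archive \
            --tube xxx \
            --log-dir /var/lib/teuthworker/archive/worker_logs \
            >/dev/null 2>&1 &
    done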
Once the suite completes, an email message is sent to the users specified, and
a large amount of information is left on teuthology.front.sepia.ceph.com in
/var/lib/teuthworker/archive.

This is symbolically linked to /a for convenience. A new directory is created
whose name consists of a concatenation of the date and time that the suite was
started, the name of the suite, the ceph branch tested, the kernel used, and
the flavor. For every job in the run there is a directory whose name is the
pid of that job's process. Each of these directories contains a copy of the
teuthology.log for that process. Other information from the suite is stored in
files in the directory, and task-specific yaml files and other logs are saved
in the subdirectories.
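For example, digging into one job's logs might look like this (the run and pid
directory names are purely illustrative)::

    cd /a/2016-06-01_12:00:02-rbd-wip-fix-distro-default-mira
    ls                            # one directory per job, named by its pid
    less 12345/teuthology.log     # the main log for that job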
These logs are also publicly available at http://qa-proxy.ceph.com/teuthology/.