Teaster: automate your dirty tester work and take time for a relaxing tea.
sudo zypper in docker
sudo zypper in docker-compose
sudo systemctl start docker
Finally, run docker-compose up in the root dir of the project.
You can install all the necessary parts manually :)
Please follow the instructions below.
sudo zypper in rabbitmq-server rabbitmq-server-plugins
sudo systemctl start rabbitmq-server
sudo rabbitmq-plugins enable rabbitmq_management
sudo zypper in docker
sudo rabbitmqctl add_user celery celery
sudo rabbitmqctl add_vhost celery
sudo rabbitmqctl set_user_tags celery celery
sudo rabbitmqctl set_permissions -p celery celery ".*" ".*" ".*"
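You can optionally verify the user, vhost and permissions before going on; a minimal sketch, assuming the pika client is installed (pip install pika):

```python
import pika

# Connect with the credentials created above; this raises on a bad user/vhost.
credentials = pika.PlainCredentials("celery", "celery")
params = pika.ConnectionParameters(host="localhost", virtual_host="celery",
                                   credentials=credentials)
connection = pika.BlockingConnection(params)
print("RabbitMQ is up and the celery vhost is reachable")
connection.close()
```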
antonio@linux-h1g7:~/dev/teaster> workon teaster
(teaster) antonio@linux-h1g7:~/dev/teaster> sudo pip install -r requirements.txt
See all the steps below
(teaster) ➜ teaster git:(master) ✗ python main.py
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 208-391-154
Before running a consumer, be sure to have rabbitmq-server up and running
sudo systemctl status rabbitmq-server
then run your consumer
(teaster) ➜ teaster git:(master) ✗ python consumer_leap.py
[Consumer] registering a consumer to teaster localhost
[Consumer] response registration
{
"id": "6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b",
"product": "leap:42.3",
"runenv": "docker"
}
registration result
{
"id": "6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b",
"product": "leap:42.3",
"runenv": "docker"
}
consumer, connecting to rabbit:
{
"exchange": "test",
"exchange_type": "direct",
"host": "localhost",
"queue": "6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b",
"routing": "6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b",
"vhost": null,
"wait_for_rabbit": false
}
[*] Waiting for messages. To exit press CTRL+C
As you can see above, a consumer does the following:
- It registers itself to the system. During this phase the system generates an id for the consumer. You can use this id to send requests to that specific consumer, and a queue named with that id is created on rabbitmq.
- It connects to the newly created queue (see the id above).
- It waits for new messages.
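For orientation, a minimal sketch of those three steps, assuming pika >= 1.0; the registration endpoint name is an assumption (only GET /consumers appears below), while the exchange and queue names come from the log above:

```python
import requests
import pika

# 1. Register with the system (hypothetical endpoint name).
me = requests.post("http://localhost:5000/consumers",
                   json={"product": "leap:42.3", "runenv": "docker"}).json()
consumer_id = me["id"]  # doubles as queue name and routing key

# 2. Connect to the queue named after the id.
conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="test", exchange_type="direct")
ch.queue_declare(queue=consumer_id)
ch.queue_bind(queue=consumer_id, exchange="test", routing_key=consumer_id)

# 3. Wait for new messages.
def on_message(channel, method, properties, body):
    print("[x] Received %r" % body)

ch.basic_consume(queue=consumer_id, on_message_callback=on_message, auto_ack=True)
ch.start_consuming()
```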
You can verify that a consumer now exists from:
- rabbitmq side
(teaster) ➜ teaster git:(master) ✗ sudo rabbitmqctl list_consumers -p /
Listing consumers
6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b <[email protected]>
- teaster side
(teaster) ➜ teaster git:(master) ✗ curl http://localhost:5000/consumers
{
"6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b": {
"id": "6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b",
"product": "leap:42.3",
"runenv": "docker"
}
}
It accepts build requests for your celery tasks
(teaster) ➜ teaster git:(master) ✗ python icelery.py
* Serving Flask app "icelery" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://0.0.0.0:6000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 208-391-154
Be sure to have docker up and running
sudo systemctl status docker
(teaster) ➜ teaster git:(master) ✗ celery worker -A icelery.celery --loglevel=info
-------------- celery@linux-peu5 v4.2.2 (windowlicker)
---- **** -----
--- * *** * -- Linux-4.12.14-lp150.12.16-default-x86_64-with-glibc2.2.5 2019-03-31 12:50:41
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: icelery:0x7fbf0ac45250
- ** ---------- .> transport: amqp://celery:**@localhost:5672/celery
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. icelery.build_docker
[2019-03-31 12:50:42,123: INFO/MainProcess] Connected to amqp://celery:**@127.0.0.1:5672/celery
[2019-03-31 12:50:42,408: INFO/MainProcess] mingle: searching for neighbors
[2019-03-31 12:50:43,705: INFO/MainProcess] mingle: all alone
[2019-03-31 12:50:43,757: INFO/MainProcess] celery@linux-peu5 ready.
- create a new github/gitlab project
echo "# deleteme" >> README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin https://github.com/kinderp/deleteme.git
git push -u origin master
- prepare a request
import requests
payload = {
"id":"6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b",
"provenv":["zypper --non-interactive in telnet","zypper --non-interactive in vim"],
"yourtag":"registry.gitlab.com/caristia/antonio_suse/new_image",
"reproducer":{
"prova":"",
"repo":"https://github.com/kinderp/deleteme.git"
}
}
r = requests.post("http://localhost:5000/couples", json=payload)
- id is the consumer's id you want to reach out to
- provenv is a list of commands to install your rpms (--non-interactive is important, teaster is not smart enough)
- yourtag is the name of your image (it will be pushed to your docker registry)
- reproducer.repo is the git url where teaster will commit a new Dockerfile for your runtime env
- input interface output
- it receives the request and publishes it onto the correct queue using the id field
[CoupleList:post] arrived data:
{
"id": "6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b",
"provenv": [
"zypper --non-interactive in telnet",
"zypper --non-interactive in vim"
],
"reproducer": {
"prova": "",
"repo": "https://github.com/kinderp/deleteme.git"
},
"yourtag": "registry.gitlab.com/caristia/antonio_suse/new_image"
}
[CoupleList:post] producer, connecting to rabbit:
{
"exchange": "test",
"exchange_type": "direct",
"host": "localhost",
"queue": "6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b",
"routing": "6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b",
"vhost": null,
"wait_for_rabbit": false
}
[*] Published message: {"provenv": ["zypper --non-interactive in telnet", "zypper --non-interactive in vim"], "reproducer": {"repo": "https://github.com/kinderp/deleteme.git", "prova": ""}, "yourtag": "registry.gitlab.com/caristia/antonio_suse/new_image", "id": "6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b"}
127.0.0.1 - - [31/Mar/2019 13:12:02] "POST /couples HTTP/1.1" 200 -
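Behind that log, the producer side boils down to a single publish on the consumer's queue; a minimal sketch, assuming pika and reusing consumer_id and payload from the request above:

```python
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="test", exchange_type="direct")
# The consumer id doubles as routing key, so the message lands on its queue.
ch.basic_publish(exchange="test", routing_key=consumer_id,
                 body=json.dumps(payload))
conn.close()
```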
- consumer output
- it just dequeues the request and creates a runtime source (a Dockerfile in this case)
- it sends a build request for this Dockerfile to the celery interface
[x] Received '{"provenv": ["zypper --non-interactive in telnet", "zypper --non-interactive in vim"], "reproducer": {"repo": "https://github.com/kinderp/deleteme.git", "prova": ""}, "yourtag": "registry.gitlab.com/caristia/antonio_suse/new_image", "id": "6dd47d81e014ad9de81161951814bf50e4e1246bb7a43404ffb84ab31ef7d18b"}'
FROM opensuse:42.3
WORKDIR /workdir
COPY . /workdir
RUN zypper --non-interactive in telnet && \
zypper --non-interactive in vim
CMD None
- celery input interface output
127.0.0.1 - - [31/Mar/2019 13:27:14] "POST /build_docker HTTP/1.1" 200 -
- worker output
[2019-03-31 13:27:14,130: WARNING/ForkPoolWorker-1] https://github.com/kinderp/deleteme.git
[2019-03-31 13:27:14,131: WARNING/ForkPoolWorker-1] 1232456abc
[2019-03-31 13:27:14,131: WARNING/ForkPoolWorker-1] opensuse
[2019-03-31 13:27:14,132: WARNING/ForkPoolWorker-1] ==Cloning...==
Username for 'https://github.com': kinderp
Password for 'https://[email protected]':
[2019-03-31 13:27:25,422: WARNING/ForkPoolWorker-1] ==Building...==
... A lot of output ...
[2019-03-31 13:29:05,146: WARNING/ForkPoolWorker-1] {u'progressDetail': {}, u'aux': {u'Tag': u'latest', u'Digest': u'sha256:ec733ee8182c33da16543909fba2c74cec9cd84e7cd007b918a832c70d75c867', u'Size': 1156}}
[2019-03-31 13:29:05,147: INFO/ForkPoolWorker-1] Task icelery.build_docker[bd16221d-d4c0-4c21-b7d5-9adaf73029f8] succeeded in 111.017807808s: None
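For orientation, a rough sketch of what a task like icelery.build_docker might do, assuming the docker SDK for Python (pip install docker) and GitPython; names and arguments are illustrative, the real implementation lives in icelery.py:

```python
import docker
import git  # GitPython; an assumption, any git layer would do
from celery import Celery

# Broker URL as shown in the worker banner above.
celery = Celery("icelery", broker="amqp://celery:celery@localhost:5672/celery")

@celery.task
def build_docker(repo_url, tag):
    # Clone the repo carrying the generated Dockerfile, then build and push.
    workdir = "/tmp/teaster_build"
    git.Repo.clone_from(repo_url, workdir)
    client = docker.from_env()
    client.images.build(path=workdir, tag=tag)
    client.images.push(tag)
```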
At the end of this process a new docker image has appeared
➜ deleteme git:(master) docker images|grep antonio_suse
registry.gitlab.com/caristia/antonio_suse/new_image latest bcf7e2ce44ed 5 minutes ago 238MB
A Dockerfile has been pushed to the deleteme project (in a new branch named opensuse; the branch name is static for now but will be fixed)
➜ deleteme git:(master) ls
README.md
➜ deleteme git:(master) git status
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean
➜ deleteme git:(master) git pull
remote: Enumerating objects: 6, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 5 (delta 2), reused 3 (delta 0), pack-reused 0
Unpacking objects: 100% (5/5), done.
From https://github.com/kinderp/deleteme
* [new branch] opensuse -> origin/opensuse
Already up to date.
➜ deleteme git:(master) ls
README.md
➜ deleteme git:(master) git checkout opensuse
Branch 'opensuse' set up to track remote branch 'opensuse' from 'origin'.
Switched to a new branch 'opensuse'
➜ deleteme git:(opensuse) ls
Dockerfile README.md
➜ deleteme git:(opensuse) cat Dockerfile
FROM opensuse:42.3
WORKDIR /workdir
COPY . /workdir
RUN zypper --non-interactive in telnet && \
zypper --non-interactive in vim
CMD None
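How the Dockerfile could end up on that branch is sketched below, assuming GitPython; teaster's actual git handling may differ:

```python
import git  # GitPython

dockerfile_content = "FROM opensuse:42.3\n..."  # the generated source shown above

repo = git.Repo.clone_from("https://github.com/kinderp/deleteme.git", "/tmp/deleteme")
repo.create_head("opensuse").checkout()  # the static branch name mentioned above
with open("/tmp/deleteme/Dockerfile", "w") as f:
    f.write(dockerfile_content)
repo.index.add(["Dockerfile"])
repo.index.commit("added Dockerfile")
repo.remote("origin").push("opensuse")
```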
The new image has been pushed to your personal registry, so it's ready to be pulled from another machine. Here we just remove the image and re-pull it to verify.
➜ deleteme git:(opensuse) docker rmi $(docker images -qa)
➜ deleteme git:(opensuse) docker images|grep antonio_suse
➜ deleteme git:(opensuse) docker login registry.gitlab.com
Login Succeeded
➜ deleteme git:(opensuse) docker pull registry.gitlab.com/caristia/antonio_suse/new_image
Using default tag: latest
latest: Pulling from caristia/antonio_suse/new_image
adec38add5d1: Already exists
935c6b6c3290: Already exists
545e84933fdc: Pull complete
3c034a37c68e: Pull complete
Digest: sha256:ec733ee8182c33da16543909fba2c74cec9cd84e7cd007b918a832c70d75c867
Status: Downloaded newer image for registry.gitlab.com/caristia/antonio_suse/new_image:latest
➜ deleteme git:(opensuse) docker run -it registry.gitlab.com/caristia/antonio_suse/new_image /bin/bash
9e7a756c12a5:/workdir # ls
Dockerfile README.md
So a new couple (run env, prov env) has been correctly created
Now you have your runtime env for testing. It's time to create a reproducer.
➜ deleteme git:(opensuse) cat reproducer.sh
#!/bin/bash
echo 'i am a reproducer :)'
and push it (remember that you are on the opensuse branch)
➜ deleteme git:(opensuse) ✗ git add reproducer.sh
➜ deleteme git:(opensuse) ✗ git commit -m"added reproducer"
[opensuse d723885] added reproducer
1 file changed, 3 insertions(+)
create mode 100644 reproducer.sh
➜ deleteme git:(opensuse) git push
Username for 'https://github.com': kinderp
Password for 'https://[email protected]':
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 362 bytes | 362.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://github.com/kinderp/deleteme.git
3fb0c41..d723885 opensuse -> opensuse
Now that you have a reproducer you can submit a triple creation request, and teaster will put your reproducer on top of your runtime env.
Create a request as below; be sure to use the correct consumer id and to fill the reproducer.command field with the correct name of your reproducer on github (teaster is not yet smart enough to add the execute permission).
import requests
payload = {
"id":"56b2479526326f276d6f426689f561f21c8563b561b2f4ba20f89895286f2dbf",
"provenv":["zypper --non-interactive in telnet","zypper --non-interactive in vim"],
"yourtag":"registry.gitlab.com/caristia/antonio_suse/new_image",
"reproducer":{
"command":"chmod u+x reproducer.sh && sh reproducer.sh",
"repo":"https://github.com/kinderp/deleteme.git"
}
}
r = requests.post("http://localhost:5000/triples", json=payload)
You will see output very similar to the previous couple request. Let's verify that all has gone fine.
If you run your container without -it this time, you will execute your reproducer; let's see.
(teaster) ➜ teaster git:(triple) ✗ docker run --name test_triple registry.gitlab.com/caristia/antonio_suse/new_image
i am a reproducer :)
Your reproducer has been inserted on top of your runenv and can now be shared easily.
Delete the test_triple container and the associated image (you should also delete the container created during the couple phase)
(teaster) ➜ teaster git:(triple) ✗ docker rm test_triple
test_triple
(teaster) ➜ teaster git:(triple) ✗ docker rmi $(docker images -qa)
Untagged: registry.gitlab.com/caristia/antonio_suse/new_image:latest
Untagged: registry.gitlab.com/caristia/antonio_suse/new_image@sha256:f7f13bc05cd6c39cbf1024f156e2fc4bdf8724e270b4afa59356cf8bb09f7b0c
Deleted: sha256:634af52888bcab13c3a8626ec8006550a096be4cbe577a1698fe29d1e7e91d33
Deleted: sha256:5e6c026602db588ad56499e002d618a8e570e51f17008fd7bf3d4fbb4d8c5b8f
Deleted: sha256:fa2859d62cb8b4f5afe685383add59355abef3b7ad2684d8308ca120e4828119
Deleted: sha256:8b3e40b03c3cf0e4d04ad92e29b83e69098e147e8a4db86dd20cfae767ffca05
Deleted: sha256:caccaf2c751098c32780b66ef022a0e3f2f62177cbd31f5835d7f0d827c08026
(teaster) ➜ teaster git:(triple) ✗ docker images|grep antonio_suse
pull and rerun your reproducer
(teaster) ➜ teaster git:(triple) ✗ docker pull registry.gitlab.com/caristia/antonio_suse/new_image
Using default tag: latest
latest: Pulling from caristia/antonio_suse/new_image
adec38add5d1: Already exists
935c6b6c3290: Already exists
d3f94c2bd985: Pull complete
bc5ba9e9ff9a: Pull complete
Digest: sha256:f7f13bc05cd6c39cbf1024f156e2fc4bdf8724e270b4afa59356cf8bb09f7b0c
Status: Downloaded newer image for registry.gitlab.com/caristia/antonio_suse/new_image:latest
(teaster) ➜ teaster git:(triple) ✗ docker run --name test_triple_other_tester registry.gitlab.com/caristia/antonio_suse/new_image:latest
i am a reproducer :)
Good, we have a working reproducer and it can be shared easily. We are happy \o/ :)
Here are just some definitions so we all speak the same language.
As a tester your work can be summarized in these steps:
- Get instructions from the bugzilla page and try to build a reproducer.
A reproducer can be:
  - a cli reproducer (a bash script or an openqa testmodule)
  - a gui reproducer (an openqa testmodule + needles)
Any reproducer needs a runtime environment to be executed.
A runtime environment can be:
- a container
- a virtual machine
A runtime environment + reproducer can be packed into an automation environment.
An automation environment contains:
- a runtime environment
- all the provisioned software (before or after the update); to be consistent we'll call that the provisioning environment
- a reproducer
An automation environment can be represented by a tuple:
(runtime env, provisioning env, reproducer)
All the possible combinations of these three components create an automation environment:
Cli automation environments
- (container, packages before update, script bash)
- (container, packages after update, script bash)
- (container + openqa instance, packages before update, openqa testmodule)
- (container + openqa instance, packages after update, openqa testmodule)
- (vm, packages before update, script bash)
- (vm, packages after update, script bash)
- (vm + openqa instance, packages before update, openqa testmodule)
- (vm + openqa instance, packages after update, openqa testmodule)
Gui automation environments
- (container + openqa instance, packages before update, openqa testmodule + needles)
- (container + openqa instance, packages after update, openqa testmodule + needles)
- (vm + openqa instance, packages before update, openqa testmodule + needles)
- (vm + openqa instance, packages after update, openqa testmodule + needles)
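Purely as an illustration, the tuple could be modeled like this in Python (names are not from teaster's code):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AutomationEnvironment:
    runtime_env: str             # e.g. "container" or "vm"
    provisioning_env: List[str]  # e.g. the zypper commands installing the packages
    reproducer: str              # e.g. a bash script or an openqa testmodule
```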
- Provisioning of the runtime environment (and installation tests)
We need to be sure that the packages we are testing can be installed on top of our runtime environment. We are helped here by tools like mtui, but sometimes manual work is needed and in any case it's not a completely automated process.
- Reproducing
Reproduce the bug before and after (update and downgrade). Even during this phase mtui is our friend.
- Comparison
In this scenario only the first phase (the mental process of figuring out the best reproducer) really requires human intellectual intervention.
The same reproducer is applied (manually) to different products, so all the steps of the entire process are repeated by testers, wasting a lot of time.
As a tester, before going through the entire process I want to know if a tester has already worked on that bug, in other words if a reproducer for that particular bug already exists for other products. (We do that manually, searching for the bug in qam.suse.)
If it's the first time that bug is tested (no reproducer for that bug):
(a)
- Concentrate only on the reproducer
- Get automagically a runtime env + a provisioning env (e.g. a container with all the packages provisioned)
- Once finished creating the reproducer, pack everything (run env, prov env, reproducer), push it somewhere and share it with all the other testers in the future. In other words we're creating the tuple, the automation environment.
If a reproducer already exists:
(b)
- Get automagically an automation environment with the testing package installed (provisioning) and the reproducer ready to be executed
- Choose the runtime environment I like (container or vm)
- Run the reproducer (before and after)
- Compare the results (before and after)
In the first case (a) we build an automation environment by joining our reproducer, the packages and the runtime environment. In the second one (b) we use an automation environment to test our bugs.
- Autoregister (uc#0)
Actors:
- A consumer: it wants to inform the system about its presence and capabilities. A consumer is able to handle only one runtime env and only one product.
Input:
- (run env, product) e.g. (docker, sle12sp3)
Output:
- a consumer id. The system assigns an id to the consumer.
Description:
- Before starting its lifecycle the consumer needs to inform the system about which particular run env and product it is able to handle
- Search for a consumer (uc#1)
Actors:
- Started by guy: he wants to know whether a consumer exists that is able to handle (run env, product) for a prov env.
- Tester: same as above. A consumer can handle only one run env and only one product. A prov env is the set of packages to test.
Input:
- (run env, product) e.g. (docker, sle12sp3)
Output:
- a consumer id or an error.
Description:
- Search for a consumer that is able to create a couple for a particular product and a particular run env.
- The system responds with a consumer id in case of success or some sort of error.
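A dedicated search endpoint is not shown in this document, but with the GET /consumers endpoint seen earlier the search can be sketched client-side:

```python
import requests

# Filter the registered consumers by (run env, product); values are examples.
consumers = requests.get("http://localhost:5000/consumers").json()
matches = [c for c in consumers.values()
           if c["runenv"] == "docker" and c["product"] == "leap:42.3"]
consumer_id = matches[0]["id"] if matches else None  # None stands in for the error
```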
- Search for a reproducer (uc#2)
Actors:
- Started by guy: he wants to know whether a reproducer already exists for a bug.
- Tester: same as above.
Input:
- (bug_id, reproducer_type)
Output:
- (reproducer_id, link to the reproducer): a link to a bash script, to a testmodule, or to a testmodule and needles
Description:
- Search for a reproducer.
- The system responds with a reproducer id and a link to view the reproducer, or an error.
- The reproducer id will be used to forward the triple creation request (run env, prov env, reproducer) to the correct consumer. See uc#1 above to search for the correct consumer for a run env and a product.
- Create a triple (uc#3)
In this case the reproducer already exists. We got an id from uc#2.
Actors:
- Started by guy: he wants to create the triple (run env, prov env, reproducer) for a new update in the queue.
- Tester: he wants to create the triple (run env, prov env, reproducer) to investigate something.
- A consumer: it knows how to handle that particular run env and product. It contacts the builder.
- A builder: it creates the triple, an automation env.
The first 2 actors must know: 1. the consumer id (from uc#1) 2. the reproducer id (from uc#2)
Input:
- (consumer id, reproducer id, prov env) e.g. (1234, xxxxx, [a.x.y.z])
Output:
- A url pointing to (run env, prov env, reproducer), i.e. a registry url like hashicorp/precise64 or opensuse/tumbleweed
Description:
- The first 2 actors submit the request to the consumer, using the correct id. The consumer creates or modifies the source (Dockerfile, Vagrantfile) for provisioning the testing packages and the reproducer. Then the consumer forwards the modified source to the builder (destination). The builder builds (run env, prov env, reproducer). (Note: the building process will be asynchronous, so we need some sort of notification.)
- Create a couple (uc#4)
In this case the reproducer does not exist. We got an error from uc#2.
Actors:
- Started by guy: he wants to create the couple (run env, prov env) for a new update in the queue.
- Tester: he wants to create the couple (run env, prov env) to investigate something.
- A consumer: it knows how to handle that particular run env and product. It contacts the builder.
- A builder: it creates the run env.
The first 2 actors must know: 1. the consumer id (from uc#1)
Input:
- (consumer id, prov env) e.g. (1234, [a.x.y.z])
Output:
- A url pointing to (run env, prov env), i.e. a registry url like hashicorp/precise64 or opensuse/tumbleweed
Description:
- The first 2 actors submit the request to the consumer, using the correct id. The consumer creates or modifies the source (Dockerfile, Vagrantfile) for provisioning the testing packages. Then the consumer forwards the modified source to the builder (destination). The builder builds (run env, prov env). (Note: the building process will be asynchronous, so we need some sort of notification.)
- Share a reproducer (uc#5)
Actors:
- Tester: he wants to share a reproducer for a particular bug.
Input:
- (bug id, reproducer)
Output:
- (link to the reproducer) or an error: a link to a bash script, to a testmodule, or to a testmodule and needles
Description:
- The tester sends the reproducer. The system saves the reproducer somewhere and returns a link.
Some notes about classes and patterns
- We'll use a Factory Method to instantiate the concrete RuntimeSource
- We'll use an Adapter to create a concrete RuntimeSourceFeed object from the json request data. In this way we put all the creation logic into the adapter and we are free to change the internal interface while keeping the external one (the flask request) the same.
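A minimal sketch of the adapter idea, using the payload fields shown earlier; class and method names are illustrative:

```python
class RuntimeSourceFeedDocker:
    """Concrete feed: the data needed to fill a Dockerfile template."""
    def __init__(self, provenv, repo, tag):
        self.provenv = provenv
        self.repo = repo
        self.tag = tag

class FlaskRequestAdapter:
    """Keeps the external interface (the flask request json) stable while the
    internal feed interface stays free to change."""
    @staticmethod
    def to_feed(data):
        # data: the json payload of a /couples or /triples request
        return RuntimeSourceFeedDocker(provenv=data["provenv"],
                                       repo=data["reproducer"]["repo"],
                                       tag=data["yourtag"])
```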
RuntimeSourceFeed:
it represents all the info needed to feed Dockerfile or Vagrantfile templates.
it is an abstract class and defines the interface for all the concrete RuntimeSourceFeed objects: RuntimeSourceFeedDocker, RuntimeSourceFeedVagrant
RuntimeSourceFeedDocker: it is a concrete class.
it contains all the data needed to fill a Dockerfile template
RuntimeSourceFeedVagrant: it is a concrete class.
it contains all the data needed to fill a Vagrantfile template
RuntimeSource:
it is an abstract class and defines the interface for all the concrete RuntimeSource objects: RuntimeSourceDocker, RuntimeSourceVagrant
A concrete runtime source object is created by a RuntimeSourceCreator factory object from:
1. A RuntimeSourceTemplate concrete object (the template)
2. A RuntimeSourceFeed concrete object (the data filling the gaps)
We'll use jinja2 for templating.
RuntimeSourceDocker: it is a concrete class.
it implements RuntimeSource's interface.
it represents a Dockerfile
RuntimeSourceVagrant: it is a concrete class.
it implements RuntimeSource's interface.
it represents a Vagrantfile
RuntimeSourceTemplate: it is an abstract class.
it defines the interface for the template object instances: RuntimeSourceTemplateDocker, RuntimeSourceTemplateVagrant
RuntimeSourceCreator: it declares the factory method that creates a RuntimeSource object and defines the interface for the concrete runtime creator objects.
In order to create a RuntimeSource object it needs:
1. A RuntimeSourceTemplate concrete object (the template)
2. A RuntimeSourceFeed concrete object (the data filling the gaps)
RuntimeSourceCreatorDocker: concrete creator for RuntimeSourceDocker objects.
it overrides RuntimeSourceCreator's factory method to create a concrete RuntimeSourceDocker instance.
RuntimeSourceCreatorVagrant: concrete creator for RuntimeSourceVagrant objects.
it overrides RuntimeSourceCreator's factory method to create a concrete RuntimeSourceVagrant instance.
Factory Method's actors
- Product: RuntimeSource
- ConcreteProduct: RuntimeSourceDocker, RuntimeSourceVagrant
- Creator: RuntimeSourceCreator
- ConcreteCreator: RuntimeSourceCreatorDocker, RuntimeSourceCreatorVagrant
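A minimal sketch of the whole design, assuming jinja2; anything beyond the class names listed above is illustrative:

```python
from abc import ABC, abstractmethod
from jinja2 import Template

class RuntimeSource(ABC):
    """Product: a rendered runtime source (Dockerfile, Vagrantfile)."""
    def __init__(self, content):
        self.content = content

class RuntimeSourceDocker(RuntimeSource):
    """ConcreteProduct: represents a Dockerfile."""

class RuntimeSourceCreator(ABC):
    """Creator: declares the factory method."""
    @abstractmethod
    def create(self, template, feed):
        ...

class RuntimeSourceCreatorDocker(RuntimeSourceCreator):
    """ConcreteCreator: builds a RuntimeSourceDocker from a template and a feed."""
    def create(self, template, feed):
        # template: raw jinja2 text; feed: the data filling the gaps
        return RuntimeSourceDocker(Template(template).render(**feed))

# Usage sketch: render a Dockerfile like the one generated earlier.
template = ("FROM {{ base }}\nWORKDIR /workdir\nCOPY . /workdir\n"
            "RUN {{ provenv | join(' && ') }}\n")
feed = {"base": "opensuse:42.3",
        "provenv": ["zypper --non-interactive in telnet",
                    "zypper --non-interactive in vim"]}
print(RuntimeSourceCreatorDocker().create(template, feed).content)
```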
- a consumer
it accepts requests to create the couple (runtime environment, provisioning environment) and publishes it somewhere
it accepts requests to create the tuple (runtime environment, provisioning environment, reproducer); so it creates an automation env and makes it available to the tester.
A consumer gets a request for an environment as input and works to produce the required env as output. A consumer knows how to handle only a specific product, so sle15 and sle12sp3 need two different consumers.
- a source
it contains the info needed to create the required environment.
For example:
if the required env is a container, the source will be the url of a Dockerfile; if the required env is a vm, it will be the Vagrantfile. (Docker is a specific instance of a container runtime env and Vagrant is one for a vm runtime env; any instance of a runtime env could be taken into consideration.)
- a destination
a url where the environment gets built.
For example:
if the required env is a container, the url will point to a service that will build the image (from the Dockerfile) for us.
- a builder
it builds our environment; it is bound to the url known by the consumer. It gets its input (Dockerfile, Vagrantfile) from the consumer.
For example:
- docker builder
- vagrant builder
1. pip install virtualenvwrapper
2. mkdir $HOME/dev
3. sudo find / -name virtualenvwrapper.sh
4. Add three lines to your shell startup file (.bashrc, .profile, etc.) to set the location where the virtual environments should live, the location of your development project directories, and the location of the script installed with this package:
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/dev
source /usr/local/bin/virtualenvwrapper.sh # use the path obtained at point 3
5. cd $HOME/dev; git clone https://github.com/kinderp/teaster.git; cd $HOME/dev/teaster
6. mkvirtualenv teaster
Activate and deactivate your env using the workon and deactivate commands; see https://virtualenvwrapper.readthedocs.io/en/latest/install.html for details.
- Install docker and docker-compose
- cd $HOME/dev/teaster
- docker-compose up