
Distributed Testing with Selenium & Docker

Fast & Stable Test Execution Done Remotely

Report Issue

Table of Contents
  1. About
  2. Getting Started
  3. Running Tests
  4. Contact

About

This repo contains boilerplate code to get up-and-running with Distributed, Automated UI Testing using Selenium & Docker. It is intended to be used as a foundation for writing / adding your own tests that can then be run remotely via Selenium Grid.

Built With

  • Selenium - a toolset for web browser automation that uses the best techniques available to remotely control browser instances and emulate a user’s interaction with the browser
  • Selenium Grid - enables execution of Selenium WebDriver scripts on remote machines that can be configured with different browser types & versions
  • Docker Engine - a tool to containerize applications
  • Docker Compose - a tool to define & run multi-container applications
  • PyTest - a framework that makes building simple and scalable tests easy

Getting Started

Follow the steps below to get a local development instance up & running.

Prerequisites

  • Python3
  • pipenv
  • Docker
  • docker-compose

Installation

git clone
cd project_dir
pipenv shell
pipenv install

Test Data

Some tests in this suite require a Hacker News account. If you do not have one, create an account at: https://news.ycombinator.com/login

If you already have an account (or once you've created one), add the username and password to a .env file in the project root:

USERNAME=tester_account
PASSWORD=yourpassword
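The tests can pick these values up at runtime. A minimal sketch using only the standard library (the actual suite may use a package such as python-dotenv instead; the `load_env` helper name is ours):

```python
import os

def load_env(path=".env"):
    """Parse simple KEY=VALUE lines from a .env file into os.environ."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: values already in the environment win.
            os.environ.setdefault(key.strip(), value.strip())

load_env()
username = os.environ.get("USERNAME", "")
password = os.environ.get("PASSWORD", "")
```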

Test Configuration

Use config.json to define how your tests are run - namely, which browser to use and whether to run remotely (via Selenium Grid) or locally. Supported browser types are:

  • Chrome Local
  • Chrome Remote
  • Firefox Local
  • Firefox Remote
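The README does not show the shape of config.json, so the following is a plausible sketch; the key names (`browser`, `remote`) and the `describe` helper are assumptions, not the repo's actual schema:

```python
import json

# Hypothetical config.json contents; the real keys may differ.
sample = '{"browser": "chrome", "remote": true}'
config = json.loads(sample)

def describe(config):
    """Map a parsed config to one of the supported browser-type labels."""
    mode = "Remote" if config.get("remote") else "Local"
    return f"{config['browser'].capitalize()} {mode}"
```

With the sample above, `describe(config)` yields "Chrome Remote", matching one of the four supported types listed.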

Running Tests

Locally

  • run all tests with parallelization support (2 threads) and automatically rerun failed tests (up to two times)
python -m pytest -n 2 --reruns 2
  • after test run is completed, a result report can be viewed at: ./logs/pytest_html_report.html
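The `-n` flag comes from pytest-xdist and `--reruns` from pytest-rerunfailures; the HTML report path suggests pytest-html. If you want these defaults without retyping the flags, a pytest.ini at project root could set them (an assumption - the repo may not ship this file):

```ini
[pytest]
addopts = -n 2 --reruns 2 --html=logs/pytest_html_report.html --self-contained-html
```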

Remotely

To run tests remotely, you'll need an instance of Selenium Grid running. To do so, we'll spin up some Docker containers.

  • First, download the images:
$ docker pull selenium/hub
$ docker pull selenium/node-chrome
$ docker pull selenium/node-chrome-debug
$ docker pull selenium/node-firefox
$ docker pull selenium/node-firefox-debug
  • Second, be sure to have a docker-compose.yml file at project root. It should look like:
version: "3"
services:
  selenium-hub:
    image: selenium/hub
    ports:
      - "4444:4444"
    environment:
      GRID_MAX_SESSION: 16
      GRID_BROWSER_TIMEOUT: 300
      GRID_TIMEOUT: 300

  chrome:
    image: selenium/node-chrome
    depends_on:
      - selenium-hub
    environment:
      HUB_PORT_4444_TCP_ADDR: selenium-hub
      HUB_PORT_4444_TCP_PORT: 4444
      NODE_MAX_SESSION: 4
      NODE_MAX_INSTANCES: 4

  firefox:
    image: selenium/node-firefox
    depends_on:
      - selenium-hub
    environment:
      HUB_PORT_4444_TCP_ADDR: selenium-hub
      HUB_PORT_4444_TCP_PORT: 4444
      NODE_MAX_SESSION: 4
      NODE_MAX_INSTANCES: 4
  • Third, run the containers:
docker-compose up -d

Check that the hub started successfully by going to: http://localhost:4444/grid/console
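The same check can be scripted, which is handy in CI where there is no browser to eyeball the console with. A small stdlib-only sketch (the `grid_is_up` helper is ours, not part of the repo):

```python
import time
import urllib.request
from urllib.error import URLError

def grid_is_up(url="http://localhost:4444/grid/console",
               deadline=30.0, interval=1.0):
    """Poll the Grid console URL until it answers 200 or the deadline passes."""
    end = time.monotonic() + deadline
    while time.monotonic() < end:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (URLError, OSError):
            # Hub not accepting connections yet; wait and retry.
            time.sleep(interval)
    return False
```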

  • Run the tests (be sure either Chrome Remote or Firefox Remote is set as the browser in your config file)
python -m pytest
  • If there are any issues starting or running the tests, check the container logs for error messages:
docker logs grid_docker_demo_firefox_1
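Under the hood, remote execution means constructing a Remote WebDriver pointed at the hub. A hedged sketch in Selenium 3 style (which matches these Grid 3 images); the `remote_driver_kwargs` helper is ours, not the repo's actual code:

```python
def remote_driver_kwargs(browser, hub_url="http://localhost:4444/wd/hub"):
    """Build the arguments for selenium.webdriver.Remote from a browser name."""
    capabilities = {
        "chrome": {"browserName": "chrome"},
        "firefox": {"browserName": "firefox"},
    }
    if browser not in capabilities:
        raise ValueError(f"unsupported browser: {browser}")
    return {"command_executor": hub_url,
            "desired_capabilities": capabilities[browser]}

# Usage (requires the selenium package and a running grid):
# from selenium import webdriver
# driver = webdriver.Remote(**remote_driver_kwargs("chrome"))
```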

Scaling

More containers can be added to the grid to further scale up testing. For example, to spin up 3 Chrome containers (each supporting 4 browser instances, per NODE_MAX_INSTANCES in the docker-compose.yml file):

$ docker-compose up -d --scale chrome=3

If you have numerous tests designed to run in parallel and many containers running, test execution time should decrease. This example kicks off a run with 4 threads and reruns any failures up to two times:

python -m pytest -n 4 --reruns 2

Contact

Tim Corley | @tcor215 | [email protected]