Tuesday, 27 February 2018

Executing Parallel Tests using Behave BDD, Docker-compose and Bamboo CI


In this post, we will look at how to execute BDD tests in parallel using a combination of Behave, Selenium, Selenium Grid, Docker Compose, and Bamboo CI. The Selenium scripts are written in Python. Parallel execution has an edge over sequential execution because it saves time. Modern development practices such as continuous integration and delivery require frequent, rapid functional tests. Executing pre-scripted tests against a web or mobile app saves considerable time, and having test data accessible in detailed reports is valuable to development teams, who can use this information to identify issues quickly. Parallel testing in the cloud lets you run test suites continually as developers submit and integrate code changes throughout the day.

Outline
We will run our parallel tests on a Selenium Grid consisting of one container acting as the hub and four containers acting as Chrome nodes, all triggered by Bamboo CI. The containers are managed with Docker Compose.

A brief outline of the post is mentioned below:

Python file for parallel test execution: To run the tests in parallel, we create a Python file named 'behave_parallel.py'. This file finds all the feature files and their step definitions and runs them in parallel.
Add the following code:

behave_parallel.py
from multiprocessing import Pool
from subprocess import call, Popen, PIPE
from glob import glob
import logging
import argparse
import json
from functools import partial

logging.basicConfig(level=logging.INFO,
                    format="[%(levelname)-8s %(asctime)s] %(message)s")
logger = logging.getLogger(__name__)


def parse_arguments():
    """
    Parses commandline arguments
    :return: Parsed arguments
    """
    parser = argparse.ArgumentParser('Run Ikarus in parallel mode')
    parser.add_argument('--processes', '-p', type=int, default=5,
                        help='Maximum number of processes. Default = 5')
    parser.add_argument('--verbose', '-v', action='store_true', help='verbose output')
    parser.add_argument('--tags', '-t', help='specify behave tags to run')
    parser.add_argument('--define', '-D', action='append',
                        help='Define user-specific data for the config.userdata dictionary. '
                             'Example: -D foo=bar to store it in config.userdata["foo"].')
    
    args = parser.parse_args()
    return args


def _run_feature(feature, tags=None, userdata=None):
    """
    Runs features matching given tags and userdata
    :param feature: Feature that should be run
    :type feature: str
    :param tags: Tags features should contain
    :type tags: str
    :param userdata: Userdata for behave
    :type userdata: list
    :return: Feature and status
    """
    logger.debug("Processing feature: {}".format(feature))
    # Assemble the behave command, forwarding tags and userdata to each worker
    params = []
    if tags:
        params.append("--tags={}".format(tags))
    if userdata:
        params.append("-D {}".format(' -D '.join(userdata)))
    params.append("--no-capture")
    cmd = "behave {0} {1}".format(' '.join(params), feature)
    r = call(cmd, shell=True)
    status = 'ok' if r == 0 else 'failed'
    return feature, status


def main():
    """
    Runner
    """
    args = parse_arguments()
    pool = Pool(args.processes)
    if args.tags:
        p = Popen('behave -d -f json --no-summary -t {}'.format(args.tags),
                  stdout=PIPE, shell=True)
        out, err = p.communicate()
        j = json.loads(out.decode())
        # Each "location" looks like "<path>:<line>"; strip the line number
        features = [e['location'].rsplit(':', 1)[0] for e in j]
    else:
        # '**' only matches nested directories when recursive=True
        feature_files = glob('*.feature') + glob('features/**/*.feature', recursive=True)
        features = sorted(set(feature_files))
    run_feature = partial(_run_feature, tags=args.tags, userdata=args.define)
    logger.info("Found {} features".format(len(features)))
    logger.debug(features)
    for feature, status in pool.map(run_feature, features):
        print("{0:50}: {1}!!".format(feature, status))


if __name__ == '__main__':
    main()
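A quick note on feature discovery: when --tags is given, the script asks behave for a JSON dry run, and each entry's 'location' field has the form '<path>:<line>'. A small sketch of turning such entries into unique feature paths (the sample data below is hypothetical):

```python
# Hypothetical sample of entries from behave's "-d -f json" output.
sample = [
    {"location": "features/login.feature:3"},
    {"location": "features/search.feature:12"},
]

# Strip the trailing ":<line>" and de-duplicate, since several scenarios
# can live in the same feature file.
features = sorted({e["location"].rsplit(":", 1)[0] for e in sample})
print(features)  # → ['features/login.feature', 'features/search.feature']
```

Using rsplit on the last colon keeps the split robust even for multi-digit line numbers.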



Behave feature files and step definition files:
For Behave feature files and step definitions, visit this link:

Install nose using pip:


$ pip install nose

Install Bamboo and setup:
For Bamboo installation, visit this link:

Selenium Grid:

The grid allows you to:
  • Scale by distributing tests across several machines (parallel execution)
  • Manage multiple environments from a central point, making it easy to run tests against a vast combination of browsers / OS
  • Minimize grid maintenance time by allowing you to implement custom hooks, for instance to leverage virtual infrastructure

Installation of Selenium Grid:

To use Grid, we have to download the Selenium Standalone Server. The jar file can be downloaded from the link below:
http://selenium-release.storage.googleapis.com/3.9/selenium-server-standalone-3.9.1.jar

Updated jar files can also be found at:
http://selenium-release.storage.googleapis.com/index.html

Starting the Selenium grid

Step 1: Start the hub

The hub is the central point that receives all test requests and distributes them to the right nodes.

Open a command prompt and navigate to the directory where you copied the selenium-server-standalone file. Type the following command:

java -jar selenium-server-standalone-<version>.jar -role hub

Step 2: Start the nodes

To run a grid with new WebDriver functionality, you use the selenium-server-standalone jar file to start the nodes.

java -jar selenium-server-standalone-<version>.jar -role node  -hub http://localhost:4444/grid/register
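Before pointing tests at the grid, it can help to confirm the hub is up. The hub exposes a status endpoint at http://localhost:4444/wd/hub/status; the field names below are based on what Selenium 3's standalone server returns and should be treated as an assumption. A minimal sketch:

```python
import json

def hub_is_ready(status_payload):
    """Return True if the hub's status JSON reports the grid as ready.

    status_payload is the body of GET http://localhost:4444/wd/hub/status.
    Selenium 3 reports either {"status": 0, ...} (JSON Wire style) or
    {"value": {"ready": true, ...}} (W3C style) -- both are checked here.
    """
    data = json.loads(status_payload)
    value = data.get("value") or {}
    return data.get("status") == 0 or bool(value.get("ready"))

# Offline check with a hand-written payload:
print(hub_is_ready('{"status": 0, "value": {"ready": true}}'))  # True
```

Against a live hub, you could fetch the payload with urllib.request.urlopen and poll until this returns True before starting the test run.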


Using Grid to run test 

For WebDriver nodes, you will need to use the RemoteWebDriver and the DesiredCapabilities object to define which browser, version, and platform you wish to use. Create the target browser capabilities you want to run the tests against:

DesiredCapabilities capability = DesiredCapabilities.chrome();

Pass that into the RemoteWebDriver object:

WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capability);

The hub will then assign the test to a matching node. A node matches if all the requested capabilities are met. To request specific capabilities on the grid, specify them before passing them into the WebDriver object.

capability.setBrowserName("chrome");

capability.setPlatform(Platform.LINUX);

capability.setVersion("3.9");
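The snippets above use the Java bindings; since the tests in this post are written in Python, the equivalent setup builds a plain capabilities dictionary and hands it to webdriver.Remote. A sketch (the selenium call is shown as a comment so the snippet stays self-contained; the hub URL is the default from Step 1):

```python
HUB_URL = "http://localhost:4444/wd/hub"

# The Python bindings accept capabilities as a plain dictionary.
capabilities = {
    "browserName": "chrome",
    "platform": "LINUX",
    "version": "3.9",
}

# With the selenium package installed, a grid session is opened like this:
# from selenium import webdriver
# driver = webdriver.Remote(command_executor=HUB_URL,
#                           desired_capabilities=capabilities)
print(capabilities["browserName"])  # chrome
```

The hub matches these keys against the capabilities each node registered with, exactly as described above.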



Installation of Docker and Docker-compose


Install the latest version of Docker CE.

$ sudo yum install docker-ce


Install docker-compose.

To install Docker Compose, run this command in your terminal:

$ sudo curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose

Make the binary executable:

$ sudo chmod +x /usr/local/bin/docker-compose

Test the installation

$ docker-compose --version

You can now configure the tasks and jobs required by your build plan.
For our first task, add a Script task and use this script to bring up the grid with Docker Compose:

$ docker-compose up -d --scale chrome=4

This requires a docker-compose file that maps the Selenium hub and Chrome images. Create a file named 'docker-compose.yml' in the project directory and add the following:

docker-compose.yml

selenium-hub:
  image: selenium/hub
  environment:
    - GRID_TIMEOUT=120
    - GRID_MAX_SESSION=200
    - GRID_NEW_SESSION_WAIT_TIMEOUT=-1
    - GRID_BROWSER_TIMEOUT=120
  ports:
    - 4444:4444

chrome:
  image: selenium/node-chrome
  links:
    - selenium-hub:hub
  dns:
    - 8.8.8.8
    - 208.67.222.222
  environment:
    - NODE_MAX_INSTANCES=100
    - NODE_MAX_SESSION=100
  volumes:
    - /dev/shm:/dev/shm
  privileged: true

Execution task:
Now add another Script task to run our tests in parallel:

$ python behave_parallel.py

Finally, to stop all the containers, we use another Script task:

$ docker-compose down

Reporting

For reporting, we add a JUnit Parser task to the job and point it at the report file, testreports.xml.

Install junit-xml using pip or easy_install:

$ pip install junit-xml
or
$ easy_install junit-xml
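As a sketch of the report format the JUnit parser consumes, a minimal testreports.xml can also be produced with just the standard library (the suite and case names below are illustrative; behave can likewise emit JUnit XML directly with its --junit flag):

```python
import xml.etree.ElementTree as ET

# One suite with a single passing test case, in the JUnit XML shape.
suite = ET.Element("testsuite", name="Fail_with_blank_password",
                   tests="1", failures="0", errors="0", skipped="0")
ET.SubElement(suite, "testcase",
              classname="Fail_with_blank_password.feature",
              name="Login fails with a blank password", time="0.5")

# Write the file the Bamboo JUnit Parser task will pick up.
ET.ElementTree(suite).write("testreports.xml", encoding="utf-8",
                            xml_declaration=True)
print(ET.tostring(suite).decode())
```

A failing scenario would add a <failure> child element under its <testcase>; the parser counts those to mark the build.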

Output:

build  22-Feb-2018 16:23:10    1 feature passed, 0 failed, 0 skipped

build  22-Feb-2018 16:23:10    1 scenario passed, 0 failed, 0 skipped

build  22-Feb-2018 16:23:10    3 steps passed, 0 failed, 0 skipped, 0 undefined

build  22-Feb-2018 16:23:10    Took 0m36.733s

build  22-Feb-2018 16:23:10    Fail_with_blank_password.feature                  : ok!!

build  22-Feb-2018 16:23:10    Fail_with_incorrect_certificates.feature          : ok!!

build  22-Feb-2018 16:23:10    Fail_with_incorrect_password.feature              : ok!!

build  22-Feb-2018 16:23:10    Fail_with_invalid_loginid.feature                 : ok!!

simple 22-Feb-2018 16:23:10    Finished task 'test' with result: Success
