Slaying the Hydra: Orchestration Overview and Setting a Clean Slate

This is the second in a series of blog posts explaining a way to execute distributed parallel test automation. The first entry can be found here.

In this post I walk you through the process of orchestration and the first orchestrated stage. I will explain the concepts in a way that allows them to be applied to multiple use cases. Since I am a Rubyist at heart, with a fondness for Cucumber and Jenkins, the examples found here are geared toward those tools.

Orchestration Overview

Jenkins provides pipelines, a feature that orchestrates multiple jobs into a single flow. Pipelines were originally intended for automated continuous delivery of software to a production environment; we use one here to orchestrate our parallel testing effort.

The pipeline being developed exists to provide feedback to our stakeholders as rapidly as possible, given the resources provided. Additionally, we make the framework dynamic so it can handle configuration changes quickly and efficiently.

The pipeline implementation in Jenkins requires two parts:

  • The first is the pipeline code, referred to as a Jenkinsfile, which is often stored in the related source code repository. In the example below, the Jenkinsfile is stored in the testing repository.
  • The second part is the pipeline job within Jenkins, which references the source code that stores our Jenkinsfile. The image below is the configuration of a pipeline job in Jenkins. We provide the URL location, authentication parameters, and name of the Jenkinsfile.

Jenkins jobs allow for parameters configured at runtime, enabling dynamic execution depending on the selection. The image below is an example where we choose between IE and Chrome as the browser to be utilized for the UI tests.

When running a build of the job, we can choose between IE and Chrome. If we kick the job off automatically at a scheduled time, it defaults to the first option in the drop-down provided (see below).

After constructing the pipeline job in Jenkins, we can move on to the Jenkinsfile itself. To accomplish our objectives, we break the Jenkinsfile down into three sections, or stages.

The above image shows a Jenkinsfile, which we store with our source code; it is pulled from the repository and utilized as the script for our pipeline.

*Note: while I am providing an overview of a Jenkins pipeline, I cannot cover all the facets of this expansive tool in one blog post. However, the official Jenkins documentation has all the information you could ever want, outside of what I supply here.

From the image above we see the node parameter, which tells Jenkins where we want the pipeline job itself to run. This does not mean every job within the pipeline will run on machines with this tag associated with them; we will dive into that in the next installment.

The browser method returns the result of params.browser, which is received from the parameter within the pipeline job in Jenkins. This will equal either ‘ie’ or ‘chrome’.

The total_number_of_builds method returns ‘3’ which will come in handy later in our execution stage.
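Since the Jenkinsfile is shown only as an image, here is a minimal sketch of what the top of such a scripted pipeline might look like. The method names match those discussed above; the node label ('pipeline') is an assumption, not the actual label from the post:

```groovy
// Helper methods defined at the top of the Jenkinsfile.
// browser() returns the value chosen at build time via the
// job's "browser" choice parameter ('ie' or 'chrome').
def browser() {
    return params.browser
}

// total_number_of_builds() is hard-coded to '3' for now; it is
// used later, in the execution stage.
def total_number_of_builds() {
    return '3'
}

// The node label ('pipeline') is an assumption. It tells Jenkins
// which machines may run the pipeline job itself -- not every
// downstream job the pipeline builds.
node('pipeline') {
    // stages go here: clearing, running, consolidation
}
```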

Setting a Clean Slate

In our ‘clearing’ stage we want to build a job named ‘clear_workspace’ that will go out to all impacted machines and clear a file location to ensure we are guaranteed to start with a clean slate.
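A sketch of what that stage might look like inside the Jenkinsfile; the job name 'clear_workspace' comes from the post, while the stage label is an assumption:

```groovy
stage('clearing') {
    // Build the clear_workspace job, which visits each impacted
    // machine and empties the results directory so every run
    // starts from a clean slate.
    build job: 'clear_workspace'
}
```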

Executing Our Tests

In our ‘running’ stage we can run three jobs in parallel to provide a faster feedback loop to our end users. I chose the number of jobs randomly; it could just as easily be 20 or 100 and the pipeline would function correctly.

The image below displays a catchError wrapper that prevents a failure code from one of the built jobs from stopping the whole pipeline execution.

The parallel keyword allows us to execute the jobs at the same time rather than waiting for them to execute sequentially.

Lastly, within the three jobs we are building, the parameters browser and total_number_of_builds are supplied by the methods created at the top of the pipeline file. Additionally, we pass a build_number parameter, which is either 1, 2, or 3.
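Putting the pieces from this section together, the 'running' stage might be sketched as follows. The catchError wrapper, the parallel keyword, and the three parameters are as described above; the downstream job name ('run_tests') is an assumption:

```groovy
stage('running') {
    // catchError marks the build as failed if any job fails, but
    // lets the pipeline continue on to the consolidation stage.
    catchError {
        // Build a map of branch name -> closure, one per parallel job.
        def branches = [:]
        for (int i = 1; i <= 3; i++) {
            def build_number = i  // capture the loop variable per closure
            branches["tests_${build_number}"] = {
                build job: 'run_tests', parameters: [
                    string(name: 'browser', value: browser()),
                    string(name: 'total_number_of_builds', value: total_number_of_builds()),
                    string(name: 'build_number', value: "${build_number}")
                ]
            }
        }
        // Execute all three test jobs at the same time.
        parallel branches
    }
}
```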

Consolidating our Results

Our ‘consolidation’ stage will allow us to access the machines utilized for testing and pull meaningful artifacts from the job and report the results to our stakeholders.

There are two jobs in the consolidation stage: one goes out and pulls the information from each impacted machine, and the other consolidates that information into a concise report.
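That two-job sequence might be sketched like this; both job names ('pull_results' and 'build_report') are hypothetical placeholders:

```groovy
stage('consolidation') {
    // First job: pull result artifacts from each impacted machine.
    build job: 'pull_results'
    // Second job: merge the collected results into a single report
    // for our stakeholders.
    build job: 'build_report'
}
```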

There are complications to this stage, which will be discussed in the final installment of this blog post series.

Setting a Clean Slate In-Depth

As previously mentioned, the ‘clear_workspace’ job is intended to clean up after previous runs of the same job on all utilized workstations.

During execution, the test results are stored in a specific file location on each workstation. We do not want previous results carried into the current execution, so we must go out to each machine being utilized as a node and clear the specified file location.

In Jenkins, we can set a job to iterate over a set of workstations via the Node Parameter plugin. With the default nodes option, the job executes on each specified node sequentially.

Additionally, we can check the ‘Execute concurrent builds if necessary’ parameter to allow the executions to happen in parallel.

For the actual commands (Windows commands, sorry Mac folks), we need to delete a certain directory and recreate it, to ensure it is empty.
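Those commands might look like the following Windows batch sketch; the directory path is a placeholder, not the actual location from the post:

```batch
rem Remove the results directory and everything in it, then
rem recreate it empty so no previous results carry over.
rmdir /s /q C:\test_results
mkdir C:\test_results
```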

In the image above, the file location that we are clearing (first stage) will be the same file location where the results are stored for consolidation (last stage) of our pipeline. Remember, it is important for those locations to be the same.

In the next installment, we discuss executing the tests in parallel and how we ensure tests are distributed within the parallel executions.
