- Slaying the Hydra: Parallel Execution of Test Automation
- Slaying the Hydra: Orchestration Overview and Setting a Clean Slate
- Slaying the Hydra: Run-Time State and Splitting Up the Execution
- Slaying the Hydra: Consolidation of Information and Reporting
- Slaying the Hydra: Modifications and Next Steps
In this fourth blog post of our series, I explain a way to execute distributed parallel test automation. The previous blog entry can be found here.
Referenced below is an image of our pipeline in Jenkins. For this blog post, we will be focusing on the ‘consolidation’ stage within the pipeline.
This stage calls two freestyle jobs: machine_consolidation and report_consolidation.
The report_consolidation job takes three parameters, all derived from the numeric build number of the latest machine_consolidation build. In the Groovy pipeline code we have associated the latestbuilt variable with the machine_consolidation job instance.
This allows us to call latest_build = latestbuilt.getNumber(), where the .getNumber() method returns the numerical index of the most recently completed machine_consolidation build. This value is then passed to the report_consolidation job as latest_build, latest_build + 1, and latest_build + 2. I will explain why we do this later in the post.
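A hedged Groovy sketch of this hand-off at the pipeline level (the parameter names BUILD_ONE/BUILD_TWO/BUILD_THREE and the job-lookup call are assumptions for illustration; the original pipeline code may obtain the job instance differently):

```groovy
// Scripted-pipeline sketch. The machine_consolidation job has a node
// parameter covering three nodes, so one trigger produces three builds.
build job: 'machine_consolidation'

// latestbuilt holds the most recent machine_consolidation build;
// getNumber() returns the first build of the three-build iteration set.
def latestbuilt = Jenkins.instance.getItem('machine_consolidation').getLastBuild()
def latest_build = latestbuilt.getNumber()

// Hand the three build numbers to report_consolidation so it can
// copy artifacts from each of the three node builds.
build job: 'report_consolidation', parameters: [
    string(name: 'BUILD_ONE',   value: "${latest_build}"),
    string(name: 'BUILD_TWO',   value: "${latest_build + 1}"),
    string(name: 'BUILD_THREE', value: "${latest_build + 2}")
]
```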
Machine Consolidation Job
The intent of the machine_consolidation job is to query the nodes that we have utilized in our parallel testing, retrieve the cucumber json file, and then store it as an artifact.
The first thing we do within this job is set up a Node parameter for the project. The Default nodes option of this parameter must have every node selected that was utilized in executing our tests. This allows the job to iterate over all of the utilized nodes and complete its steps; each iteration is a new build of the machine_consolidation job.
We restrict the Jenkins job to execute in the same workspace location already utilized in the ‘running’ stage to store the testing output results on every machine.
Then we archive the files as artifacts of this job.
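The freestyle steps above can be sketched in equivalent pipeline syntax (the workspace path, parameter name, and JSON glob are assumptions for illustration):

```groovy
// Equivalent pipeline sketch of the machine_consolidation job body.
// params.node is the value handed in by the Node parameter, so each
// of the three builds lands on a different test machine.
node(params.node) {
    // Reuse the custom workspace the 'running' stage wrote its
    // output into (the path here is an assumed example).
    ws('/var/jenkins/test-workspace') {
        // Keep the cucumber JSON output as build artifacts.
        archiveArtifacts artifacts: '**/cucumber*.json'
    }
}
```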
Report Consolidation Job
Now we have three builds of the machine_consolidation job completed. Each of these builds has artifacts representing the output files from one of the three nodes utilized for testing.
In the report_consolidation job, the three parameters shown below are passed in from the pipeline level. Their values are equal to the build_number value of each of the machine_consolidation builds that ran in sequence due to the node parameter iteration.
In this case, latestbuilt.getNumber() as utilized in the pipeline returns the number of the first build in the iteration set created by the node parameter of the machine_consolidation job. In the pipeline we increase this number by one and by two to get the second and third build numbers.
Additionally, we want to clear the workspace location before every run. Because this job’s sole purpose is to consolidate information run after run, and Jenkins will not clear the workspace prior to execution unless explicitly told to, multiple test result sets would otherwise accumulate in the workspace and never get cleared out. That leads to confusion in the report generated further down.
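In pipeline terms, this corresponds to wiping the workspace at the start of the job (the cleanWs step comes from the Workspace Cleanup plugin, which is an assumption about how the cleanup is configured):

```groovy
// Wipe the workspace before copying in fresh artifacts, so result
// sets from earlier consolidation runs cannot pollute the report.
cleanWs()
```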
Then we utilize our parameters along with the ‘Copy artifacts from another project’ plugin to copy the artifacts from the three machine_consolidation builds into the build of this job.
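A hedged sketch using the Copy Artifact plugin’s pipeline step (the parameter names are assumptions; specific() selects a build by its number):

```groovy
// Copy the cucumber JSON artifacts from each of the three
// machine_consolidation builds identified by the job parameters.
['BUILD_ONE', 'BUILD_TWO', 'BUILD_THREE'].each { name ->
    copyArtifacts projectName: 'machine_consolidation',
                  selector: specific(params[name]),
                  filter: '**/cucumber*.json'
}
```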
Then we archive all the artifacts, which now include every json file from every test execution across all the nodes previously utilized.
Lastly we utilize the ‘Cucumber Reports’ plugin which will parse through all of the json files and compile a single report of test success and failure.
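With the plugin installed, the equivalent pipeline step looks roughly like this (the include pattern is an assumption about where the copied files land):

```groovy
// Parse every collected cucumber JSON file and compile them into a
// single combined report of test success and failure.
cucumber fileIncludePattern: '**/cucumber*.json'
```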
That sums up the overview of the ‘consolidation’ stage of this pipeline. Now you have all of the information necessary to build this pipeline.
In the next and final post of the blog series, we discuss how to make modifications to the suite and the improvements we can make to the existing ideology.