- Slaying the Leviathan: Containerized Execution of Test Automation, Part 2
In part 1 of this series on automated testing with Docker, we covered the basics of the automation framework we are utilizing as well as an overview of Docker. In part 2, we dive into actually utilizing the framework.
In our framework, we have a Dockerfile in the root directory. This Dockerfile houses all of the steps required to build a Docker Image that sets up and runs a Ruby/Watir test automation framework as a Docker Container.
In Docker, the RUN commands are executed while the image is built. The build steps of the Image include:
- Ruby 2.6.6 Installation
- Chrome Installation - This will install whatever is considered the most recent stable Chrome version.
- ChromeDriver Download and Unzip - We are downloading the ChromeDriver for Chrome 84, as that is the stable Chrome version currently being pulled down. This may need to be changed depending on when you are executing this code.
- Git Setup
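The build steps above could be sketched in a Dockerfile along these lines. This is a hypothetical reconstruction, not the framework's actual file; the base image, ChromeDriver version number, and installation approach are assumptions based on the steps listed.

```dockerfile
# Hypothetical sketch of the build steps; the real Dockerfile may differ.
FROM ubuntu:20.04

# Git setup plus general build dependencies
RUN apt-get update && apt-get install -y git curl wget unzip gnupg

# Ruby 2.6.6 installation via rbenv (assumed approach)
RUN git clone https://github.com/rbenv/rbenv.git ~/.rbenv && \
    git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build

# Chrome installation - pulls whatever is currently the stable version
RUN wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add - && \
    echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" \
      > /etc/apt/sources.list.d/google-chrome.list && \
    apt-get update && apt-get install -y google-chrome-stable

# ChromeDriver download and unzip - pinned to a Chrome 84 driver build
RUN wget -q https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip && \
    unzip chromedriver_linux64.zip -d /usr/local/bin/
```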
The build steps for Image setup are similar to what we did for our workspace setup in part 1 of this series. That is intentional since we need the same things within the context of the image.
The final line in the Dockerfile houses the CMD instruction. CMD commands do not run during the build of the image; they are executed when a container is started on the top writable layer of the Docker Image.
This CMD step completes the following functionality:
- Clones the framework from Git
- Sets up the Ruby version in rbenv
- Installs the necessary Gems via Bundler
- Kicks off the dynamic_tags.rb file, which splits the build based on the variables passed
- Sets the location of the Chrome Browser and ChromeDriver
- Specifies which tests to run within the framework
- Kicks off the Rake Task, which starts the Cucumber functionality
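A CMD line performing those steps might look something like the sketch below. The repository URL, environment variable names, and rake task name are assumptions for illustration, not the framework's actual values.

```dockerfile
# Hypothetical CMD line; the real repository URL, paths, and task names will differ.
CMD git clone https://github.com/example/docker_web_repo.git && \
    cd docker_web_repo && \
    rbenv local 2.6.6 && \
    bundle install && \
    ruby dynamic_tags.rb && \
    CHROME_BIN=/usr/bin/google-chrome CHROMEDRIVER=/usr/local/bin/chromedriver \
      bundle exec rake run_tests
```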
On your local machine, build the Docker image via `docker image build -t cucumber-example ./`. This should be run from the root directory of our framework.
We should see the following when the process is complete (this process will take longer the first time):
Docker Single Threaded Execution
Now we have an image named cucumber-example. This can be seen by running the `docker images` command.
We can now run a Container based on the Image we have generated, utilizing this command:
`docker container run -e total_number_of_builds=2 -e build_number=1 --name cucumber-run-4 cucumber-example`
Then we see the Container run, which completes all the CMD commands listed in the Dockerfile in the image’s context.
One note: we are setting two environment variables at the runtime of the container, total_number_of_builds and build_number.
These environment variables allow our dynamic_tags.rb script within the container to signify a subsection of the tests to run.
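As a sketch of how such a split might work, the script could divide the test tags evenly across builds using those two variables. This is purely an illustration of the idea, not the framework's actual dynamic_tags.rb; the function and tag names are invented for the example.

```ruby
# Illustrative sketch of splitting a regression across containers;
# the real dynamic_tags.rb may work differently.
def tags_for_build(all_tags, build_number, total_number_of_builds)
  # Assign every Nth tag to this build (build_number is 1-based),
  # so the tags are divided evenly across all containers.
  all_tags.each_with_index
          .select { |_tag, i| i % total_number_of_builds == build_number - 1 }
          .map { |tag, _i| tag }
end

# Example: container 1 of 2 runs half of the tagged tests.
tags = %w[@login @search @checkout @profile]
puts tags_for_build(tags, 1, 2).inspect  # => ["@login", "@checkout"]
```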
Docker Compose allows us to specify, in a YAML format, how we want to run multiple containers from multiple images simultaneously.
We have a docker-compose.yaml file in the root directory of this framework.
We utilize the Compose file to set up multiple Container instances based on the cucumber-example Image we have generated. The services section in the docker-compose.yaml file lists a numerical alias for each instance of the image we will run.
For each of these services, we utilize YAML inheritance to pass the image name and the total number of builds, since those values are the same for all of them. Each service has a unique value for build_number, as the dynamic_tags.rb script will split the regression up between all of these Containers based on that number.
We are running 12 containers in the Compose file, so a twelfth of the regression will run on each container. This can be adjusted by simply removing service instances and decreasing the total_number_of_builds value accordingly.
Another parameter we pass into all containers is restart: "no"; this stops the containers from restarting once they complete the tests assigned to them. Without it, all of the containers would run in an endless restart loop. Restarting is useful if the containers house a service such as a web app, but not for a finite process like running a test set.
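A docker-compose.yaml following this pattern might look like the sketch below, showing 2 of the 12 services. The anchor names and exact layout are assumptions for illustration; the shared settings are defined once and merged into each service via YAML inheritance.

```yaml
# Hypothetical sketch of the Compose file (2 of the 12 services shown).
version: "3.4"

x-common: &common
  image: cucumber-example   # same image for every service
  restart: "no"             # don't restart once the assigned tests finish

x-env: &env
  total_number_of_builds: 12

services:
  one:
    <<: *common             # YAML inheritance of the shared settings
    environment:
      <<: *env
      build_number: 1
  two:
    <<: *common
    environment:
      <<: *env
      build_number: 2
```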
Docker Compose Runtime
Now we get to accomplish the fun process of running a set of Containers utilizing Docker Compose.
The first thing we do is remove any existing containers related to this Compose file (e.g. via `docker-compose down`). These exist on my local machine because I have executed this before; they won't exist on yours during your first run. We want to ensure these are removed so that we run in fresh Containers, rather than Docker restarting the existing Containers for the Compose file.
One important thing to note is the naming convention of the Containers generated as a result of Compose executing. Each name is a combination of:
- The directory that the Compose file is housed within*
- The Service Alias in the Compose file
- The index of that instance of the service running

The container generated for Service Alias one would be named sample_cucumber_one_1.

*If you didn't change the root directory name during part 1, now would be the time to change it to sample_cucumber.
Next, we can run `docker-compose up` in our framework's root directory, and all of the necessary containers will be created.
Note that you will see the output from all of the running Compose containers mixed together in the command-line output. You can prevent this by running in detached mode (`docker-compose up -d`).
Once Docker Compose has executed and all of the containers are done executing, you will see:
The last thing to discuss is how to retrieve the results from the containers that have run. Docker has a copy command with which we can take the contents of a directory housed in the Container and store a copy externally, or vice versa.
`docker container cp sample_cucumber_one_1:docker_web_repo/output ./docker_output/1`
- sample_cucumber_one_1 is the container name
- docker_web_repo/output is the path to the directory within the container
- ./docker_output/1 is where to store the copied files externally

This gives us the test results of an individual container, which we can review external to the container in which they were created.
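To gather the results from every container, the copy command can be wrapped in a small shell loop. The container names and paths below follow the sample_cucumber example and are assumptions about your setup; the docker command can be swapped out (e.g. for `echo`) to dry-run the loop without Docker.

```shell
#!/bin/sh
# Copy each Compose container's output directory to a numbered host directory.
# Names and paths follow the sample_cucumber example and may differ for you.
collect_results() {
  docker_cmd="${1:-docker}"   # pass 'echo' to dry-run without Docker installed
  i=1
  for alias in one two three four five six seven eight nine ten eleven twelve; do
    mkdir -p "./docker_output/$i"
    "$docker_cmd" container cp \
      "sample_cucumber_${alias}_1:docker_web_repo/output" "./docker_output/$i"
    i=$((i + 1))
  done
}
```

Running `collect_results` after the Compose run finishes leaves one results directory per container under `./docker_output`.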
Conclusion and Next Steps
In part 2, we have covered Docker Images, Docker Containers, and utilizing Docker Compose. Part 3 of this series will deal with implementing this framework in a CI/CD tool.