Welcome to Red Green Refactor

We officially welcome you to the start of Red Green Refactor, a technology blog about automation and DevOps. We are a group of passionate technologists who care about learning and sharing our knowledge. Information Technology is a huge field, and even though we're a small part of it, we wanted another outlet to collaborate with the community.

Why Red Green Refactor?

Red Green Refactor is a term commonly used in Test Driven Development to support a test-first approach to software design. Kent Beck is generally credited with developing or "rediscovering" the practice of Test Driven Development. The mantra for the practice is red-green-refactor, where the colors refer to the status of the test driving the development code.

The Red step is writing a small piece of test code before the development code is implemented. The test should fail upon execution: a red failure. The Green step is writing just enough development code to get the test to pass. The test should pass upon execution: a green pass. The Refactor step is making small improvements to the development code without affecting its behavior. The quality of the code is improved according to team standards, by addressing "code smells" (making the code readable and maintainable, removing duplication), or by using simple design patterns. The point of the practice is to make the code more robust by catching mistakes early, with an eye on code quality from the beginning. Writing in small batches helps the practitioner think consistently about the design of their program.
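As a concrete sketch, one pass through the cycle might look like this in Ruby with Minitest (the Calculator class here is purely illustrative):

```ruby
require "minitest/autorun"

# Red: write the test first. With no Calculator class implemented yet,
# this test fails upon execution -- a red failure.
class CalculatorTest < Minitest::Test
  def test_adds_two_numbers
    assert_equal 5, Calculator.new.add(2, 3)
  end
end

# Green: write just enough development code to make the test pass.
class Calculator
  def add(a, b)
    a + b
  end
end

# Refactor: improve the code without changing behavior (rename, remove
# duplication, apply a simple pattern) while the passing test guards
# against regressions.
```

Each iteration of the loop is small, so a mistake surfaces within minutes of being introduced.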

“Refactoring is a controlled technique for improving the design of an existing codebase.”

Martin Fowler

The goal of Red Green Refactor is similar to the practice of refactoring: to make small-yet-cumulative positive changes, applied here to learning, helping to educate the community about automation and DevOps. The act of publishing also encourages our team to refine our materials in preparation for a larger audience. Many of the writers on Red Green Refactor speak at conferences, professional groups, and the occasional webinar. The learning at Red Green Refactor will be bi-directional: to the readers and to the writers.

Who Are We?

The writers on Red Green Refactor come from varied backgrounds but all of us made our way into information technology, some purposefully and some accidentally. Our primary focus was on test automation, which has evolved into DevOps practices as we expanded our scope into operations. Occasionally we will invite external contributors to post on a subject of interest. We have a few invited writers lined up and ready to contribute.

"Automation Team" outing with some of the Red Green Refactor authors

As for myself, I have a background in Physics & Biophysics, with over a decade spent in research science studying fluorescence spectroscopy and microscopy before joining IT. I worked as a requirements analyst, developer, and tester before joining the ranks of pointy-headed management. That doesn't stop me from exploring new tech at home, though, or posting about it on a blog.

What Can You Expect From Red Green Refactor?


Some companies are on the .NET stack, some are Java shops, but everyone needs some form of automation. The result is many varied implementations of both test & task automation. Our team has supported almost all the application types under the sun (desktop, web, mobile, database, API/services, mainframe, etc.). We've also explored many tools, both open-source and commercial, at companies with ancient tech and at companies on the bleeding edge. Our posts will be driven by both prior experience and exploration of the unknown.

We'll be exploring programming languages and tools in the automation space. Readers can expect to learn about frameworks, cloud solutions, CI/CD, design patterns, code reviews, refactoring, metrics, implementation strategies, performance testing, etc. – it's open ended.

Continuous Improvement

We aim to keep our readers informed about continuous improvement activities in the community. One of the great things about this field is there is so much to learn and it's ever-changing. It can be difficult at times with the firehose of information coming at you, since there are only so many hours in the day. We tend to divide responsibility among our group to perform "deep dives" into certain topics and then share that knowledge with a wider audience (for example: Docker, Analytics, or Robotic Process Automation). In the same spirit, we plan to share information on Red Green Refactor about continuous improvement. Posts about continuous improvement will include: training sessions, conference recaps, professional groups, aggregated articles, podcasts, tech book summaries, career development, and even the occasional job posting.

Once again welcome to Red Green Refactor. Your feedback is always welcome.

Book Club: The DevOps Handbook (Chapter 1. Agile, Continuous Delivery, and the Three Ways)

This entry is part 2 of 2 in the series DevOps Handbook

The following is a chapter summary for “The DevOps Handbook” by Gene Kim, Jez Humble, John Willis, and Patrick DeBois for an online book club.

The book club is a weekly lunchtime meeting of technology professionals. As a group, the book club selects, reads, and discusses books related to our profession. Participants are uplifted via group discussion of foundational principles & novel innovations. Attendees do not need to read the book to participate.

Background on The DevOps Handbook

More than ever, the effective management of technology is critical for business competitiveness. For decades, technology leaders have struggled to balance agility, reliability, and security. The consequences of failure have never been greater―whether it’s the healthcare.gov debacle, cardholder data breaches, or missing the boat with Big Data in the cloud.

And yet, high performers using DevOps principles, such as Google, Amazon, Facebook, Etsy, and Netflix, are routinely and reliably deploying code into production hundreds, or even thousands, of times per day.

Following in the footsteps of The Phoenix Project, The DevOps Handbook shows leaders how to replicate these incredible outcomes, by showing how to integrate Product Management, Development, QA, IT Operations, and Information Security to elevate your company and win in the marketplace.

The DevOps Handbook

The Manufacturing Value Stream

In manufacturing operations, the value stream is often easy to see and observe: it starts when a customer order is received and the raw materials are released onto the plant floor.

Value Stream: "the sequence of activities an organization undertakes to deliver upon a customer request" or "the sequence of activities required to design, produce, and deliver a good or service to a customer, including the dual flows of information and material."

Value Stream Mapping by Karen Martin & Mike Osterling

To enable fast and predictable lead times in any value stream, create a smooth and even flow of work, using techniques such as:

  • Small batch sizes
  • Reducing work in process (WIP)
  • Preventing rework to ensure defects are not passed to downstream work centers
  • Constantly optimizing the system toward global goals

The Technology Value Stream

The same principles and patterns that enable the fast flow of work in physical processes are equally applicable to technology work.

In DevOps, the technology value stream is defined as the process required to convert a business hypothesis into a technology-enabled service that delivers value to the customer. Value is created only when services are running in production.

Focus on Deployment Lead Time

Deployment Lead Time begins when a developer checks a change into version control. Deployment Lead Time ends when that change is successfully running in production, providing value to the customer and generating useful feedback and telemetry.

Instead of work going sequentially through the design/development value stream and then through the test/operations value stream, testing and operation happens simultaneously with design/development.

Defining Lead Time vs. Processing Time:

  • The lead time clock starts when the request is made and ends when it is fulfilled.
  • The process time clock starts only when work begins on the customer request—specifically, it omits the time that the work is in queue, waiting to be processed.
Adapted from The DevOps Handbook
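The distinction becomes concrete with a small worked example (the timestamps below are invented for illustration):

```ruby
require "time"

# Hypothetical timeline for a single customer request.
request_made  = Time.parse("2020-10-01 09:00")
work_started  = Time.parse("2020-10-03 13:00")  # sat in queue first
work_finished = Time.parse("2020-10-04 13:00")

# Lead time clock starts when the request is made...
lead_time    = work_finished - request_made
# ...while process time omits the time spent waiting in queue.
process_time = work_finished - work_started

puts "Lead time:    #{lead_time / 3600} hours"     # 76.0 hours
puts "Process time: #{process_time / 3600} hours"  # 24.0 hours
```

The gap between the two numbers is queue time, which is exactly what long deployment lead times are made of.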

The Common Scenario: Deployment Lead Times Requiring Months

When we have long deployment lead times, heroics are required at almost every stage of the value stream. We may discover that nothing works at the end of the project when we merge all the development team’s changes together.

Our DevOps Ideal: Deployment Lead Times of Minutes

Developers receive fast, constant feedback on their work, which enables them to quickly and independently implement, integrate, and validate their code, and have the code deployed into the production environment.

This is achieved by checking small code changes into the version control repository, performing automated and exploratory testing against them, and deploying them into production. It also requires an architecture that is modular, well encapsulated, and loosely coupled.

Teams are capable of working with high degrees of autonomy, with failures being small and contained, and without causing global disruptions. Deployment lead time is measured in minutes or, in the worst case, hours.

Below is the Value Stream Map:

Adapted from The DevOps Handbook

Observing “%C/A” As A Measure of Rework

The third key metric in the technology value stream is percent complete and accurate (%C/A). This metric reflects the quality of the output of each step in our value stream.

“The %C/A can be obtained by asking downstream customers what percentage of the time they receive work that is ‘usable as is,’ meaning that they can do their work without having to correct the information that was provided, add missing information that should have been supplied, or clarify information that should have and could have been clearer.”

Value Stream Mapping
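As a sketch of the arithmetic behind the metric (the counts below are invented), %C/A is simply the usable-as-is fraction of the work a downstream customer received:

```ruby
# %C/A: of the work items a downstream work center received, what
# percentage were usable as-is -- no corrections, no missing
# information, no clarifications needed?
def percent_complete_and_accurate(received:, usable_as_is:)
  (usable_as_is.to_f / received * 100).round(1)
end

# Hypothetical numbers: Dev hands 40 items to QA; 28 were usable as-is.
puts percent_complete_and_accurate(received: 40, usable_as_is: 28)  # => 70.0
```

A low %C/A at any step signals rework flowing backward through the value stream.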

The Three Ways: The Principles Underpinning DevOps

The First Way enables fast left-to-right flow of work from Development to Operations to the customer. In order to maximize flow, we need to make work visible, reduce our batch sizes and intervals of work, build in quality by preventing defects from being passed to downstream work centers, and constantly optimize for the global goals.

Adapted from The DevOps Handbook

The First Way

Goal: speed up flow through the technology value stream, reduce the lead time required to fulfill requests, and increase throughput. Practices include:


  • Continuous build, integration, test, and deployment processes
  • Creating environments on demand
  • Limiting work in process (WIP)
  • Building systems and organizations that are safe to change

The Second Way

Goal: fast and constant flow of feedback from right to left at all stages of our value stream. Practices include:

  • Amplifying feedback to prevent problems
  • Enabling faster detection and recovery

The Third Way

Goal: creation of a generative, high-trust culture that supports a dynamic, disciplined, and scientific approach to experimentation and risk-taking, facilitating organizational learning from both successes and failures. Practices include:

  • Designing the system to multiply the effects of new knowledge, turning local discoveries into global improvements


Chapter One covered the concepts of value streams, lead time as one of the key measures of the effectiveness for technology, and the high-level concepts behind each of the Three Ways. The following chapter summaries will cover each of the Three Ways in greater detail.

From the Pipeline v25.0

This entry is part 25 of 25 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

Making GitHub CI Workflow 3x Faster

GitHub has started a "building GitHub" blog series to provide insight into their engineering team practices. In the first post, they share how they decreased the time from commit to production deployment. The GitHub codebase is a monolith with thousands of tests executed across 25 CI jobs for every commit. To reduce the time from commit to deployment, they first categorized the types of CI jobs, then fixed the flaky tests. They then modified their deployment with a "deferred compliance" tool that pushes changes through but, when an issue is flagged by the CI jobs, gives the team 72 hours to fix it before the change is rolled back. The teams are notified of these compliance issues via Slack. Overall an interesting read, and I'm looking forward to the next three posts in the series.

A Sustainable Pattern with Shared Library

Thomas Bjerre describes how he uses Shared Libraries in Jenkins. Shared Libraries are used for Pipelines: they can be defined in external source control repositories and loaded into existing Pipelines. This helps reduce duplicated code, provides a form of documentation, and establishes a standard way to reuse patterns. Thomas constructs a build plan to decide what will be done in the build, which streamlines the rest of the code. A public API is used to standardize what consumers of the library will invoke.

How to Use Page Object Model in Selenium

This post by Perfecto is an overview of the Page Object Model. "Page Object Model (POM) in Selenium is a design pattern that creates a repository of objects, such as buttons, input fields, and other elements. The main goal of using POM in Selenium is to reduce code duplication and improve the maintenance of tests in the future." To keep test code maintainable, ensure that page objects never perform verifications themselves; the one exception is verifying that the page loaded correctly. Lastly, only add elements that are actually used, to prevent clutter.
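Since our own framework is Ruby/Watir rather than Selenium-in-Java, a minimal page object following that advice might look like the sketch below (the URL and element locators are hypothetical):

```ruby
# A page object exposes elements and actions but performs no test
# verifications itself -- except confirming the page loaded.
class LoginPage
  URL = "https://example.com/login"  # hypothetical URL

  def initialize(browser)
    @browser = browser
  end

  def open
    @browser.goto(URL)
    raise "LoginPage did not load" unless loaded?
    self
  end

  # The one check a page object should own: did the page load correctly?
  def loaded?
    @browser.title.include?("Login")
  end

  # Only the elements the tests actually use, to prevent clutter.
  def log_in(username, password)
    @browser.text_field(id: "user").set(username)
    @browser.text_field(id: "pass").set(password)
    @browser.button(id: "submit").click
  end
end
```

Assertions then live in the test, not the page object: the test opens the page, calls `log_in`, and verifies the outcome itself.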

Antipatterns and Patterns

This is a fascinating article that not only explains the difference between an antipattern (a common approach that is ineffective or counterproductive) and a pattern (an approach that is effective and improves desired outcomes), but also provides examples of such pairs within an organization. The collection of all the patterns and antipatterns is included in an associated book, "Sooner Safer Happier".

Java for QA Engineers: How to Learn

John Selawasky lists the path forward for converting manual testers to automation testers in a Java domain. His recommendations include: (1) learn Java Core and solve many small coding tasks; (2) use a good IDE (I recommend IntelliJ IDEA); (3) Learn unit testing; (4) verify your code without System.out.println but with your own unit tests; (5) read about code refactoring; (6) learn SQL at the beginner level; (7) learn a little bit about Gradle, Maven, and Spring; (8) read, check, and improve the code of other people; (9) work with Mockito (or other mock testing frameworks); and, (10) now learn your testing tools.

From the Pipeline v24.0

This entry is part 24 of 25 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

Introducing Boa Constrictor: The .NET Screenplay Pattern

Andy Knight, the "Automation Panda", has released a new open source tool for implementing the Screenplay Pattern for test automation. The Screenplay Pattern has slowly chipped away at the Page Object Model's high usage in the web automation space. The reason for the shift is that the pattern follows a good design principle in coding: separation of concerns. In the pattern, actors use abilities to perform interactions. Andy has provided a brief tutorial in his article along with a link to the open source code.

Let’s Focus More on Quality and Less on Testing

Joel Montvelisky is one of the luminaries in the testing field. In this article posted to StickyMinds (which also has an accompanying conference presentation), Joel explains how the role of the tester has shifted and offers his recommendations for providing the most value to a team & organization. "In order to understand a tester's value, we need to look at the role and understand the impact of the changing development process on this role."

Comparing Java and Ruby

Deepak Vohra provides a good overview of the differences between Java and Ruby for someone looking to learn their first programming language. I would recommend this for anyone trying to understand the difference between interpreted versus compiled languages, static typed versus dynamically typed, as well as OOP principles. The article is brief but is a good jumping off point.

An Unlikely Union – DevOps and Audit

IT Revolution has made one of their white papers on DevOps available free to the public. This is absolutely worth a read for those of you working in organizations that must go through security, compliance, and audit to make changes. “Many organizations are adopting DevOps patterns and practices and are enjoying the benefits that come from that adoption: More speed. Higher quality. Better value. However, many organizations often get stymied when dealing with information security, compliance, and audit requirements. There seems to be a misconception that DevOps practices won’t work in organizations which are under SOX or PCI regulations. In this paper, we will provide some high-level guidance on three major concerns about DevOps Practices: (1) DevOps and Change Control, (2) DevOps and Security, (3) DevOps and Separation of Duties”

Kobiton Odyssey Recordings

This past summer Kobiton hosted an online conference called Odyssey. They invited industry leaders in the quality space to provide experience reports, and they have made these conference talks freely available to everyone. I recommend listening to the sessions by Joel Montvelisky, Paul Grizzaffi, and Melissa Tondi.

Slaying the Leviathan: Containerized Execution of Test Automation-part 2

This entry is part 2 of 2 in the series Slaying the Leviathan


In this series on automated testing with Docker we covered the basics of the automation framework we are utilizing as well as an overview of Docker in part 1. For part 2, we dive into the actual utilization of the framework.

Docker Applied

In our framework we have a Dockerfile in the root directory.

This Dockerfile houses all necessary steps required for building a Docker Image to setup and run a Ruby/Watir test automation framework as a Docker Container.

In Docker, the RUN commands are executed to build the image. The build steps of the Image include:

  • Ruby 2.6.6 installation
  • Chrome installation: this will install whatever is considered the most recent stable Chrome version.
  • ChromeDriver download and unzip: we are downloading the ChromeDriver for Chrome 84, as that is the stable Chrome version currently being pulled down. This may need to be changed depending on when you execute this code.
  • Git setup

The build steps for Image setup are similar to what we did for our workspace setup in part 1 of this series. That is intentional since we need the same things within the context of the image.

The final line in the Dockerfile houses the CMD instruction. CMD commands do not run during the build of the image; they are executed when a container is started, on the top writable layer that the container adds to the Docker Image.

This CMD step completes the following functionality:

  • Clones the framework from Git
  • Sets up the Ruby version in rbenv
  • Installs the necessary gems via Bundler
  • Kicks off the dynamic_tags.rb file, which will split the build based on the variables passed
  • Sets the location of the Chrome browser and ChromeDriver
  • Specifies which tests to run within the framework
  • Kicks off the Rake task, which will start the Cucumber functionality
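A hedged sketch of what such a Dockerfile might look like follows; the base image, package list, repository URL, and exact commands are placeholders standing in for our real file, which differs in detail:

```dockerfile
FROM ubuntu:18.04

# RUN steps execute at image build time: toolchain, Ruby 2.6.6,
# stable Chrome, and the matching ChromeDriver (Chrome 84 era),
# mirroring the workspace setup from part 1 of this series.
RUN apt-get update && apt-get install -y git curl unzip build-essential

# CMD does not run at build time; it runs when a container starts
# from this image, on the container's writable top layer.
CMD git clone https://example.com/our/framework.git repo && \
    cd repo && \
    rbenv local 2.6.6 && \
    bundle install && \
    ruby dynamic_tags.rb && \
    bundle exec rake cucumber
```

The split matters operationally: everything in RUN is baked into the image once, while everything in CMD (clone, bundle, test run) happens fresh on every container start.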

On your local machine, build the Docker image via 'docker image build -t cucumber-example ./'; this should be run from the root directory of our framework.

We should see the following when the process is complete (the build will take longer the first time):

Docker Single Threaded Execution

Now we have an image named cucumber-example. This can be seen by running the docker images command.

We can now run a Container based on the Image we have generated utilizing this command.

docker container run -e total_number_of_builds=2 -e build_number=1 --name cucumber-run-4 cucumber-example

Then we see the Container run, which completes all the CMD commands listed in the Dockerfile in the image’s context.

One note: we are setting two environment variables at container runtime, total_number_of_builds and build_number.

These environment variables allow our dynamic_tags.rb script within the container to select the subsection of the tests to run.
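We haven't shown dynamic_tags.rb itself, but the splitting logic it implements can be sketched as below; the round-robin scheme and the test names are illustrative assumptions, not the script's actual contents:

```ruby
# Split the full test list across N builds, choosing this container's
# slice from the two environment variables set at `docker container run`
# time: total_number_of_builds and build_number (1-based).
total = Integer(ENV.fetch("total_number_of_builds", "1"))
build = Integer(ENV.fetch("build_number", "1"))

all_tests = %w[login search checkout profile reports admin]  # stand-ins

# Round-robin assignment: with total=2, build 1 takes indices 0, 2, 4...
mine = all_tests.select.with_index { |_, i| i % total == build - 1 }

puts "Build #{build}/#{total} will run: #{mine.join(', ')}"
```

With total_number_of_builds=2, build 1 and build 2 each receive half the suite, which is exactly the subdivision the container run command above relies on.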

Docker Compose

Docker Compose allows us to signify how we want to run multiple containers from multiple images, simultaneously, in a YAML format.

We have a docker-compose.yaml file in the root directory of this framework.


We utilize the Compose file to set up multiple Container instances, utilizing the cucumber-example Image we have generated. The services section in the docker-compose.yaml file lists a numerical alias for each instance of the image we will run.

For each of these services, we're utilizing YAML inheritance to pass the build image (because it's the same for all of them) and the total number of builds. Each service has a unique value for build_number, as the dynamic_tags.rb script will split the regression between all of these Containers based on that number.

We are running 12 containers in the Compose file, so one twelfth of the regression will run on each container. This can be adjusted by simply removing service instances and decreasing the total_number_of_builds value accordingly.

Another parameter we're passing to all containers is restart: "no"; this stops the containers from restarting once they complete the tests assigned to them. Without it, all of the containers would run in an endless restart loop. Automatic restarts are useful if a container houses a long-running service such as a web app, but not for a finite process like running a test set.
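A hedged sketch of the shape of such a Compose file is below, trimmed to three services where our actual file defines twelve; the anchor names and version line are assumptions, and YAML anchors/merge keys are the "inheritance" mechanism referred to above:

```yaml
version: "3.4"

# Shared settings defined once via YAML anchors ("inheritance").
x-common: &common
  image: cucumber-example
  restart: "no"            # finite test run: don't restart on completion

x-common-env: &common-env
  total_number_of_builds: 3

services:
  one:
    <<: *common
    environment:
      <<: *common-env
      build_number: 1
  two:
    <<: *common
    environment:
      <<: *common-env
      build_number: 2
  three:
    <<: *common
    environment:
      <<: *common-env
      build_number: 3
```

Note that the merge key (`<<`) merges mappings shallowly, which is why the shared environment values get their own anchor merged inside each service's environment block.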

Docker Compose Runtime

Now we get to accomplish the fun process of running a set of Containers utilizing Docker Compose.

The first thing we do is remove all existing containers related to this instance of Docker Compose. These exist on my local machine because I have executed this before; they won't exist on yours during your first run.

We want to ensure that these are removed so that we are running in fresh Containers rather than Docker restarting the existing Containers for the Compose file.

One important thing to note is the naming convention for the Containers, which Compose generates when it executes. Each container name is a combination of:

  • The directory the Compose file is housed within*
  • The service alias in the Compose file
  • The index of that service instance

*If you didn't change the root directory name during part 1, now would be the time to change it to sample_cucumber.

The container generated for service alias one would therefore be named sample_cucumber_one_1.

Next, we can run 'docker-compose up' in our framework's root directory. All of the necessary containers will be created.

A thing to note is that you will see the output from all of the running Compose containers mixed together in the command-line output. You can prevent this by running in detached mode ('docker-compose up -d').

Once Docker Compose has executed and all of the containers are done executing, you will see:

The last thing to discuss is how to retrieve the results from the containers that have run.

Docker has a copy command with which we can take the contents of a directory housed in the Container and store a copy externally, or vice versa.

docker container cp sample_cucumber_one_1:docker_web_repo/output ./docker_output/1

  • sample_cucumber_one_1 is the container name
  • docker_web_repo/output is the path to the directory inside the container
  • ./docker_output/1 is where the copied files are stored externally

This gives us the test results of an individual container, which we can review external to the container in which they were created.

Conclusion and Next Steps

In part 2, we covered Docker Images, Docker Containers, and utilizing Docker Compose. Part 3 of this series will deal with implementing this framework in a CI/CD tool.

Book Club: The DevOps Handbook (Introduction)

This entry is part 1 of 2 in the series DevOps Handbook

The following is a chapter summary for “The DevOps Handbook” by Gene Kim, Jez Humble, John Willis, and Patrick DeBois for an online book club.

The book club is a weekly lunchtime meeting of technology professionals. As a group, the book club selects, reads, and discusses books related to our profession. Participants are uplifted via group discussion of foundational principles & novel innovations. Attendees do not need to read the book to participate.

Background on The DevOps Handbook

More than ever, the effective management of technology is critical for business competitiveness. For decades, technology leaders have struggled to balance agility, reliability, and security. The consequences of failure have never been greater―whether it’s the healthcare.gov debacle, cardholder data breaches, or missing the boat with Big Data in the cloud.

And yet, high performers using DevOps principles, such as Google, Amazon, Facebook, Etsy, and Netflix, are routinely and reliably deploying code into production hundreds, or even thousands, of times per day.

Following in the footsteps of The Phoenix Project, The DevOps Handbook shows leaders how to replicate these incredible outcomes, by showing how to integrate Product Management, Development, QA, IT Operations, and Information Security to elevate your company and win in the marketplace.

The DevOps Handbook

An Introduction to DevOps

“Imagine a world where product owners, Development, QA, IT Operations, and Infosec work together, not only to help each other, but also to ensure that the overall organization succeeds. By working toward a common goal, they enable the fast flow of planned work into production (e.g., performing tens, hundreds, or even thousands of code deploys per day), while achieving world-class stability, reliability, availability, and security.”

An Introduction to DevOps

In this world, cross-functional teams rigorously test their hypotheses of which features will most delight users and advance the organizational goals.

Simultaneously, QA, IT Operations, and Infosec are always working on ways to reduce friction for the team, creating the work systems that enable developers to be more productive and get better outcomes.

This enables organizations to create a safe system of work, where small teams are able to quickly and independently develop, test, and deploy code and value quickly, safely, securely, and reliably to customers.

By adopting Lean principles and practices, manufacturing organizations dramatically improved plant productivity, customer lead times, product quality, and customer satisfaction, enabling them to win in the marketplace.

Before the Lean revolution, average manufacturing plant order lead times were six weeks, with fewer than 70% of orders being shipped on time.

By 2005, with the widespread implementation of Lean practices, average product lead times had dropped to less than three weeks, and more than 95% of orders were being shipped on time.

Adapted from The DevOps Handbook

Most organizations are not able to deploy production changes in minutes or hours, instead requiring weeks or months. These same organizations are not able to deploy hundreds or thousands of changes into production per day. They struggle to deploy monthly or even quarterly. Production deployments are not routine, but instead involve outages and firefighting.

The Core, Chronic Conflict

In almost every IT organization, there is built-in conflict between Development and IT Operations that creates a downward spiral, resulting in slower time to market for new products and features, reduced quality, increased outages, and an ever-increasing amount of technical debt.

Technical Debt: the term “technical debt” was first coined by Ward Cunningham. Technical debt describes how decisions we make lead to problems that get increasingly more difficult to fix over time, continually reducing our available options in the future — even when taken on judiciously, we still incur interest.

Two competing organizational interests: respond to the rapidly changing competitive landscape and provide a stable service to the customer.

Development takes responsibility for responding to changes in the market, deploying features and changes into production. IT Operations takes responsibility for providing customers with IT service that is stable and secure, making it difficult for anyone to introduce production changes that could jeopardize production. Dr. Eli Goldratt called this type of configuration "the core, chronic conflict".

The Downward Spiral

The first act begins in IT Operations, where our goal is to keep applications and infrastructure running so that our organization can deliver value to customers. In our daily work, many of our problems are due to applications and infrastructure that are complex, poorly documented, and incredibly fragile. The systems most prone to failure are also our most important and are at the epicenter of our most urgent changes.

The second act begins when somebody has to compensate for the latest broken promise—it could be a product manager promising a bigger, bolder feature to dazzle customers with or a business executive setting an even larger revenue target. Then they commit the technology organization to deliver upon this new promise. Development is tasked with another urgent project that inevitably requires solving new technical challenges and cutting corners to meet the promised release date, further adding to our technical debt.

The third and final act is where everything becomes just a little more difficult, bit by bit: everybody gets a little busier, work takes a little more time, communications become a little slower, and work queues get a little longer. Our work becomes more tightly coupled, smaller actions cause bigger failures, and we become more fearful and less tolerant of making changes. Work requires more communication, coordination, and approvals; teams must wait longer for their dependent work to get done; and our quality keeps getting worse.

Why Does the Downward Spiral Happen?

First, every IT organization has two opposing goals, and second, every company is a technology company, whether they know it or not. The vast majority of capital projects have some reliance upon IT.

“When people are trapped in this downward spiral for years, especially those who are downstream of Development, they often feel stuck in a system that pre-ordains failure and leaves them powerless to change the outcomes. This powerlessness is often followed by burnout, with the associated feelings of fatigue, cynicism, and even hopelessness and despair.”

An Introduction to DevOps

A culture can be created where people are afraid to do the right thing because of fear of punishment, failure, or jeopardizing their livelihood. This can create the condition of learned helplessness, where people become unwilling or unable to act in a way that avoids the same problem in the future.

Counteracting the Downward Spiral

By creating fast feedback loops at every step of the process, everyone can immediately see the effects of their actions. Whenever changes are committed into version control, fast automated tests are run in production-like environments, giving continual assurance that the code and environments operate as designed and are always in a secure and deployable state. Automated testing helps developers discover their mistakes quickly.

High-profile product and feature releases become routine by using dark launch techniques. Long before the launch date, we put all the required code for the feature into production, invisible to everyone except internal employees and small cohorts of real users, allowing us to test and evolve the feature until it achieves the desired business goal.

In a DevOps culture, everyone has ownership of their work regardless of their role in the organization.

The Business Value of DevOps

High-Performing Organizations succeed in the following areas:

  • Throughput metrics
    • Code and change deployments (thirty times more frequent)
    • Code and change deployment lead time (two hundred times faster)
  • Reliability metrics
    • Production deployments (sixty times higher change success rate)
    • Mean time to restore service (168 times faster)
  • Organizational performance metrics
    • Productivity, market share, and profitability goals (two times more likely to exceed)
    • Market capitalization growth (50% higher over three years)

When we increase the number of developers, individual developer productivity often significantly decreases due to communication, integration, and testing overhead. DevOps shows us that when we have the right architecture, the right technical practices, and the right cultural norms, small teams of developers are able to quickly, safely, and independently develop, integrate, test, and deploy changes into production.

Organizations adopting DevOps are able to linearly increase the number of deploys per day as they increase their number of developers.

“The purpose of the DevOps Handbook is to provide the theory, principles, and practices needed to successfully start a DevOps initiative. This guidance is based on decades of management theory, study of high-performing technology organizations, work the authors have done helping organizations transform, and research that validates the effectiveness of the prescribed DevOps practices.”

An Introduction to DevOps

The reader is not expected to have extensive knowledge of any of these domains, or of DevOps, Agile, ITIL, Lean, or process improvement. Each of these topics is introduced and explained in the book.

The goal is to create a working knowledge of the critical concepts in each of the above listed areas.

From The Pipeline v23.0

This entry is part 23 of 25 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

Microsoft Races Ahead On RPA (Robotic Process Automation)

After Microsoft’s acquisition of Softomotive, it was expected they would make strides in the RPA market. Now rebranded as “Power Automate”, the desktop tool is used to automate both web and desktop applications. Most processes are automated via a drag-and-drop mechanism with a library of standard actions to choose from. Like the other big vendors in the space (UiPath, Blue Prism, Automation Anywhere), Microsoft wants to extend the tool with Machine Learning (ML) and Artificial Intelligence (AI) capabilities.

Best Practices for using Docker Hub for CI/CD

Docker has published the first in a series of posts about using Docker Hub for CI/CD. To set the stage, they ask users to consider the inner loop (code, build, run, test) and the outer loop (push change, CI build, CI test, deployment) of the development cycle. For instance, as part of the inner loop they recommend running unit tests as part of the docker build command by adding a target for them in the Dockerfile. Additionally, when setting up CI they recommend using a Docker Hub access token rather than a password (new access tokens can be created from the security page on Docker Hub). Another recommendation is to reduce build time and the number of pulls by using the build cache to reuse layers already pulled, via buildx caching functionality. Lots more to come in subsequent posts from the team at Docker.
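One way to realize the “unit tests inside docker build” recommendation is a multi-stage Dockerfile with a dedicated test stage. The sketch below is an assumption about how such a Dockerfile might look (stage names, base image, and tooling are illustrative, not from the Docker post):

```dockerfile
# Build stage: install dependencies and copy the application once.
FROM python:3.11-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Test stage: running `docker build --target test .` executes the unit
# tests as part of the image build and fails the build if they fail.
FROM build AS test
RUN pip install --no-cache-dir pytest && pytest tests/

# Final stage: ships only the application, not the test tooling.
FROM build AS final
CMD ["python", "-m", "app"]
```

In CI, building the `test` target gives the fast inner-loop check, while the `final` target is what gets pushed to Docker Hub; buildx cache options can then reuse the shared `build` stage layers across both.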

5 Key Elements for Designing a Successful Dashboard

“When you’re designing a dashboard to track and display metrics, it is important to consider the needs and expectations of the users of the dashboard and the information that is available. There are several aspects to consider when creating a new dashboard in order to make it a useful tool. For a mnemonic device to help you easily remember the qualities that make a good dashboard, just remember the acronym ‘VITAL.'”

BDD (Behavior Driven Development) | Better Executable Specifications

Dave Farley speaks on Behavior Driven Development (BDD) in this video recorded by Continuous Delivery. In the talk, he provides background on the creation of BDD and its relation to Test Driven Development (TDD). Dave gives a solid rundown of the naming conventions that should be used and those that should be avoided, along with their effects on software testing. This is a great starter for anyone looking to learn more about BDD.

10 Reasons to Attend DevOps Next

DevOps Next is happening this week online (https://www.perfecto.io/devops-next). The conference has three tracks to choose from: Testing Tools, including an introduction to AI/ML in software testing tools; Continuous Testing, covering practices and use cases that leverage AI and ML; and DevOps & Code, on maturing code quality and DevOps team productivity using AI and ML. The event is headlined by a number of experts in the field. A great opportunity to learn more about ML & AI (Note: I will also be presenting on RPA).

From the Pipeline v22.0

This entry is part 22 of 25 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

A Primer on Engineering Delivery Metrics

Juan Pablo Buriticá recently published an excellent article on engineering metrics. The focus of the article is what and how to measure in the software delivery phase of development. The first step is to define why you want to measure something: look for the outcome. Another key component is building trust in the organization, so the team believes in the strategy. Some of the Software Delivery Performance Metrics to consider: delivery lead time, deployment frequency, mean time to restore, and change failure rate.
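The four metrics named above can all be derived from a simple log of deployments. This is a minimal sketch under assumed record fields (`committed`, `deployed`, `failed`, `restore_minutes`); a real pipeline would pull these from version control and incident tooling.

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records: when the change was committed, when it
# reached production, whether it failed there, and how long recovery took.
deploys = [
    {"committed": datetime(2020, 9, 1, 9),  "deployed": datetime(2020, 9, 1, 15),
     "failed": False, "restore_minutes": 0},
    {"committed": datetime(2020, 9, 2, 10), "deployed": datetime(2020, 9, 3, 10),
     "failed": True,  "restore_minutes": 45},
    {"committed": datetime(2020, 9, 4, 8),  "deployed": datetime(2020, 9, 4, 20),
     "failed": False, "restore_minutes": 0},
]

# Delivery lead time: commit -> production, averaged (in hours).
lead_time = mean((d["deployed"] - d["committed"]).total_seconds() / 3600
                 for d in deploys)

# Deployment frequency: deploys per day over the observed window.
window_days = (deploys[-1]["deployed"] - deploys[0]["deployed"]).days or 1
frequency = len(deploys) / window_days

# Change failure rate: share of deploys that caused a production failure.
failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

# Mean time to restore: average recovery time across failed deploys.
failed = [d for d in deploys if d["failed"]]
mttr = mean(d["restore_minutes"] for d in failed) if failed else 0
```

With the sample data this yields a 14-hour average lead time, one deploy per day, a one-in-three change failure rate, and a 45-minute mean time to restore.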

How to Start Testing with Python

This webinar led by Andy Knight walks you through the essentials of test automation with Python. He uses pytest as the framework. During the course of the session, he shows how to write unit & integration tests. He also gives a rundown of parameters, fixtures, and plugins.
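A minimal sketch of the pytest style the webinar covers; the `word_count` function is a hypothetical example of code under test, not from the session. pytest collects plain `test_`-prefixed functions and relies on bare `assert` statements, while `@pytest.mark.parametrize` and fixtures add data-driven cases and shared setup.

```python
import pytest

# Hypothetical code under test (would normally live in its own module).
def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())

# A plain unit test: pytest collects any test_-prefixed function,
# and a bare assert gives rich failure output on its own.
def test_word_count_simple():
    assert word_count("red green refactor") == 3

# Parameters: one test body, many cases.
@pytest.mark.parametrize("text,expected", [
    ("", 0),
    ("one", 1),
    ("  padded   input  ", 2),
])
def test_word_count_cases(text, expected):
    assert word_count(text) == expected

# A fixture provides shared setup to any test that names it as an argument.
@pytest.fixture
def sample_text():
    return "from the pipeline"

def test_word_count_fixture(sample_text):
    assert word_count(sample_text) == 3
```

Running `pytest` in the project directory discovers and executes all three tests; plugins extend this same model with reporting, coverage, and parallelism.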

What Are Machine Learning Uses to Improve Static Analysis

The article demonstrates a few use cases for machine learning in static analysis of defects. For one, it can be used to group similar defects together; these groupings can then be used to look for patterns in system behavior. Another usage is ranking defects by how straightforward or complex they are; AI-assisted defect ranking uses supervised learning. A similarity score attaches each new defect report to either the “True Positive Reports” (TP) or “False Positive Reports” (FP) group, where the two groups are based on prior review of defects reported in the past.
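A toy sketch of that similarity-scoring idea, assuming token overlap (Jaccard similarity) as the measure; real tools use much richer features, but the shape is the same: compare each new defect report against previously reviewed TP and FP reports and adopt the label of the closer group.

```python
def tokens(report: str) -> set:
    """Crude feature extraction: the lower-cased word set of a report."""
    return set(report.lower().split())

def jaccard(a: set, b: set) -> float:
    """Overlap between two token sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(report: str, true_positives: list, false_positives: list) -> str:
    """Label a new report by its best similarity to each reviewed group."""
    toks = tokens(report)
    tp_score = max((jaccard(toks, tokens(r)) for r in true_positives), default=0.0)
    fp_score = max((jaccard(toks, tokens(r)) for r in false_positives), default=0.0)
    return "TP" if tp_score >= fp_score else "FP"

# Hypothetical reports already reviewed by engineers.
tp_reports = ["null pointer dereference in parser", "buffer overflow in copy loop"]
fp_reports = ["unused variable warning in generated code"]

label = classify("possible null pointer dereference in tokenizer",
                 tp_reports, fp_reports)  # closest to the TP group
```

Here the new report shares most of its tokens with a known true positive, so it inherits the TP label and can be ranked ahead of likely false positives in review.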

Improving Test Data Collection and Management

“There is much published about the data we generate to assess product quality. There is less discussion about the data testers generate for our own use that can help us improve our work—and even less is said about recommended practices for data collection. Test data collection, management, and use all call for upfront planning and ongoing maintenance. Here’s how testers can improve these practices.”

Book Review: Explore it!

Kristin Jackvony has posted a review of Elisabeth Hendrickson’s “Explore It!” book on exploratory testing. The book should be required reading for anyone in software testing. The first key delineation: checking is what a tester does when they want to make sure that the software does what it’s supposed to do, whereas exploring is what a tester does when they want to find out what happens if the user or the system doesn’t behave as expected.

From the Pipeline v21.0

This entry is part 21 of 25 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

Code Review Checklist

Michaela Greiler has put together a great list of concerns for any code reviewer as a set of best practices. It’s one thing to require others to approve a pull request, it’s quite another to establish a set of standards for the team to enforce during those code reviews. She first provides a quick list of items to self-review before sending out the code for review by others. She also includes a robust list of items broken down by category: implementation, logic errors & bugs, error handling & logging, usability & accessibility, testing & testability, dependencies, security & data privacy, performance, readability, and expert opinion. She finishes with some excellent advice on being respectful in code reviews as a professional courtesy. This is definitely an article to be bookmarked.

Bringing New Life into Edge Selenium Tools

Microsoft Edge has been rebuilt using Chromium, which means a new automation implementation using Selenium. Michael Mintz took Edge through a test drive using Python to check the performance. He found that Edge automation has mostly the same response as Chrome, with a few differences in how extensions are handled. Michael used SeleniumBase, a neat wrapper for Selenium, to set up his automation scripts. You can get EdgeDriver directly from Microsoft HERE and SeleniumBase HERE.

The Problem With “Broken Windows” Policing

This article goes off the path for the typical post on Red Green Refactor, but it’s important historically for context around the term “Broken Windows”, which is often applied to the state of a codebase with too much technical debt. In tech, the advice around broken windows is applied to maintaining good practices such as code reviews, regular refactoring, following design patterns, and implementing extensible architecture. However, the term itself has been misapplied for many years in law enforcement policies. The article is enlightening about the context of terms we use in tech but don’t necessarily know the origin or outside applications of the term.

Tutorial on SRE-Driven Performance Engineering with Neotys and Dynatrace

This is a great instructional video on performance feedback. Andreas Grabner and Henrik Rexed demonstrate how to practice performance engineering using Neotys and Dynatrace. They build a delivery pipeline that automates the tasks around preparing, setting up, and analyzing test executions.

From the Pipeline v20.0

This entry is part 20 of 25 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

Cucumber Reports

The creators of Cucumber have released a free, cloud-based service for sharing execution reports for scenarios. Both the Java and Ruby implementations of Cucumber have the report functionality built-in with simple commands or environment variables. If enabled, the console output links to an online execution report. The results will be available for a 24-hour period before being automatically deleted. In the future, the execution reports can be linked to a GitHub repo and will no longer be scheduled for auto-deletion if claimed.

Separating Automation Tooling from Automation Strategy

“When people do not have good luck with automation, it is hardly ever because of the tool being used, but almost always because of the wrong automation strategy, wrong expectations, and wrong adoption of automation. Automation tools only answer the “how” of automation, while having an automation strategy gives answers to who, where, when, what, and why. Here’s why it’s so important to have a test automation strategy.”

Ten More Commandments on Automation

Paul Grizzaffi provides an excellent overview of the common pitfalls organizations run into with test automation. His advice takes the same concepts we would expect from development code and applies them to test automation code, because we should treat the two the same way. Check to see if any of your test automation code breaks a commandment!

The Technical Debt Quadrant

The article provides a breakdown of the four types of technical debt as originally described by Martin Fowler. Sometimes we make deliberate mistakes, either simply to push the product to production or because we don’t properly consider design. Other times our own inexperience with a technology means we make mistakes. How we react to that accrued technical debt determines whether we are reckless or prudent about identifying the problem, helping ensure we don’t repeat the mistakes.

Webinar: Add Static Code Analysis to Your CI/CD Pipelines

Perforce recently hosted a webinar on static code analysis in CI/CD pipelines. Plenty of excellent lessons to learn about adding quality to deployment pipelines. “With the amount of software being installed into devices across all industries, it has become essential that the embedded code is safe and secure, reliable, and high quality. However, ensuring that the embedded code meets these standards and is delivered in a timely manner can be a daunting and time-consuming challenge. For that reason, it is essential that developers pair static code analysis with efficient software development practices, such as CI/CD pipelines. View this webinar and learn how to add static analysis to your DevOps process.”

From the Pipeline v19.0

This entry is part 19 of 25 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

Mozilla: The Greatest Tech Company Left Behind

A sad story about layoffs at Mozilla, with cuts happening in the developer tools division. The article contains a brief history of Mozilla, including their contributions to the web. Mozilla actually pulls most of its revenue from Google, a browser competitor, which pays to make its search engine the default in Firefox. Probably one of Mozilla’s biggest failures is not having a presence in the mobile web market, which is dominated by Chrome and Safari.

Introducing BDD to Your Team: How Does it Affect Your Role as a Tester?

Bas Dijkstra provides a roadmap for introducing your team to BDD. The first fundamental mindset change is involving testers from the start of development for every new feature; Bas recommends involving testers in refinement sessions upfront. Specifications should be discussed upfront with business partners, and work with developers should be collaborative during actual feature development. The last piece of BDD is automating those specifications to align with end-user behavior.

How Continuous Testing is Done in DevOps

“DevOps does speed up your processes and make them more efficient, but companies must focus on quality as well as speed. QA should not live outside the DevOps environment; it should be a fundamental part. If your DevOps ambitions have started with only the development and operations teams, it’s not too late to loop in testing. You must integrate QA into the lifecycle in order to truly achieve DevOps benefits.”

Breakpoint Highlights: Test Automation

BrowserStack recorded all the sessions from its first virtual conference and has made them available to everyone. It’s a good alternative to the face-to-face events many of us have missed since COVID. At least with these virtual events we can watch the talks at our leisure and not have to worry about choosing between two talks in the same slot. Details: “We hosted BrowserStack’s first-ever virtual summit, Breakpoint 2020 last week. We saw over 10,000 registrations and 2500+ attendees from 155 countries, 18 speakers, and 2 cats! Speakers from Twitter, Trivago, Selenium, Robot Framework, and more talked about agile testing, quality at speed, decentralized testing, release reconstruction, and testing best practices.”

A Guide from Key DevOps Experts

Eran Kinsbruner of Perforce is releasing another anthology book written by a group of DevOps professionals on Artificial Intelligence, Machine Learning, and Robotic Process Automation. The book covers: “The fundamentals of AI and ML in software development and testing, including the basics for testing AI-based applications, classifications of AI/ML, and defects tied to AI/ML. Practical advice and recommendations for using AI/ML-based solutions within software development activities, like visual AI test automation, AI in test management, testing conversational AI apps, RPA benefits, and API testing. More advanced and future-focused angles of AI and ML with projections and unique use cases, including AI and ML in logs observability, AIOps, how to maintain AI/ML test automation, and test impact analysis with AI.”