Book Club: Behavior Driven Development – Discovery (Chapter 3)

This entry is part 3 of 5 in the series BDD Discovery

The following is a chapter summary for “BDD Books – Discovery” by Gaspar Nagy and Seb Rose for an online book club.

The book club is a weekly lunchtime meeting of technology professionals. As a group, the book club selects, reads, and discusses books related to our profession. Participants are uplifted through group discussion of foundational principles and novel innovations. Attendees do not need to read the book to participate.

Background on BDD Discovery


“Explore behavior using examples

Written by the creator of SpecFlow and the author of The Cucumber for Java Book, this book provides inside information on how to get the most out of the discovery phase of Behaviour Driven Development (BDD). This practical guide demonstrates good collaboration techniques, illustrated by concrete examples.

This book is written for everyone involved in the specification and delivery of software (including product owners, business analysts, developers, and testers). The book starts by explaining the reasons BDD exists in the first place and describes techniques for getting the most out of collaboration between business and delivery team members.

This is the first in the BDD Books series that will guide you through the entire development process, including specific technical practices needed to successfully drive development using collaboratively-authored specifications and living documentation.”

BDD Discovery

Chapter 3

This chapter answers the questions: “Are examples really enough to specify a feature? How many examples do we need to specify a feature?”

3.1 How Hard is Concrete?

Outcome: description of the state of the system after the user or system behavior has taken place. It should contain enough detail to measure the behavior against expectations.

Action: the event that causes the behavior to take place. It might be some action by a user, the system, a scheduled job, or any other stimulus that can cause the system to react.

Context: describes the state of the system before the action takes place.

Anatomy of an Example
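This context–action–outcome anatomy maps directly onto the Given/When/Then keywords used for scenarios later in the book. A hypothetical sketch, using the book's pizza-delivery domain (the step wording here is illustrative, not taken from the book):

```gherkin
Scenario: Change the delivery address before pickup
  # Context: the state of the system before the action takes place
  Given an order that is waiting for pickup
  # Action: the event that causes the behavior
  When the customer changes the delivery address
  # Outcome: the state of the system afterwards, measurable against expectations
  Then the order shows the new delivery address
```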

3.2 Is All That Concrete Essential?

Rules are generic expressions of how the system ought to behave – they each cover lots of possible situations. An example expresses a single situation that illustrates an application of a rule. The purpose of the example is to clarify a rule; each example must be specific and precise.

Each example illustrates a single rule, and should only mention concrete data that is directly related to the behavior being illustrated.

Getting the right level of detail is not the primary concern – inessential detail should be removed later, when examples are formulated as scenarios.

3.3 How Many Examples Do We Need?

Not all possible states can reasonably be captured in Examples. A balance must be achieved between Examples and Business Rules.

The team determined that the delivery address can’t be changed once an order has moved beyond the “waiting for pickup” state; they resolved the misunderstanding by using examples. Please refer to Figure 17. A happy middle ground must be reached between describing enough examples to deliver the feature and writing so many examples for a given business rule that efficient delivery of the software is precluded.

3.4 Why Stop Now?

“Catching all implementation mistakes will be important, but during the discovery phase our focus is on the requirements: we would like to prevent bugs from ever happening.”

Chapter 3 – Why Stop Now?

The Examples demonstrate that the development team understands what they are being asked to do and that the business understands what they’re asking for.

“When we start considering the exhaustive exploration of all possible combinations, we have moved away from understanding the requirements into the realm of software testing. When the examples start to address concerns that the product owner is not interested in, it’s time for the facilitator to bring the discussion back on track.”

Chapter 3 – Why Stop Now?

Good coverage for test cases is still important; however, this is an activity outside the requirement workshop. The intent of the workshop is to engage the product owner and the team.

3.5 Rules vs. Examples

Are examples alone sufficient to specify the functionality of an application? It is not always possible to “reverse engineer” the rules from the examples. Both rules and examples are needed to document the expected behavior of the system.

“The rules provide the concise, abstract description, and the examples provide precise, concrete illustrations of them.”

Chapter 3 – Rules vs. Examples

Why “Specification by Example”? The emphasis is on the use of examples to support a specification by making it harder for the rules to be misinterpreted.

Rules and examples are not the only way to specify the behavior of software. Other tools complement them, such as: definitions, model diagrams, formulas, and glossaries.

3.6 My Example Illustrates Multiple Rules!

Try to come up with Examples that illustrate a single rule. Focusing on a rule and utilizing it are two different things. Consider whether the example utilizes the rules, focuses on the rule, and/or illustrates the rule.

If an example illustrates multiple rules, then split the example into several that each focus on a single rule.

3.7 The Bigger Picture

Short, focused examples illustrate the behavior of a single rule, but the overview of the whole system behavior is missed.

Leverage wireframes, page-flows, box-and-arrow diagrams, and other tools to illustrate the application behavior.

3.8 What We Just Learned

The requirement workshop is an excellent place to discuss the team’s understanding of the requirements. Creating concrete examples that illustrate the rules is challenging.

Examples on their own are not sufficient; rules should be documented as well. An Example is composed of context, action, and outcome.

Examples should illustrate a single rule and contain only data essential to understand the behavior of the rule.

Book Club: Behavior Driven Development – Discovery (Chapter 2)

This entry is part 2 of 5 in the series BDD Discovery

The following is a chapter summary for “BDD Books – Discovery” by Gaspar Nagy and Seb Rose for an online book club.

The book club is a weekly lunchtime meeting of technology professionals. As a group, the book club selects, reads, and discusses books related to our profession. Participants are uplifted through group discussion of foundational principles and novel innovations. Attendees do not need to read the book to participate.

Background on BDD Discovery


“Explore behavior using examples

Written by the creator of SpecFlow and the author of The Cucumber for Java Book, this book provides inside information on how to get the most out of the discovery phase of Behaviour Driven Development (BDD). This practical guide demonstrates good collaboration techniques, illustrated by concrete examples.

This book is written for everyone involved in the specification and delivery of software (including product owners, business analysts, developers, and testers). The book starts by explaining the reasons BDD exists in the first place and describes techniques for getting the most out of collaboration between business and delivery team members.

This is the first in the BDD Books series that will guide you through the entire development process, including specific technical practices needed to successfully drive development using collaboratively-authored specifications and living documentation.”

BDD Discovery

Chapter 2

“In this chapter, we are going to peer into the daily work of a software product team to learn more about how they use structured conversations to help them discover what the expected behavior of the next feature should be. We’ll start by describing one of their requirement workshops. This will introduce concepts that you’re not familiar with, but don’t worry, all your questions will be answered later in the chapter.”

Chapter 2 – Structured Conversation

2.1 Where is My Pizza?

The example used in the book is a pizza delivery service, with an app that tracks the real-time location of orders. This example is used throughout the book as the team develops features to modify the delivery address of an order after the order has been submitted.

2.2 A Useful Meeting

The section introduces the idea of a “Requirement Workshop”:

“The team meets regularly (usually several times a week) to discuss the work that they’ll be undertaking in the next sprint or two. The purpose of this meeting is to explore a story, understand its scope and illustrate it unambiguously with concrete examples. While they’re doing this, they may discover new details about the story. They may also ask questions that no one at the meeting is able to answer right away.

What matters most in this meeting is to bring diverse perspectives together, so that they can learn about what needs to be done and work together more effectively. In other organizations, similar meetings have been variously called three amigos meeting, discovery workshop, specification workshop, story refinement, product backlog refinement and backlog grooming – as always, the name is less important than the purpose.”

Chapter 2 – A Useful Meeting

Requirement Workshop

  • The team meets regularly
  • The purpose is to explore a user story
  • The scope of the story is discussed
  • The story is illustrated with examples
  • Questions are documented that no one in the workshop can answer

The team will consider “Change Delivery Address”. The feature will implement the following:

  • The system will need to be able to confirm whether it’s possible to change the delivery address
  • The system will need to check that the new address is not too far from the current one.
  • Determine the state of the order

Throughout the book, the authors demonstrate a technique called Example Mapping to help facilitate requirements workshops. Example Mapping is a way to conduct a structured conversation.

2.3 Collecting Examples

A persona, in user-centered design and marketing, is a fictional character created to represent a user type that might use a site, brand, or product in a similar way.

Examples are written on green cards. Please refer to Figures 7 & 8.

Business Rules are written on blue cards – for example: only a valid address is accepted, the estimated time of arrival is updated, and the new address must be within the restaurant’s delivery range.

Problems or Questions are written on red cards. Please refer to Figure 12.

The team discusses other business rules, such as: a valid address is accepted, the estimated time of arrival is updated, and the new address is within the restaurant’s delivery range. For each rule, one or more examples are used to illustrate it. The workshop finishes within a time-box of 30 minutes with the completed Example Map shown in Figure 14. Further details on the content of an Example Map are discussed in Section 2.5.

2.4 Deliberate Discovery

“Discovery that happens while you’re developing software can be thought of as ‘accidental discovery’ – it may upset your schedule, or even derail or interrupt your roadmap entirely. The discovery that happens during a requirement workshop is ‘deliberate discovery.’”

Chapter 2 – Deliberate Discovery

The understanding of the user story is deliberately explored with concrete examples. Otherwise, learning happens accidentally during development of a story. Examples are used both to illustrate what is known and question assumptions.

2.5 Example Mapping in a Nutshell

“Example Mapping is a simple, low-tech method for making conversations short and powerfully productive.”

Matt Wynne

Examples are captured on green cards – illustrate concrete behavior of the system.

Rules are written on blue cards – these are logical groupings of the examples usually focusing on a particular condition. Commonly referred to as acceptance criteria (AC), business rules, or simply requirements.

Questions or assumptions are captured on red cards – any topic that would block the discussion.

User Stories are written on yellow cards – start by discussing a single user story; as the team digs into the details, they often decide to split the story into smaller stories and postpone some of them.

The User Story is placed on the top row and the rules are arranged in a row underneath. Examples pertaining to a given rule are placed underneath that rule. The red cards are placed to the side of the Example Map. For each session, a facilitator should schedule the meeting and keep the session active by ensuring discussion is captured on the cards and everyone agrees with the wording.

2.6 How to Establish Structured Conversations

Structured Conversation is a facilitated exchange of ideas that conform to a predefined form.

A structured conversation exhibits the following properties:

  • Collaborative – all attendees participate actively
  • Diverse Perspectives – all primary areas of a team are represented
  • Short – regular workshops in a time-box so the feedback loop is quick
  • Progressive Focus – capture the progress of the workshop in real-time
  • Consensus – agreed concrete examples measure the workshop’s success

Each property in the structured conversation is elaborated on further below.


Collaborative

  • Conversations should involve the entire team
  • Establish a culture where everyone can participate

Diverse Perspectives

  • Requirement workshops should have representation from different perspectives (development, test, business, UX, etc.)
  • The business representatives focus on the fulfillment of the business goals
  • The developers explore the technical implications of the feature
  • The testers challenge the feasibility of testing the feature and help identify special edge cases


Short

  • Requirement Workshops should be no longer than 30 minutes
  • Long meetings are exhausting and expensive.
  • Short meetings can be scheduled more frequently
  • Frequent meetings can vary the attendees to improve shared ownership and can also reduce the impact of unanswerable questions

Progressive Focus

  • The workshop has a focus on what will be discussed, but not on what will be discovered
  • Understanding becomes progressively more complete
  • Meeting should have a stated purpose
  • Understanding should be captured as it develops
  • Be able to quickly grasp the state of the discussion
  • Stop discussions that aren’t going anywhere


Consensus

  • The output is correct
  • The feature is sufficiently understood
  • There is no hidden or private knowledge
  • Who is responsible for answering each remaining question

2.7 What We Just Learned

“Software development is a learning process. The more we can learn about the problem, the easier solving it becomes. This process can be made more effective by having several team members (with different perspectives) analyzing the requirements together before they start developing the software. These collaborative requirement workshops are most productive if they are kept short and run regularly throughout the project – often several times a week.”

Chapter 2 – What We Just Learned

In the next Chapter of the series, we’ll learn how to write concrete Examples.

Book Club: Behavior Driven Development – Discovery (Chapter 1)

This entry is part 1 of 5 in the series BDD Discovery

The following is a chapter summary for “BDD Books – Discovery” by Gaspar Nagy and Seb Rose for an online book club.

The book club is a weekly lunchtime meeting of technology professionals. As a group, the book club selects, reads, and discusses books related to our profession. Participants are uplifted through group discussion of foundational principles and novel innovations. Attendees do not need to read the book to participate.

Background on BDD Discovery


“Explore behavior using examples

Written by the creator of SpecFlow and the author of The Cucumber for Java Book, this book provides inside information on how to get the most out of the discovery phase of Behaviour Driven Development (BDD). This practical guide demonstrates good collaboration techniques, illustrated by concrete examples.

This book is written for everyone involved in the specification and delivery of software (including product owners, business analysts, developers, and testers). The book starts by explaining the reasons BDD exists in the first place and describes techniques for getting the most out of collaboration between business and delivery team members.

This is the first in the BDD Books series that will guide you through the entire development process, including specific technical practices needed to successfully drive development using collaboratively-authored specifications and living documentation.”

BDD Discovery

Chapter 1

Behavior Driven Development (BDD) is an agile approach to delivering software.

BDD is about collaboration and domain discovery.

Early versions of JUnit required developers to use the word “test” in their method names, which emphasized testing over documentation, design, and definition of the expected behavior.

Dan North originally designed BDD as a set of practical rules for naming and structuring tests to preserve a connection to the requirements. The business-readable format was designed so the business representatives could confirm the expected behavior.

1.1 The Missing Link

The purpose of software development is to deliver solutions to business problems.

A continuing challenge is to verify that the software actually satisfies the requirements.

Waterfall methods have slow feedback cycles, which allows projects to go off track. In response, the industry began to experiment with methodologies like XP and frameworks like Scrum.

Please refer to Figure 1.

BDD helps maintain a connection between the requirements and the software – and as such acts as a bridge. The bridge is made out of examples. Every test is an example. Each example is an expression of a required system behavior.

“If you have sufficient examples you define the behavior of the system – you have documented the requirements. Business people remain engaged, because the examples are expressed in business language. Fast feedback is preserved, because the examples are automated.”

Chapter 1 – The Missing Link

Chris Matts is originally credited with the Given-When-Then keyword format for examples written as Scenarios. There are several free tools for writing BDD Scenarios, such as SpecFlow and Cucumber.
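As a sketch of that format (the wording below is ours, not the book’s), a scenario for the book’s pizza-delivery example might read:

```gherkin
Feature: Pizza order tracking

  Scenario: Customer sees the courier's location
    Given the customer has submitted a pizza order
    And the courier has picked up the order
    When the customer opens the order tracking screen
    Then the courier's current location is shown
```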

1.2 How Does BDD Work?

“Examples (and their formalized representation – scenarios) play a critically important role in BDD. To understand how BDD works, let’s have a look at the way that these scenarios are created and how they drive the development process.”

Chapter 1 – How does BDD Work?

BDD is used when the details of user stories are discussed, in the form of examples. Focusing on examples makes the business rules clear. As a general guideline, each business rule should be illustrated by one or more examples.

“Examples also enable us to explore our understanding of a rule. Exploration often leads to the discovery of complexities and assumptions that otherwise would not be found until much later in the development process.”

Chapter 1 – How Does BDD Work?


  • Examples can take various forms: input-output data pairs, sketches of the user interface, bulleted lists of different steps of a user workflow or even an Excel workbook illustrating a calculation or a report.
  • All examples describe a behavior as a combination of context, action, and outcome.

Some examples are formulated into scenarios when a user story is ready. BDD tools turn these scenarios into executable tests before the related behavior has been implemented in the application itself.

Test-Driven Development (TDD) helps speed up the feedback loop by demanding that teams write automated tests before they write the code. A test is written first, which initially fails. Next, the application feature is implemented so that the test passes. After code cleanup (refactoring), the next test can be written.

BDD drives development in a similar fashion to TDD; however, the scenarios are described from the perspective of the user. Begin by writing a scenario that should fail. Next, write the automation and application code until the scenario passes. After code cleanup (refactoring), the next scenario can be written. BDD does not replace TDD, but rather is built on top of the red-green-refactor cycle.
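The red-green-refactor cycle can be sketched in plain Ruby. Here the “test” is written first against a hypothetical delivery_fee helper (a name and pricing rule we invented for illustration), and the helper is then implemented until the assertions pass:

```ruby
# Hypothetical illustration of red-green-refactor.
# Step 1 (red): the assertions below fail until delivery_fee exists.
# Step 2 (green): implement just enough code to make them pass.
# Step 3 (refactor): clean up, keeping the assertions green.

def delivery_fee(distance_km)
  # Implementation written *after* the failing test demanded it:
  # flat fee within 5 km, surcharge per extra kilometer beyond that.
  base = 2.50
  distance_km <= 5 ? base : base + (distance_km - 5) * 0.50
end

# The "tests", written first
raise "red: expected 2.50" unless delivery_fee(3) == 2.50
raise "red: expected 4.00" unless delivery_fee(8) == 4.00
puts "green: all assertions pass"
```

In BDD the same rhythm applies, except the failing artifact is a business-readable scenario rather than a unit test.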

Please refer to Figures 3 and 4.

BDD Feedback Benefits:

  • Implementation correctness for developers
  • Overall solution for the product owner
  • Implemented behavior to help business analysts understand existing functionality
  • Provides a signal for manual / exploratory testers that a feature is ready for testing
  • Safety net for developers, identifying unwanted side effects of changes
  • Detailed documentation of application behavior for any support team
  • Defines a domain language that is understood by everyone

1.3 What About Testing?

BDD does not replace classic testing or testing skills. BDD does not define how testing should be performed, but rather provides guidelines about the Agile Testing Process.

The focus of the Agile Testing Process is to help ensure defects are never added to the codebase in the first place. Testers are involved in project requirement discussions, to help prevent bugs.

A significant proportion of defects are rooted in problems that arise from misunderstood requirements (see in Outside Resources for J.-C. Chen and S.-J. Huang, 2009). The primary type of verification is done through exploratory testing (see in Outside Resources for E. Hendrickson, 2013).

1.4 A Language That is Understood by Everyone

Every project is a voyage of discovery. A language must be established in which the customer can explain the problem(s) in detail and the development team can explain the solution. Teams should write scenarios using this language. If scenarios drive the implementation, then the solution will be close to the business domain.

1.5 Living Documentation

Living Documentation is a form of documentation that represents the current state of the application, which is updated in real-time. Scenarios make up the living documentation, which should be understandable by everyone. Scenarios should be written in domain-specific terms to describe the behavior of the application.

1.6 What is BDD, Then?

BDD is an agile approach that consists of three practices that have to be addressed in order:

  • “The first practice is discovery, a structured, collaborative activity that uses concrete examples to uncover the ambiguities and misunderstandings that traditionally derail software projects.”
  • “The second practice is formulation, a creative process that turns the concrete examples produced during discovery into business-readable scenarios.”
  • “The third, and final, practice is automation where code is written that turns the scenarios into tests.”
  • The above practices must be adopted in order to gain the expected benefits.

The benefits of automation:

  • “When the tests pass, the development team can be confident they have delivered what the business have asked for.”
  • “The tests give the development team a safety net when the time comes to modify the code.”
  • “The tests form living documentation of how the system behaves, readable by the business, guaranteed to be up-to-date.”

Please refer to Figure 5.

1.7 What We Just Learned

BDD was created to address the challenges associated with misunderstandings in a project team. Examples make an excellent bridge between business requirements and technical specifications.

Scenarios, written in a ubiquitous language, act as both documentation and tests. As documentation, they close the gap between business and development. As tests, they demonstrate that the solution being implemented aligns with business requirements. The Scenarios provide documentation that describes the actual application functionality.

The focus of BDD is collaboration and the examples are turned into test cases. The purpose is to inform all involved that development is moving in the correct direction.

From the Pipeline v17.0

This entry is part 17 of 23 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

The Top 5 Considerations for Creating a Successful Cloud-Based Pipeline

The article, posted by Even Glazer, covers the top five considerations for creating and running an automated, cloud-based pipeline. He advises anyone looking to implement a pipeline to consider the constraints of your organization, particularly around data security policies. The top five considerations are: (1) Consider Your Business Needs First, (2) Develop Your Cloud-Based Pipeline as Part of Your Apps, (3) It’s All About Continuous Improvement, (4) Enable Self Service Features, and (5) Track Your Pipeline, Microservices and Compliance Policies.

Patterns of Distributed Systems

Another mammoth post that can be turned into a book chapter on Martin Fowler’s site. This time, guest author Unmesh Joshi takes us through a set of patterns he observed in mainstream open source distributed systems. Several of these patterns are works in progress but the article itself is well worth a read.

“Distributed systems provide a particular challenge to program. They often require us to have multiple copies of data, which need to keep synchronized. Yet we cannot rely on processing nodes working reliably, and network delays can easily lead to inconsistencies. Despite this, many organizations rely on a range of core distributed software handling data storage, messaging, system management, and compute capability. These systems face common problems which they solve with similar solutions. This article recognizes and develops these solutions as patterns, with which we can build up an understanding of how to better understand, communicate and teach distributed system design.”

Code Coverage Best Practices

Another great post from the Google Testing blog about code coverage. They openly question whether code coverage alone reduces defects and whether a high percentage of coverage is responsible for higher-quality testing. Chasing a specific number does not mean the application under test is of good quality. Instead, it’s important to use a risk-based approach to testing and to gate all deployments on code coverage.

“We have spent several decades driving software testing initiatives in various very large software companies. One of the areas that we have consistently advocated for is the use of code coverage data to assess risk and identify gaps in testing. However, the value of code coverage is a highly debated subject with strong opinions, and a surprisingly polarizing topic. Every time code coverage is mentioned in any large group of people, seemingly endless arguments ensue. These tend to lead the conversation away from any productive progress, as people securely bunker in their respective camps. The purpose of this document is to give you tools to steer people on all ends of the spectrum to find common ground so that you can move forward and use coverage information pragmatically. We put forth best practices in the domain of code coverage to work effectively with code health.”

Searchiiiiiing, Seek And Locate…But Only With Appropriate Attributes for Automation

Another great post by everyone’s favorite metal-loving test automation architect Paul Grizzaffi. In his latest post, Paul discusses one of the big drawbacks of UI-based automation: the attributes used for locating elements. Sometimes the only access we have to an application is via the UI, so we must interact with elements that can be typed into, pressed, or clicked, and whose values need to be inspected. Sometimes third-party tools (such as Salesforce, SiteCore, or Oracle Cloud) have dynamically built elements that make attributes difficult to nail down. We can instead use “data” attributes – arbitrary attributes added to an HTML element – to help automation developers locate elements and make UI-based automation a little less brittle.

WebDriverIO for Javascript Automation Testing

Joe Colantonio walks us through the various implementations of WebDriverIO for automation testing. WebDriverIO is a JavaScript-based testing tool that uses Selenium. People typically use WebDriverIO if they don’t want to build their own framework from scratch and they want some additional features not provided in vanilla Selenium. The article provides a list of these features, along with several podcast interviews on the subject for those looking to learn more.

Slaying the Leviathan: Containerized Execution of Test Automation-part 1

This entry is part 1 of 2 in the series Slaying the Leviathan


In this three-part series on test automation, I will explain how to utilize Docker, a Ruby/Cucumber web framework, and Jenkins to run parallel test automation within Docker containers.

All of the code shown in this series is accessible on GitHub here. This repository contains all Docker and test automation components necessary to follow along and complete the same steps on your machine. If you plan on following along, you will need to install Docker, Jenkins, Git, and Ruby.

After you’ve installed the necessary prerequisites, go ahead and pull down the suite from GitHub. Navigate in the terminal to the directory where you want this framework to reside and run the ‘git clone’ command with the repository’s URL.

If the command executes correctly, you should now have the docker_web_repo pulled down onto your local machine. In order for the later functionality to work, ensure that the root folder of the framework is named sample_cucumber.

Framework Introduction

Before we start with Docker, it’s important to understand how the framework functions when running locally. Completing the following steps provides a level of insight into the steps/components necessary for the framework to run.

  • Step one has been completed, as we have cloned the repo from GitHub.
  • Navigate to your terminal and type in ruby -v, which should return the version number of Ruby installed on your machine. If it doesn’t, the Path environment variable needs to be updated to include the location of the Ruby executable. Since I am running Ruby 2.6.6, my path to the Ruby executable would be C:\Ruby26\bin.
  • In the terminal, navigate to the directory where you pulled down the docker_web_repo, type in ‘gem install bundler‘, allowing you to utilize bundler’s ability to install all the gems listed in the Gemfile of the repo.
  • Now input ‘bundle install‘, which pulls down all the gems listed in the Gemfile from the configured gem source (typically rubygems.org).
  • Navigate here and download the ChromeDriver of the same version number as Chrome on your machine.
  • Once the ChromeDriver has been downloaded, place it in the bin directory of the version of Ruby you installed.
  • This suite utilizes Rake tasks to run; for more information about Rake tasks, click here. For this blog post, all you have to do is set an environment variable and run the Rake task. First, input set or_tags=regression into your terminal and then type rake features:default. You should see output indicating that the regression is running. No browser will visibly open because the suite is running in headless Chrome.

We now have a checklist of the environmental setup to be completed for this web repository to execute automated tests. 
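That checklist can be condensed into a short terminal transcript (a sketch of the steps above; the repository URL is omitted here just as it is in the post, and ‘set’ is the Windows syntax – use ‘export’ on macOS/Linux):

```shell
git clone <repository URL>   # step 1: pull down the suite
cd sample_cucumber           # root folder must be named sample_cucumber
ruby -v                      # confirm Ruby is on the PATH
gem install bundler
bundle install               # installs the gems listed in the Gemfile
set or_tags=regression       # environment variable read by the Rake task
rake features:default        # runs the regression in headless Chrome
```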

Additional Functionality

Another powerful facet of this framework is the code in lib/utilities/dynamic_tags.rb. This code allows the tests in the suite to be split into sections and run in parallel. All this code requires at runtime is for environment variables to be set for total_number_of_builds and build_number.

This code works by replacing a tag (by default @regression) with the tag @split_builds for the subset of the tests corresponding to the build_number provided as an environment variable.
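As a sketch of how such a split might work (an assumption on our part – the real logic lives in lib/utilities/dynamic_tags.rb, and tests_for_build is a hypothetical name):

```ruby
# Hypothetical sketch: divide tests into contiguous slices and return the
# slice belonging to this build. In the real framework, the matching subset
# is retagged from @regression to @split_builds instead of being returned.

def tests_for_build(test_ids, total_number_of_builds, build_number)
  slice_size = (test_ids.length.to_f / total_number_of_builds).ceil
  test_ids.each_slice(slice_size).to_a[build_number - 1] || []
end

tests = %w[login checkout search profile orders history]
puts tests_for_build(tests, 3, 1).inspect  # => ["login", "checkout"]
puts tests_for_build(tests, 3, 3).inspect  # => ["orders", "history"]
```

Each Jenkins build would then set its own build_number and run only its slice, giving parallel execution across containers.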

Docker 101

The best overview is provided by the folks at Docker in their official documentation. For our purposes, it’s important to understand the following.

At a high level, Docker allows multiple processes to run on a single host. Each process has its own hostname, IP address, and file system, while all of them share the host’s operating system, CPU, and memory. This is very similar to a virtual machine; one key difference is that each virtual machine runs its own operating system.

Additionally, Docker offers methods of storing and deploying these processes. All of the setup necessary to run a process is stored as a Docker image. The Docker image is a snapshot of what you want the environment to look like for the process to run successfully.

The Docker image is constructed from a set of steps housed within a Dockerfile.

Lastly, when we deploy a running instance of one of these images, it is called a “container” in the Docker world.

Conclusion and Next Steps

In this first blog post of a three-part series, we executed the framework locally on our machine and became familiar with some of its components. Additionally, we have explored what Docker is, conceptually.

In the next blog post, we dive into the actual implementation of Docker and how it interacts with the testing framework.

From the Pipeline v16.0

This entry is part 16 of 23 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

Best Books to Learn Automation Testing

The following is a list of ten books to help anyone learning automation testing. Some of the books included are industry standards such as “Clean Code” by Bob Martin and “Refactoring” by Martin Fowler. Dot Graham is represented twice with a couple of classics in “Software Test Automation” and “Experiences of Test Automation”. Probably one of the best jumping off points is “Continuous Testing for DevOps Professionals” because it’s an anthology series with many well-known automation developers each publishing a chapter.

Defensive Design Strategies to Prevent Flaky Tests

Flaky tests are those tests that alternate between passing and failing due to poor automation code, test data issues, or environment instability. Investigation of flaky tests takes away valuable time from investigating true failures. Flaky tests can be unit tests, integration tests, or UI tests. One of the recommendations is to not make assumptions about the data you can’t verify. Tests should examine the current state, make a change to the system, and then examine the new state.

5 Tips to Take Your DevOps Pipeline Beyond the Basics

“The goal of a DevOps pipeline is to create a continuous workflow that includes the entire application lifecycle. But too often, people focus only on the tools and automating everything, not stopping to think whether their processes could further improve performance and efficiency. Let’s look at some common challenges to continuous delivery and then learn five tips for refining your DevOps pipeline and taking it to the next level.”

Specification by Example, 10 Years Later

Gojko Adzic takes a look at “Specification by Example” 10 years on. He recently conducted a survey to discover what’s changed in the industry since releasing the book. The summary draws on responses from many of Gojko’s followers and from industry experts. They looked at areas such as using executable specifications, writing style for requirements, tooling selection, and extent of automation.

How to Automate Video Game Tests

Joe Colantonio conducted an interview with Shane Evans about GameDriver, a way to conduct automated testing in games that goes beyond unit testing. GameDriver allows for automation of playtesting, which is useful because most integration testing and playtesting are done manually in the game industry. With the shift to CI/CD as a standard practice, game developers need to test more quickly. The interview takes you through the architecture of GameDriver, a simple example, and a game test recorder.

From the Pipeline v15.0

This entry is part 15 of 23 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

IEEE Spectrum Ranked the Top Trending Programming Languages

Python earned the top spot of programming languages as ranked by IEEE based on 11 metrics from 8 sources. The study found that JavaScript has greater volume but is used primarily for web applications, whereas Python has become a general-purpose scripting language. One of the interesting metrics used was the number of job openings on Indeed (not sure why LinkedIn was not used instead), which had Java at #1 followed by JavaScript, C#, and Python. This article was quite helpful in understanding trends in the industry.

Value Stream Mapping: How to See Where You’re Going By Seeing Where You Are

Steve Pereira posted an experience report on Value Stream Mapping from his involvement with 3 different organizations over the past year. A value stream is the sequence of activities an organization undertakes to deliver on a customer request. The sequence is displayed visually to depict information and material flow. This is a great example of three different mapping exercises and the value provided through value stream mapping (VSM).

VSM DevCon

As a follow-up on the post above, VSM DevCon recently hosted a virtual conference on value streams. They posted all the recorded videos from the event (free to view). Details below: “As software development, delivery and performance become more complex due to modern architectures, value streams can help organizations unlock the bottlenecks and eliminate process waste to continuously improve how they work and deliver better experiences to their customers. Value stream management concepts are critical when the product changes frequently due to opportunities in the markets, the materials change due to the complexity of modern software architectures and means of delivery, and the output is often changing based on customer demands and expectations.”

Comparing 4 Top Cross-Browser Testing Frameworks

Eran Kinsbruner examines several of the most popular testing frameworks used for webapp testing, comparing Cypress, Selenium, Puppeteer, and the recently released Microsoft Playwright. Overall, Selenium is the most flexible to use but does require more programming knowledge than the other three.

Technical Debt: 5 Ways to Manage It

This article offers five ways to help manage technical debt within a team or an organization. In the post, the author recommends a team look to reframe software development strategy (new code standards, TD tracking, agile approach that promotes TD removal), integrate metrics into development (code coverage for unit tests, bug counts, etc.), test more frequently within a release cycle, maintain a knowledge base for standards & practices, and conduct regular refactoring sessions.

Slaying the Hydra: Modifications and Next Steps

This entry is part 5 of 5 in the series Slaying the Hydra

In this final installment of “Slaying The Hydra”, I discuss modifications that can easily be made to the suite for scalability, based on the resources available.

Additionally, I provide an overview on how we can expand and improve on the ideologies introduced in this series for a future series.

Run Time Parameters

In our example we only have one parameter, browser, specifying whether we want to run the test automation in a Chrome or IE browser. In enterprise test automation frameworks, the run time parameters included are often more exhaustive, allowing for more dynamic usage of the suite.

To modify the suite for additional run time parameters, we first modify the parameters of the Jenkins Pipeline job itself by selecting ‘Add Parameter‘ and then configuring the parameter to fit our needs.


Additionally, we modify the build job portions of our pipeline to pass the parameter selected in the pipeline job to the build of the test_runner job. This is done by expanding the parameters to include whatever additional values we want to pass into the test_runner job (note: the browser value is returned via params.browser).

Lastly, within the test_runner job we modify the input parameter so that the value from the Jenkinsfile is passed successfully to the build of the job.

Additional Executors and Machines

The really nice thing about this suite and the related code is the ability to execute equally well with 20 tests split between 2 executors, or 2000 tests split between 20 executors.

If we want to increase the number of executors utilized we complete the following steps:

Step One: Pipeline Changes

First, we increase the parallel portion of the pipeline to equal the number of executors available. For every new portion, ensure the build_number and total_builds values are updated to accurate values.

In the pipeline, the parameters for the report_consolidation job in the consolidation stage will need to be modified to include the build_number parameters for each workspace_consolidation job executed.

Step Two: Node Changes

If the executor is a machine not connected to Jenkins, I would reference this Linux Academy post for connecting the machine to Jenkins.

In our test_runner job we have specified @local as the tag to locate when finding nodes that can run a build of this job.


Therefore, we navigate to Manage Jenkins > Manage Nodes > Corresponding Node and set the Labels value to @local.


One note: in some cases a single machine will be able to handle multiple executions on its own. For example, if you are running remote web browsers via a cloud partner, a single machine can execute multiple instances of the framework without risk of impacting other tests because the browser is remote.

In this case we could alter the # of executors value in Manage Jenkins > Manage Nodes > Corresponding Node.

Step Three: Job Changes

In our clear_workspace job and our workspace_consolidation job, within the node parameter we have to include the additional node as a default node option.

If this step is skipped for the clear_workspace job, you will see confusing results, since the locations that consolidate testing results on the newly added node(s) would never be cleared out and would retain data from previous test executions.

If this step is skipped for the workspace_consolidation job, the results from the execution jobs run on that machine would not be included in reporting.

Additionally, in the report_consolidation job, the parameters would be modified to include all parameters passed in from the pipeline, representing all of the runs of the workspace_consolidation job.

This concludes the series on parallel test automation utilizing single-threaded execution nodes. If you have questions or issues when attempting to set this up, don’t hesitate to reach out.

The logical next step from here is to create an example that completes the same sort of parallelization utilizing Docker. This will allow us to complete similar work with less overhead, and we can add some containerization experience to our belts.

From the Pipeline v14.0

This entry is part 14 of 23 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

The State of Open Source Testing: Key Findings on BDD

A consortium of companies joined together to conduct a survey of more than 1,800 technology professionals, most of them in the QA space, to create “The State of Open Source Testing” survey results. Their findings were (1) Writing good Gherkin requires practice, (2) Living Documentation is a hidden gem, (3) SpecFlow and Cucumber are the tools of choice, (4) BDD is used in pieces, and (5) BDD projects run with 50% higher efficiency.

Data Toxicity and Automatic Number-Plate Recognition (ANPR) – Take Five for CyberSecurity

Wolf Goerlich has been posting great videos on cybersecurity in a series called “Take Five” (because the videos are five minutes). In the associated video he talks about a leak of license plates from an automated photo recognition system; he shares what “data toxicity” means and three things a company can do about toxic data.

Top 75 Automation Testing Blogs & News Websites To Follow in 2020

The following is a list of the most popular automation test blogs to follow in 2020 as ranked by Feedspot. Their team ranked the blogs based on relevancy, industry focus, posting frequency, social media follow counts & engagements, domain authority, age of the blog, and Alexa traffic report.

Enterprise Test Automation: 4 Ways to Break Through the Top Barriers

“How can mature companies with complex systems achieve the level of test automation that modern delivery schedules and processes demand? There are four strategies that have helped many organizations finally break through the test automation barrier: Simplify automation across the technology stack, end the test maintenance nightmare, shift to API testing, and choose the right tools for your needs.”

A Digital Jobs Program to Help America’s Economic Recovery

Google has started an online jobs program to teach people skills to shift careers. The program is called “Grow with Google” and will encompass multiple Google Career Certificates, need-based scholarships, and $10 million in job training grants; Google will consider the certificates the equivalent of a four-year degree for related roles. The new certificates will be in Data Analytics, Project Management, and User Experience (UX) Design. These programs don’t require a degree or prior experience to enter.

Book Club: The Phoenix Project (Chapters 30-35)

This entry is part 8 of 8 in the series Phoenix Project

The following is a chapter summary for “The Phoenix Project” by Gene Kim for an online book club.

The book club is a weekly lunchtime meeting of technology professionals. As a group, the book club selects, reads, and discusses books related to our profession. Participants are uplifted via group discussion of foundational principles & novel innovations. Attendees do not need to read the book to participate.

Chapters 26-29 HERE

Background on the Phoenix Project

“Bill, an IT manager at Parts Unlimited, has been tasked with taking on a project critical to the future of the business, code named Phoenix Project. But the project is massively over budget and behind schedule. The CEO demands Bill must fix the mess in ninety days or else Bill’s entire department will be outsourced.

With the help of a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking, Bill must organize work flow, streamline interdepartmental communications, and effectively serve the other business functions at Parts Unlimited.

In a fast-paced and entertaining style, three luminaries of the DevOps movement deliver a story that anyone who works in IT will recognize. Readers will not only learn how to improve their own IT organizations, they’ll never view IT the same way again.”

The Phoenix Project

Chapter 30

Bill joins Erik at MRP-8.

Erik reveals he was a Special Forces officer in the US Army.

“A manufacturing plant is a system. The raw materials start on one side, and a million things need to go just right in order for it to leave as finished goods as scheduled out the other side. Everything works together. If any work center is warring with the other work centers, especially if Manufacturing is at war with Engineering, every inch of progress will be a struggle.”


“Takt time” is the cycle time needed in order to keep up with customer demand. If any operation in the flow of work takes longer than the takt time, you will not be able to keep up with customer demand. The feedback loop must go back to product definition, design, and development.
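As a quick illustration of the takt time arithmetic (the numbers below are invented for the example, not from the book):

```ruby
# Takt time = available production time / customer demand.
# Illustrative numbers only: an 8-hour shift (480 minutes) with
# demand of 60 units means one unit must be completed every 8 minutes.
available_minutes = 480.0
demand_units = 60
takt_time = available_minutes / demand_units
puts takt_time  # => 8.0 (minutes per unit)
```

If any work center needs longer than 8 minutes per unit, it cannot keep up with demand, which is exactly the bottleneck condition Erik describes.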

Erik points out the most profitable items made in the plant have the longest setup and process times. To solve the problem, they developed a machine that combined four work centers into one, eliminating over thirty manual, error-prone steps, completely automating the work cycle.

Bill is challenged to decrease IT’s changeover time and enable faster deployment cycle time. Erik expects 10 deployments a day.

“Dev and Ops working together, along with QA and the business, are a super-tribe that can achieve amazing things. . . Until code is in production, no value is actually being generated, because it’s merely WIP stuck in the system. He kept reducing the batch size, enabling fast feature flow. . . He automated the build and deployment process, recognizing that infrastructure could be treated as code, just like the application that Development ships.”


Jez Humble and Dave Farley codified the practices and principles that enable multiple deployments per day in the book “Continuous Delivery”.

Deployment Pipeline: entire value stream from code check-in to production. Everything in version control — not just code but everything required to build the environment. Automate the entire environment creation process.

Erik tasks Bill with moving Brent to a team that will automate the build process.

“Stop focusing on the deployment target rate. Business agility is not just about raw speed. It’s about how good you are at detecting and responding to changes in the market and being able to take larger and more calculated risks.”


Features are always a gamble. So the faster features are pushed to market and tested, the better off a company is at adapting.

Chapter 31

SWAT Team kick-off meeting led by Chris. Steve has authorized a small team to deliver the promotion functionality and do whatever it takes to make a positive impact on the holiday shopping season.

The chief concern is the deployment process and the way Parts Unlimited is building the environments.

Chris is shocked at the “ten deploys per day” request, but Patty thinks the idea is solid; they can deploy bug fixes and performance enhancements. The company could enable Marketing to make their own changes to content or business rules, or enable faster experimentation and A/B split testing.

Development and Operations start to generate all the steps needed to deploy a feature.

Development examples: automated tests in Dev, creating a QA environment that matches Dev, deploying code into it, executing tests, deploying & migrating to a stage environment, load testing.

Operations examples: preparing new server instances; loading and configuring the operating system, databases, and applications; making all the changes to the networks, firewalls, and load balancers.

Bill marks each step where IT had problems with deployments in the past; almost all steps are marked.

Patty creates a “Value Stream Map” on the whiteboard. She writes the time-commitment for each step and whether the step requires wait time.

Patty suggests focusing on environments and code packaging process.

Brent proposes creating a common automated script for building the dev, qa, and prod environment.

New requirement for IT: at the end of each Sprint, the deployable code AND the environment must be checked into version control.

New process for IT: someone is responsible for package creation and Dev handoff — generate and commit the packaged code to trigger an automated deployment into the QA environment.

“Brent, if it’s okay with you and everyone else, I’d like to invite you to our team sprints, so that we can get environment creation integrated into the development process as early as possible. At each three-week sprint interval, we not only need to have deployable code but also the exact environment that the code deploys into, and have that checked into version control, too.”


Chapter 32

Bill reflects on how different software engineers are today than when he was coming up through the ranks. They are looser and less likely to follow process.

The SWAT Team calls themselves “Unicorn”. The objective is doing whatever it takes to deliver effective customer recommendations and promotions.

Project Unicorn had a code base completely decoupled from Phoenix.

The first challenge was to start analyzing customer purchase data. The team created a completely new database, using open source tools with data copied from Phoenix and order/inventory management systems.

Decoupling from other projects made changes easier and did not put other projects at risk.

Unicorn team developers were using the same OS, library versions, databases, database settings, etc.

Brent goes missing from the team for two days and can’t be contacted. Brent has been taken to a secret off-site to discuss breaking up the company. He believes a split will be a complete nightmare.

“Dick and the finance team rushed me out the door yesterday morning to be a part of a task force to create a plan to split up the company. Apparently, this is a top priority project, and they need to figure out what the implications to all the IT systems are.”


Bill schedules a meeting with Steve Masters to discuss the off-site.

Bill believes Brent is someone who has the respect of developers, has enough deep experience with IT infrastructure, and can describe what the developers need to build. Steve agrees to “bring Brent home”.

Chapter 33

Sarah demands to have Brent returned in an angry email to the Chairman of the Board.

Meanwhile, the Unicorn promotion report takes much longer to execute. One of the developers recommends using cloud compute instances. Maggie looks into cloud providers and Security works to identify risks with sending customer data to the cloud. During the Demo, the team reports they can deploy easily to the cloud with the automation in place.

Maggie demos the promotions offering system and proposes to do an e-mail campaign to one percent of the customers (as a trial before Thanksgiving).

The marketing campaign is a success, with over 20 percent of the respondents going to the website and six percent making a purchase. The conversion rates are 5x higher than any prior campaign.

Steve Masters publicly congratulates the entire team on the marketing success.

John reports the security fixes for Unicorn are much easier than Phoenix.

“After being forced to automate our security testing, and integrating it into the same process that William uses for his automated QA testing, we’re testing every time a developer commits code. In many ways, we now have better visibility and code coverage than any of the other applications in the company!”


Developers only have read-only access in production and security tests are integrated into the build procedure. The automated controls help resolve the SOX-404 audit findings.

Only a single issue — promotions for out of stock items — temporarily halted deployment, but was fixed within a day.

Chapter 34

The high traffic to the Parts Unlimited website on Thanksgiving led to a Sev-1 emergency because the e-commerce systems were going down.

The team puts more servers into rotation and turned off computationally-intensive features.

The real-time recommendations can be disabled with a configuration setting in minutes.

Bill calls in the entire team to the office on Black Friday. Store Managers are having trouble handling the volume of requests from Unicorn promotions. To fix the problem, the team will deploy a web page for store personnel to type in the coupon promotion code to automate the cross-shipment from our warehouses. They will also create a new form on the customer account web page to get items delivered directly to them.

At a Monday meeting with Steve Masters, Steve reports that the in-store and web sales are breaking records.

“I want to congratulate you for all your hard work. It has paid off beyond my wildest expectations. Thanks to Unicorn, both in-store and web sales are breaking records, resulting in record weekly revenue. At the current run rate, Marketing estimates that we’ll hit profitability this quarter. It will be our first profitable quarter since the middle of last year.”


Chris reports the team can do daily deployments and they can also do A/B testing all the time — so the company is faster responding to the market.

Bill proposes setting up all new teams just like the Unicorn team for fast deployments. Steve commends them all for their work.

“If we’re done congratulating ourselves, I’ve got a business wake-up call for you. Earlier this month, our largest retail competitor started partnering with their manufacturers to allow custom build-to-order kits. Sales of some of our top selling items are already down twenty percent since they launched this offering.”


The team is next charged with creating “build-to-order kits” with their manufacturing partners.

The next day, Wes says the change Sarah requested would be difficult because their original system is an outsourced mainframe application. To make the change, their outsourcer will need six months to gather requirements and nine months to develop & test.

In a meeting with Steve, the team proposes to break the contract with the outsourcing company early at a cost of $1M and regain control of the MRP application & underlying infrastructure.

With MRP in-house, development could build an interface to Unicorn. The manufacturing capability could be moved from “build to inventory” to “build to order”.

One risk is the outsourcer may have changed the codebase. The outsourcer could also make the transition difficult. John also needs to remove access from the staff who are not being retained.

Sarah does not like the proposal, and wants approval from the board & Bob Strauss. Steve reminds Sarah who she works for, which quiets her.

“I think we need to check with Bob Strauss and get full board approval before we undertake a project this big and risky. Given the previous performance of IT, this could jeopardize all of our manufacturing operations, which is more risk than I think we should take on. In short, I personally do not support this proposal.”


“Remember that you work for me, not Bob. If you can’t work within that arrangement, I will need your immediate resignation.”


Chapter 35

There are few Sev-1 incidents and Bill spends most of his time coaching his managers through two-week improvement cycles according to the Improvement Kata.

The team is closing its monitoring gaps, has refactored or replaced the top ten most fragile artifacts for stability, and the flow of planned work is fast.

Bill also deployed Project Narwhal (Simian Army Chaos Monkey) that routinely creates large-scale faults, thus randomly killing processes or entire servers.

Development and IT Operations worked together to make their code & infrastructure more resilient to failures.

John similarly started a project called “Evil Chaos Monkey” that would exploit security holes, fuzz applications with malformed packets, try to install backdoors, gain access to confidential data, and launch other attacks.

Steve Masters hosts a party at his home and asks Bill to arrive an hour early.

The company will have a record-breaking quarter. The average order size hit a record after Project Unicorn delivered.

Sarah has decided to look for other options elsewhere and is on a leave of absence.

For the last few months, Bill has been interviewing candidates for CIO. He feels most would revert the changes the company has made over the last few months. Bill puts Chris forward for the position.

Steve explains that Bill was the unanimous choice to take the position, but he won’t be getting the job.

Steve wants Bill to do rotations in sales & marketing, manage a plant, get international experience, manage the relationships with critical suppliers, and manage the supply chain. Erik will be his mentor. If successful, Bill will be moved into the COO role.

“In ten years, I’m certain every COO worth their salt will have come from IT. Any COO who doesn’t intimately understand the IT systems that actually run the business is just an empty suit, relying on someone else to do their job.”


IT should be embedded either into business operations or into the business itself.

The rest of the IT management and executive staff arrive at Steve’s for the party. They gift Bill with a bronzed “craptop”.

Erik will not become a board member, but instead will be a large investor in the company. He wants to create a hedge fund that invests in companies with great IT organizations.

Erik charges Bill to write “The DevOps Cookbook” to show how IT can regain the trust of the business.

“I want you to write a book, describing the Three Ways and how other people can replicate the transformation you’ve made here at Parts Unlimited. Call it The DevOps Cookbook and show how IT can regain the trust of the business and end decades of intertribal warfare. Can you do that for me?”


The group assembled reflects much more than Dev or Ops or Security. It’s Product Management, Development, IT Operations, and Information Security all working together and supporting one another.