From the Pipeline v13.0

This entry is part 13 of 23 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

Unit Testing is Overrated

In this recent post the author argues that a heavy focus on unit tests is in many cases a waste of time. In the example-heavy piece, he argues that unit tests are only useful for verifying pure business logic inside a given function: external dependencies have to be replaced with abstractions for a test to qualify as a unit test, and a test that writes data to an outside system is technically an integration test (those interactions would need to be abstracted away as well to make it a unit test). The post is definitely worth a read.
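The argument is easiest to see with a small example (mine, not the post's): to unit test a function that talks to an external system, the dependency has to be injected and replaced with a test double, at which point the test exercises the abstraction rather than the real integration.

```python
from unittest.mock import Mock

def total_owed(customer_id, repo):
    """Business logic: sum the unpaid invoices for a customer.
    The external dependency (repo) must be injected to be unit-testable."""
    invoices = repo.invoices_for(customer_id)
    return sum(inv["amount"] for inv in invoices if not inv["paid"])

# The "unit" test: the real data store is replaced with a mock, so the
# test verifies the summing logic but not the actual integration.
repo = Mock()
repo.invoices_for.return_value = [
    {"amount": 50, "paid": False},
    {"amount": 30, "paid": True},
    {"amount": 20, "paid": False},
]
print(total_owed("c-42", repo))  # 70
```

The hypothetical `repo` here stands in for whatever data-access layer the function depends on; the point of the post is that mocking it out is what makes the test a "unit" test at all.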

Beyond the Cache with Python

Guy Royse explores uses of Redis beyond caching. Redis can be used as a queue by pushing new items to the end of a list. It can also be used for publishing and subscribing to events, for data streaming, as a search engine, and as a primary database. The examples are all in Python and drawn from the Bigfoot dataset Guy uses at conferences. The article also links to a GitHub repository with the code.

How to Achieve Automated Accessibility Testing

“Accessibility testing is a type of testing done to ensure that your apps are usable by as many people as possible. Automated accessibility testing helps expedite your release cycle and identify issues early.” Eran Kinsbruner shows how a team can make accessibility testing an upfront concern and walks through an example of how to automate the testing using Jenkins, GitHub, Selenium, Axe, and Perfecto. The post also includes a webinar on how automated accessibility testing can be achieved.

Working with BDD Remote Part 3: Make Living Documentation Accessible

Gojko Adzic’s third installment in his series on remote BDD work. In the post, he advocates making scenarios easily accessible as part of the living documentation for testing, in particular for exploratory testing. One example he provides is using Azure DevOps with SpecFlow “LivingDoc”, which renders feature files in Azure DevOps with filtering and linking capabilities; other tools exist for Jira. The ultimate goal is making this material readily available to all members of the team.

Best Selling DevOps Books

This is a compiled list of the best-selling DevOps books of all time according to BookAuthority. As expected, The DevOps Handbook, Accelerate, and The Phoenix Project top the list.

Book Club: The Phoenix Project (Chapters 26-29)

This entry is part 7 of 8 in the series Phoenix Project

The following is a chapter summary for “The Phoenix Project” by Gene Kim for an online book club.

The book club is a weekly lunchtime meeting of technology professionals. As a group, the book club selects, reads, and discusses books related to our profession. Participants are uplifted via group discussion of foundational principles & novel innovations. Attendees do not need to read the book to participate.

Chapters 21-25 HERE

Background on the Phoenix Project

“Bill, an IT manager at Parts Unlimited, has been tasked with taking on a project critical to the future of the business, code named Phoenix Project. But the project is massively over budget and behind schedule. The CEO demands Bill must fix the mess in ninety days or else Bill’s entire department will be outsourced.

With the help of a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking, Bill must organize work flow, streamline interdepartmental communications, and effectively serve the other business functions at Parts Unlimited.

In a fast-paced and entertaining style, three luminaries of the DevOps movement deliver a story that anyone who works in IT will recognize. Readers will not only learn how to improve their own IT organizations, they’ll never view IT the same way again.”

The Phoenix Project

Chapter 26

Bill’s team will conduct business process owner interviews for “understanding customer needs and wants,” “product portfolio,” “time to market,” and “sales pipeline”.

John researches the business SOX-404 control environment.

Meeting with Ron Johnson, VP of Sales. He is the owner of the sales pipeline and sales forecast accuracy.

The Sales Forecast Accuracy measure starts with a revenue target. Ron’s team always misses the target because it’s not attainable.

Parts Unlimited does not know what its customers want. The company has too much product that will never sell and never enough of product that does sell.

“It’s a crappy way to run a company. It demoralizes my team, and my top performers are quitting in droves. Of course, we’ll replace them, but it takes at least a year for replacements to perform at full quota capacity. Even in this lousy economy, it takes too long to find qualified sales people.”

Ron Johnson

Sales Forecast Accuracy is jeopardized by poor understanding of customer needs and wants.

Sales Pipeline — challenging for salespeople to get information from the Customer Relationship Management (CRM) system.

Bill wants money for the monitoring to enforce control policies to ensure incidents like the phone outage won’t happen again.

By establishing which applications and infrastructure are fragile and matter to Sales, the team can prioritize preventive work to ensure revenue is not hurt.

Maggie owns the merchandising and pricing roadmaps for Parts Unlimited. She’s the business sponsor for half the IT projects.

“Ultimately, the way I measure our understanding of customer needs and wants is whether customers would recommend us to their friends. Any way you cut it, our metrics aren’t very good.”


Data in the order entry and inventory management systems are almost always wrong.

Maggie wants accurate and timely order information from the stores and online channels. That data can be used for marketing campaigns that continually run A/B tests of offers to find the ones customers jump at.

The information can be used to drive the production schedule to manage the supply and demand curves.

“In these competitive times, the name of the game is quick time to market and to fail fast. We just can’t have multiyear product development timelines, waiting until the end to figure out whether we have a winner or loser on our hands. We need short and quick cycle times to continually integrate feedback from the marketplace.”


“The longer the product development cycle, the longer the company capital is locked up and not giving us a return. Dick expects that on average, our R&D investments return more than ten percent. That’s the internal hurdle rate. If we don’t beat the hurdle rate, the company capital would have been better spent being invested in the stock market or gambled on racehorses.”


Maggie dislikes the three-year lead time on projects; they should be six to twelve months. Phoenix cost $20 million over three years. There is intense competition for IT support across projects. Given the WIP and capital locked into Phoenix, the project should never have been approved.

Chapter 27

Phone and MRP systems need predictive measures that include compliance with the change management process, supervision and review of production changes, completion of scheduled maintenance, and elimination of all known single points of failure.

John introduces the idea of CIA — confidentiality, integrity, and availability.

For Marketing Needs and Wants: Support weekly and eventually daily reporting of orders and percentage of valid SKUs created by Marketing.

Excerpt from the Phoenix Project.

“Seems pretty obvious to me. We need to come up with the controls to mitigate the risks in your third column. We then show this table to Ron and Maggie, and make sure they believe that our countermeasures help them achieve their objectives. If they buy it, we work with them to integrate IT into their performance measures.”


The company used to have the CIO attend the quarterly business reviews but stopped inviting him after negative feedback.

The company must consider IT risks as business risks; otherwise, it will miss its objectives.

Bill proposes to integrate risks into leading indicators of performance. The goal is to improve business performance and get earlier indicators of whether the company will achieve them.

The company is moving too slowly with too much WIP and too many features in flight. The releases must be smaller and shorter and deliver cash back faster. Bill proposes meeting for three weeks with each business process owner to identify business risks posed by IT to integrate them into leading indicators of performance. He also proposes meeting with Dick & Chris about Phoenix to improve throughput.

From the audit-compliance side, John excitedly reports back on his findings.

Faye, a financial analyst in Finance, created the SOX-404 control documents. They show the end-to-end information flow for the main business processes behind each financially significant account.

“The control being relied upon to detect material errors is the manual reconciliations step, not in the upstream IT systems.” – Faye’s document

John wants to rebuild the compliance program from scratch. He proposes to: 

  1. drastically reduce the scope of the SOX-404 compliance program 
  2. perform root cause analysis of production vulnerabilities
  3. flag all systems in scope for compliance audits, to avoid changes that put the audit at risk and to create ongoing documentation for auditors
  4. reduce the size of the PCI compliance program by eliminating anything that stores or processes cardholder data
  5. pay down technical debt in Phoenix

“We quickly agree to pair up people in Wes’ and Chris’ group with John’s team, so that we can increase the bench of security expertise. By doing this, we will start integrating security into all of our daily work, no longer securing things after they’re deployed.”


Chapter 28

The number of Sev-1 outages this month is down by more than two-thirds. Incident recovery time has been cut in half.

By improving production monitoring of the infrastructure and applications, IT knows about incidents before the business does.

The project backlog has been reduced by eliminating unneeded security projects from audit preparation and replacing them with preventive security projects.

Bill comes to the conclusion that IT Operations work is similar to plant work. The team is mastering the First Way: curbing the handoffs of defects to downstream work centers, managing the flow of work, setting the tempo by our constraints, and understanding what is important versus what is not.

Sarah’s group has been making unauthorized purchases of online and cloud services; her team has four instances of using outside vendors and online services.

Sarah’s vendors will cause Parts Unlimited to break its customer privacy promises and potentially state privacy laws. One vendor uses a database technology that is not secured.

“The first problem is that both projects violate the data privacy policy that we’ve given our customers,” John says. “We repeatedly promise that we will not share data with partners. Whether we change that policy or not is, of course, a business decision. But make no mistake, if we go ahead with the customer data mining initiative, we’re out of compliance with our own privacy policy. We may even be breaking several state privacy regulations that expose us to some liability.”


Sarah has been able to get away with murder because she has the strategy that Steve needs, whereas Steve is execution-focused.

During the next Phoenix deployment, one of the critical database migration steps failed. Brent made a change to a production database that no one knew about. This change was one of Sarah’s side projects.

The Dev and QA environments don’t match the production environment. The team still manages to finish the deployment before the stores open. Patty sends out a list of known errors to look out for, an internal web page for the latest Phoenix status, and instructions on how to report new problems. The service desk is on standby and both dev & ops teams are on-call.

Chapter 29

Steve Masters is happy with the state of IT even though the latest Phoenix Deployment went in late. 

“I am very proud to be a part of this team that is obviously working together better than ever, trusting one another, and getting incredible results.”


Sarah is caught in the meeting having started unauthorized IT projects.

“Supporting those projects also requires an incredible amount of work. We’d need to give your vendors access to our production databases, explain how we’ve set them up, do a bunch of firewall changes, and probably over a hundred other steps. It’s not just as easy as signing an invoice.”


Sarah announces that Board Member Bob Strauss believes the company should be split up and leaves the room in a huff.

Erik challenges the group to master the Second Way: creating constant feedback loops from IT Operations back into Development, designing quality into the product at the earliest stages.

“In any system of work, the theoretical ideal is single-piece flow, which maximizes throughput and minimizes variance. You get there by continually reducing batch sizes. You’re doing the exact opposite by lengthening the Phoenix release intervals and increasing the number of features in each release. You’ve even lost the ability to control variance from one release to the next.”


Bill proposes to pause deployments until they can figure out how to keep environments synchronized.

Erik proposes that work only flows forward. With rework and long release cycles, the team will never hit the internal rate of return.

“The flow of work should ideally go in one direction only: forward. When I see work going backward, I think ‘waste.’ It might be because of defects, lack of specification, or rework. . . Regardless, it’s something we should fix.”


Bill proposes a SWAT team to deal with the Phoenix capacity issues, with that team tasked with delivering features that hit revenue goals.

The features are focused on customer recommendations and promotions that match the customer profile from consumer data.

From the Pipeline v12.0

This entry is part 12 of 23 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web.

6 Ways to Secure Buy-in For Your DevOps Journey

This article by Scott Carey is an excellent summary of talks at the recent DevOps Enterprise Summit. The first way is “start with people, not technology”, by conducting value stream mapping exercises to identify key business outcomes and processes. “Land and expand” is about finding the first product team or workload to deliver success and then expand to other teams. “Coach, don’t dictate” is about having empathy for the teams being supported in the transition, especially those who haven’t made the journey yet in their careers. “Making it safe to try” is about establishing an environment where it’s alright to fail fast by establishing psychological safety. “Measure, measure, measure” means establishing the telemetry to monitor outcomes and having a shared understanding of success. Lastly, “leveraging a change in management” is about taking advantage of a fresh start to promote DevOps practices.

50 Quick Ideas to Improve User Stories

Several years ago Gojko Adzic published a series of books on testing, user stories, retrospectives, and impact mapping. These books are great resources to help a team or organization improve in their respective areas. Gojko has also turned “Fifty Quick Ideas to Improve Your User Stories” into a free reference card deck under a Creative Commons license. Great reference material for any BA.

Metrics and Scrum

Doc Norton has been consistently featured on “From the Pipeline” for good reason: he’s one of the brightest minds in tech and unafraid to take on sacred cows. In his most recent post, he examines metrics associated with Scrum. What makes the post fascinating is that although the Scrum Guide does not explicitly mention metrics, it does have several points of measurement (in monitoring goals, sprints, and increments) from which implicit metrics can be derived.

Design Patterns in Test Automation

Another great post from Anand Bagmar about test automation. In this lengthy, well-cited post, he walks the reader through the essentials of test automation, from building a framework to common design patterns to challenges associated with the page-object pattern. Bookmark this article for future reference!

Deception and Estimation: How We Fool Ourselves

This brief blog post by Linda Rising to accompany a conference talk raises an interesting point about our built-in biases in estimating software projects. “Research shows that the best estimates come from a high-level comparison of the current project to others of a similar nature. Yet most estimation in software development comes from a bottom-up approach, by looking at the complexity of unknown components and then adding up the pieces to produce a view of the whole.”

Slaying the Hydra: Consolidation of Information and Reporting

This entry is part 4 of 5 in the series Slaying the Hydra

In this fourth blog post of our series, I explain a way to execute distributed parallel test automation. The previous blog entry can be found here.

Orchestration Overview

Referenced below is an image of our pipeline in Jenkins. For this blog post, we will be focusing on the ‘consolidation’ stage within the pipeline.

This stage calls two freestyle jobs: machine_consolidation and report_consolidation.

The report_consolidation job takes three parameters, all derived from the numeric value of the latest build of the machine_consolidation job. In the Groovy pipeline code we associate a latestbuilt handle with the machine_consolidation job instance.

This allows us to call latest_build = latestbuilt.getNumber(), where the .getNumber() method returns the numerical index of the most recently completed machine_consolidation build. This value is then passed to the report_consolidation job as latest_build, latest_build + 1, and latest_build + 2. I will explain why we do this later in the post.
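A minimal sketch of that orchestration in a scripted Jenkins pipeline might look like the following. The parameter names (BUILD_ONE, BUILD_TWO, BUILD_THREE) are placeholders of mine, and the exact trigger syntax will depend on how your pipeline invokes the freestyle jobs:

```groovy
stage('consolidation') {
    // Trigger machine_consolidation; with a Node parameter whose default
    // selects all three test nodes, Jenkins runs one build per node.
    def latestbuilt = build job: 'machine_consolidation'

    // getNumber() returns the build number of the first build in the
    // iteration set; the next two node iterations follow in sequence.
    def latest_build = latestbuilt.getNumber()

    // Hand all three build numbers to report_consolidation so it can
    // copy artifacts from each machine_consolidation build.
    build job: 'report_consolidation', parameters: [
        string(name: 'BUILD_ONE',   value: "${latest_build}"),
        string(name: 'BUILD_TWO',   value: "${latest_build + 1}"),
        string(name: 'BUILD_THREE', value: "${latest_build + 2}")
    ]
}
```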


Machine Consolidation Job

The intent of the machine_consolidation job is to query the nodes that we have utilized in our parallel testing, retrieve the cucumber json file, and then store it as an artifact.

The first thing we do within this job is set up a Node parameter for the project. The Default nodes option within this parameter must have selected all the nodes that were utilized in executing our tests. This allows the job to iterate over all of the utilized nodes and complete the steps. Each iteration will be a new build of the machine_consolidation job.

We should restrict the Jenkins job to execute in the same workspace location that was already used in the ‘running’ stage to store the test output results on every machine.

Then we archive the files as artifacts of this job.

Report Consolidation Job

Now we have three builds of the machine_consolidation job completed. Each of these builds has artifacts representing the output files from one of the three nodes that were utilized for testing.

In the report_consolidation job, the three parameters shown below are passed in from the pipeline level. Their values are equal to the build_number of each of the machine_consolidation builds that ran in sequence due to the node parameter iteration.

In this case, latestbuilt.getNumber() as utilized in the pipeline returns the first build in the iteration set caused by the node parameter of the machine_consolidation job. In the pipeline we increase this number by one and then by two to get the second and third build numbers.

Additionally, we want to clear the workspace location before every run. The sole purpose of this job is to consolidate information again and again; unless explicitly told to, Jenkins will not clear the workspace prior to execution. Multiple test result sets would pile up in the workspace and never get cleared out, which leads to confusion in the report generated further down.

Then we utilize our parameters along with the ‘Copy artifacts from another project’ plugin to copy the artifacts from the three machine_consolidation builds into the build of this job.

Then we archive all the artifacts, which will now include every JSON file from every test execution across all the nodes previously utilized.

Lastly, we utilize the ‘Cucumber Reports’ plugin, which parses all of the JSON files and compiles a single report of test successes and failures.

That sums up the overview of the ‘consolidation’ stage of this pipeline. Now you have all of the information necessary to build this pipeline.

In the next and final post of the blog series, we discuss how to make modifications to the suite and the improvements we can make to the existing ideology.

From the Pipeline v11.0

This entry is part 11 of 23 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web. 

The Difference between Structured and Unstructured Exploratory Testing

“There are a lot of misunderstandings about exploratory testing. In some organizations exploratory testing is done unprofessionally and in an unstructured way—there’s no preparation, no test strategy, and no test design or coverage techniques. This leads to blind spots in the testing, as well as regression issues. Here’s how one company made its exploratory testing more structured.”

Kubernetes: 4 Ways to Save IT Budget With Automation

Kubernetes is a tool used to manage and scale containerized applications. The article offers four ways to save with Kubernetes, though be careful, since it is light on supporting metrics. The four are: (1) more efficient infrastructure management, (2) improved resource utilization, (3) increased developer productivity, and (4) increased operations team productivity.

Automating Safe, Hands-off Deployments

This is a great article on how to set up continuous deployment pipelines. It covers code reviews, unit testing, integration testing, rollbacks, and different types of deployments. The article is part of Amazon’s Builder’s Library, which is absolutely worth a read for anyone interested in CI/CD.

Technical Debt

As a follow-up from last week’s video by Doc Norton on Tech Debt, here is another solid piece of background by Allen Holub. He does a great job tethering the idea of tech debt as something that accrues naturally in Agile; this debt must be paid off regularly else it will accrue to the point it becomes impossible to pay off. He argues that debt does not necessarily come from sloppy decisions but rather a state of ignorance because we are learning as we work. Teams will learn by releasing and should improve their code based on that feedback.

DeepMind Sets AI Loose on Diplomacy Board Game, and Collaboration is Key

Diplomacy is a classic game of negotiation in which seven players make and break alliances in pre-WWI Europe. In the game, it’s difficult for any player to make progress (gaining territory) alone; players need allies to attack other players and grow. Although one player can win the game, most games end in draws among multiple players. The appeal of the game for AI research is collaboration and teamwork among agents, where soft skills are necessary to solve problems. In the simulation, bots replaced players as a way to train AI agents to learn trust and cooperation.

Book Club: The Phoenix Project (Chapters 21-25)

This entry is part 6 of 8 in the series Phoenix Project

The following is a chapter summary for “The Phoenix Project” by Gene Kim for an online book club.

The book club is a weekly lunchtime meeting of technology professionals. As a group, the book club selects, reads, and discusses books related to our profession. Participants are uplifted via group discussion of foundational principles & novel innovations. Attendees do not need to read the book to participate.

Chapters 17-20 HERE

Background on the Phoenix Project

“Bill, an IT manager at Parts Unlimited, has been tasked with taking on a project critical to the future of the business, code named Phoenix Project. But the project is massively over budget and behind schedule. The CEO demands Bill must fix the mess in ninety days or else Bill’s entire department will be outsourced.

With the help of a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking, Bill must organize work flow, streamline interdepartmental communications, and effectively serve the other business functions at Parts Unlimited.

In a fast-paced and entertaining style, three luminaries of the DevOps movement deliver a story that anyone who works in IT will recognize. Readers will not only learn how to improve their own IT organizations, they’ll never view IT the same way again.”

The Phoenix Project

Chapter 21

Bill arrives at an audit meeting in Building 2 to find a packed conference room with attendees including Dick, John, Wes, Erik, Ann, Nancy, and the auditors. Bill is surprised at how awful and stressed John looks.

The meeting lasts five hours and ends with everyone surprised by the auditors’ conclusion that the company probably isn’t in trouble.

Bill is surprised to learn after that meeting that Erik and the lead auditor are old friends.

Bill describes the meeting and says that Dick kept marching out business SMEs to demonstrate that they have their own controls outside of IT where fraud would be caught.

John then asks if Bill has a minute to talk. He is still visibly flustered.

“John looks awful. If his shirt were just a little more wrinkled, and maybe had a stain or two in front, he could almost pass as a homeless person.”


John starts to ramble on to Bill about the systemic IT issues and information security. Erik is still talking to the auditor in the room and guides him out into the hallway upon hearing John starting his rant. John says that no one cares about IT security and the entire dev organization hides their activities from him.

“You all look down on me. You know, I used to manage servers, just like you do. But I found my calling doing information security. I wanted to help catch bad guys. I wanted to help organizations protect themselves from people who were out to get them. It came out of a sense of duty and a desire to make the world a better place.”


Erik comes back into the room angrily and grabs a chair.

“You know what your problem is, Jimmy? You are like the political commissar who walks onto the plant floor…sadistically poking your nose in everybody’s business and intimidating them into doing your bidding, just to increase your own puny sense of self-worth. Half the time, you break more than you fix. Worse, you screw up the work schedules of everyone who’s actually doing important work.”

Erik (to John)

Erik continues to go off on John and tells him that he doesn’t have anything else to say to him until John understands what just happened in that room.

“This should be your guiding principle: You win when you protect the organization without putting meaningless work into the IT system. And you win even more when you can take meaningless work out of the IT system.”


Erik tells John to go to the MRP-8 plant and talk to the safety officer.

Erik leaves John and Bill alone. John says goodbye and pushes his binder off the table. He says he may not be back tomorrow. Bill sees a haiku that John had written:

Here I sit, hands tied

Room angry, I could save them

If only they knew

Chapter 22

The Monday after the audit meeting, John disappeared.

Bill finds Wes and takes him to Patty’s office to talk about the monitoring project. Bill tells them both about how Erik validated that they can release the monitoring project and how important it is so that they can elevate Brent.

Patty seems to entertain the idea that IT is like manufacturing but Wes is still skeptical.

“Let’s use the example of configuring a server. It involves procurement, installing the OS and applications on it according to some specification and then getting it racked and stacked. Then we validate that it’s been built correctly. Each of these steps are typically done by different people. Maybe each step is like a work center, each with its own machines, methods, men, and measures.”


Patty concludes that she’s not sure what the machine would be in her scenario, and that they are better off trying out the process first. Otherwise, they are just stumbling around in the dark.

Wes is still skeptical, but Bill explains how challenging some of the floor work in the factory was. He says how those workers had to call upon their experience to solve problems and how they earned his respect.

Bill says that they should start the monitoring project as soon as they can.

The next Monday, Bill goes to the change room with Patty. She has put up a new Kanban board. Over the weekend she went to the MRP-8 factory and learned to split work into Ready, Doing, and Done in order to reduce WIP.

Patty plans on putting Kanban boards around key resources to manage their work. She thinks this will help predict lead time and get faster throughput.

“Imagine what this will do to user satisfaction if we could tell them when they make the request how long the queue is, tell them to the day when they’ll get it, and actually hit the date, because we’re not letting our workers multitask or get interrupted!”


Patty has also implemented an improvement kata and two-week improvement cycles.

Patty wants to implement a Kanban board around Brent in order to further isolate him from crises.

Later, Bill is sitting with Patty and Wes to figure out how to get project work started again. Bill states that they have two queues: business and internal projects.

They decide to only release the five most important business projects. The hard part is prioritizing the 73 internal projects.

Bill remembers what Erik has told him. He says that unless a project increases Brent’s capacity by reducing his workload or allowing someone else to take it over, then it isn’t important.

He asks for three lists: one of projects that require Brent, one of projects that increase Brent’s throughput, and one with everything else.

“We’re doing what Manufacturing Production Control Departments do. They’re the people that schedule and oversee all of production to ensure they can meet customer demand. When they accept an order, they confirm there’s enough capacity and necessary inputs at each required work center, expediting work when necessary. They work with the sales manager and plant manager to build a production schedule so they can deliver on all their commitments.”


Two days later, Bill finally gets a new laptop. This is two days earlier than planned.

Chapter 23

The following Tuesday, Bill gets a call from Kirsten. Brent is almost a week late on delivering a Phoenix task and the schedule is in jeopardy again. There are also several other late tasks.

Bill gets to work and joins Patty and Wes in a conference room. Patty explains that the late task is a test environment that was supposed to be delivered to QA. It turns out that this one “task” was more like a small project and involved multiple layers and teams.

Bill goes to the whiteboard and draws a graph:

Bill explains how wait times depend upon resource utilization: “The wait time is the ‘percentage of time busy’ divided by the ‘percentage of time idle.’ In other words, if a resource is fifty percent busy, then it’s fifty percent idle. The wait time is fifty percent divided by fifty percent, so one unit of time. Let’s call it one hour. So, on average, our task would wait in the queue for one hour before it gets worked.”
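The rule of thumb Bill describes can be sketched in a few lines of Python (my illustration of the busy/idle ratio, not anything from the book's own materials):

```python
def wait_time(utilization):
    """Wait time, in units of one task's service time, using the rule
    of thumb from the chapter: percent busy divided by percent idle."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be at least 0 and below 1")
    return utilization / (1.0 - utilization)

# At 50% busy a task waits 1 unit; at 90% busy it waits 9 units.
# This is why queues explode as a resource like Brent nears full
# utilization.
for u in (0.50, 0.90, 0.99):
    print(f"{u:.0%} busy -> wait {wait_time(u):.0f}x")
```

The curve is flat at low utilization and nearly vertical above ninety percent, which matches the graph Bill draws on the whiteboard.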

Bill recalls the Phoenix deployment when Wes was complaining about an excess amount of tickets that would take weeks to resolve. He concludes that the handoff between Dev and IT Ops is very complex.

Patty states that the graph shows that everyone needs idle time. Otherwise, WIP gets stuck in the system.

The group then decides that they can create a Kanban lane for each large, recurring “task”.

“You know, deployments are like final assembly in a manufacturing plant. Every flow of work goes through it, and you can’t ship the product without it. Suddenly, I know exactly what the Kanban should look like.”


They decide that Patty will work with Wes’s team to assemble the 20 most frequently recurring tasks.

Chapter 24

The chapter starts with Bill and his family visiting a pumpkin patch on a Saturday. They spend the day together, and then Bill is watching a movie with his wife on the couch.

He gets a call on his phone from John. He answers the phone after a few ignored calls, and John asks Bill to meet him at a bar. Bill eventually agrees.

When Bill sees John, he looks awful. John is also very drunk.

John says that he has just been at home watching TV. He wants to ask Bill one last question before he leaves.

“Just tell me straight. Is it really true that I haven’t done anything of value for you? In all the three years that we’ve worked together, I’ve never, ever been helpful?”


Bill answers, “Look, John. You’re a good guy, and I know your heart is in the right place, but up until you helped hide us from the PCI auditors during the Phoenix meltdown, I would have said no. I know that’s not what you want to hear, but. . . I wanted to make sure that I wasn’t feeding you a line of bullshit.”

John downs a glass of scotch after hearing Bill’s response and asks for another, but Bill tells the waitress not to bring it and to order a cab instead.

Bill puts John in the cab and sends him home.

He tries calling John the next day, but John does not answer. There are still rumors circulating at the office regarding what happened to him.

Later Monday night, Bill receives a text from John: Thanks for the lift home the other day. Been thinking. I told Dick that I’ll be joining our 8am mtg tomorrow. Should be interesting.

Bill doesn’t know what meeting John is talking about.

When Bill asks John what meeting he’s talking about, John responds that he’s been arrogant and doesn’t know Dick that well. He says that he and Bill need to change that together. Bill calls John to see what’s going on.

“I kept thinking about our last conversation at the bar. I realized that if I haven’t done anything useful for you, who I should have the most in common with, then it stands to reason that I haven’t been useful to almost everyone else, who I have nothing in common with.”


Bill reluctantly agrees to join John in the meeting.

Chapter 25

The next day, Bill heads toward Dick’s office for the meeting with Dick and John. He sees John outside the office, and John has totally cleaned up his appearance from when Bill last saw him. Bill: “With the shaved head, his calm friendly smile and perfect posture, he looks like some sort of enlightened monk.”

Bill is shocked when John asks Dick, “. . . what exactly you do here at Parts Unlimited? What is your exact role?”

Dick plays along and answers the question seriously. He says that when he was hired, he was a traditional CFO, but now he also takes care of planning and operations for Steve. John calls him the de-facto COO, but Dick acknowledges that is now part of his job.

With a very small smile, he adds, “Want to hear something funny? People say that I’m more approachable than Steve! Steve’s incredibly charismatic, and let’s face it, I’m an asshole. But when people have concerns, they don’t want to have their minds changed. They want someone to listen to them and help make sure Steve gets the message.”


When asked what a good day for himself looks like, Dick says that it’s when they are beating the competition and writing big commission checks to their salesmen.

“Steve would be excited to announce to Wall Street and the analysts how well the company is performing—all made possible because we had a winning strategy, and also because we had the right plan and the ability to operate and execute.”


Dick says that they haven’t had a day like that in over four years. He says that a bad day looks like the Phoenix project launch.

John asks Dick what his goals for the year are. Dick gives him a list:

  • Health of Company
  • Revenue
  • Market Share
  • Average Order Size
  • Profitability
  • Return on Assets
  • Health of Finance
  • Order to Cash Cycle
  • Accounts Receivable
  • Accurate Financial Reporting
  • Borrowing Costs

Dick continues to the company goals, which he says are more important than the goals for just his department:

  1. Are we competitive?
  2. Understanding customer needs and wants: Do we know what to build?
  3. Product portfolio: Do we have the right products?
  4. R&D effectiveness: Can we build it effectively?
  5. Time to market: Can we ship it soon enough to matter?
  6. Sales pipeline: Can we convert interested prospects to paying customers?
  7. Are we effective?
  8. Customer on-time delivery: Are customers getting what we promised them?
  9. Customer retention: Are we gaining or losing customers?
  10. Sales forecast accuracy: Can we factor this into our sales planning process?

Dick says all of those measurements are currently at risk. He says they are $20 million into Phoenix and still are not competitive, and the best favor they can do for him is to stay focused and get it working.

After the meeting, Bill says that Dick doesn’t realize how much his measurements depend on IT.

Bill calls Erik to get some advice. He wants to convince Dick that IT is capable of screwing up less often and helping the business win.

“Your mission is twofold: You must find where you’ve under-scoped IT—where certain portions of the processes and technology you manage actively jeopardizes the achievement of business goals—as codified by Dick’s measurements. And secondly, John must find where he’s over-scoped IT, such as all those SOx-404 IT controls that weren’t necessary to detect material errors in the financial statements.”

Erik (to Bill)

He also says that John needs to learn exactly how the business was able to dodge the audit bullet, and tells Bill to feel free to invite him to Bill’s next meeting with Dick.

Retrieve Fantasy Football Stats using ESPN’s API: Part 3

Welcome to Part 3 of our series on how to scrape fantasy football data from ESPN’s API. Let’s go ahead and recap what we’ve done so far:

  • set up a main class where we are making API calls to ESPN using the RestClient gem
  • built a data_source class to store all of our data and global variables
  • created a player_helper class to extract some logic out of our main class
  • parsed through one week of data for a league and output in CSV format

We’re a good way through retrieving our data and storing it in a format that is ready for output. We still have a little bit more data that we would like to capture, so let’s go ahead and knock that out now.

It’s nice that we have data such as basic player data (name, position, etc.), projected stats and actual stats. It would be great, however, for us to get even more granular statistics for each player and week. In fantasy football, our players gain points by accumulating stats such as gaining yards and scoring touchdowns. Wouldn’t it be nice to break down this data to see where our players’ points are coming from on a weekly or seasonal basis?

Let’s start by seeing where this data is located in our API response. Last time we were already looking inside the response data at the following location:


Now we’re going to dig a little deeper into this same data, and append another [‘stats’] to the end of the above location. If we look inside here, we will see a hash with a bunch of seemingly random numbers as keys, and more random numbers as values. These keys actually correspond to certain statistical categories. There is no way to know this without digging through the data with a little trial and error, but luckily I will provide the keys we’re going to use.
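To make the shape concrete, here is a hypothetical fragment of that inner hash (the values are invented; the keys are ESPN’s numeric stat-category ids):

```ruby
# Invented example of the inner 'stats' hash for one player/week;
# '0' = pass attempts, '1' = completions, '3' = pass yards, '4' = pass TDs
stats = {
  '0' => 30.0,
  '1' => 22.0,
  '3' => 258.0,
  '4' => 2.0
}
stats['3']  # pass yards for this player and week
```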

So let’s define a new hash inside our data_source file to map out what these keys correspond to.

STAT_KEYS = {
        'pass_attempts' => '0',
        'completions' => '1',
        'pass_yards' => '3',
        'pass_tds' => '4',
        'interceptions' => '20',
        'rush_attempts' => '23',
        'rush_yards' => '24',
        'rush_tds' => '25',
        'receptions' => '41',
        'receiving_yards' => '42',
        'receiving_tds' => '43'
}

For the sake of simplicity, we will ignore defensive stats and kicker stats (who cares about kickers anyway, right?!) Now, if we remember back to last time, we have our get_stats method inside the player_helper class. We have to do a little searching in there based on the statSourceId and the scoringPeriodId. This is the same entry that we’ll want to pull our detailed stats from. So, let’s add in another method in the player_helper class to retrieve these detailed stats.

def get_detailed_stats(player)
  result = {}
  stats = player['stats']
  STAT_KEYS.each_pair do |key, value|
    if stats and stats.has_key?(value)
      result[key] = stats[value]
    else
      result[key] = '0'
    end
  end
  result
end

The purpose here is to take the STAT_KEYS hash, and swap out the numerical values with the actual corresponding player statistic. We could just return the values without the keys to signify what they are, but if we want to do any work with these down the line then they’ll already be nice and organized. The logic here is fairly straightforward.
1. Take the data we already have and look inside the 2nd ‘stats’ key.
2. Take each numerical value from our STAT_KEYS and check to see if that particular key is returned for that player.
3a. If we do have that key, we’ll store that value in a hash with the key from our STAT_KEYS hash.
3b. If we do NOT have that key, we’ll simply plug in a 0. The reason we look for every stat and plug in 0’s is that we want to have the same exact output for each player so that we can plug it neatly into a spreadsheet.

We’ll plug in the call to this method inside our get stats method like so:

def get_stats(stats_array, week)
  actual = ''
  projected = ''
  details = {}
  stats_array.each do |stat|
    if stat['scoringPeriodId'] == week
      if stat['statSourceId'] == 0
        actual = stat['appliedTotal']
        details = get_detailed_stats(stat)
      elsif stat['statSourceId'] == 1
        projected = stat['appliedTotal']
      end
    end
  end
  {actual: actual, projected: projected, details: details}
end

We don’t need to do this for the projections for now, although we could parse those out more neatly in a similar fashion if we desired. The output for a single player should look something like this:
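The screenshot of this output isn’t reproduced here, but a hypothetical result for a running back (with invented numbers) would look like:

```ruby
# Hypothetical get_detailed_stats result; the pass-related keys were
# missing from the API data for this player, so they were filled with '0'
details = {
  'pass_attempts'   => '0',
  'completions'     => '0',
  'pass_yards'      => '0',
  'pass_tds'        => '0',
  'interceptions'   => '0',
  'rush_attempts'   => 18.0,
  'rush_yards'      => 85.0,
  'rush_tds'        => 1.0,
  'receptions'      => 4.0,
  'receiving_yards' => 32.0,
  'receiving_tds'   => '0'
}
```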

We can see here that there are a lot of 0’s plugged in, and that’s ok. Most players will only accumulate a few stats that are actually relevant to their position. Now, we will want to return this data to our main class along with the projected and actual values that we are already sending. Let’s send the whole hash for now, and we can include the logic of how to deal with that down the line. The last line of our get_stats method should now look like this:

{actual: actual, projected: projected, details: details}

When we hop back to our main class, our stats variable will now include these broken down player stats. Last time we were storing all of our data in a comma delimited string named results. To append this data to our string, we can simply loop through our stats[:details] hash and append each value to our results string. This line should appear immediately after our result variable is populated.

stats[:details].each_value do |value|
  result << value.to_s + ','
end

Now the rest of our logic can stay the same, and we have a nice comma delimited string to output to a file. Now we can finally hop outside of all of our nested loops, and write the code to output the data. Feel free to name your output file whatever you choose. In this case I’ve decided on simply calling it “output” because I’m not very creative.

If you are familiar with file opening modes, we have a few to choose from here. We can use either “w”, “w+”, “a” or “a+”. (If you are not familiar, “w” will write after truncating the destination to size 0 if it already exists, and “a” will append to the end of the file if it already exists. The “+” just changes the mode to read-write instead of write-only.) Since we don’t really need to read the file, and we don’t want to append anything (to prevent a massive CSV file being created over time), we should be fine with choosing “w”. So our output code will read as follows:

File.open(Dir.pwd + '/output.csv', 'w') do |f|
  output.each do |string|
    f << "#{string}\n"
  end
end
(In my case this will write a file called “output.csv” to my root directory. You may choose to store your output in a different location.)

And voila! We should now have a nice CSV file with all of our week 1 data. If you open this in Excel, then it will recognize the CSV format for you and move all of your data into the proper columns.

This looks great, but we don’t have any column headers! In order to prevent the need for adding these in every single time, let’s go back and add one last piece of data to our data_source file to easily store our column headers.

OUTPUT_HEADERS = [
  'Pass Attempts',
  'Completions',
  'Pass Yards',
  'Pass TDs',
  'Interceptions',
  'Rush Attempts',
  'Rush Yards',
  'Rush TDs',
  'Receptions',
  'Rec. Yds',
  'Rec. TDs'
]

Then when we declare our output variable in our main file we can simply add one extra line:

output = []
output << OUTPUT_HEADERS.join(',')

This about wraps up the Ruby code we need to write for pulling data. The way we have organized and written our code should allow for further expansion if you want to add new features on your own. The work now shifts over to the Excel side to actually make use of this data and create some pretty graphs and charts. What you want to do with this data is up to you, but here are a few examples of what I have made personally:

Breaking down each individual statistic by owner
Difference between actual and projected points per owner over the course of the season
Season overview of points scored above or below projections

Developing these is a great way to get familiar with VLOOKUPs and other fun formulas. I think going into too much detail is outside the scope of this post, but most of the data isn’t too hard to put together with a little help from Google.

I hope you’ve enjoyed this series of posts, learned a little something, and hopefully have been able to use this code for your own purposes. If you have any questions or issues with any of the code mentioned here, feel free to contact me through the Contact Us section of the site.

Book Club: The Phoenix Project (Chapters 17-20)

This entry is part 5 of 8 in the series Phoenix Project

The following is a chapter summary for “The Phoenix Project” by Gene Kim for an online book club.

The book club is a weekly lunchtime meeting of technology professionals. As a group, the book club selects, reads, and discusses books related to our profession. Participants are uplifted via group discussion of foundational principles & novel innovations. Attendees do not need to read the book to participate.

Chapters 13-16 HERE

Background on the Phoenix Project

“Bill, an IT manager at Parts Unlimited, has been tasked with taking on a project critical to the future of the business, code named Phoenix Project. But the project is massively over budget and behind schedule. The CEO demands Bill must fix the mess in ninety days or else Bill’s entire department will be outsourced.

With the help of a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking, Bill must organize work flow, streamline interdepartmental communications, and effectively serve the other business functions at Parts Unlimited.

In a fast-paced and entertaining style, three luminaries of the DevOps movement deliver a story that anyone who works in IT will recognize. Readers will not only learn how to improve their own IT organizations, they’ll never view IT the same way again.”

The Phoenix Project

Chapter 17

Bill takes his son to see the trains after quitting but is interrupted by multiple calls from Wes & Patty.

The inventory management systems are down. No one can get inventory levels in the plants or warehouses, and they don’t know which raw materials need to be replenished.

“Well, we’ve pretty much screwed the pooch since you’ve left,” Wes says, sounding genuinely abashed, confirming my worst fears. “Steve insisted that we bring in all the engineers, including Brent. He said he wanted a ‘sense of urgency’ and ‘hands on keyboards, not people sitting on the bench.’ Obviously, we didn’t do a good enough job coordinating everyone’s efforts, and…”


Steve Masters attempts to reach Bill, first calling Bill’s wife Paige. Eventually, Bill returns his call and listens to Steve’s apology.

Steve had promised to get “his hands dirty” with IT but hasn’t lived up to the promise. His delegation of IT to Sarah was a total screwup.

“I’m convinced that IT is a competency that we need to develop here. All I’m asking is that you spend ninety days with me and give it a try.”


Steve Masters convinces Bill to rejoin Parts Unlimited.

Chapter 18

Bill attends Steve’s IT Leadership Off-Site, which is actually held on the Parts Unlimited campus.

Wes, Patty, Chris, Erik, and Steve are all in attendance.

“Erik described the relationship between a CEO and a CIO as a dysfunctional marriage. That both sides feel powerless and held hostage by the other.”


“There are two things I’ve learned in the last month. One is that IT matters. IT is not just a department that I can delegate away. IT is smack in the middle of every major company effort we have and is critical to almost every aspect of daily operations.”


“The second thing I’ve learned is that my actions have made almost all our IT problems worse. I turned down Chris and Bill’s requests for more budget, Bill’s request for more time to do Phoenix right, and micromanaged things when I wasn’t getting the results I wanted.”


Steve apologizes to Bill, taking full responsibility for the failures of Phoenix and the audit.

Steve identifies trust as the primary issue.

“A great team doesn’t mean that they had the smartest people. What made those teams great is that everyone trusted one another. It can be a powerful thing when that magic dynamic exists.”


Five Dysfunctions of a Team: In order to have mutual trust, you need to be vulnerable.

Steve asks each person to share something about themselves.

Steve was the first person in his family to make it to college. He worked in a copper mine and joined the ROTC to help pay for school, then served in the US Army. He eventually went on to work for a pipe manufacturing plant.

Steve was an excellent officer with high ratings, but none of his subordinates enjoyed working with him. Steve committed to changing his ways.

“Over the next three decades, I became a constant student of building great teams that really trust one another. I did this first as a materials manager, then later as a plant manager, as head of Marketing, and later, as head of Sales Operations. Then twelve years ago, Bob Strauss, our CEO at the time, hired me to become the new COO.”


Steve asks for commitment from everyone to develop IT as a competency by starting to trust one another. Everyone in attendance nods in agreement, except for Bill. . .

Chapter 19

Bill nods in agreement as well.

Patty apologizes for reacting so coldly to Bill. She credits Bill for changing the IT Department.

“The goal of this exercise is to get to know one another as people. You’ve learned a bit about me and my vulnerabilities. But that’s not enough. We need to know more about one another. And that creates the basis for trust.”


Chris volunteers to start. He was born in Beirut and speaks four languages. He describes the story of his wife’s pregnancy complications and how it taught him not to be selfish.

Wes participates next. He was engaged three times and called off each before getting married. Wes races cars and has struggled with his weight.

Patty started as an art major but ended up switching majors five times in college. She dropped out of college to become a singer-songwriter, touring the country. She decided to work for Parts Unlimited because she couldn’t make a living as an artist.

Bill grew up in a family with an alcoholic father. He ran away from home and got into trouble. After being arrested, he chose to join the Marines.

Bill cries as he describes the lessons learned from the Marines: “What did I learn? That my main goal is to be a great father, not like the shitty father I had. I want to be the man that my sons deserve.”

“Solving any complex business problem requires teamwork, and teamwork requires trust. Lencioni teaches that showing vulnerability helps create a foundation for that.”


Steve identifies missing every commitment and schedule as a primary problem in IT. He surmises that the team is not good at making internal commitments.

Chris counters that his team hit their targets, including on Phoenix. However, Phoenix was a disaster. If success was Chris getting all the Phoenix tasks done, then they met their target. If success was putting Phoenix into production fulfilling business goals, then they failed.

Development does not factor in the work operations needs to complete.

Part of the problem is planning and architecture. Development is also waiting for operations to deploy because there is a backlog of work.

“Erik has helped me understand that there are four types of IT Operations work: business projects, IT Operations projects, changes, and unplanned work. But, we’re only talking about the first type of work, and the unplanned work that gets created when we do it wrong. We’re only talking about half the work we do in IT Operations.”


While discussing the types of work (the audit project specifically), Bill realizes they have forgotten to invite John. Steve calls a 15-minute break so John can be invited.

The IT staff is unsure how to make commitment decisions for projects, unlike the manufacturing plant. No capacity or demand analysis is done.

IT takes shortcuts, which lead to fragile applications in production and firefighting, which in turn create technical debt.

Technical debt compounds over time.

“If an organization doesn’t pay down its technical debt, every calorie in the organization can be spent just paying interest, in the form of unplanned work.”


“Unplanned work has another side effect. When you spend all your time firefighting, there’s little time or energy left for planning. When all you do is react, there’s not enough time to do the mental work of figuring out whether you can accept new work. So projects are crammed onto the plate, with fewer cycles available to each one, which means more bad multitasking, more escalations from poor code, which mean more shortcuts.”


Identify where the constraint is and then protect it. Ensure time is never wasted on the constraint.

Bill believes Brent is the constraint for Parts Unlimited.

To fix the problems of IT, Bill proposes to stop doing all other non-Phoenix work to focus on improving their processes for two weeks.

Erik agrees, because the goal should be to increase the throughput of the entire system.

Steve promises to send out an email to the company announcing the work stoppage, to prevent managers from “strong arming” Operations into helping pet projects.

The team will identify the top areas of technical debt, which Development will tackle to decrease the unplanned work being created by problematic applications in production.

Chapter 20

The company has made great progress on Phoenix; more was accomplished in the last seven days than in the prior month.

The company experiences a Sev-1 incident that took out internal phones and voicemail. The incident was caused by a vendor accidentally making changes to the production phone system. The team will put together a project to monitor critical systems for unauthorized changes.

“How do we currently prioritize our work? When we commit to work on a project, a change, a service request, or anything else, how does anyone decide what to work on at any given time? What happens if there are competing priorities?”


Priorities are typically decided by the seniority of the person making the request, or simply by whichever request arrived most recently.

Erik and Bill take another trip to the manufacturing plant.

Understanding the flow of work is the first key to achieving the First Way.

Bill surmises that Brent is a worker supporting way too many work centers, which is why he’s a constraint.

“Every work center is made up of four things: the machine, the man, the method, and the measures. Suppose for the machine, we select the heat treat oven. The men are the two people required to execute the predefined steps, and we obviously will need measures based on the outcomes of executing the steps in the method.”


Bill is standardizing Brent’s work so others can execute it. Documenting the steps helps with consistency and quality.

Bill comes to the conclusion that only those projects that don’t require Brent are safe to begin work on again.

The monitoring project is the most important because it elevates the constraint: it removes unnecessary work from Brent’s plate by allowing requests to bypass him.

Total Productive Maintenance

  • Do whatever it takes to assure machine availability by elevating maintenance
  • ‘Improving daily work is even more important than doing daily work.’

“The Third Way is all about ensuring that we’re continually putting tension into the system, so that we’re continually reinforcing habits and improving something. Resilience engineering tells us that we should routinely inject faults into the system, doing them frequently, to make them less painful.”


Improvement Kata: Mike Rother says it almost doesn’t matter what you improve, as long as you’re improving something. Because if you are not improving, entropy guarantees that you are getting worse, which ensures that there is no path to zero errors, zero work-related accidents, and zero loss.

Kata: repetition creates habits, and habits are what enable mastery

Just as important as throttling the release of work is managing the handoffs.

The wait time for a given resource is the percentage that resource is busy, divided by the percentage that resource is idle.

If a resource is fifty percent utilized, the wait time is 50/50, or 1 unit. If the resource is ninety percent utilized, the wait time is 90/10, or nine times longer.

“A critical part of the Second Way is making wait times visible, so you know when your work spends days sitting in someone’s queue—or worse, when work has to go backward, because it doesn’t have all the parts or requires rework.”


The Security Projects from John don’t help scalability, availability, survivability, sustainability, security, supportability, or the defensibility of the organization. At present, they are not a good use of time.

From the Pipeline v10.0

This entry is part 10 of 23 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web. 

Software Testing Podcasts

If you’re interested in learning more about testing and love podcasts, Software Testing Magazine has compiled a list of some popular testing podcasts.

A Primer on Continuous Testing

“Continuous testing shortens feedback loops through automated testing that occurs throughout the development lifecycle—hence “continuous.” Testing and QA become the responsibility of everyone working on the software, not just testers. Let’s look at some proven practices from organizations that have used continuous testing effectively to realize tangible benefits.”

Improve Your Test Automation Learning and Delivery with The Three Stream Method

John Ferguson Smart is the author of “BDD in Action”, one of my favorite tech books. He posts often on his blog and provides some solid advice on automation. In this post, he briefly discusses the Three Stream Method: the first stream is value, the second stream is quality (or technical debt), and the third stream is learning. He links to a new ebook, “The Roadmap From Manual to Automated Testing”, which is recommended for anyone learning to adopt automation. He’s an excellent author so please give it a read.

Production Deploy with Every Check-In? You Gotta Go TWO Low!

Paul Grizzaffi is an automation architect for Magenic. In this guest post for Applitools he describes multiple issues that can occur during a deployment to prod by a developer, from visual issues to timing issues. There are two different costs to consider: cost of change and cost of failure. To learn more about both check out his post.

The Technical Debt Trap (VIDEO)

For a change of pace, here is an excellent conference presentation given by the great Doc Norton on Technical Debt. I highly recommend watching this video to understand the origins of technical debt and why so many orgs don’t devote time towards quality as an upfront cost. “Technical Debt has become a catch-all phrase for any code that needs to be re-worked. Much like refactoring has become a catch-all phrase for any activity that involves changing code. These fundamental misunderstandings and comfortable yet mis-applied metaphors have resulted in a plethora of poor decisions. What is technical debt? What is not technical debt? Why should we care? What is the cost of misunderstanding? What do we do about it? Doc discusses the origins of the metaphor, what it means today, and how we properly identify and manage technical debt.”

Cukes and Apples: Advanced Cucumber Steps

Welcome Back

In the previous post, we implemented the Page Object pattern to drive a simple Cucumber scenario. The steps used in that scenario are expressive enough, but not very reusable and not well-organized. In this post, we will explore some good practices for writing and using Cucumber steps for mobile test automation.

Get the code from the previous post here:

Arrange, Act, Assert

We recommend organizing general, reusable step definitions with a pattern used in unit testing: Arrange-Act-Assert. The Arrange-Act-Assert pattern divides step definitions into three logical groupings, predictably: arrange, act, and assert.

  • The “arrange” section sets up the preconditions necessary for a test to succeed or fail correctly. This will include things like logging in and navigating to pages.
  • The “act” section describes an action for which the result must be validated, like tapping a button or performing a gesture.
  • The “assert” section finishes a test by validating the result of the action which preceded it, by checking conditions like the visibility and value of page elements.

Organizing our step definitions according to the Arrange-Act-Assert pattern makes it easier for new contributors to learn the most reusable steps in the test suite and reminds us of the purpose of these steps as we use them.
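In Gherkin terms, the three groupings map naturally onto Given/When/Then. A hypothetical scenario (the page and element names here are illustrative, not from the series):

```gherkin
Feature: Welcome flow

  Scenario: User advances past the Welcome page
    # arrange: establish preconditions
    Given the user is on the "Welcome" page
    # act: perform the action under test
    When the user selects "Next"
    # assert: validate the result
    Then the "Home" page is displayed
```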

Some steps are not easy to place in the Arrange-Act-Assert pattern – for example, a step which validates the display of a page might be used most often during the arrange and act sections of a test, but still constitutes a very good assertion. If you are not sure where to place a step, consider how the step will be used most often, how it will provide the most value, and what first impression a new collaborator should have.

Custom Steps

If the Arrange-Act-Assert pattern is followed too literally, and other categories of steps are prohibited, the benefits of following the pattern are lost. Collections of arrange, act, and assert steps should contain only steps that are generalized and reusable, so that those steps remain easy to find.

Steps which are too specific for the general collection can be organized separately. For example, a step which handles user login is only applicable to the application under test, and does not describe a general mobile device interaction. Start small by collecting these steps in one place, like “custom_steps.rb”. As the custom steps collection grows in size and becomes unwieldy, identify related steps and create new step files for them.
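As a hypothetical sketch of such an app-specific step, consider a login step collected in "custom_steps.rb". The LoginPage class and its login method are assumptions for illustration, not code from this series:

```ruby
# An app-specific page object; the body is a placeholder for entering
# credentials and tapping the login button.
class LoginPage
  def login(username, password)
    "logged in as #{username}"
  end
end

# In "custom_steps.rb" the Cucumber step might read:
#
#   Given(/^the user is logged in as "(.+)"$/) do |username|
#     @current_page = LoginPage.new
#     @current_page.login(username, password_for(username))
#   end

puts LoginPage.new.login("qa_user", "secret")  # => logged in as qa_user
```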

Select Any Element

In the previous post, we used a step definition that selected a button on the Welcome page: “the user selects Next” or “the user selects Get started”. We could make that step more valuable by making it reusable. If this step could be written to select any element, then it could be used in more scenarios.

First, move the step into a file that aligns with the Arrange-Act-Assert pattern: “step_definitions/action_steps.rb”. This makes sense as an action step – many scenarios are likely to validate the result of tapping an element.

Next, update the step pattern to accept any element name.

We will parse the element name and send it to a page object to invoke the method which will select the named element. The element name that is written in our scenarios – and captured by the step – is likely to be mixed-case, and include spaces, so we need to modify it first. See the string modification below, and the “send” method which accepts it:

That “send” method is an incredibly helpful construct that allows us to invoke a method on a page object without knowing the name of that method until runtime – that is, when we begin executing the test.

Because our scenario was written to select both “Next” and “Get started”, we also need to define a separate button named :get_started.
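A hypothetical reconstruction of that logic: downcase the captured element name, replace spaces with underscores, and pass the resulting method name to “send”. The WelcomePage methods and their return values here are placeholders, not the real tap actions:

```ruby
class WelcomePage
  def select_next
    :next_selected        # placeholder for tapping the "Next" button
  end

  def select_get_started
    :get_started_selected # placeholder for tapping "Get started"
  end
end

# As a Cucumber step definition this might read:
#
#   When(/^the user selects (.+)$/) do |element_name|
#     method_name = "select_#{element_name.downcase.gsub(' ', '_')}"
#     @welcome_page.send(method_name)
#   end

page = WelcomePage.new
["Next", "Get started"].each do |element_name|
  method_name = "select_#{element_name.downcase.gsub(' ', '_')}"
  puts page.send(method_name)
end
```

With “send”, the step never needs to name the method at parse time; “Get started” becomes the string "select_get_started" at runtime.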

Now, the step above is capable of selecting any element on the Welcome page… but there aren’t many elements on that page. This step would be much more valuable if it could also select any element, on any page.

Any Element, Any Page

The step can be further modified to select an element on any page, but there is a catch. See @current_page below:

The @current_page variable will be familiar to users of the web automation gem page-object, which uses the same variable name for the same purpose. We can call the “send” method on an instance variable named @current_page, but that assumes a preceding step has set the variable, so those other steps must be updated as well.

Update the step “the app is on the Welcome page” to set @current_page.
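A hypothetical sketch of that update: the navigation step stores a page object in @current_page so that later, page-agnostic steps can act on it. The on_page? body is a placeholder for a real visibility check:

```ruby
class WelcomePage
  # Placeholder for checking that a known Welcome-page element is visible.
  def on_page?
    true
  end
end

# As a Cucumber step definition this might read:
#
#   Given(/^the app is on the Welcome page$/) do
#     @current_page = WelcomePage.new
#     raise "expected the Welcome page" unless @current_page.on_page?
#   end

@current_page = WelcomePage.new
puts @current_page.on_page?   # => true
```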

Now the step “the user selects <element_name>” can be used on any page, assuming the preceding step sets @current_page.

The other step, “the app is on the Welcome page”, would be even better if it could be used to describe any page.

Navigate to Any Page

To set @current_page with an instance of any page class, call Kernel.const_get and pass it the name of a page class. As with the element name above, it is necessary to manipulate the string first. Follow the example below to change page names into class names:
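One plausible version of that conversion, assuming page classes follow a "<Name>Page" naming convention: capitalize each word of the captured page name, join the words, append "Page", and look the class up with Kernel.const_get:

```ruby
class WelcomePage
  def on_page?
    true  # placeholder for a real visibility check
  end
end

page_name = "welcome"   # as captured from "the app is on the welcome page"
class_name = "#{page_name.split(' ').map(&:capitalize).join}Page"
@current_page = Kernel.const_get(class_name).new

puts @current_page.class      # => WelcomePage
puts @current_page.on_page?   # => true
```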

Now the step definition “the app is on the <page name> page” can be used to navigate to any page and validate the visibility of that page using the “on_page?” method, assuming “on_page?” is implemented for the named page.

Further Optimizations

Fans of the page-object gem might be looking for the on_page method. The page-object gem manages @current_page with a PageFactory module, which provides the on_page method used to create new instances of page classes and set the @current_page variable. Our test suite can do the same if we implement a similar factory method.
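A hypothetical factory in the spirit of page-object’s PageFactory (a sketch, not the gem’s actual implementation): on_page instantiates a page class, sets @current_page, and yields the new instance to an optional block:

```ruby
module PageFactory
  def on_page(page_class)
    @current_page = page_class.new
    yield @current_page if block_given?
    @current_page
  end
end

class WelcomePage
  def on_page?
    true  # placeholder for a real visibility check
  end
end

include PageFactory

page = on_page(WelcomePage) { |p| puts p.on_page? }
puts page.class   # => WelcomePage
```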


In this post, we used the Arrange-Act-Assert pattern to organize our steps by category and updated our step definitions to handle any element, on any page. By following the same principles, and leaning on constructs like “@current_page.send” and “Kernel.const_get”, we can write step definitions that describe almost any user interaction in a generalized and reusable way.

Get the code from this post here:

Coming Up Next

Updating this test framework to support cross-platform execution will require access to some new hardware. Another post will explore execution with iOS and Android in the future, but for now this series will be on hold while we publish some other articles. Stay tuned!