Slaying the Hydra: Run-Time State and Splitting Up the Execution

This entry is part 3 of 5 in the series Slaying the Hydra

In this third post of the blog series on parallel test execution, I explain how to execute distributed parallel test automation. The previous entry can be found here.

As discussed previously, the running stage (see below) within the pipeline context is set to execute three builds of the test_runner freestyle job in parallel. Each build receives the following parameters:

  • browser – either equal to ‘ie’ or ‘chrome’
  • total_number_of_builds – equal to ‘3’
  • build_number – equal to ‘1’, ‘2’ or ‘3’

Freestyle Job Overview

In the following sections, I explain which freestyle components need to be utilized when constructing the test_runner job in Jenkins.

Parameters

As seen from the image above, parameters are being passed from the pipeline job into the freestyle job. We will update the freestyle job to be parameterized. This selection is made when configuring the Jenkins job (see below).

Next, the freestyle job is configured with these parameter names:

  • browser – the value received from the corresponding pipeline parameter.
  • total_number_of_builds – the value received from the corresponding pipeline parameter.
  • build_number – the value received from the corresponding pipeline parameter.
  • workspace_location – to show a different way of doing things, we can see from the image above that I did not pass a value for workspace location in the pipeline. When I configured the parameter (below), I set a default value in the freestyle job. That default value will be used for workspace_location unless I specify otherwise.

Node Selection

In this section we restrict where this build can execute to only machines associated with the @local tag. Labels are assigned to nodes in the Manage Jenkins > Manage Nodes section of Jenkins. This restriction ensures we are not utilizing nodes that are otherwise occupied or not configured to run the Cucumber tests in the steps below.

Version Control

In the Source Code Management section, we specify which testing suite to retrieve via version control for this effort, which pulls the suite down into the workspace. The “clean before checkout” additional behavior (Jenkins functionality) removes any files in the workspace that are not in the Git repo before pulling the suite down. This provides a clean slate for every execution.

Splitting Code

class Splitter
  # Number of parallel builds, passed in from the freestyle job as an environment variable
  def total_builds
    ENV['total_number_of_builds'].to_i
  end

  # This build's slice number (1, 2, or 3), also passed in as an environment variable
  def build_number
    ENV['build_number'].to_i
  end

  # Collect every @regression scenario, split the list evenly, pick this build's slice,
  # and re-tag that slice in the feature files on this workspace
  def main_run
    scenarios = feature_iterator
    splits = job_splitter(scenarios)
    assignment = job_assigner(splits)
    feature_mod_iterator(assignment, 'features', true)
  end

  # Re-tag each assigned scenario from @regression to @split_builds.
  # When assign is true the feature files are rewritten in place;
  # otherwise the modified file contents are collected and returned.
  def feature_mod_iterator(split_assignment, current_location = 'features', assign = true)
    array = []
    split_assignment.each do |value|
      mod_value = value.gsub('@regression', '@split_builds')
      regex = /#{Regexp.escape(value)}$/ # match the captured scenario text literally
      files = return_all_files(current_location, '*', 'feature')
      files.each do |file|
        output = File.open(file, 'r', &:read)
        modified = output.gsub(regex, mod_value)
        if assign
          File.open(file, 'w+') { |f| f.print(modified) }
        else
          array.push(modified)
        end
      end
    end
    array
  end

  # Gather every @regression scenario from every feature file into one flat array
  def feature_iterator(current_location = 'features')
    files = return_all_files(current_location, '*', 'feature')
    array = []
    files.each do |file|
      array.push(return_all_gherkin_scenarios(file))
    end
    array.flatten
  end

  # Capture each @regression tag line together with the indented
  # Scenario/Scenario Outline line that follows it
  def return_all_gherkin_scenarios(file)
    output = File.open(file, 'r', &:read)
    output.scan(/(@regression.*\n. (Scenario:|Scenario Outline:)?.*)/).map { |value| value[0] }
  end

  def return_all_files(current_location, filter = '*', file_type = '*')
    Dir.glob("#{current_location}/**/#{filter}.#{file_type}")
  end

  # Divide the scenarios into total_builds roughly even groups (an array of arrays);
  # any remainder is distributed one scenario at a time across the first groups
  def job_splitter(scenarios)
    split = scenarios.length / total_builds

    container = []
    total_builds.times { container.push([]) }
    mod_scenarios = scenarios.clone

    total_builds.times do |index|
      container[index].push(mod_scenarios[0..(split - 1)])
      container[index].flatten!

      split.times do
        mod_scenarios.delete_at(0)
      end
    end

    mod_scenarios.each_with_index do |value, index|
      container[index].push(value)
    end
    container
  end

  # Pick the group of scenarios that belongs to this build
  def job_assigner(scenarios)
    scenarios[(build_number.to_i - 1)]
  end
end

one = Splitter.new
one.main_run

At a high level, the code block above creates an array of arrays that splits the regression tests evenly between the number of executors. The build_number value is utilized to access the corresponding index of that array. All of the tests at that index are re-tagged from @regression to @split_builds locally on the workspace that houses the Ruby/Cucumber code pulled down from version control.

You would change the @regression tag to whatever tag your team utilizes to mark regression tests.

The cool thing is that this will run on each of the three workspaces and re-tag a unique subset of tests. Because the total_builds value is the same for all the jobs kicked off, it will create the same nested array structure on every workspace. The difference between workspaces comes about because of the build_number parameter that chooses which subset of tests to re-tag.

Running the Split Code

We house the code above within our testing framework in version control. Within the Build section of the Jenkins job we then create a Windows batch command. In it we set the environment variables the code utilizes (total_number_of_builds and build_number) equal to the parameters set within the freestyle job, and then run the ruby command, passing the path to the .rb file within the workspace that houses the code above.

Running the Tests

We set up another Windows batch command to set environment variables for browser and or_tags, and in this instance we kick off the tests utilizing a rake task. Cucumber's Rake integration is a useful tool, but we could just as easily run a Cucumber command directly.

The important thing is that we pass the tag that was modified locally on each workspace (@split_builds) so that only the tests assigned to that workspace run. Additionally, we pass along the browser value that was set within the pipeline and handed to the freestyle job.
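The exact rake task is not shown here, but a minimal sketch of what such a Rakefile entry could look like follows; the task name, the or_tags fallback, and the results.json output path are assumptions rather than the actual configuration from this post.

require 'cucumber/rake/task'

# Run only the scenarios re-tagged for this workspace and write a JSON report
Cucumber::Rake::Task.new(:split_regression) do |t|
  tags = ENV['or_tags'] || '@split_builds'   # tag value supplied by the Jenkins build step
  t.cucumber_opts = "--tags #{tags} --format json --out results.json"
end

The browser value would typically be read inside the suite's env.rb (for example via ENV['browser']) to decide which driver to start.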

Storing Results

In our last batch command, we copy the JSON test results file to the workspace_location, naming it with the build_number value (either 1, 2, or 3). This workspace location is the same one we utilized in the clearing stage and the same one that will be utilized in the consolidation stage.
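The actual step is a Windows batch command, but conceptually it is just a file copy. A hedged Ruby illustration of the same idea, with the local results.json path assumed, might look like this:

require 'fileutils'

# Copy this build's JSON report into the shared results location, named for the build slice
destination = File.join(ENV['workspace_location'], "#{ENV['build_number']}.json")
FileUtils.cp('results.json', destination)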

Review and Next Steps

To review, in this post, we figured out how to build the freestyle job that is responsible for splitting, executing, and storing the results of our tests.

In the next post, we discuss how to consolidate the information from the freestyle job builds into a concise cucumber report.

From the Pipeline v8.0

This entry is part 8 of 36 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web. 

From Test Management to Continuous Delivery

Seb Rose and Dana Prey recently hosted a webinar on Cucumber.io (now a SmartBear tool) about the evolution of testing to support continuous delivery. “This webinar will define Test Management and Continuous Delivery and go on to explore typical challenges you’ll encounter on your journey towards CD. We’ll describe small steps that you can use to mitigate the risks of changing the way you work, and the value that can be released from the start.”

Information Loss in Software Testing

Matt Heusser describes the level of information loss about a project or product as it moves up through the chain of command, as well as the negative aspects of controlling information about an application for your personal benefit (job security). He provides several alternatives to conveying information such as coverage maps and dashboards to help contain organizational information loss.

Clear, Direct Communication: An Experiment

Kent Beck posts a personal piece about communication with others throughout his life. The piece is an important introspection from someone who is professionally successful and considered a luminary in our field, yet still struggles with interpersonal connections.

Fighting Against Technical Debt

Cukenfest was held virtually this past week. While the videos are not posted yet, Gaspar Nagy has posted his presentation to slideshare. His talk about technical debt is distilled into three focus areas: Reversibility, Reaction, and Sustainability.

DevOps Journey Playbook

The DevOps Institute have gathered lots of great background information on aspects of DevOps into a single location as a series of playbooks. “Playbooks are a collaborative body of knowledge of research, knowledge and artifacts to help you understand and SKILup your DevOps capabilities. A playbook is populated with twelve research chapter reports plus additional content for ongoing discovery and support during your DevOps journey. We continuously update the playbook with regional and global perspectives for actionable strategies and implementations.”

Book Club: The Phoenix Project (Chapters 4-7)

This entry is part 2 of 8 in the series Phoenix Project

The following is a chapter summary for “The Phoenix Project” by Gene Kim for an online book club.

The book club is a weekly lunchtime meeting of technology professionals. As a group, the book club selects, reads, and discusses books related to our profession. Participants are uplifted via group discussion of foundational principles & novel innovations. Attendees do not need to read the book to participate.

Chapters 1-3 HERE

Background on the Phoenix Project

“Bill, an IT manager at Parts Unlimited, has been tasked with taking on a project critical to the future of the business, code named Phoenix Project. But the project is massively over budget and behind schedule. The CEO demands Bill must fix the mess in ninety days or else Bill’s entire department will be outsourced.

With the help of a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking, Bill must organize work flow, streamline interdepartmental communications, and effectively serve the other business functions at Parts Unlimited.

In a fast-paced and entertaining style, three luminaries of the DevOps movement deliver a story that anyone who works in IT will recognize. Readers will not only learn how to improve their own IT organizations, they’ll never view IT the same way again.”

The Phoenix Project

Chapter 4

Bill is inundated with emails and voicemails just one day on the job. One high priority email comes from Sarah Moulton (SVP of Retail Operations) regarding delays in the Phoenix Project.

Development on the Phoenix Project is behind and they have not considered how to test and deploy the application. This is typical for handoffs between Development and IT Operations at Parts Unlimited.

“The majority of our marketing projects can’t be done without IT. High touch marketing requires high tech. But if there’s so many of us assigned to these Marketing projects, shouldn’t they be coming to us?”

Bill Palmer

Introductions:

  • Kirsten Fingle, Project Management Office. She is organized, levelheaded, and a stickler for accountability.
  • Sarah Moulton, SVP of Retail Operations.
  • Chris Allers, VP of Application Development and acting CIO. Has a reputation as a capable and no-nonsense manager.

The Phoenix Project team has grown by 50 people in the last two years, many through offshore development shops.

Steve Masters attends the Phoenix Project project management meeting. The project has been red for four weeks. Sarah Moulton attacks Bill’s team for the delays.

“See, Bill, in order for us to increase market share, we must ship Phoenix. But for some reason, you and your team keep dragging your feet. Maybe you’re not prioritizing correctly? Or maybe you’re just not used to supporting a project this important?”

Sarah Moulton

Parts Unlimited has spent over $20 million on Phoenix and the project is two years late.

Chris says Phoenix can be delivered in a few weeks but Wes is not convinced. It would take three weeks just to order the infrastructure necessary and the performance of Phoenix is slow. Additionally, Operations does not have a specification on how the production and test systems will be configured.

“I’ve seen this movie before. The plot is simple: First, you take an urgent date-driven project, where the shipment date cannot be delayed because of external commitments made to Wall Street or customers. Then you add a bunch of developers who use up all the time in the schedule, leaving no time for testing or operations deployment. And because no one is willing to slip the deployment date, everyone after Development has to take outrageous and unacceptable shortcuts to hit the date.”

Bill Palmer

Bill tries to convince Steve to delay the release of Phoenix to no avail. Phoenix impacts thousands of point of sale systems and all of the back-office order entry systems.

After the meeting, Bill and Wes conclude that they’re going to have to get a huge team of their employees together in a room to make the release happen and will also need members of Chris’s team. They also need to free up Brent from fire fighting so that he can help solve problems at the roots.

To make things worse, Bill gets the dreaded blue screen of death on his laptop. His new secretary, Ellen, informs him that a lot of people are experiencing the issue.

Bill attends the CAB (change advisory board) meeting which Patty runs. They are the only two people in attendance. Bill sends out an email to the org stating that all relevant people must attend another mandatory CAB meeting on Friday afternoon.

Bill is given a replacement laptop that is ~10 years old since the help desk team was unable to fix his blue screen of death.

Wes talks to Bill and objects to his mandatory CAB meeting. He says the last time the org tried to enforce this, it bogged down all his developers in paperwork and they were unable to be productive.

Chapter 5

Bill wakes up the next day to an email from Steve. They need to meet with Nancy Mailer, the Chief Audit Executive. The auditors have uncovered some issues that need to be discussed.

The room is quiet when Bill arrives at the 8 AM meeting. Also in attendance are John, Wes, and Tim, an IT auditor.

The auditing team has found nearly a thousand issues, although only 16 of them are “significant deficiencies”.

Nancy requires a management response letter which includes a remediation plan. Normally the remediation of these issues takes months, but Bill’s team is only given a few weeks before the external auditors arrive.

John tries to grandstand and state his team is on top of things, but that doesn’t seem to be the case. Bill finds out that John’s fix that broke the payroll system may not have even been necessary since it’s out of scope for this audit.

When Bill asks what the most important issue is, he is told: “The first issue is the potential material weakness, which is outlined on page seven. This finding states that an unauthorized or untested change to an application supporting financial reporting could have been put into production. This could potentially result in an undetected material error, due to fraud or otherwise. Management does not have any control that would prevent or detect such a change.”

Nancy Mailer

Bill is also told his team was unable to produce any change meeting minutes, which he already knows but pretends that this is news to him.

After some more discussion and confrontation between Wes and John, Bill agrees to get with his team and come up with a plan, even though everyone is already buried with Phoenix project work.

Wes and Bill stick around after the meeting to talk. Bill is beginning to get the impression that it’s hard to do much of anything without Brent. Wes says they tried to hire some other people at the same level as Brent but they have either left or aren’t as good as Brent.

Bill also discovers that there is no overall backlog of work. They have no visibility into how many business projects and infrastructure projects are in flight.

“We also have all the calls going into the service desk, whether it’s requests for something new or asking to fix something. But that list will be incomplete, too, because so many people in the business just go to their favorite IT person. All that work is completely off the books.”

Patty

The team (Bill, Patty, and Wes) set out to get a list of organizational commitments from their key resources, with a one-liner on what they’re working on and how long it will take. Bill will take all of Patty and Wes’s data to Steve on Monday to frame an argument for needing more people.

Chapter 6

Bill realizes during a status meeting that the development team is even more behind than he had feared, and almost all testing is being deferred to the next release.

Patty and Wes have put together data for what all their people are working on, and they share it with Bill. They discover that they have a high number of projects compared to the number of people, and their people to projects ratio is going to be about 1:1.

Most of the Operations resources are committed to Phoenix, and the 2nd largest project is Compliance. They also mention the compliance project would take all of their resources almost an entire year.

“Most of our resources are going to Phoenix. And look at the next line: Compliance is the next largest project. And even if we only worked on compliance, it would consume most of our key resources for an entire year! And that includes Brent, by the way.”

Wes

The 3rd largest project is incident and break-fix work, which is currently taking about 75% of the staff’s time.

Patty states that the one consistent theme in the interviews was that everyone struggles to get their project work done. When they do have time, the business is constantly making requests.

The numbers show that they will need to hire seven people so that everyone can complete their work.

Later that day, everyone attends a meeting for the Change Advisory Board (CAB).

“We need to tighten up our change controls, and as managers and technical leads, we must figure out how we can create a sustainable process that will prevent friendly-fire incidents and get the auditors off our back, while still being able to get work done. We are not leaving this room until we’ve created a plan to get there. Understood?”

Bill

The group starts off by stating that the change management tool is impossible to use. Bill calls a 10-minute break since things are slowly getting away from him. When the meeting reconvenes, Bill states that they must record all the necessary changes that must take place over the next 30 days.

Everyone dives in and starts taking the change management meeting seriously, however the discussions for individual changes go on for a lot longer than anticipated. To keep it simple, they request (1) who is planning the change, (2) the system being changed, and (3) a one-sentence summary.

The team comes up with a definition of change: “a ‘change’ is any activity that is physical, logical, or virtual to applications, databases, operating systems, networks, or hardware that could impact services being delivered.”

Parts Unlimited IT Operations Team

Later, Patty calls Bill and says that they can expect about 400 changes to be submitted that need to happen the next week. Bill tells Patty that all Monday changes can go through without being authorized, but that all changes for later in the week will have to be reviewed.

Chapter 7

Bill gets a call that a potential new board member, Erik Reid, is in town and needs to talk with all the IT executives. Bill decides to meet with Erik even though it’s been a long day.

Bill mistakes Erik for a deliveryman since Erik is wearing wrinkled khakis and an untucked shirt. Erik seems to have trouble remembering names of people he’s met but has assessed the IT situation accurately.

“It looks like you’re in a world of hurt. IT Operations seems to have lodged itself in every major flow of work, including the top company project. It has all the executives hopping mad, and they’re turning the screws on your Development guy to do whatever it takes to get it into production.”

Erik Reid

Erik then takes Bill to one of the company’s manufacturing plants to learn about WIP. WIP is “work in progress”.

“In the 1980s, this plant was the beneficiary of three incredible scientifically-grounded management movements. You’ve probably heard of them: the Theory of Constraints, Lean production or the Toyota Production System, and Total Quality Management. Although each movement started in different places, they all agree on one thing: WIP is the silent killer. Therefore, one of the most critical mechanisms in the management of any plant is job and materials release. Without it, you can’t control WIP.”

Erik Reid

Erik talks to Bill about prioritizing work, and why bottlenecks are important to selecting work. Bill says that running IT Operations is not like running a factory, but Erik disagrees with him.

The Theory of Constraints:

“Eliyahu M. Goldratt, who created the Theory of Constraints, showed us how any improvements made anywhere besides the bottleneck are an illusion. Astonishing, but true! Any improvement made after the bottleneck is useless, because it will always remain starved, waiting for work from the bottleneck. And any improvements made before the bottleneck merely result in more inventory piling up at the bottleneck.”

“Your job as VP of IT Operations is to ensure the fast, predictable, and uninterrupted flow of planned work that delivers value to the business while minimizing the impact and disruption of unplanned work, so you can provide stable, predictable, and secure IT service.”

Erik Reid

The Three Ways:

“The First Way helps us understand how to create fast flow of work as it moves from Development into IT Operations, because that’s what’s between the business and the customer. The Second Way shows us how to shorten and amplify feedback loops, so we can fix quality at the source and avoid rework. And the Third Way shows us how to create a culture that simultaneously fosters experimentation, learning from failure, and understanding that repetition and practice are the prerequisites to mastery.”

Retrieve Fantasy Football Stats using ESPN’s API: Part 2

Hello again and welcome to part two of our tutorial on how to scrape data from ESPN’s fantasy football API using Ruby. Last time we left off with our basic connection to ESPN, and we had retrieved some solid data. Let’s continue to pull more data and parse it.

First, we have a little bit of cleanup. There are some global variables sitting around that we’d like to get rid of, and we’re also going to be adding static data to reference. So let’s create a data module to house these objects and name it DataSource. We can start by moving our SWID, S2, and league ID (if applicable) variables into this file and assigning them as constants instead of global variables.
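The file itself appears as a screenshot in the original post; a minimal sketch of what data_source.rb might contain is below, with placeholder values you would replace with your own cookies and league ID.

# data_source.rb (sketch) -- values are placeholders
module DataSource
  SWID = '{YOUR-SWID-COOKIE}'.freeze
  S2 = 'your_espn_s2_cookie_value'.freeze
  LEAGUE_ID = '1009412'.freeze
end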

Now that we are working with more than one file, we’ll need to pull in these files to our main.rb class. Since we think we will only have one directory, we can make this simple and only add our Root directory to our Load Path. Let’s create a constant in main.rb called ROOT_DIR that will look like this:

ROOT_DIR = File.join(File.dirname(__FILE__))

Then we can add that to our load path with this statement:

$LOAD_PATH.unshift(ROOT_DIR)

Now we’ll easily be able to pull any files we create in our Root path. Finally we’ll want to require our DataSource module like so:

require 'data_source'
include DataSource

We could loop through our root directory and require every .rb file, but this might be overkill for now. Now that we have access to our DataSource file, we can remove those ugly global variables and update the references to them in our code.

Now we’re ready to start looping through each week to pull down all the various statistics that we’re looking for. The general flow of our code will be the following:

  1. Make an API call for each week of the season to pull in the data. In this case, we will use 2019.
  2. Loop through each team that played that week.
  3. Loop through each player on that team’s roster and parse out their stats.

Simple enough, right? So, let’s take a look at the data that we pulled down in part 1 to look at what data is relevant to us. For now, we will be concerned with the Teams key in our Hash. The teams key is structured like so:
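The structure is shown as a screenshot in the original post; an abbreviated sketch covering only the keys this tutorial touches is below (the values are made up and most keys are omitted):

# Abbreviated shape of one entry in data['teams'] -- illustrative values only
{
  'id' => 2,
  'roster' => {
    'entries' => [
      {
        'playerId' => 12345,
        'lineupSlotId' => 2,
        'playerPoolEntry' => {
          'player' => {
            'firstName' => 'Sample',
            'lastName' => 'Player',
            'defaultPositionId' => 2,
            'stats' => [
              { 'scoringPeriodId' => 1, 'statSourceId' => 0, 'appliedTotal' => 17.4 },
              { 'scoringPeriodId' => 1, 'statSourceId' => 1, 'appliedTotal' => 12.9 }
            ]
          }
        }
      }
    ]
  }
}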

This may seem a little messy but I’ll point out some relevant data as we walk through this. Most of the actual stats will come from the data in that hash, but we’ll also pull a few pieces from the playerPoolEntry. As mentioned above, our first step will be to loop through each week and make an API call that applies to that week. Let’s make two new variables to specify the weeks we want to look at and the applicable season. For testing purposes, we’ll just look at week 1 for now:

weeks = *(1..1)
season = '2019'

If you aren’t familiar with the * syntax, it will simply create an array from the specified range. So in this case it will just create an array of [1], but we can easily expand this later once we’re ready to pull the data for all weeks. We will also want to declare an array called output where we will store all of our data as it is parsed. Now we can set up our loop to iterate through each week:

output = []

weeks.each do |week|
  url = "https://fantasy.espn.com/apis/v3/games/ffl/seasons/#{season}/segments/0/leagues/1009412?view=mMatchup&view=mMatchupScore&scoringPeriodId=#{week}"
  response = RestClient::Request.execute(
      :url => url,
      :headers => {
          'cookies': {'swid': SWID,
                      'espn_s2': S2}
      },
      :method => :get
  )

  data = JSON.parse(response)

In the above code, we’ll need to redefine the URL for our API call for each week. We can interpolate the season and week variables into the URL string to accomplish this. Then we will perform a GET call and parse out the JSON to turn it into a hash. At this point we should have our data for week 1. This will be followed by our next loop which will parse the players for each team. We will iterate through each object in the teams array from the response body:

data['teams'].each do |team|
<body>
end

Now we should be at a point to start pulling out individual pieces of data. The first item we’ll collect is the team ID, or the very first item in the team hash.

This ID will correspond to a team in your league. To find out which team is which, you will have to look at the URL for each team when you are on the ESPN site. To do this you can simply go to the standings page and click through each team.

Here you can see the team ID is set to 2.

This next step is optional depending on whether you care about actually having names for each team, but I recommend adding another constant to your DataSource module to map the IDs for each team:
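A hypothetical example of that constant, with placeholder names, could look like this:

# In data_source.rb -- map ESPN team IDs to owner/team names (names are placeholders)
OWNERS = {
  '1' => 'Team One',
  '2' => 'Team Two',
  '3' => 'Team Three'
}.freeze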

So if you have added this, we can write the line:

owner = OWNERS[team['id'].to_s]

(If you did not add an OWNERS constant then simply write team['id'].to_s)

Now we get to add — you guessed it — another nested loop! Is this the best way to write this code? No, it is not. We typically want to minimize our cyclomatic complexity, and the saying goes “flat is better than nested”. So while this isn’t necessarily ideal, we can always get our code to work properly now and then refactor later to extract some functionality into methods. We can keep a lookout as we go forward for places where we can reduce our code complexity and improve readability when we get around to refactoring. But I digress.

Our next loop will be through each roster entry. The data we will collect for each player is as follows:

  1. firstName
  2. lastName
  3. playerId – a unique ID given to each player
  4. lineUpSlotId – An ID that signifies which position corresponds to the given player
  5. defaultPositionId
  6. actual points scored
  7. points the player was projected to score

Some of this data we can simply take, and some of it we will have to use to parse out more data. Let’s start with the easy ones. The top of our code block will look like this:

team['roster']['entries'].each do |entry|
  fname = entry['playerPoolEntry']['player']['firstName']
  lname = entry['playerPoolEntry']['player']['lastName']
  player_id = entry['playerId']
  slot = entry['lineupSlotId']

This is fairly straightforward as far as data gathering. On the next line we will want to grab the player’s position code. Since this code doesn’t actually tell us anything useful, we’ll have to map out what these codes represent in our DataSource module. The player codes we’ll use are as follows:

POSITION_CODES = {
  '1'  => 'QB',
  '2'  => 'RB',
  '3'  => 'WR',
  '4'  => 'TE',
  '16' => 'D/ST',
  '5'  => 'K'
}

Then we can reference this constant just like we did for our team Owners.

position = POSITION_CODES[entry['playerPoolEntry']['player']['defaultPositionId'].to_s]

We also have to get a little creative with the slot codes that we already grabbed. The slot code doesn’t really tell us much other than if a player is in your starting lineup or on your bench. Luckily this is pretty straightforward. Any number that is less than 9, exactly 16, or exactly 17 represents a starter, and anything else is a bench player. This can be evaluated like so:

starter = (slot < 9 || slot == 17 || slot == 16) ? 'true' : 'false'

Great, now we have a bunch of general info about our given player. Now we want to pull their projected and actual stats, but this requires us to iterate over the stats key from our data. These loops are getting a little out of hand, so let’s stop being lazy and create a new module to help us out. Since we’ll mostly be using this module for parsing player data, let’s call it PlayerHelper (player_helper.rb). We can go ahead and require this at the top of our main.rb file the same way we did with our DataSource. Then we’ll add a method into the PlayerHelper called get_stats.

There are a few entries in the stats array that we are looking at, but we only really care about the entry that corresponds to our given week. We also will need our stats array to parse from. So our method declaration will look like this:

def get_stats(stats_array, week)

Now we will need to use a bit of logic to find the correct entry. First we need to find the entry with the corresponding week in the scoringPeriodId field. Then inside that entry we will need to check the statSourceId. If that ID is a 0, then that is the player’s actual stats. If it is a 1, then that entry represents the player’s projected stats. When we have assigned our actual and projected values, we can return a hash with an actual value and a projected value. So our final method code will look like this:

def get_stats(stats_array, week)
  actual = ''
  projected = ''
  stats_array.each do |stat|
    if stat['scoringPeriodId'] == week
      if stat['statSourceId'] == 0
        actual = stat['appliedTotal']
      elsif stat['statSourceId'] == 1
        projected = stat['appliedTotal']
      end
    end
  end
  {actual: actual, projected: projected}
end

And the method call from main.rb will look like this:

stats = get_stats(entry['playerPoolEntry']['player']['stats'], week)

That should give us a pretty good list of data to start with. Now let’s think ahead for a minute. Where should we store all of our data when we’re done retrieving it? It would be nice to create our own database, but that’s probably overkill for the moment, not to mention a lot of extra work. We could definitely put it all in a spreadsheet, too, but then we’d have to pull in some extra gems and add more logic. So let’s just stick with a good old CSV for now, which is just comma delimited fields that we can always import into a spreadsheet later. To do this, we can add all of our data so far to one big string:

result = "#{owner},#{week},#{season},#{position},#{fname},#{lname},#{starter},#{stats[:actual]},#{stats[:projected]},#{player_id},"

It’s not the prettiest thing in the world, but it will work for now. Finally, we can add this result string to the output array that we created earlier:

output << result

If we let our program iterate all the way through for week 1, we should end up with output that looks like a list of comma-separated rows, one per rostered player.

Not bad for a day’s work!
Let’s review what we’ve accomplished up to this point:

  1. We created a new DataSource module that we can move our global variables into and establish constants that help us map our data.
  2. We’ve created logic that will loop through and collect all of our basic player data.
  3. We created another new module PlayerHelper that we can use going forward to extract logic into to keep our main.rb class clean.
  4. We’ve identified a few places where we can go back and refactor to clean up our existing code.

One more takeaway is that we have further seen how the API returns our data in a way that isn’t exactly straightforward. We have to go pretty deep into our data objects to find what we need. This is typical of most web services that return lots of data. It is another reminder that we need to keep our code well organized, or none of this will make much sense to our future selves and it will be hard for others to read.

I hope that you’ve found this post helpful and are able to follow along. For part three, we will look at pulling some additional player data and outputting our results into spreadsheets.

From the Pipeline v7.0

This entry is part 7 of 36 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web. 

Final Thoughts on “Patterns for Managing Source Code Branches”

This is the final post in a series on branching from Martin Fowler. The series has been an amazing journey to follow. In the wrap-up, Martin reminds us “branching is easy, merging is harder”. He provides us with a summary of his recommended rules to follow with branching and merging.

If Estimates Were Accurate, We’d Call Them Actuals

A great post from Tanner about establishing a shared understanding on the team about estimates by using metaphors to bring everyone on board. The key point made in this article is: “Estimates are about mathematics. Expectations are about human connection. That difference matters.”

Using Equivalence Partitioning and Boundary Value Analysis in Black Box Testing

A nice introductory article for those in the testing space wanting to learn about equivalence partitioning and boundary value analysis. “Equivalence partitioning and boundary value analysis are two specification-based techniques that are useful in black box testing. This article defines each of these techniques and describes, with examples, how you can use them together to create better test cases. You can save time and reduce the number of test cases required to effectively test inputs, outputs, and values.”

How to Implement Hypothesis Driven Development

Hypothesis Driven Development is about changing the mindset of software development from a set of fixed features to experimentation. Every project becomes an experiment that tests a hypothesis about the system – meaning we can refute the hypothesis and roll back the changes or update our hypothesis and alter our approach.

Five Attributes of a Great DevOps Platform

Pavan Belagatti gives an excellent rundown of DevOps practices an organization needs to adopt to be successful. Some of those practices are building a strong culture of learning, automation wherever possible, and adopting cloud computing.

Book Club: The Phoenix Project (Chapters 1-3)

This entry is part 1 of 8 in the series Phoenix Project

The following is a chapter summary for “The Phoenix Project” by Gene Kim for an online book club.

The book club is a weekly lunchtime meeting of technology professionals. As a group, the book club selects, reads, and discusses books related to our profession. Participants are uplifted via group discussion of foundational principles & novel innovations. Attendees do not need to read the book to participate.

Background on the Phoenix Project

“Bill, an IT manager at Parts Unlimited, has been tasked with taking on a project critical to the future of the business, code named Phoenix Project. But the project is massively over budget and behind schedule. The CEO demands Bill must fix the mess in ninety days or else Bill’s entire department will be outsourced.

With the help of a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking, Bill must organize work flow, streamline interdepartmental communications, and effectively serve the other business functions at Parts Unlimited.

In a fast-paced and entertaining style, three luminaries of the DevOps movement deliver a story that anyone who works in IT will recognize. Readers will not only learn how to improve their own IT organizations, they’ll never view IT the same way again.”

The Phoenix Project

Chapter 1

Bill Palmer is the Director of Midrange Technology Operations for Parts Unlimited, a $4 billion per year manufacturing and retail company.

Parts Unlimited's largest retailing competitor offers better customer service and a new feature that allows people to customize their cars with their friends online.

Bill is frustrated because their competition outperforms Parts Unlimited. His group is expected to deliver more with less year after year.

Bill is invited to meet with Steve Masters, the CEO of Parts Unlimited. He is informed that Luke (CIO) and Damon (VP of IT Operations) were let go and Bill is now VP of IT Operations.

“CIO stands for ‘Career Is Over'”

Bill Palmer

IT will temporarily report to Steve until a new CIO is hired.

Steve tells Bill the goal of the company is to regain profitability to increase the market share and average order sizes. At present, the competitors for Parts Unlimited are beating them.

Steve believes “Project Phoenix” is essential to company success. The project is years late on delivering. If the company does not turn things around, the shareholders are likely to split up the company, costing the jobs of four thousand employees.

Chris Allers will be interim CIO. Chris is presently the VP of Application Development. Both Chris and Bill will report directly to Steve.

Bill is reluctant to take the position but Steve convinces him.

“What I want is for IT to keep the lights on. It should be like using the toilet. I use the toilet, and hell, I don’t ever worry about it not working. What I don’t want is to have the toilets back up and flood the entire building.”

Steve Masters

Bill is informed by Steve that the “payroll run is failing”. This is his first task as failure to make payroll means many factory workers would be affected, potentially getting the company into trouble with the Union.

Chapter 2

Bill moves to address the payroll issue by first meeting with Dick Landry, CFO.

“In yesterday’s payroll run, all of the records for the hourly employees went missing. We’re pretty sure it’s an IT issue. This screwup is preventing us from paying our employees, violating countless state labor laws, and, no doubt, the union is going to scream bloody murder.”

Dick Landry

Bill & Dick go to meet the Operations Manager, Ann, to get more situational awareness about the problem. The general ledger upload for hourly employees didn’t go through and all the hourlies are zero. The salaried employees' numbers are OK.

“To get Finance the data they need, we may have to cobble together some custom reports, which means bringing in the application developers or database people. But that’s like throwing gasoline on the fire. Developers are even worse than networking people. Show me a developer who isn’t crashing production systems, and I’ll show you one who can’t fog a mirror.”

Bill Palmer

As Bill returns to the IT building, he realizes how run down it is compared to the building that Leadership & Financing work in. Bill heads to the Network Operations Center (NOC) to meet Wes and Patty.

Wes is the Director of Distributed Technology Operations. He is responsible for the Windows server, database, and networking teams. Wes is loud, outspoken, and shoots from the hip.

Patty is the Director of IT Service Support. She owns all the level 1 and 2 help desk technicians. She also owns the trouble ticketing system, monitoring, and running the change management meetings. Patty is thoughtful, analytical, and a stickler for processes and procedures.

IT was in the middle of a Storage Area Network (SAN) firmware upgrade when the payroll run failed. They tried to back out the changes but ended up bricking it instead.

Chapter 2 is the first introduction of Brent, the engineer in the middle of many important IT projects. By having Brent tackle this Sev 1 issue, he is not working on project Phoenix. The team decides to visit Brent to learn more about the payroll issue.

Chapter 3

Bill, Wes, and Patty go to meet Brent about the payroll issue.

“I was helping one of the SAN engineers perform the firmware upgrade after everybody went home. It took way longer than we thought—nothing went according to the tech note. It got pretty hairy, but we finally finished around seven o’clock.”

“We rebooted the SAN, but then all the self-tests started failing. We worked it for about fifteen minutes, trying to figure out what went wrong. That’s when we got the e-mails about the payroll run failing. That’s when I said, ‘Game Over.’”

Brent

The team gets an update from Ann. The last pay period was fine but for the new pay period all the data is messed up. The Social Security numbers for the factory hourlies are complete gibberish.

Since only one field is corrupted, the team deduces it’s not a SAN failure. They find out on the conference call for the incident that a developer was also installing a security application at the same time the SAN firmware was being upgraded.

The security software change was requested by John Pesche, the Chief Information Security Officer.

“The only thing more dangerous than a developer is a developer conspiring with Security. The two working together gives us means, motive, and opportunity.”

Bill Palmer

Information Security at Parts Unlimited often makes urgent demands, so the development teams don’t invite them to many meetings. The InfoSec team does not follow the change management process and it always causes problems.

John reveals that Luke and Damon were perhaps fired over a compliance audit finding from security.

InfoSec had an urgent audit issue around storage of PII — personally identifiable information like social security numbers, birthdays, etc. They found a product that tokenized the information so the SSNs were no longer stored.

“‘Let me see if I’ve got this right…’ I say slowly. ‘You deployed this tokenization application to fix an audit finding, which caused the payroll run failure, which has Dick and Steve climbing the walls?'”

Bill Palmer

John made the changes because the next window for the change to be deployed was in four months and auditors would be on-site in one week. John never tested the change because there’s no test environment.

Bill requests a list of all the changes made in the past three days so they can examine the timeline and establish cause & effect. Bill finds out few people use the change management system to make requests.

The Change Advisory Board (CAB) is not well attended. Teams will make changes without approval or notice because of deadline pressures. Bill asks Patty to send out a meeting notice to all the tech leads and announce attendance is mandatory.

After review of the 27 changes in the past three days, only the InfoSec tokenization change and the SAN upgrade could be linked to payroll failure.

The applications were eventually brought online but the company had to submit payroll using the prior pay period. The local newspaper reports on the payroll failure after the Union complains.

From the Pipeline v6.0

This entry is part 6 of 36 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web. 

4 DevOps Anti-Patterns That Lead to Disaster

Tom Stiehm discusses four common anti-patterns associated with DevOps, which are the “hero anti-pattern”, “continuous build anti-pattern”, “DevOps silo anti-pattern”, and “selective automation”.

A Guide to Threat Modeling For Developers

Jim Gumbley provides a great in-depth guide on threat modeling for teams, with multiple examples, on Martin Fowler’s blog.

Deadlines and Agility

Doc Norton addresses the “no deadlines” philosophy in Agile. Deadlines happen often and teams are subject to external deadlines such as regulatory changes. Using repetition he explains three aspects of high-functioning teams.

Failure – Is It A Matter of When?

Barry O’Reilly uses the Chernobyl disaster as a backdrop to discuss how we perform failure analysis. There are four factors to consider for failure causes: (1) KPI’s drive behavior, (2) the flow of information, (3) value guides behavior, and (4) limited resources put pressure on behavior.

Idealcast Podcast

Gene Kim, author of The Phoenix Project, has started a podcast. The first episode covers the Five Ideals, key principles for success in the digital age.

Cukes and Apples: Writing Scenarios with Page Objects

Welcome Back

In the previous article, Cukes and Apples: App Automation with Ruby and Appium, I demonstrated a functional workspace setup for mobile automation by using Appium to launch the Google Play Store app while running a Ruby/Cucumber test suite. The implementation described in that post is enough to prove that the workspace is capable of mobile automation, but not yet a functional test suite.

This post will cover the implementation of a working Cucumber test suite. This means launching and closing the app in our tests, writing step definitions that can interact with the app, and designing support code to make tests easier to write.

Updated Capabilities

Since the last post, I’ve acquired a new .apk file and updated my capabilities to install it, as shown in the screenshot below. I’m using the Walmart app – it seemed a better test candidate than the Google Play Store.

Updated env.rb since last post

The “app” capability is used to reinstall the app when the driver is launched. There are fair arguments that execution speed could be improved by leaving the app installed, but my experience has taught me not to deliberately manage app state – it’s easier to start fresh every time.
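The env.rb screenshot is not reproduced here, but a sketch of capabilities along these lines might look like the following; the device name, automation name, server URL, and .apk path are all assumptions.

# env.rb (sketch) -- the "app" capability points at the .apk so it is reinstalled each run
require 'appium_lib'

CAPS = {
  caps: {
    platformName:   'Android',
    deviceName:     'Android Emulator',
    automationName: 'UiAutomator2',
    app:            File.expand_path('../../apps/walmart.apk', __dir__)
  },
  appium_lib: {
    server_url: 'http://127.0.0.1:4723/wd/hub'
  }
}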

Create Hooks

We need to launch the app at the beginning of every test, and close it at the end. To do this, we can start by moving the sample code from env.rb to a Cucumber hook.

Create a file called hooks.rb in the features/support directory. In that file, create a Before hook and move the sample code into it, as shown in the screenshot below.

Notice how the begin/rescue construct has changed. Cucumber hooks fail quietly, so it is helpful to rescue and print any exceptions that are raised; for that reason, the begin/rescue now encompasses all the driver code.

Before hook in hooks.rb
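For readers without the screenshot, a hedged sketch of such a Before hook, assuming the appium_lib gem and a capabilities hash defined in env.rb (here called CAPS), might be:

# features/support/hooks.rb (sketch)
Before do
  # The begin/rescue wraps all driver code so exceptions are printed instead of failing quietly
  begin
    @driver = Appium::Driver.new(CAPS, true)
    @driver.start_driver
  rescue StandardError => e
    puts "Failed to start the Appium driver: #{e.message}"
    raise
  end
end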

The remaining code in the env.rb file is short and sweet:

env.rb without sample code

One more hook will ensure the driver is terminated at the end of every test – the After hook. Use a begin/rescue block and call the quit_driver method as shown below.

After hook in hooks.rb that closes driver
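Again as a sketch rather than the post's exact code, the After hook could look like this:

# features/support/hooks.rb (sketch, continued)
After do
  begin
    @driver.quit_driver if @driver
  rescue StandardError => e
    puts "Failed to quit the Appium driver: #{e.message}"
  end
end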

A Simple Scenario

It won’t be possible to observe the Before and After hooks in action until we have a Cucumber scenario to execute. A very simple scenario will help us to test the driver action in those hooks, practice the language we want to use in our tests, and prove that this application can be automated.

Create a feature file under the features directory. I recommend organizing feature files in a single directory under features, like the “gherkin” directory in the screenshot below.

I created a feature called Welcome to describe the experience of launching the app for the first time.

Welcome feature for Walmart app

The step definitions that drive this scenario are very simple – each calls upon the driver that was created in the Before hook to find an element on the screen, and two of the steps then use that element to test a value or perform an action. These step definitions are intentionally crude, and will be improved.

Crude step definitions
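The screenshot is not reproduced here; a rough sketch of what such crude steps might look like follows, with the step wording and element locators invented for illustration.

# features/step_definitions/welcome_steps.rb (sketch) -- locators are placeholders
Given(/^the app is launched to the welcome screen$/) do
  @driver.find_element(:xpath, "//*[contains(@text, 'Welcome')]")
end

Then(/^I see the welcome message$/) do
  element = @driver.find_element(:xpath, "//*[contains(@text, 'Welcome')]")
  expect(element.displayed?).to be true
end

When(/^I tap the get started button$/) do
  element = @driver.find_element(:xpath, "//*[contains(@text, 'Get Started')]")
  element.click
end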

Execute The Scenario

If you are following along with a similar implementation, it should now be possible to execute the scenario to test your work. Check out the following videos to see the automation in action.

Create Page Objects

One of the problems in our step definitions is a lack of clarity. Direct references to the driver (like uses of @driver in the steps screenshot above) generally hurt readability because the logic of an Appium driver is not like the logic of the application under test. Directly referencing the driver will create step definitions that are too technical to understand at a glance.

The Page Object pattern is a fine solution for improving the clarity of application logic in code, and would make a great improvement to our test suite. Implementing the pattern involves creating constructs in code to represent pages of an application, and then imbuing those constructs with data and logic that describe the pages.

The scenario implemented begins with a step that validates the display of the Welcome page by searching for a title element. The code is very simple – it finds an element, but the logic is obscure. How does the reader know that it validates the display of the page? If an object represented the Welcome page, and this step simply asked that object if the page is visible, then the intent of this step definition would be perfectly clear. Such an object, implemented with a Ruby class, might look like this:

Simple page class for Welcome page
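Here is a minimal sketch of such a Welcome page class; the locator is a placeholder:

# pages/welcome_page.rb (sketch)
class WelcomePage
  def initialize(driver)
    @driver = driver
  end

  # True when the welcome title can be found and is displayed on the screen
  def visible?
    @driver.find_element(:xpath, "//*[contains(@text, 'Welcome')]").displayed?
  rescue Selenium::WebDriver::Error::NoSuchElementError
    false
  end
end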

By moving validation of page visibility into a method of a page object, we make the code reusable and give the behavior a descriptive method name.

The step definition which calls upon this page object is now much easier to understand:

Step definition is more readable with page object pattern
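As a sketch, the page-object-backed step might read:

# features/step_definitions/welcome_steps.rb (sketch)
Then(/^I see the welcome page$/) do
  welcome_page = WelcomePage.new(@driver)
  expect(welcome_page.visible?).to be true
end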

To make this work, some additional environment setup is necessary. In the env.rb file, require the new page class like in the screenshot below.

Require page class file in env.rb

For the following steps, we can make similar changes. Take a look at these updated step definitions and consider whether they are now easier to understand:

Updated steps with page objects

Advanced Page Objects

Using the Page Object pattern creates a risk of generating lots of boilerplate code – for example, the initialize method from the Welcome page above would be reproduced in every other page object in the suite. A conscientious developer will quickly begin seeking optimizations to his or her Page Object implementation. An alternative is the Screenplay Pattern.

A great example to follow is page-object, the Ruby gem which implements the Page Object pattern for web automation with Watir and Selenium WebDriver. Cheezy and other developers on the project have created a very nice framework for describing elements in page classes and for managing references to the driver and current page.

The first optimization that we can apply is to reduce duplicate code by establishing a common base class for pages. This immediately allows us to remove the duplicate initialize logic from pages that use this base.

new BasePage class
WelcomePage initialization has been moved into BasePage
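A minimal sketch of that base class, and of WelcomePage inheriting from it, could look like this:

# pages/base_page.rb (sketch)
class BasePage
  def initialize(driver)
    @driver = driver
  end
end

# pages/welcome_page.rb -- initialization now comes from BasePage
class WelcomePage < BasePage
  def visible?
    @driver.find_element(:xpath, "//*[contains(@text, 'Welcome')]").displayed?
  rescue Selenium::WebDriver::Error::NoSuchElementError
    false
  end
end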

The next optimization is to programmatically create methods for page elements. In the examples above, I created a method in my page class every time I needed to find an element, click on an element, or get the value of an element – this can quickly get out of control, resulting in page classes that are hundreds of lines long and difficult to read.

An ideal implementation would streamline the process of creating elements. Examine the declaration of element and button in the following example:

WelcomePage with elements declared

That terse expression, which communicates that our page has one plain element and one button, can be accomplished with a little bit of metaprogramming.

Create class methods like element and button in the base page class and use define_method to create new element methods whenever a page class declares an element or button. This implementation is very similar to the page-object gem.

element and button methods in BasePage
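A hedged sketch of that metaprogramming, loosely modeled on the page-object gem, is below; the method names and locators are placeholders.

# pages/base_page.rb (sketch)
class BasePage
  def initialize(driver)
    @driver = driver
  end

  # `element :title, :xpath, '...'` defines a #title method that finds the element
  def self.element(name, how, what)
    define_method(name) { @driver.find_element(how, what) }
  end

  # `button :get_started, :xpath, '...'` also defines #get_started_click
  def self.button(name, how, what)
    element(name, how, what)
    define_method("#{name}_click") { send(name).click }
  end
end

# pages/welcome_page.rb (sketch)
class WelcomePage < BasePage
  element :title,       :xpath, "//*[contains(@text, 'Welcome')]"
  button  :get_started, :xpath, "//*[contains(@text, 'Get Started')]"
end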

Take a look at the full codebase on GitHub to explore the test suite upgrades implemented in this post:

https://github.com/RussellJoshuaA/cukes_apples_2

Coming Up Next

  • Advanced Cucumber Steps – creating steps for mobile test automation that are readable, reusable, and highly-flexible
  • Cross-platform mobile automation – creating flexible execution mechanisms, page objects that cover multiple platforms, tags for platform-specific execution

Resources

Slaying the Hydra: Orchestration Overview and Setting a Clean Slate

This entry is part 2 of 5 in the series Slaying the Hydra

This is the second in a series of blog posts explaining a way to execute distributed parallel test automation. The first entry can be found here.

In this post I walk you through the process of orchestration and the first orchestrated stage. I will explain the concepts in a way that allows them to be applied to multiple use-cases.  Since I am a Rubyist at heart — with a fondness for Cucumber and Jenkins — the examples found here are geared towards them.

Orchestration Overview

Jenkins provides pipeline functionality, which serves the purpose of orchestrating multiple jobs into a singular flow. The original intent of a pipeline is automated continuous delivery of software to a production environment. We utilize the pipeline to orchestrate our parallel testing effort.

The purpose of the pipeline being developed is to provide feedback to our stakeholders as rapidly as we can, given the resources provided. Additionally, we make the framework dynamic so it can handle configuration changes quickly and efficiently.

The pipeline implementation in Jenkins requires two parts:

  • The first is the pipeline code, referred to as a Jenkinsfile, which is often stored in the related source code repository. In the example below, the Jenkinsfile is stored in the testing repository.
  • The second part is the pipeline job within Jenkins, which references the source code that stores our Jenkinsfile. The image below is the configuration of a pipeline job in Jenkins. We provide the URL location, authentication parameters, and name of the Jenkinsfile.

Jenkins jobs allow for parameters configured at runtime to supply dynamic execution, depending on the selection. The image below is an example where we choose between IE and Chrome as the browser to be utilized for the UI tests.

When running a build of the job we can specify between IE and Chrome. If we kick the job off automatically at a certain time it will default to the first option in the drop-down provided (see below).

After constructing the pipeline job in Jenkins, we can proceed to understand the Jenkinsfile. To complete our objectives, we can break the Jenkinsfile down into three sections, or stages.

The above image is a Jenkinsfile, which is what we store with our source code pulled from a repository and utilized as the script for our pipeline.

*Note: while I am providing an overview of a Jenkins pipeline, I cannot cover all the facets of this expansive tool in one blog post. However, jenkins.io has all the information you could ever want, outside of what I supply here.

From the image above we see the node parameter, which allows us to tell Jenkins where we want the pipeline job itself to run. This does not mean every job within the pipeline will run on machines with this tag associated to them, but we will dive into that in the next installment/blog post.

The browser method returns the result of params.browser which is received from the parameter within the pipeline job in Jenkins. This will either equal ‘ie’ or ‘chrome’.

The total_number_of_builds method returns ‘3’ which will come in handy later in our execution stage.

Setting a Clean Slate

In our ‘clearing’ stage we want to build a job named ‘clear_workspace’ that will go out to all impacted machines and clear a file location to ensure we are guaranteed to start with a clean slate.

Executing Our Tests

In our ‘running’ stage we can run three jobs in parallel to provide a faster feedback loop to our end users. I chose the number of jobs randomly; it could just as easily be 20 or 100 and the pipeline would function correctly.

The image below displays a “catchError wrapper” that prevents a failure code from one of the built jobs from stopping the whole pipeline execution.

The parallel keyword allows us to execute the jobs at the same time rather than waiting for them to execute sequentially.

Lastly, within the three jobs we are building, the parameter sections pass browser and total_number_of_builds, which are returned from the methods created at the top of the pipeline file. Additionally, we pass a build_number parameter which is either 1, 2, or 3.

Consolidating our Results

Our ‘consolidation’ stage will allow us to access the machines utilized for testing and pull meaningful artifacts from the job and report the results to our stakeholders.

There are two jobs in the consolidation stage: one goes out and pulls the information from each impacted machine, and the other consolidates that information into a concise report.

There are complications to this stage, which will be discussed in the final installment of this blog series.

Setting a Clean Slate In-Depth

As previously mentioned, the ‘clear_workspace’ job is intended to clean up after previous runs of the same job on all utilized workstations.

During execution, the test results are stored in a specific file location on each workstation. We do not want previous results carried into the current execution, so we must go out to each machine being utilized as a node and clear the specified file location.

In Jenkins, we can set a job to iterate over a set of workstations via the Node Parameter plugin. This will execute the job sequentially on each node specified via the default nodes option.

Additionally, we can check the ‘Execute concurrent builds if necessary’ parameter to allow the executions to happen in parallel.

For the actual commands (Windows commands, sorry Mac folks) we need to delete a certain directory and recreate it, to ensure it is empty.

In the image above, the file location that we are clearing (first stage) will be the same file location where the results are stored for consolidation (last stage) of our pipeline. Remember, it is important for those locations to be the same.

In the next installment, we discuss executing the tests in parallel and how we ensure tests are distributed within the parallel executions.

From the Pipeline v5.0

This entry is part 5 of 36 in the series From the Pipeline

The following will be a regular feature where we share articles, podcasts, and webinars of interest from the web. 

Six Testing Personas to Avoid

This is a great article about the anti-patterns associated with software testers. Great advice for testers on avoiding common problems like only following existing test scripts, a hyper-focus on automation, growing stale in your learning, or focusing too much on shiny new developments without implementing them.

Using Docker Desktop and Docker Hub Together

This is the first of a two-part series on how to use Docker Desktop and Docker Hub. This is a great step-by-step process for anyone looking to get into docker for the first time.

What is Non-Functional Testing?

Eran Kinsbruner provides a great overview of non-functional testing including a list of many of the non-functional testing types. Many of the activities he describes can be assisted with a cloud-based testing platform.

The Science of Software Testing

Last month I spoke at ComTrade’s “Quest for Quality” webinar series. This blog post summarizes much of the talk on “The Science of Testing” about how software testers can leverage practices of scientists to help improve the rigor of testing.

Should We Build Better Software…Or Better People? (With Damian Synadinos)

The QA Lead podcast is a great resource for testers. In the most recent episode Damian Synadinos is the guest of honor. Damian does a great job of getting to the fundamentals of software testing and solving the oft-overlooked human component of software development.