Fictional Test Data

An underlying principle in our work as software developers is that everyone should understand our work. From design to production, we strive to produce sensible models for other humans to understand. We design requirements for clarity, and hammer them out until everyone involved agrees that they make sense. We write code that is self-documenting, employs conventions, and uses design patterns, so that other developers can better comprehend how it works. We write tests that tell detailed stories about software behavior – stories that are not only truthful, but easily understood. We enshrine this principle in our processes and tooling – especially in quality assurance, where tools like Cucumber and Gherkin emphasize collaboration and communication.

We are storytellers

To that end, I propose an exercise in which we try on a new hat – the storyteller.

I sense some parallel between my experience as a reader and my experience in quality assurance. I feel sensitive to the difference between an easy, accessible writing style and writing that is denser and more challenging. Popular authors like Stephen King are criticized for being too prolific, too popular, and too easy to read, but there is great value in accessibility – reaching a wide audience is good for the business model.

In software development, striving for accessibility can be valuable. Most of the difficulty that I’ve witnessed and experienced can be attributed not to the inherent complexity of code, or to the scale of our systems, but to simple miscommunications that occur as we work to build them. From the perspective of quality assurance, it’s particularly harmful when our tests, our expressions of expected system behavior, are difficult to understand. In particular, I find that the test data driving a test is often difficult to understand, untrustworthy, and time-consuming to manage.

When I say “test data”, I’m speaking broadly about information in our systems as it is employed by our tests. It’s helpful to break this down – a common model categorizes information as master data, transactional data, and analytical data.

Most of the data that we directly reference in our tests falls into the category of master data. Master data includes business entities like users, products, and accounts. This data is persistent in the systems that we test, and it becomes persistent in our tests too – most test cases involve authenticating as some kind of user, or interacting with some kind of object (like a product). This is usually the main character in our stories.

Transactional data is just what it sounds like – transactions. In our systems, this may include purchases, orders, submissions, etc – any record of an interaction within the system. We don’t usually express this directly in our tests, but transactional data is intrinsically linked to master data, and the entities that we use in our tests are further defined by any associated transactional data.

The last category is analytical data, which is not obviously expressed in our tests. This encompasses metrics and measurements collected from production systems and users to make business decisions that drive software development. It tells us about the means by which users access our systems, and the way that they use them. This data is also a part of our tests – we employ information about real users and real interactions to improve our testing, and all of our test data becomes a reflection of the real world.

What does our test data typically look like?

I wouldn’t judge a book by its cover, but I would like to read test data at a glance. That’s not easy to do when we share user data that looks like the following example:

We don’t know much about this user without doing further research, like slinging SQL queries, or booting up the app-under-test to take a look. This information is not recognizable or memorable, and it undermines the confidence of anyone who would attempt to read it or use it. It tells a poor story.

Why do we construct data like this? The test data I remember using most often was not particularly well-designed, but simply very common. Sometimes a user is readily shared amongst testers because it is difficult to find or create something better – I give this user to you because it was given to me. At best, we could infer that this is a fair representative of a “generic user” – at worst, we may not even think about it. When we discover some strange new behavior in the system, something which may be a real defect to act on, we often need to ask first “was this data valid?”

Would our work be easier if our data was more carefully constructed?

As an example, I present the Ward family. I designed the Ward family to test tiers of a loyalty points system, and each member represents a specific tier. For the highest tier user, with more rewards than the others, I created Maury Wards. For the middle tier, a user with some rewards – Summer Wards. To represent the user who has earned no rewards – Nora Wards. If the gag isn’t obvious, try sounding out the names as you read them.

I created these users without much thought. I was just trying to be funny. I don’t like writing test data, and making a joke of it can be motivating. What I didn’t realize until later is that this data set was not only meaningful, but memorable. For months, I found myself re-using the Ward family every time I needed a specific loyalty tier. I knew what this data represented, and I knew exactly when I needed to use it.

Beyond the names, I employed other conventions that also made this data easier to use. For example, I could summon these users with confidence in all of our test environments because I gave them email addresses that indicated not only what kind of user they were, but also which environment they were created in – imagine something like maury.wards.qa2@example.test (a made-up address, but the tier and environment can be read right off of it). I recommend applying such conventions to any visible and non-critical information to imbue data with meaning and tell a clear story.

What could we do to tell a more detailed story?

User stories are relayed to us through an elaborate game of telephone, and something is often lost along the way. Take a look at the following example, and you may see what I mean.

“As a user”. Right there. This example may seem contrived, but I’ve seen it often – a user story without a real user. A story like this doesn’t encourage us to consider the different real-world people who will interact with our software, or the kinds of tests we should design for them. Including much more explicit information about the user would probably make an individual requirement clumsy, but that information is important. Imagine that this feature was tested with “a user”, and it passed without issue – great. But what about Dan? Dan does all of his business online, and doesn’t shop in-store. Where he lives, our system won’t even recommend a nearby store. How can we avoid forgetting about users like Dan?

If we can’t describe the users in a requirement, what can we do?

Alan Cooper, software developer and author of The Inmates Are Running The Asylum, argues that we can only be successful if we design our software for specific users. We don’t want all users to be somewhat satisfied – we want specific users to be completely satisfied. He recommends the use of personas – hypothetical archetypes that represent actual users throughout the software design process. UX designers employ personas to infer the needs of real-world users and design solutions that will address them, and for quality assurance, we should use the same personas to drive test case design and execution.

If I expanded a member of the Ward family into a full persona, it might look like the following example – a little something about who the user is, where they are, and how they interact with our system.

A persona includes personal information about a user, even seemingly irrelevant information, like a picture, name, age, career, etc – to make them feel like a real, relatable person. Thinking about a real human being will help us understand which features matter to the user, and how the user will experience these new features, to design test cases which support them.

A persona includes geographic location, especially when relevant in our tests. Software might behave differently depending on the user’s specific GPS location, local time zone, and even legislation. A user may be directed to nearby store locations or use a specific feature while in-store. Our software may behave differently depending on time and date – for example, delivery estimates, or transaction cut-off times. Our software may need to accommodate laws that make it illegal to do business across geographic boundaries, or to do business differently. The California Consumer Privacy Act (CCPA) is a recognizable example with implications for all kinds of software-dependent businesses.

A persona also includes information about the technology that a user favors. This is the lens through which they view our software, and it changes the user experience dramatically. How is this feature presented on PCs, smartphones, and tablets? Does it work for users on different operating systems? Which browsers, or clients, do we support? We can design personas for users with many combinations of hardware and software, and then execute the same test with each of them.

Hope lives in Honolulu, Hawaii, and I chose the name because the ‘H’ sound reminds me of where she lives. She lives in the Hawaii-Aleutian time zone, which can be easy to forget about if we do most of our testing against a corporate office address. She uses a Google Pixel 3 and keeps the operating system up-to-date – currently Android 10. While Honolulu is a major city, I took the liberty of assuming a poor internet connection – something else which may not be tested if we don’t build personas like this.

Lee lives in Los Angeles, CA – Pacific Time Zone. He uses an iPhone XS Max, and he doesn’t update the operating system immediately – he’s currently using iOS 12. He has a good network connection, but there’s a wrinkle – he’s using other apps that could compete for bandwidth and hardware resources.

Cass lives in Chicago, IL – Central Time Zone. She’s another Android user, this time a Samsung device, currently running Android 9. She has a good connection, but she’s using other apps which also use her GPS location.
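To keep details like these straight, it can help to write each persona down as a small structured record. A minimal sketch for Hope might look something like this – the layout and field names are just one possibility, not a prescribed format:

    persona: Hope
    location: Honolulu, HI
    time_zone: Hawaii-Aleutian
    device: Google Pixel 3
    os: Android 10 (kept up to date)
    network: poor connection

A record like this can sit right next to the email-address conventions described earlier, so that picking a persona also tells you exactly which user to sign in as.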

How do we manage all of this?

If I asked you today, “where can I find a user who meets a specific condition?”, where would you start? How is your test data managed today? There are plenty of valid solutions, like SharePoint, wikis, network drives, etc. – but don’t think of the application database as a test data library. In test environments, it is not a library but a landfill – too much to parse, too many duplicates, too much invalid data. We can only find helpful data there if we already know exactly where to look. Keep personas and detailed data somewhere that can be easily accessed and manipulated.

We can further reduce the work of test data management by treating the collection like a curated, personal library, where every story is included for a reason. Take care to reduce noise by eliminating duplicate data sets, and removing invalid ones. Name data sets for reference so that they can be updated or recreated as needed without disrupting the requirements, test cases, and software developers that use them.

In summary, I advocate the following:

  • Test data should be recognizable and memorable
  • Test data should be realistic and relatable
  • Test data should be curated and readily available

Additional Resources:

The Inmates Are Running The Asylum, Alan Cooper
Types of Enterprise Data

Automated Accessibility (ADA) Testing with Pa11y

ADA validation is the oft-forgotten child in QA Automation conversations. As Quality Assurance professionals, we focus on functional, performance, and security testing but miss the value and importance of accessibility validations. Any site that is customer-facing has an obligation to comply with ADA standards. Therefore, it’s important for us to make accessibility an up-front concern in testing.

For a little background, the Americans with Disabilities Act was signed into law in 1990. The law prohibits discrimination against people with disabilities in all areas of public life. There are five titles within the law, but for web applications only the regulations within Title III – Public Accommodations – apply.

[Chart: ADA Title III lawsuit filings rising year over year – source: https://www.adatitleiii.com]

The frequency and severity of lawsuits related to ADA Title III are rising year over year, as evidenced by the chart above. Most of these lawsuits end with the company having to set aside resources to become ADA compliant.

The commonly accepted standard used to gauge ADA compliance is the WCAG (Web Content Accessibility Guidelines), provided by the W3C (World Wide Web Consortium). The guidelines are organized under four principles.

Perceivable – Information and user interface components must be presentable to users in ways they can perceive.

Operable – User interface components and navigation must be operable.

Understandable – Information and the operation of user interface must be understandable.

Robust – Content must be robust enough that it can be interpreted by a wide variety of user agents, including assistive technologies.

Each guideline has multiple subsections and multiple success criteria to determine compliance; the criteria are judged against A, AA, and AAA levels, with A being the lowest compliance level and AAA the highest. Additional information can be found at https://www.w3.org/WAI/standards-guidelines/wcag/.

There are multiple options for ADA compliance testing, and they fall into a few categories: (1) in-house QA using assistive tools designed for disabled individuals, such as the JAWS screen reader, to work through the web application’s flows; (2) companies that will complete scans and/or manual validations as a service; and (3) static tools built as browser add-ons, such as Axe and WAVE. I personally have no problem with any approach that nets results, but would like to provide a fourth option for the list.

Pa11y is an accessibility testing tool used to determine a website’s WCAG compliance. The tool can scan based on the A, AA, and AAA standards and can be executed directly against a web page or an HTML file.

To jump into the weeds a bit, Pa11y utilizes Puppeteer, a Node library that provides an API for controlling headless Chrome. When Pa11y runs, Puppeteer creates a headless Chrome browser, navigates to the web page (or opens the HTML file), and the page is then scanned against a WCAG compliance rule-set.

The next logical question is “what rule-set is utilized to determine WCAG compliance?” By default, Pa11y uses HTML CodeSniffer, a client-side script that scans HTML source code and detects violations of a defined coding standard.

Pa11y will return the following for each violation found: a description of the error, the WCAG guideline violated, the path to the HTML element, and the content of the HTML element. Pa11y outputs to the command line by default, but with some configuration changes it can export either CSV or JSON.
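For reference, two illustrative invocations (assuming the Pa11y CLI is installed; the URL and file names are placeholders) – the first scans a live page against the AAA standard, and the second scans a saved HTML file and writes the results as JSON:

    pa11y --standard WCAG2AAA https://www.example.com/checkout
    pa11y --reporter json ./checkout.html > pa11y-results.json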

Additionally, Pa11y has the ability to run the Axe rule-set against the HTML spun up in the browser. This can provide a good common baseline if your developers are using Axe as well.

So now that we have covered Pa11y, the next step will be discussing the ways in which we can implement Pa11y to run automatically.

The first way is built into Pa11y: its actions functionality lets the headless browser navigate through the web page – clicking elements and filling fields identified by CSS selectors – via Puppeteer before the scan runs.


The second way is to utilize an existing test automation framework to complete the following steps:

  1. Use your test framework to navigate to the page to be scanned, scrape the rendered HTML, and save it to disk.
  2. From the command line, pass the saved HTML file to Pa11y (a minimal sketch follows below).
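In a Java-based Selenium suite, a minimal sketch of those two steps might look like the following. It is illustrative only – the URL, file name, and WCAG standard are placeholders, and it assumes the Pa11y CLI is available on the system path:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class Pa11yHtmlScan {
        public static void main(String[] args) throws Exception {
            WebDriver driver = new ChromeDriver();
            try {
                // Step 1: navigate with the existing framework and save the rendered HTML to disk
                driver.get("https://www.example.com/checkout");   // placeholder page
                Path htmlFile = Path.of("checkout.html");
                Files.writeString(htmlFile, driver.getPageSource());

                // Step 2: pass the saved HTML file to the Pa11y CLI
                Process scan = new ProcessBuilder("pa11y", "--standard", "WCAG2AA",
                        htmlFile.toAbsolutePath().toString())
                        .inheritIO()   // stream Pa11y's report to the console
                        .start();
                int exitCode = scan.waitFor();
                System.out.println("Pa11y exit code: " + exitCode);   // non-zero generally means issues were reported
            } finally {
                driver.quit();
            }
        }
    }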

The first option is beneficial if you’re building from the ground up, as no additional framework is needed. The second option is beneficial if you already have an automation framework and want to reuse its existing navigation to the various pages that require validation.

Whichever choice you make, the scripts should be built into a Continuous Integration (CI) job using your tool of choice (Jenkins, Bamboo, etc.). In addition to providing a way to execute your scripts continuously, the CI tool gives you a place to store the scan results – proof of your compliance effort if a lawsuit ever requires it.

One important note: Automated scans with Pa11y do not replace the need for manual validation as there are WCAG requirements that cannot be validated via an automated scanning tool.

In summary, every web development team should be validating WCAG compliance as part of its software development life cycle. WCAG compliance should also be included in the team’s definition of done for a given card. Lastly, to maximize success for an application under test, keep the results transparent and, where possible, let your preexisting automation framework do the heavy lifting.

Validating Site Analytics

Almost every modern company with an e-commerce presence makes decisions with the help of site data and analytics. The questions posed surrounding a user base can be almost endless. Which pages are people viewing? Which marketing campaigns and promotions are actually working? How much revenue is being generated, and where is it coming from?

In an environment where data is valuable and accessible, it’s important to take a step back and ask the question: is this data accurate? If the data Brad Pitt was basing decisions on to run a baseball organization in the movie Moneyball wasn’t correct, then it would’ve been an extremely short movie (if not somewhat comical). Ultimately, the analytics collected from our websites and applications are used to make important decisions for our organizations. When that data turns out to be inaccurate, it becomes worthless – or worse, it negatively impacts our business.

Throughout my professional career I have noticed that ensuring the integrity of this data is often put on the back burner within individual software teams. Sure, it’s one of the most important things to leadership, but in our day-to-day work we are often focused on more visible functionality rather than the one network call in the background that reports data and has nothing to do with our apps actually working. At the end of the day, if this data is valuable to our leaders and organization, then it should be valuable to us.

Let’s look at an imaginary business scenario. Say we have a site that sells kittens – all kinds of kitten breeds. Our Agile team has been working on the site for a long time and feels pretty good about our development pipelines and practices. The automated testing suite for the site is robust and well maintained, with lots of scripts and solid site coverage.

Then one day we find out that Billy from the business team has been doing user acceptance testing on our Adobe Analytics once every couple months. He’s got about 200 scripts that he manually goes through, and he does his best to look at all the really important functionality. But wait a second… we know that our site records data for about 100 unique user events. What’s more, there are about 200 additional fields of data that we are sending along with those events, and we are sending data on almost every page for almost every significant site interaction. This could easily translate into thousands of test cases! How could we possibly be confident in our data integrity when we are constantly making changes to these pages? How in the world is Billy okay with running through these scripts all the time? Is Billy a robot? Can we really trust Billy?

This new information seems like a potential quality gap to our team, and we wonder how we can go about automating away this effort. It definitely checks all the boxes for a good process to automate. It is manual, mundane, easily repeated, and will result in significant time savings. So what are our options? Our Selenium tests can hit the front end, but have no knowledge of the network calls behind the scenes. We know that there are 3rd party options, but we don’t have the budget to invest in a new tool. Luckily, there’s an open source tool that will hook up to our existing test suite and won’t be hard to implement.

The tool that we’re talking about is called BrowserUp Proxy (BUP), formerly known as BrowserMob Proxy. BUP works by setting up a local proxy that network traffic can be passed through. The proxy then captures all of the request and response data passing through it, and allows us to access and manipulate that data. It can do a lot for us, such as blacklisting/whitelisting URLs, simulating network conditions (e.g. high latency), and controlling DNS settings, but what we really care about is capturing that HTTP data.

BUP makes it relatively easy for us to include a proxy instance for our tests when we instantiate our Selenium driver. We simply have to start our proxy, create a Selenium Proxy object using our running proxy, and pass the Selenium Proxy object into our driver capabilities. Then we execute one command that tells the proxy to start recording a HAR of request and response data.

An example of this setup can be found on the BUP GitHub page at https://github.com/browserup/browserup-proxy.
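In a Java suite, a minimal sketch of that setup might look like the following. It is not the upstream example verbatim, and class or method names can differ slightly between BUP versions, so check the README for the version you pull in:

    import com.browserup.bup.BrowserUpProxy;
    import com.browserup.bup.BrowserUpProxyServer;
    import com.browserup.bup.client.ClientUtil;
    import org.openqa.selenium.Proxy;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.chrome.ChromeOptions;

    public class ProxySetup {
        public static void main(String[] args) {
            // Start the local proxy on any free port
            BrowserUpProxy proxy = new BrowserUpProxyServer();
            proxy.start(0);

            // Create a Selenium Proxy object from the running proxy
            Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);

            // Pass the Selenium Proxy into the driver capabilities
            ChromeOptions options = new ChromeOptions();
            options.setProxy(seleniumProxy);
            WebDriver driver = new ChromeDriver(options);

            // Tell the proxy to start recording a HAR of request/response data
            proxy.newHar("kitten-site");

            // ... run the normal Selenium test steps here ...

            driver.quit();
            proxy.stop();
        }
    }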

Since we will be working with HAR files, let’s talk about what those actually are. HAR stands for “HTTP Archive”. When we go into the Network tab in our browser’s Developer Tools and export that data, it’s also saved in this format. These files hold every request/response pair as an entry. Each entry contains data such as URLs, query string parameters, response codes, and timings.

HAR file example from google.com using Google’s HAR Analyzer
HAR entry details example

Now we can better visualize what we’re working with here. Assuming we’ve already collected our 200 regression scenarios from Billy the Robot, we should have a good jumping off point to start validating this data more thoroughly. The beauty of this approach is we can now hook these validations up to our existing tests. We already have plenty of code to navigate through the site, right? Now all we need is some additional code to perform some new validations.

Above we mentioned that our site is using Adobe Analytics. This service passes data from our site to the cloud using some interesting calls. Each Adobe call is a GET that passes its data via the query parameters. So in this case we need to find the call that we’re looking to validate, and then make sure that the correct data is included in it. To find that call, we can simply pick a unique identifier (e.g. signInClickEvent) and search through the request URLs until we find a match. It might be useful to use the following format to store our validation data:

Data stored in YML format
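As a rough illustration, a minimal sketch of that layout might look like the following – the event name, field names, and values are made up for the example, not taken from a real report suite:

    # one entry per analytics event we want to validate
    sign_in_click:
      identifier: signInClickEvent      # unique string used to find the right request
      fields:
        pageName: "account:sign-in"
        events: "event12"
        channel: "account"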

Storing data this way makes it simple and easy to work with. We have a descriptive name, we have an identifier to find the correct request, and we have a nice list of fields that we want to validate. We can allow our tests to simply ignore the fields that we’re not specifically looking to validate. Our degree of difficulty will increase somewhat if we are trying to validate entire request or response payloads, but this general format is still workable. So, to review, our general workflow for these types of validations:

  1. Use suite to instantiate Proxy
  2. Pass Proxy into Selenium driver
  3. Run Selenium scripts as normal and generate desired event(s)
  4. Load HTTP traffic from Proxy object
  5. Find correct call based on unique identifier
  6. Perform validation(s)
  7. (optional) Save HAR file for logs

Not too bad! We can assume that our kitten site probably already has a lot of our scenarios built out, but we just didn’t know it before. There’s a good chance that we can simply slap some validations onto the end of some existing scripts and they’ll be ready to go. We’ll soon be able to get those 200 UAT scripts built out in our suite and executing regularly, and Billy will have a little less work on his plate going forward (the psychopath).

In my opinion, it’s a very good idea to implement these validations into your test automation frameworks. The amount of value they provide compared with the amount of effort required (assuming you are already running Selenium scripts) makes this a smart functionality to implement. Building out these tests for my teams has contributed to finding a number of analytics defects that probably would’ve never been found otherwise and, as a result, has increased the quality of our site’s data.

A few notes:
– We don’t necessarily want to instantiate our Proxy with every Selenium test we run. The proxy will consume additional resources compared to running normal tests, but how much this affects your test box will vary depending on hardware. It is recommended that you use some sort of flag or environment variable to determine whether the Proxy should be instantiated.
– It can seem practical to make a separate testing suite to perform these validations, but with that approach you will have to maintain practically duplicate code in more than one place. It is easier to plug this into existing suites.
– BUP is a Java application that has its own directory and files. The easiest way to manage distribution of these files is to check them into version control in a project’s utility folder. There is no BUP installation required outside of having a valid Java version.
– I wanted to keep this post high level, but if you are using Ruby then there are useful gems to work with Browserup/Browsermob and HAR files (“browsermob-proxy” and “har”, respectively).

Happy testing!

Additional References:

Browserup Proxy
Browsermob Proxy Ruby gem
HAR Ruby gem

Welcome to Red Green Refactor

We officially welcome you to the start of Red Green Refactor, a technology blog about automation and DevOps. We are a group of passionate technologists who care about learning and sharing our knowledge. Information Technology is a huge field, and even though we’re a small part of it, we wanted another outlet to collaborate with the community.

Why Red Green Refactor?

Red Green Refactor is a term commonly used in Test Driven Development to support a test-first approach to software design. Kent Beck is generally credited with discovering, or “rediscovering”, the practice of Test Driven Development. The mantra for the practice is red-green-refactor, where the colors refer to the status of the test driving the development code.

The Red is writing a small piece of test code before the development code is implemented. The test should fail upon execution – a red failure. The Green is writing just enough development code to get the test to pass. The test should pass upon execution – a green pass. The Refactor is making small improvements to the development code without affecting its behavior. The quality of the code is improved according to team standards – addressing “code smells” (making the code readable and maintainable, removing duplication) or using simple design patterns. The point of the practice is to make the code more robust by catching mistakes early, with an eye on code quality from the beginning. Working in small batches also helps the practitioner think about the design of their program consistently.
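As a small illustration of the cycle in Java with JUnit – the Calculator class and test below are made up for the example:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class CalculatorTest {
        @Test
        void addsTwoNumbers() {
            // RED: written first, this fails (it will not even compile) until Calculator exists
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

    class Calculator {
        // GREEN: just enough code to make the test pass
        int add(int a, int b) {
            return a + b;
        }
        // REFACTOR: with the test green, rename, simplify, or remove duplication -
        // the behavior stays the same and the test keeps passing
    }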

“Refactoring is a controlled technique for improving the design of an existing codebase.”

Martin Fowler

The goal of Red Green Refactor is similar to the practice of refactoring – to make small-yet-cumulative positive changes – but here the changes are in learning, helping to educate the community about automation and DevOps. The act of publishing also encourages our team to refine our materials in preparation for a larger audience. Many of the writers on Red Green Refactor speak at conferences, professional groups, and the occasional webinar. The learning at Red Green Refactor will be bi-directional – to the readers and to the writers.

Who Are We?

The writers on Red Green Refactor come from varied backgrounds but all of us made our way into information technology, some purposefully and some accidentally. Our primary focus was on test automation, which has evolved into DevOps practices as we expanded our scope into operations. Occasionally we will invite external contributors to post on a subject of interest. We have a few invited writers lined up and ready to contribute.

“Automation Team” outing with some of Red-Green-Refactor authors

As for myself, I have a background in Physics & Biophysics, with over a decade spent in research science studying fluorescence spectroscopy and microscopy before joining IT. I worked as a requirements analyst, developer, and tester before joining the ranks of pointy-headed management. That doesn’t stop me from exploring new tech at home, though, or posting about it on this blog.

What Can You Expect From Red Green Refactor?

Technology

Some companies are in the .NET stack, some are Java shops, but everyone needs some form of automation. The result is many varied implementations of both test & task automation. Our team has supported almost all the application types under the sun (desktop, web, mobile, database, API/services, mainframe, etc.). We’ve also explored many tools, both open-source and commercial, at companies with ancient tech and at companies on the bleeding edge. Our posts will be driven by both prior experience and exploration of the unknown.

We’ll be exploring programming languages and tools in the automation space.  Readers can expect to learn about frameworks, cloud solutions, CI/CD, design patterns, code reviews, refactoring, metrics, implementation strategies, performance testing, etc. – it’s open ended.

Continuous Improvement

We aim to keep our readers informed about continuous improvement activities in the community. One of the great things about this field is that there is so much to learn and it’s ever-changing. It can be difficult at times with the firehose of information coming at you, since there are only so many hours in the day. We tend to divide responsibility among our group to perform “deep dives” into certain topics and then share that knowledge with a wider audience (for example: Docker, analytics, or Robotic Process Automation). In the same spirit, we plan to share information on Red Green Refactor about continuous improvement. Posts about continuous improvement will include trainings, conference recaps, professional groups, aggregated articles, podcasts, tech book summaries, career development, and even the occasional job posting.

Once again welcome to Red Green Refactor. Your feedback is always welcome.