From the Pipeline v21.0

This entry is part 21 of 25 in the series From the Pipeline

The following is a regular feature where we share articles, podcasts, and webinars of interest from around the web.

Code Review Checklist

Michaela Greiler has put together a great list of concerns for any code reviewer, framed as a set of best practices. It’s one thing to require others to approve a pull request; it’s quite another to establish a set of standards for the team to enforce during those code reviews. She first provides a quick list of items to self-review before sending code out for review by others. She also includes a robust list of items broken down by category: implementation, logic errors & bugs, error handling & logging, usability & accessibility, testing & testability, dependencies, security & data privacy, performance, readability, and expert opinion. She finishes with some excellent advice on being respectful in code reviews as a professional courtesy. This is definitely an article to bookmark.

Bringing New Life into Edge Selenium Tools

Microsoft Edge has been rebuilt on Chromium, which means a new WebDriver implementation for Selenium. Michael Mintz took Edge for a test drive using Python to check its performance. He found that Edge automation behaves mostly the same as Chrome, with a few differences in how extensions are handled. Michael used SeleniumBase, a neat wrapper for Selenium, to set up his automation scripts. You can get EdgeDriver directly from Microsoft HERE and SeleniumBase HERE.

Improving Test Data Collection and Management

“There is much published about the data we generate to assess product quality. There is less discussion about the data testers generate for our own use that can help us improve our work—and even less is said about recommended practices for data collection. Test data collection, management, and use all call for upfront planning and ongoing maintenance. Here’s how testers can improve these practices.”

The Problem With “Broken Windows” Policing

This article goes off the beaten path for a typical post on Red Green Refactor, but it provides important historical context for the term “Broken Windows”, which is often applied to the state of a codebase carrying too much technical debt. In tech, the broken-windows advice is invoked to justify maintaining good practices such as code reviews, regular refactoring, following design patterns, and building extensible architecture. However, the term itself has been misapplied for many years in law enforcement policy. The article sheds light on the origins of terms we borrow in tech without necessarily knowing their history or how they have been applied elsewhere.

Tutorial on SRE-Driven Performance Engineering with Neotys and Dynatrace

This is a great instructional video on performance feedback. Andreas Grabner and Henrik Rexed demonstrate how to practice performance engineering using Neotys and Dynatrace. They build a delivery pipeline that automates the tasks around preparing, setting up, and analyzing test executions.

Series Navigation: << From the Pipeline v20.0 | From the Pipeline v22.0 >>
