
Posts

Who I Am and Where I Am January 2018

As of January 2018 I have resigned my position as "Senior Member of the Technical Staff, Quality Assurance" at Salesforce.org. I have more than twenty years of experience testing user interfaces and APIs across a wide variety of platforms. If you would like to contact me, my DMs on Twitter are open, or you can reach me by email at christopher dot mcmahon at gmail. I do not use Facebook, LinkedIn, or Skype.

I have been working remotely for more than ten years. I enjoy telecommuting; it suits me nicely. In the past decade I have lived all over the western United States, including some time in Hawaii.

Here are some points from my career that help tell the story of how I came to be here today:

In 1997 I started testing 911 telecom location services, life-critical software for police/fire/ambulance dispatching for most of the USA. I tested these systems through Y2K and beyond. We saved the world. Don't let anyone tell you otherwise.

In 2004 I was, as far as anyone knows, the first person ever t…
Recent posts

Test Heuristic: Managing Test Data

Managing test data is one of the most difficult parts of good testing practice, but it receives surprisingly little attention from the testing community. Here are a few approaches that I think are useful to consider, both for automated testing and for exploratory testing.

Antipattern: Do Not Create Test Data In The UI
But first I want to point out a mistake that many testers make. When you are ready to test a new feature, run a regression test, or demonstrate some aspect of the system, the data you need to work with should already be in place for you to use. Having to create test data from scratch before meaningful testing begins is an antipattern (and a waste of time!), whether for automated testing or for exploratory testing.

Create Test Data With An API
This is usually my favorite approach to managing test data. Considering this scenario in the context of an automated test, I usually will have a '…
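As a rough sketch of that approach, here is what creating a record through an API before driving the browser might look like. The endpoint, payload, and page details below are invented for illustration; they are not from any real system.

require 'net/http'
require 'json'
require 'watir'

# Create the record the UI test needs via the (hypothetical) API first...
uri = URI('https://test.example.com/api/accounts')
response = Net::HTTP.post(uri,
                          { name: 'Test Account' }.to_json,
                          'Content-Type' => 'application/json')
account_id = JSON.parse(response.body)['id']

# ...then drive the browser against data that already exists.
browser = Watir::Browser.new :chrome
browser.goto "https://test.example.com/accounts/#{account_id}"
# (exercise the UI against the pre-created record here)
browser.close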

Automated UI Test Heuristic: Sleep and Wait

Wait
In modern web applications, elements on the page may come into existence or disappear as a result of actions that the user takes. In order to test modern web applications in the browser, a test almost always has to wait for some condition or another to become true or false before it can proceed properly.

In general, there are two kinds of waiting in automated browser tests. In the language of Watir, these are "wait_until" and "wait_while". If you are not using Watir, you have probably already implemented similar methods in your own framework.

My experience is that wait_until type methods are more prevalent than wait_while type methods. Your test needs to wait until a field is available to fill in, then it needs to wait until the "Save" button is enabled after filling in the field. Your test needs to wait until a modal dialog appears in order to dismiss it, then it…
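To make the two kinds of waiting concrete, here is a small Watir-flavored sketch. The locators and URL are invented, but wait_until and wait_while are the real Watir element methods.

require 'watir'

browser = Watir::Browser.new :chrome
browser.goto 'https://test.example.com/records/new'

# wait_until: block until the field is present, then fill it in
browser.text_field(name: 'last_name').wait_until(&:present?).set 'McMahon'

# wait_until: block until the Save button is enabled, then click it
browser.button(id: 'save').wait_until(&:enabled?).click

# wait_while: block while the busy spinner is still on the page
browser.div(class: 'spinner').wait_while(&:present?)

browser.close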

Test Automation Heuristic: No Conditionals

A conditional statement is the use of if() (or its relatives) in code. Not using if() is a fairly well-known test design principle. I will explain why that is, but I am also going to point out some less well-known places where it is tempting to use if(), but where other flow constructs are better choices. My code looks like Ruby for convenience, but these principles are true in any language.
No Conditionals in Test Code
This is the most basic misuse of if() in test code:

if (x is true)
  test x
elsif (y is true)
  test y
else
  do whatever comes next

It is a perfectly legal thing to do, but consider: if this test passes, you have no way of knowing which branch of the statement your test followed. If "x" or "y" had some kind of problem, you would never know it, because your conditional statement allows your test to bypass those situations.

Far better is to test x and y separately: 
def test_x
  (bring about condition x)
  (do an x thing)
  (do another x thing)
end
…
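Filled in with minitest and some stub helpers (all of the names here are hypothetical), the separated version might look roughly like this:

require 'minitest/autorun'

class NoConditionalsExample < Minitest::Test
  # Each branch of the old conditional becomes its own test,
  # so neither path can be skipped silently.
  def test_x
    bring_about_condition_x
    assert_equal 'x result', do_an_x_thing
  end

  def test_y
    bring_about_condition_y
    assert_equal 'y result', do_a_y_thing
  end

  private

  # Stubbed out so the sketch runs as written; a real suite would
  # talk to the system under test here.
  def bring_about_condition_x
    @condition = 'x'
  end

  def bring_about_condition_y
    @condition = 'y'
  end

  def do_an_x_thing
    "#{@condition} result"
  end

  def do_a_y_thing
    "#{@condition} result"
  end
end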

UI Test Heuristic: Don't Repeat Your Paths

There is a principle in software development, "DRY", for "Don't Repeat Yourself", meaning that duplicate bits of code should be abstracted into methods so you only do one thing one way in one place. I have a similar guideline for automated UI tests, DRYP, for "Don't Repeat Your Paths".

I discuss DRYP in the context of UI test automation, but it applies to UI exploratory testing as well.  Following this guideline in the course of exploratory testing helps avoid a particular instance of the "mine field problem".
Antipattern: Repeating Paths to Test Business Logic
I took this example from the Cucumber documentation:

Scenario Outline: feeding a suckler cow
  Given the cow weighs "<weight>" kg
  When we calculate the feeding requirements
  Then the energy should be "<energy>" MJ
  And the protein should be "<protein>" kg

  Examples:
    | weight | energy | protein |
    | 450    | 26500  | 215     |
    | 500    | 295…

Test Automation Heuristic: Minimum Data

When designing automated UI tests, one thing I've learned to do over the years is to start by creating the most minimal valid records in the system. Doing this illuminates assumptions in various parts of the system that are likely not being caught in unit tests.

As I have written elsewhere, I make every effort to set up test data for UI tests by way of the system API. (Even if the "API" is raw SQL, it is still good practice.) That way when your browser hits the system to run the test, all the data it needs are right in place.

For example (and this is a mostly true example!) say that you have a record in your system for User, and the only required field on the User record is "Last Name". If you start designing your tests with a record having only "Last Name" on it, you will quickly uncover places in the system that may assume the existence of a field "First Name", or "Address", or "Email", or "Phone". For some b…
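As a sketch of what that minimal record might look like when created through an API (the endpoint and field names are invented for illustration):

require 'net/http'
require 'json'

# Create the most minimal valid User the system will accept:
# only a Last Name, nothing else.
uri = URI('https://test.example.com/api/users')
response = Net::HTTP.post(uri,
                          { last_name: 'Minimal' }.to_json,
                          'Content-Type' => 'application/json')
user_id = JSON.parse(response.body)['id']

# Any page or report that fails to render this user has probably
# assumed the presence of First Name, Address, Email, or Phone.
puts "created minimal user #{user_id}"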

Selenium Implementation Patterns

Recently on Twitter Sarah Mei and Marlena Compton started a conversation about "...projects that still use selenium..."  to which Marlena replied "My job writing selenium tests convinced me to do whatever it took to avoid it but still write tests..."  A number of people in the Selenium community commented (favorably) on the thread, but I liked Simon Stewart's reply where he said "The traditional testing pyramid has a sliver of space for e2e tests – where #selenium shines. Most of your tests should use something else."  I'd like to talk about the nature of that "sliver of space".

In particular, I want to address two areas: where software projects evolve to a point where investing in browser tests makes sense in the life of the project; and also where the architecture of particular software projects makes browser tests the most economical choice to achieve a desired level of test coverage.

What Is A Browser Test Project? 
Also recently Josh…