
The Mother-in-Law’s Guide to Software Testing

This is part of my Mother-in-Law’s Guide to Technology. My Mother-in-Law is a very smart woman even if she isn’t a “computer person.” The goal of this series is to take some big and treacherous-sounding ideas and bring them down to earth.

Dearest Mother-in-Law,

Remember when you had kids and you told them to do stuff? And remember how they did what you told them, but that wasn’t always what you intended them to do? Well, that’s the way computer programs work.

Just like kids, computer programs will do what you tell them, but beyond that, all bets are off. They don’t do anything that directly contradicts what you said, but that doesn’t mean they’ll do what you want them to do.

That’s why you need to test kids (or computer programs). Once you give them instructions, you need to make sure that those instructions are understood and that the kids will carry out the spirit of the instructions, not just the letter of them. You also need to make sure that the kids know what to do in ambiguous situations and that they are well positioned to defend against any bad actors. You do this by watching them perform the task once or twice in a variety of circumstances. It’s much better to find any problems or ambiguities in your instructions before the kids are sent out into the wild.

Let’s look at two kinds of kids that are representative of common software errors. There are many others, but these are two of the big ones. Testing allows you to find these errors early in the process, before they cause big problems later on.

The Happy Path Kid

This kind of kid is naively optimistic. He also may not be very bright. When you tell him something, he always says, “Of course. Sure. I can do that!” But he almost certainly doesn’t understand all the nuances of what you want him to do. If the instructions don’t exactly match the situation he’s in, trouble quickly ensues. For example, imagine this situation:

You yell at him, “That was so dangerous. I told you never to cross the street when the light is red. What were you thinking! You almost killed yourself!”

Only to hear him say back, “But there was no red light. In fact, there was no light at all. So I crossed the street.”

When thinking about the Happy Path Kid, make sure you examine all of the ambiguous situations he might run into so that he (or the program) isn’t caught off guard by them.
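
For the more adventurous readers, here’s a tiny sketch in Python (a popular programming language) of what the Happy Path Kid looks like as a program. The names and details are entirely my own made-up illustration; the point is that the rule only mentions a red light, so a crossing with no light at all sails right through.

def may_cross(light_color):
    """Return True if the kid (or program) is allowed to cross."""
    # The instruction was "never cross when the light is red,"
    # so the naive program only checks for a red light.
    if light_color == "red":
        return False
    return True  # green, yellow, or no light at all: all waved through

# The happy path behaves exactly as intended...
assert may_cross("green") is True
assert may_cross("red") is False

# ...but testing the ambiguous case exposes the gap.
print(may_cross(None))  # prints True: the letter of the rule, not the spirit

Testing only the happy path (a green light, a red light) passes with flying colors; it’s the ambiguous case that shows where the instructions fall short.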

The Adversary

Then there’s the kid who intentionally tries to skirt the rules. He’s far more clever than you are. He doesn’t break any rules, but you’re continually surprised at how he can turn a little imprecision into a HUGE opportunity for misadventure. He’s the kind of kid who always wins at Scrabble because he’s looking up all the words on his phone. You assumed he wouldn’t do that, but you never SAID he couldn’t.

While the Happy Path Kid may encounter ambiguous situations and be led astray, the Adversary will look for weaknesses in your system and figure out how to exploit them.

Here are two stories from my time at Yale where I ran into real-life examples of these “problem children.”

The Happy Path Problem: Heads

This situation came up during my freshman year. It was a beautiful day, and we were sitting outside on the lawn under a tree for a seminar called “Perspectives on Science.” One of my classmates was giving a demonstration of how probability works: if you flip a coin a large number of times, about half of the flips will be heads and the other half tails.

“Can someone give me a quarter?” she asked the group. My friend Christine excitedly reached into her pocket and grabbed a quarter. The first person flipped the coin.

“Heads,” they said.

“Heads,” the second said.

“Heads.”

“Heads.”

“Heads.”

“Heads.”

“Heads.”

“Heads.”

Flipping a Coin from Rosencrantz and Guildenstern are Dead

By this time the quarter had made its way back to the leader, who examined the coin. “Who walks around with a two-headed quarter?!” she blurted out in surprise. As it turns out, Christine did. At the time, we were obsessed with Tom Stoppard’s Rosencrantz and Guildenstern Are Dead, which includes a scene where the laws of probability are broken by a coin that keeps landing on heads. So Christine got herself one.

The Adversary Problem: How to Cheat at Hangman

When I was at Yale, I took our most famous computer science class, CS223, with Professor Stanley Eisenstat. This is where I first learned about the adversary.

Professor Eisenstat Teaching How to Cheat at Hangman (pic via Twitter)

Professor Eisenstat wrote out a game of hangman on the board with 3 letters filled in:

_ill

We had 8 guesses to get this right. The class started shouting out different possibilities. We started very confidently with

bill

dill

fill

gill

hill

This went on for quite a while as we gradually lost that confidence. Then Professor Eisenstat told us that there were many different words this could be, far more than 8: Bill, Dill, Fill, Gill, Hill, Jill, Kill, Mill, Pill, Sill, Till, and Will. That’s 12 words if you’re counting. Here’s where the adversary comes in. Because Professor Eisenstat hadn’t committed to an answer beforehand, there was no way we could win the game. Every time we guessed a word, he simply removed it from the set of possible answers. He always had an option that we hadn’t chosen.
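
For the computer-inclined, here’s a rough Python sketch of the professor’s trick. The names are my own, and I’m assuming, just for illustration, that our eight guesses were the first eight words on the list; the real point is that the adversary never commits to an answer, so he just crosses each guess off the set of words that could still win.

# The twelve "_ill" words from the story.
candidates = {"bill", "dill", "fill", "gill", "hill", "jill",
              "kill", "mill", "pill", "sill", "till", "will"}

def answer_guess(guess, remaining):
    """The adversary's whole trick: never commit to an answer,
    just cross the guessed word off the set of possibilities."""
    remaining.discard(guess)
    return remaining

# Assume the class's 8 guesses were the first 8 words (hypothetical).
class_guesses = ["bill", "dill", "fill", "gill", "hill", "jill", "kill", "mill"]
for guess in class_guesses:
    candidates = answer_guess(guess, candidates)

print(len(candidates), "words are still possible:", sorted(candidates))
# 4 words are still possible: ['pill', 'sill', 'till', 'will']

Twelve possible words minus eight guesses leaves at least four ways to say “no,” so we could never pin him down.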

Summing Up

We are in an age when software is part of everything we do. We don’t have finance anymore but finance + computers. We don’t have cars anymore but cars + computers. With software such an integral part of our lives, it’s even more important to ensure that it does what we intend it to do.

Let’s take cars, for example. We are on the cusp of self-driving cars. The happy path for a self-driving car is pretty easy: driving on a highway on a sunny day. However, there are many unhappy surprises the car will encounter, like people walking across the street carrying a giant piece of plywood or human drivers falling asleep at the wheel. The adversary in the self-driving car example is even more interesting. You have people trying to take advantage of the polite self-driving car, and street signs can be modified so that cars incorrectly identify them.

Given how much we rely on computer code, let’s make sure that our software does exactly what we want it to do!