Breaking Things
01 August 2010
So you like to ‘break’ software
Picture the scene. You’re interviewing a tester.
The interview has gone great so far. The candidate appears personable and bright, and has relevant experience for the role. You’ve set some traps in your interview questions and they’ve skilfully negotiated each one, deftly turning potential roadblocks into opportunities for you to learn about each other. You’ve spent a little time pairing on some software, alternating between ‘driving’ the testing and ‘navigating’, commentating as you go. Now for one final question - a formality, really…
“What do you most like about testing?” you ask.
The tester turns to you, a thin smile forming on their lips. You’re convinced that you see a glint in their eyes as they answer:
“I like to break things.”
The myth
First and most importantly, a philosophical question: Is it possible for a tester to ‘break’ software?
Think about this question for a minute. I would argue that a tester can no more ‘break’ software than they can assure its quality. If you don’t have access to change the source code[1], then it must have been ‘broken’ when you received it.
A tester can use software in a way that demonstrates a potential problem for some person, under some conditions, at some time.
‘Broken’ hides information
So if the tester didn’t break the software, does this mean that the software is ‘broken’ at all? Is there any advantage to describing it as such?
In this case, I contend that using language like ‘broken’ gives us no advantage when describing information revealed during testing. Consider the following example statement about a potential problem:
bq. “I was trying to run the ‘Widget quarterly sales report’ from the dashboard, but the ‘widget selection criteria’ dialog is broken!”
We can infer that the tester was looking at the ‘dashboard’ screen of the application and probably entering data into the form that allows the ‘Widget quarterly sales report’ to be customised before it runs. We can also infer that the tester was attempting to limit the results of the report to a particular widget or widgets, using the ‘widget selection criteria dialog’.
Unfortunately, from this statement, we can’t infer much more. Is the ‘widget selection criteria dialog’ not accepting input? Is it accepting input but not returning the expected results in the report? What are the expected results? Is the system crashing when we select items from the dialog? Is the system crashing when we run the report with this set of widgets? … Why should we need to ask so many extra questions?
The answer lies partly in the use of the word ‘broken’ in our problem statement[2]. Think about how much easier it would be if we replaced ‘broken’ with this:
bq. “I was trying to run the ‘Widget quarterly sales report’ from the dashboard, but every time I try to select a widget from the ‘widget selection criteria’ dialog, an error message popup is displayed saying ‘invalid widget selected’.”
bq. “The widgets were available in the list, so as a report user I felt confused because I assumed they were available for selection.”
This is significantly better for anyone interested in the problem. Programmers can pick up this description and immediately attempt to reproduce the problem. They may even be able to look at the associated unit tests and production code that powers the widget selection dialog box and see what’s going wrong.
Project stakeholders now have an idea of the impact of the problem. They know who it will affect (report users), what the experience will be (a confusing error message) and how frequently it will occur (whenever a user attempts something fairly common with the reporting interface). Already the stakeholder is in a good position to make a judgement about the priority of changing the behaviour.
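For teams that like to turn a report like this into an executable check, the improved description maps almost directly onto a reproduction test. The sketch below is illustrative only: @WidgetSelectionDialog@, @available_widgets@, @select@ and @InvalidWidgetError@ are hypothetical names standing in for whatever the real widget-selection code exposes.

bc.. import unittest

# Hypothetical imports: the real module, class and exception names would
# come from the application under test, not from this post.
from reporting.dashboard import WidgetSelectionDialog, InvalidWidgetError


class WidgetSelectionCriteriaTest(unittest.TestCase):
    """Reproduces the reported problem: a widget that appears in the
    selection list should be selectable without an error."""

    def test_listed_widget_can_be_selected(self):
        dialog = WidgetSelectionDialog()
        listed = dialog.available_widgets()
        self.assertTrue(listed, "expected at least one widget in the list")

        # The report says selecting any listed widget pops up
        # 'invalid widget selected'; a fix should make this pass.
        try:
            dialog.select(listed[0])
        except InvalidWidgetError:
            self.fail("selecting a listed widget raised 'invalid widget selected'")


if __name__ == "__main__":
    unittest.main()

p. Notice how every step in the test comes straight from the wording of the report; a report built around ‘broken’ gives a programmer nothing concrete to reproduce or automate against.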
‘Broken’ implies blame
Not only do we provide more useful information when we avoid the word ‘broken’, but we also potentially avoid taking an adversarial stance in our bug reports.
The word ‘broken’ implies that someone is to blame. Who broke it? We established earlier that it wasn’t the tester who broke the software (they couldn’t, even if they wanted to). So it must have been the programmer.
In my experience, placing blame, whether implicitly or explicitly, is not a helpful thing in a software team. It’s not constructive, it won’t get the problem fixed sooner or better, and it certainly won’t endear testers to programmers.
In fact, I’ve seen projects and teams completely derailed by blame[3]. It’s a powerful emotional device and can quickly undermine a positive team culture.
‘Breaking things’ is a narrow focus
What does it mean to be a tester driven by the need to destroy things? One outcome could be that this kind of tester’s favourite (and default) strategy involves using the software in ways that are dramatically different from the intention of the software designers.
Is this necessarily a bad thing? Certain kinds of perfectly reasonable tests involve purposefully misusing software to discover information about its security and its ability to be penetrated by attackers or malicious users. There is merit in this kind of testing, if it matters to people who matter. But is this information always important? Is it important enough to neglect other types of testing? As with almost everything in software: it depends.
If, as a tester, you’re always focused on a certain kind of test, or always using the software with a ‘break it’ mindset, then are you under-representing other kinds of users? What about the users who are just trying to get something done? How will the information you’re revealing about misuse of the software impact those regular users?
The reward
Why do testers claim to love breaking software? What’s the real motivation?
Is it the excitement of exploring the depths of a product and finding out how to cause a failure? Or is it the fact that it can be caused to fail and you were the one to find out? Do you just enjoy seeing something fail? Or is it something else?
Assuming that the reward is a ‘feel good factor’ associated with ‘liking to break software’, then at what point do you get your reward?
Is it when you find the failure? Is it when you share your findings with someone else? Is it when someone else acknowledges your destructive prowess?
Perhaps the reward lies on the positive side of the failure. Perhaps you feel good when the bug gets fixed? Or when the product ships without the bug and the customers are happy?
If so, how much of the reward is actually down to the ‘breaking’? What if it’s a small proportion? Does that mean you like some other aspect of testing more?
So what?
I find it interesting that people highlight the destructive aspect of testing as being one of the things they love about the job.
I wonder whether, if they were honest, they would say that what they actually love is helping their team create a great product: asking questions, exploring and uncovering important value-threatening information. That’s certainly a big part of my motivation.
I’ve made some statements about the potential pitfalls of over-focusing on ‘breaking’ software and about how using ‘broken’ as a language device may be detrimental to a software team. Now I’m genuinely interested to hear from the self-proclaimed destructive testers.
Why do you love breaking software?
fn1. Modifying the software’s binary representation notwithstanding, of course. How are your hex editing skills these days?
fn2. OK. The example is contrived, but I have seen and heard worse bug reports from testers and I suspect you have too.
fn3. I’ve seen blame used more regularly, and to more destructive effect, in larger organisations - in almost every team and at every level. I would be interested to hear the experience of others on this topic.