I attended STPCon Fall 2012 in Miami, FL, where I was both a track speaker and a first-time conference attendee. One interesting aspect of the conference (there were others I’ll cover in another blog post) was the testing competition.
Matt Heusser, a principal consultant at Excelon Development, arranged and helped judge the competition. A blog post of his observations can be found at . I participated in the competition and have my own thoughts on it.
The rules were fairly simple. We had to work in teams of two to five. We had four websites we could choose to test and a bug-logging system to report our bugs. We also had access to stand-in product owners. We had two hours to test, log bugs, and put together a test status report.
My first observation is that it was a competition, but it wasn’t. The activity was billed as a “There can be only one” style of competition. However, and more importantly, it was about sharing more than competing. There were competitive aspects to the activity, but the real value was in sharing approaches, insights, and techniques with testers we had never met before. Not enough can be said about the value of peering. Through this exercise, I was able to share a tool, qTrace by QASymphony, for capturing the steps to recreate our defects during our exploratory testing sessions, as well as my approach to basic website security testing. Although we didn’t do pure peering, it is obvious how valuable the peering approach is.
Secondly, a simple planning discussion about testing approach, along with feedback during testing, is immensely valuable: it not only spawns brainstorming, it also helps reduce redundant testing. Through this exercise, my cohort, Brian Gerhardt, and I sat next to each other and showed each other the defects we found. We also questioned each other on things we had not tried but that were within our realm of coverage. For side-by-side pseudo-peering, this approach worked quite well for us and led to several bugs that we may not have looked for otherwise.
Lastly, I reflected on the competition, and there are several observations I have made, as well as one startling curiosity that I think is the most important of all. Every single team in the competition failed to do one single task that would have focused the effort, ensured we provided useful information, and removed any assumptions about what was important. We failed to ask the stakeholder anything of importance regarding what they wanted us to test. We did not ask if certain functions were more important than others, we did not ask about expected process flows, and we did not even ask what the business objective of the application was. Suffice it to say, we testers have a bad habit of just testing, without direction and often on assumption. I will be writing more on this topic in a future blog post.
What I did notice is that testers, when put under pressure, such as in a competition or when time-bound, will fall back on habits. We will apply those oracles that have served us well in the past and work with heuristics that make sense to us. Oftentimes this will produce results that appear to be great but, in the end, really lead to mediocre testing. If we had taken the time to ask questions, to understand the application and the business behind it, our focus would have been sharper on the areas of higher priority, and we would have had context for what we were doing and attempting to do.
I will keep this blog post short; the moral of the exercise is simply to ask questions, to seek understanding, and to gain context before you begin.
1 comment:
A few hasty thoughts…
“The rules were fairly simple. We had to work in teams of two to five. We had four websites we could choose to test and a bug-logging system to report our bugs. We also had access to stand-in product owners. We had two hours to test, log bugs, and put together a test status report.”
When I read that part, my first thought was, “Who cares about any of this? If no one cares, then we’re done here! But, if someone cares, then I need to talk to them and find out what they care about. Which of the 4 websites do they care about the most/least? Of those, what things about the websites do they care about the most/least? How it functions? How it performs? How it looks? Anything else, at all?” Once I’d gathered that info, I could focus my efforts on “stuff that is cared about” and produce something that was more meaningful to “the person who cared”.
Then I continued reading…
“Every single team in the competition failed to do one single task that would have focused the effort, ensured we provided useful information, and removed any assumptions about what was important. We failed to ask the stakeholder anything of importance regarding what they wanted us to test.”
“Aha!”, I thought, “Look how superior I am! *I* wouldn’t have missed that!”
Or would I…? I continued reading…
“What I did notice is that testers, when put under pressure, such as in a competition or when time-bound, will fall back on habits. We will apply those oracles that have served us well in the past and work with heuristics that make sense to us. Oftentimes this will produce results that appear to be great but, in the end, really lead to mediocre testing.”
A while back, I had a Skype session with James Bach. During the session, James tested _me_ by posing a test scenario. I was not prepared for this, I was in the wrong state of mind, and [a list of other weak excuses]. In summary, I was “under pressure” and felt “time-bound”. And what did I do? I fell back on old (mostly bad) habits. Rather than use my head, I lost it. I simply applied those oracles that served me well in the past, worked with heuristics that made sense to me…and produced a mediocre answer*. Something similar happened again a while later during a call with Matt Heusser.
So, in retrospect, it is actually very likely that I would have “missed that”, just as every single team in the competition did. Pressure would have “gotten to me”, just as it did everyone else. I’m not superior, at all.
It is very interesting that “pressure” does seem to have such an adverse effect on critical thinking.
*James might argue this. From his perspective, my answer was typical, satisfactory, and actually gave him useful info he didn’t have before.
-Damian Synadinos