I attended STPCon Fall 2012 in Miami, FL, both as a track speaker and a first-time conference attendee. One interesting aspect of the conference (there were others I’ll cover in another blog post) was the testing competition.
Matt Heusser, a principal consultant at Excelon Development, arranged and helped judge the competition. His own observations can be found on his blog.
I participated in the competition and have some thoughts of my own to share.
The rules were fairly simple. We had to work in teams of two to five, we had four websites we could choose to test, and we had a bug logging system for reporting our bugs. We also had access to stand-in product owners. We had two hours to test, log bugs, and put together a test status report.
My first observation is that it was a competition, but it wasn’t. The activity was billed as a “There can be only one” style of competition. However, and more importantly, it was about sharing more than competing. There were competitive aspects to the activity, but the real value was in sharing approaches, insights, and techniques with testers we had never met before. Not enough can be said about the value of peering. Through this exercise, I was able to share a tool, qTrace by QASymphony, for capturing steps to recreate defects during our exploratory testing sessions, as well as my approach to basic web site security testing. Although we didn’t do pure peering, it is obvious how valuable the peering approach is.
Secondly, a simple planning discussion about testing approach, along with feedback during testing, is immensely valuable: it not only spawns brainstorming, it helps reduce redundant testing. Through this exercise, my cohort, Brian Gerhardt, and I sat next to each other and showed each other the defects we found. We also questioned each other on things we had not tried that were within our realm of coverage. As side-by-side pseudo peering, this approach worked quite well for us and led to several bugs that we may not have looked for otherwise.
Lastly, reflecting on the competition, I have made several observations, as well as one startling realization that I think is most important of all. Every single team in the competition failed to do one simple task that would have focused the effort, ensured we provided useful information, and removed any assumptions about what was important. We failed to ask the stakeholders anything of importance regarding what they wanted us to test. We did not ask if certain functions were more important than others, we did not ask about expected process flows, we did not even ask what the business objective of the application was. Suffice it to say, we testers have a bad habit of just testing, without direction and often on assumption. I will post more on this topic in a future blog post.
What I did notice is that testers, when put under pressure, such as in a competition or when time-bound, will fall back on habits. We will apply the oracles that have served us well in the past and work with heuristics that make sense to us. Oftentimes this will produce results that appear to be great but in the end lead to mediocre testing. If we had taken the time to ask questions, to understand the application and the business behind it, our focus would have been sharper on areas of higher priority, and we would have had context for what we were doing and attempting to do.
I will keep this blog post short. The moral of the exercise is simple: ask questions, seek understanding, and gain context before you begin.