Tuesday, October 23, 2012
My Lessons Learned from the STPCon 2012 Test Competition
Tuesday, March 6, 2012
Two Testing Giants Part Ways
Part of what is driving this is a changing viewpoint for Cem. That is not uncommon; two other founding members have already broken off from CDT. Compounding things, there is personal animosity between Cem and James. While I have a great deal of respect for both James and Cem, and they both have great ideas, I, for one, do not really care that they are parting ways. And I do not think you should care either, because the CDT principles are an undeniable truth, not dependent on any one person.
I do agree with Cem’s general statement that there is more than one CDT school – more than one camp, each with its own values and ontology (to quote James). The problem is, I disagree with using the word “school”. It may be semantics, but my issue is that “school” conjures up images of an institution with a predefined curriculum from which you cannot graduate unless you pass the courses. That sounds like certification, with which I disagree. CDT is a paradigm. I also don’t like the implied argument, raised by James, that the ability to NOT follow something makes it an approach rather than a school; the implication is that you must always follow something, and that that something must be your identity. This makes CDT sound like dogma and religion. For the sake of this blog post, I will still reference “schools” as schools to maintain some discussion continuity.
To me, CDT is more fundamental to the testing community than anyone I have found has said. My belief is based on the premise that CDT is not so much a new way of doing things as an acknowledgement of reality. The CDT principles are more akin to truths than principles. Even if you do not actively use the principles to gain synergies, that does not negate them; it does not render them false. CDT is ingrained in the fabric of testing, regardless of which “school” a tester follows. The principles are an acknowledgement of “what is”, not of “what can be”. To see what I mean, just paraphrase a few enlightened guys: “I hold these truths to be self-evident…”
- Our job, as testers, is to extract data and synthesize it into information so that someone, a stakeholder, can make a decision. To do this we need tools. Those tools depend on several factors:
- I can only use tools I recognize – it is difficult to use something as a tool if you cannot see it in front of you
- Everyone has different tools – our tools are experience and knowledge based, accelerated - at times - but never replaced by technology
- Not every tool we know of is at our disposal – there may be great tools out there, ones we know about, but we simply might not be able to afford or capable of easily learning them
- Every tool has a function – no matter the tool, it has a purpose, a way of being used, and expectation of what using it will do
- Every tool can be used for any job, with varying degrees of success – you can use a butter knife as a regular screwdriver - sometimes
- Using tools, in both traditional and non-traditional ways, will create new tools for us – it is about learning. With all tools being knowledge based, any learning leads to new tools
Our challenge is to realize that, as testers, our job is to extract data about a creative process and synthesize it into useful information so that stakeholders can make informed decisions; to use the tools we have available to do the best job we can. That translation of data into useful information is, and should be, influenced by the creative process (the development process), our tools, knowledge, experience, and understandings. We are researchers, inspectors, philosophers, teachers, learners, synthesizers, but most of all, we are information brokers.
Friday, July 17, 2009
Counter to "No Spec=Waste of Time"
In this article the author rants, more than anything else, about defect issues in Elder Games' Asheron’s Call 2 (an MMO). The author's opening sentence reveals the emotional bias that colors his succeeding thoughts. Pretty powerful emotional stuff there. Distilling his gripes, I find he takes issue with:
1) Low Priority Bugs in the Bug Tracking System (noise)
2) Too many severe bugs released to production
3) Not enough time for the volume of work
The rest of his post is simply an attempt to assign causation for those issues. It is his causation analysis that I take issue with. A lack of formal written specification does not result in a higher volume of "noise" in the bug tracking system. Testers innately have other references, or oracles, by which to evaluate software: prior experience, technology experience, genre experience, etc. Of course any material, even email, can serve as a reference point. In systems where the oracles used by the testing team are nearly universal to both the team and the software - more common in simple systems - very little documentation is needed to have a successful test effort, with normal noise levels. In systems where oracles are not universal - more common in complex systems - you will see more noise in the bug tracking system. It isn't the lack of documentation that is the problem; it is the lack of universally accepted oracles. There are ways to achieve that without volumes of documents, such as collaboration or pairing with a developer in the group. In the end these are just ways to communicate and agree on a common oracle; after all, documentation is just a proxy for, or the remembered result of, a discussion. The failure to recognize a gap in common oracles may result in increased noise in the bug tracking system and a reduction in severe bugs being caught.
Another aspect of "noise" in the bug tracking system that wasn't addressed is the all too common problem of not monitoring what is being logged. All too often, testers log bugs and no one reviews them until some coordinated meeting. The span between meetings represents a window in which bugs are logged without being monitored (this does not occur everywhere, and is not equally severe where it does). There are two aspects I would like to address:
2) Number of bugs - Bugs logged into a bug tracking system form another oracle for a tester, see #1. Because of that, the value of those bugs tends to decrease as more bugs are added to the system. Faced with numerous bugs - especially uncategorized ones whose status never changes - testers tend to skim the list for cursory information instead of diving deep to understand what has and has not been covered and uncovered. Therefore, a growing volume of unmanaged bugs degrades the very value the repository was meant to transfer to testers in the first place.
Finally, the issue of too much work and not enough time is simply a reality of software development. Numerous estimation models, development methodologies, tools, etc. center around this specific issue of how to get more done with less money, time, and resources. Just because this condition exists doesn't mean there aren't ways to achieve a solid development and testing effort. At the end of the day, if you don't communicate and agree on common oracles, you will always incur more work to overcome this obstacle, because software development is always collaborative no matter how hard you try to fight it.
Monday, July 6, 2009
Automation Pitfalls Snarl an Automation Vendor
Thursday, July 2, 2009
Redefining Defects
I recently did a guest blog post about writing defect reports that include a value proposition, which is just another way of stating the impact of the defect. While writing the post, one thing occurred to me: the term defect is not exactly correct. Some are defects, some are misunderstandings, and some are suggestions. To add to the issue, Cem Kaner recognized the legal implications of using the term defect (slide 23).
So, what exactly, should defects be called: bugs, defects, issues, problems, erratum, glitches, noncompliance items, software performance reports (spr), events, etc…? To understand what they should be called, we need to understand what “they” are.
What is an observation? There are many dictionary definitions, but for our discussion, let us use Merriam-Webster’s third entry: “an act of recognizing and noting a fact or occurrence often involving measurement with instruments”. The Free Dictionary Online has a similar definition of observation (its second entry): “a detailed examination of something for analysis, diagnosis, or interpretation.” Isn’t that what we do as testers? We perform a series of actions to elicit a response and compare that response against a set of expectations. We then log anything we consider out of line with those expectations. I propose that is exactly what we do in the testing field; defects are really observations, and perhaps we should start calling them that. It eliminates negative-sounding words, eliminates legal concerns and, most of all, better matches our actions. Thoughts?
Thursday, June 18, 2009
Certification - A money maker, but not for you

Okay, so let's start with this statement from their advertisement:
"If your team is conducting ad hoc, informal tests with little guidance or planning, the quality of the end product can be severely jeopardized—negatively affecting your bottom line"
This nearly represents everything that is wrong with the ISTQB certification. The quality of the end product is not a function of how formal or planned your testing is.
But this next statement in their advertisement does represent everything that is wrong with the ISTQB certification:
"The best way to be certain that you are providing customers with quality software is to make sure your team of testers is certified."
Really? I thought it was by providing something they value, usually something built to meet their wants and/or needs. If all I need to do is put a gold embossed sticker on it, then so be it. Here you go.

"It is the ISTQB's role to support a single, universally accepted, international qualification scheme, aimed at software and system testing professionals, by providing the core syllabi and by setting guidelines for accreditation and examination for national boards."
So, their mission is to create a certification scheme (their word, not mine) and provide the materials and exams for that certification. It is nothing less than a money-making scheme, and it is right there in their mission statement. They do not care about quality or testing.
Friday, June 12, 2009
Interviewing Testing Individuals
Finding good testers is like prospecting for gold: both take patience and skill. Many times during an interview I find that most testers have memorized the basic definitions of the field, usually from Google. So, in addition to standard interview questions, I ask a scenario question in order to gauge their knowledge and understanding. This is akin to a tester testing a tester, if you will. I usually set up a simple scenario, such as: what kinds of tests would you run if you were asked to test a toaster? Sometimes I give them requirements, such as those below, and sometimes I don't.
- It is an electric two slice toaster
- It has a single push-down lever that controls both sides
- It automatically pops up, and shuts off when the desired darkness is achieved
- It has 3 darkness settings - light, medium, dark - that are triggered based on heat build-up in the toaster
Category 1: Performance based
- Testing two slices on each setting to see how long it takes for each setting
- Testing multiple slices per setting to arrive at an average time for each setting
- Repeating 1 and 2 for a single slice
- Testing the coloration consistency over multiple toasts on the same setting
- Testing that the toast time remains within tolerance over many toast cycles
Category 2: Aesthetics and User Interface
- Testing to ensure the settings and the labels marking them are in alignment
- Testing to ensure the lever works when pushed down
- Testing to ensure the lever does not catch either going up or down
- Testing that the finish is not tarnished or changed as a result of the heat produced when using the toaster
- Testing that the slots are big enough for standard-size bread
- Testing to ensure toast darkness matches darkness selected
- Testing a variety of bread: wheat, white, bagels, pop tarts
- Testing that the lever and toast pops up at the end of the test cycle
- Testing that the toast cannot be re-toasted until the toaster cools down
- Testing that the toaster still shuts off if the lever is stuck in the down position
- Testing to ensure the toaster is not damaged with no slices (or does not allow the lever to be pushed down)
- Testing with a single slice (tested twice, once in each slot)
- Testing to see that any single slice slot is consistent with the other as well as with two slices
- Testing to see that it works with both thin and thick bread types
- Testing to see that it works with bread that is oversized for the slots
- Testing to see what happens if you push the lever while it is unplugged
- Testing to see what happens to a variety of bread: wheat, white, bagels, pop tarts
- Testing to see what happens if you toast nothing.
- Testing to see what happens if you manually hold the lever down.
- Testing two slices in one slot with cheese in between (a sandwich)
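Many of the candidate answers above translate naturally into executable checks. Here is a minimal sketch of that idea, built around a hypothetical `Toaster` class whose API (`set_darkness`, `toast`) and timing values are entirely invented for illustration; nothing about this model comes from the original scenario beyond the two slots and three darkness settings:

```python
# Hypothetical toaster model, invented only to show how checklist items
# become automated checks. The API and timings are assumptions.
class Toaster:
    SETTINGS = {"light": 60, "medium": 90, "dark": 120}  # seconds (assumed)

    def __init__(self, slots=2):
        self.slots = slots
        self.darkness = "medium"
        self.lever_down = False

    def set_darkness(self, level):
        if level not in self.SETTINGS:
            raise ValueError(f"unknown setting: {level}")
        self.darkness = level

    def toast(self, slices):
        # Reject an empty toaster or too many slices (boundary behavior).
        if not 0 < slices <= self.slots:
            raise ValueError("slice count out of range")
        self.lever_down = True
        seconds = self.SETTINGS[self.darkness]
        self.lever_down = False  # lever pops up when the cycle ends
        return seconds

# Performance-style check: darker settings should take longer.
def check_darkness_ordering():
    t = Toaster()
    times = []
    for level in ("light", "medium", "dark"):
        t.set_darkness(level)
        times.append(t.toast(2))
    assert times == sorted(times), "toast time should increase with darkness"

# Boundary check: no slices, or more slices than slots, should be rejected.
def check_slice_bounds():
    t = Toaster()
    for bad in (0, 3):
        try:
            t.toast(bad)
            assert False, "expected the toaster to reject this slice count"
        except ValueError:
            pass

check_darkness_ordering()
check_slice_bounds()
```

The point is not the toy model itself but the mapping: each bullet in the lists above names an observable behavior, and a candidate who can state the expected observation precisely has effectively written the assertion already.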
This is not an end-all list of possible tests, but rather a compilation of answers I have received over the years. I have found that people usually cling to a couple of categories, which is usually indicative of their experience. I usually ask the candidate which tests are more important, or which tests can be combined into a single execution. To make things interesting, and to see if people really understand regression testing, I often expand the toaster to a four-slice toaster that has two setting knobs. This is where you start to see people's understanding of testing. Often they will say, “I’d repeat the same tests.” What I look for are testers who understand the new combinations, the expansion of the requirements, and which re-tests are important versus non-important, such as:
- Tests for independent settings between the slot sets
- Tests for a single slice in one of the two slots in each slot set
Again, this is not a definitive evaluative technique, but I have found it quite beneficial and accurate in categorizing a tester’s knowledge and experience.
One thing I haven't tried yet is bringing a physical object to an interview and asking the candidate to test it. The Easy Button from Staples might be a great option. Then I could observe their behavior instead of analyzing their thoughts.