Friday, July 17, 2009

Counter to "No Spec=Waste of Time"

What’s a QA team without a spec? A goddamned nuisance and a waste of time, that’s what.

In this article the author rants mostly about defect issues on Elder Games' Asheron's Call 2 (an MMO). His opening sentence signals the emotional bias that colors the thoughts that follow. Pretty powerful emotional stuff there. Distilling his gripes, I find he takes issue with:

1) Low Priority Bugs in the Bug Tracking System (noise)
2) Too many severe bugs released to production
3) Not enough time for the volume of work

The rest of his post is simply an attempt at assigning causation for those issues, and it is his causation analysis that I take issue with. A lack of a formal written specification does not result in a higher volume of "noise" in the bug tracking system. Testers innately have other references, or oracles, by which to evaluate software: prior experience, technology experience, genre experience, and so on. Any material, even email, can serve as a reference point. In systems where the oracles used by the testing team are nearly universal to both the team and the software (more common in simple systems), very little documentation is needed to have a successful test effort with normal noise levels. In systems where oracles are not universal (more common in complex systems), you will see more noise in the bug tracking system. It isn't the lack of documentation that is the problem; it is the lack of universally accepted oracles. There are ways to achieve those without volumes of documents, such as collaboration or pair testing with a developer in the group. In the end these are just ways to communicate and agree on a common oracle; after all, documentation is just a proxy for, or the remembered result of, a discussion. Failing to recognize the gap in common oracles may result in increased noise in the bug tracking system and a reduction in severe bugs being caught.

Another aspect of "noise" in the bug tracking system that wasn't addressed is the all too common problem of not monitoring what is being logged. All too often, testers log bugs that don't get reviewed by anyone until some coordinated meeting. The span between those meetings is a window where bugs are logged with no monitoring at all (this isn't universal, nor always severe, but it is common). There are two aspects I would like to address:

1) Approval signals importance - Testers log bugs they find important, again based on their own oracles; if it weren't important to them, they would never see it as a bug. Importance must be defined in terms of a value judgment, not severity. Approval is defined in terms of consensus acceptance, not a formal status assignment. Agreement by any stakeholder that a logged item is a bug signals to the team, "Hey, go ahead and find more of those because we like them." I believe this because it is societal behavior to fear rejection and pursue acceptance; approved bugs breed more similar bugs. If a strong severity/priority system is NOT used, testers can be led to believe some types of bugs are more significant or valuable than others. Without that correction, and in combination with approval, it is easy to establish conditions that magnify noise in the system.

2) Number of bugs - Bugs logged into a bug tracking system form another oracle for a tester (see #1). Because of that, the value of those bugs tends to decrease as more bugs are added to the system. Faced with numerous bugs, especially uncategorized ones whose status never changes, testers tend to skim the list for cursory information instead of diving deep to understand what has and has not been covered and uncovered. A growing volume of unmanaged bugs therefore degrades the very value the repository was meant to transfer to the testers in the first place.

Finally, the issue of too much work and not enough time is simply a reality of software development. There are numerous estimation models, development methodologies, tools, and so on that all center on this specific issue: how to get more done with less money, time, and resources. Just because the condition exists doesn't mean there aren't ways to achieve a solid development and testing effort. At the end of the day, if you don't communicate and agree on common oracles, you will always incur more work to overcome this obstacle, because software development is always collaborative no matter how hard you try to fight it.

Monday, July 6, 2009

Automation Pitfalls Snarl an Automation Vendor

I like automation testing tools. I like the challenge, I like the hobbyist programmer they bring out in me, and I like to use them, but only when they are useful. Automated tests are not sentient. They are not capable of subjective evaluations, only binary ones. Not only that, they are only able to monitor what they are told to monitor. These facts are often overlooked in the name of faster, better, higher quality testing. See the advertising quote below:

Benefits include ease of training (any member of your team can become proficient in test automation within a few days), staffing flexibility, rapid generation of automated test suites, quick maintenance of test suites, automated generation of test case documentation, faster time to market, lower total cost of quality, greater test coverage, and higher software quality.

This is the advertising from a tool I ran across today called SmarteScript, by SmarteSoft. I saw their ad and downloaded a demo. The first thing I did was attempt to run through the tutorial to become acquainted with how the tool operates. Like most commercial for-profit tools, they have a rudimentary demo site that is designed to show off their product's bells and whistles. The basic tutorial was to learn the web-based objects (textboxes, buttons, etc...), and then, with their nifty Excel-like grid, add in some values for the appropriate objects. Pretty simple. I used their tool to learn a dropdown box on their demo site, per their instructions. I stress the dropdown box itself, because it is possible to learn the table cell around the dropdown, the dropdown arrow graphic, or the text-only portion of the dropdown list without learning the dropdown list as required. They had specific instructions on how to do this, as well as how to tell if you screwed up. However, when going through the tutorial with IE8, I noticed that the tool appeared to learn the dropdown only as static text (i.e., a label). I tried and tried to get it to recognize the dropdown as a dropdown, but it would not. I tried playing the script back just to see if the tool would overcome its own learning. But alas, it failed. I sent my observation to the support team. To their credit, I received an email back the same day stating they were able to reproduce the problem and their development team would look into it further.
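
For contrast, here is a minimal sketch of the two ways a tool can "learn" that element. It uses Selenium against an invented page (the URL and element ID are placeholders, not SmarteScript or their demo site), since I can only illustrate the general pitfall:

    # Hypothetical illustration (Selenium, not SmarteScript); the element ID
    # and URL are invented. If a tool records a dropdown as static text,
    # playback can only re-read the label; it never exercises the control.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import Select

    driver = webdriver.Chrome()
    driver.get("http://example.com/demo")

    # "Learned" as static text: this check passes as long as the label
    # renders, even if the dropdown itself is completely broken.
    element = driver.find_element(By.ID, "state-dropdown")
    assert "Alabama" in element.text

    # "Learned" as a dropdown: this actually drives the control, and it
    # fails loudly if the element is not really a <select>.
    Select(element).select_by_visible_text("Georgia")

    driver.quit()

My playback failure amounted to the tool being stuck on the first form of the check when the tutorial promised the second.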

I find it ironic that a vendor selling this as a way to achieve faster time to market, lower total cost of quality, greater test coverage, and higher software quality ended up releasing a tool that is the contrary of all that. I would be remiss if I didn't point out that I doubt I can become proficient with a tool in a few days when it doesn't work on the vendor's own demo site; so strike down another claim. Most importantly, I think this points out that test automation is not a silver bullet... nor a cheap one.

Thursday, July 2, 2009

Redefining Defects

I recently did a guest blog post about writing defect reports that include a value proposition, which is just another way of stating the impact of the defect. While writing the post, one thing occurred to me: the term defect is not exactly correct. Some are defects, some are misunderstandings, and some are suggestions. To add to the issue, Cem Kaner recognized the legal implications of using the term defect (slide 23).

So what, exactly, should defects be called: bugs, defects, issues, problems, errata, glitches, noncompliance items, software performance reports (SPRs), events, etc.? To understand what they should be called, we need to understand what "they" are.

The primary purpose of defect reports is to point out what we, as testers, believe to be notable information about the application. Typically, a defect report is about something that doesn't work the way we think it should, but it can also be a suggested improvement. As always, there is an explicit or implicit reference against which the defect is judged and evaluated. For software, it can be a requirements document, UI standards, historical experience, etc.

What is an observation? There are many dictionary definitions, but for our discussion let us use Merriam-Webster's third entry: "an act of recognizing and noting a fact or occurrence often involving measurement with instruments". The Free Dictionary Online has a similar definition of observation (its second) as "a detailed examination of something for analysis, diagnosis, or interpretation." Isn't that what we do as testers? We perform a series of actions to elicit a response, compare that response against a set of expectations, and log anything we consider out of line with those expectations. That is exactly what we do in the testing field. Defects are really observations. Calling them that eliminates negative-sounding words, eliminates the legal concerns and, most of all, better matches our actions. I propose we call them observations. Thoughts?

Thursday, June 18, 2009

Certification - A money maker, but not for you

I received this advertisement via email. It is funny, in its misconceptions, but it is sadder still that so many people buy into it (and by buy, I mean spend $$$).

Okay, so let's start with the second line of the advertisement:

"If your team is conducting ad hoc, informal tests with little guidance or planning, the quality of the end product can be severely jeopardized—negatively affecting your bottom line"

This nearly represents everything that is wrong with the ISTQB certification. The quality of the end product is not jeopardized by informal testing, a lack of test planning, or a lack of guidance. In reality, quality is a relationship, the simplest form of which is the value of the product to the stakeholders (those who matter).

But this next statement in their advertisement does represent everything that is wrong with the ISTQB certification:

"The best way to be certain that you are providing customers with quality software is to make sure your team of testers is certified."

Really? I thought it was by providing something they value, usually something built to meet their wants and/or needs. If all I need to do is put a gold embossed sticker on it, then so be it. Here you go.

All your software is now of high quality. Oh, by the way, ISTQB, that will be $1,995 + $250: $1,995 for training so that you know how to use the sticker, and $250 for the right to use the sticker (let's throw in $9.95 for shipping and handling as well). So, they don't have a clue about the testing industry, nor about quality. But hey, let's see what their mission statement says. Maybe that will shed some light on what they are trying to do.

It is the ISTQB's role to support a single, universally accepted, international qualification scheme, aimed at software and system testing professionals, by providing the core syllabi and by setting guidelines for accreditation and examination for national boards

So, their mission is to create a certification scheme (their word, not mine) and to provide the materials and exams for that certification. It is nothing less than a money-making scheme, and it is right there in their mission statement. They do not care about quality or testing.

I do not accept the ISTQB as a part of any community of software testers. The ISTQB is a business pursuing its own agenda. I know that may sound a bit harsh, but consider this: as Michael Bolton pointed out, in October 2008 the ISTQB announced 100,000 certified testers. Each of those testers had to pay a fee to take the exam; in the U.S. this fee is $250 (entry level), and I think it is $100 in India. That means they have made somewhere between $10 million and $25 million in revenue on certifications alone in the past five years. So far, they are succeeding at their mission statement.
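
The back-of-the-envelope math behind that range, treating every certification as one entry-level exam at the two fee points I mentioned (an assumption on my part):

    certified = 100_000            # ISTQB's announced count, October 2008
    india_fee, us_fee = 100, 250   # approximate entry-level exam fees, USD

    low = certified * india_fee    # if every tester paid the India rate
    high = certified * us_fee      # if every tester paid the U.S. rate
    print(f"${low:,} to ${high:,}")  # $10,000,000 to $25,000,000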

Friday, June 12, 2009

Interviewing Testing Individuals

Finding good testers is like prospecting for gold: both take patience and skill. Many times during an interview I find that the candidate has memorized the basic definitions of the field, usually from Google. So, in addition to standard interview questions, I ask a scenario question in order to gauge their knowledge and understanding; a tester testing a tester, if you will. I usually set up a simple scenario, such as: what kinds of tests would you run if you were asked to test a toaster? Sometimes I give them requirements, such as those below, and sometimes I don't.

  1. It is an electric two-slice toaster
  2. It has a single push-down lever that controls both sides
  3. It automatically pops up, and shuts off, when the desired darkness is achieved
  4. It has 3 darkness settings (light, medium, dark) that are triggered based on heat buildup in the toaster

I like this scenario because it has proven, for me, to be a pretty good indicator of the type of testing a candidate has been exposed to, as well as an indicator of their ability to decompose requirements (direct and/or inferred).


Since I do not inform candidates of this question prior to an interview, I can surmise, based on their responses, what type of testing they favor. Below are some sample responses I tend to get and how I would categorize each.

Category 1: Performance based

  1. Testing two slices on each setting to see how long it takes for each setting
  2. Testing multiple slices per setting to arrive at an average time for each setting
  3. Repeating 1 and 2 for a single slice
  4. Testing the coloration consistency over multiple toasts on the same setting
  5. Testing that the toast time remains within tolerance over lots of toast cycles

Category 2: Aesthetics and User Interface

  1. Testing to ensure the settings, and the labels marking them, are in alignment
  2. Testing to ensure the lever works when pushed down
  3. Testing to ensure the lever does not catch either going up or down
  4. Testing that the finish is not tarnished or changed as a result of the heat produced when using the toaster
  5. Testing that the slots are big enough for standard size toast

Category 3: Functional - General

  1. Testing to ensure toast darkness matches darkness selected
  2. Testing a variety of bread: wheat, white, bagels, pop tarts
  3. Testing that the lever and toast pops up at the end of the test cycle
  4. Testing that the toast cannot be re-toasted until the toaster cools down
  5. Testing that the toaster still shuts off if the lever is stuck in the down position

Category 4: Functional - Boundary

  1. Testing to ensure the toaster is not damaged with no slices (or does not allow the lever to be pushed down)
  2. Testing with a single slice (tested twice, once for each slice)
  3. Testing to see that any single slice slot is consistent with the other as well as with two slices
  4. Testing to see that it works with both thin and thick bread types
  5. Testing to see that it works with bread that is oversized for the slots

Category 5: Exploratory

  1. Testing to see what happens if you push the lever while it is unplugged
  2. Testing to see what happens to a variety of bread: wheat, white, bagels, pop tarts
  3. Testing to see what happens if you toast nothing.
  4. Testing to see what happens if you manually hold the lever down.
  5. Testing two slices in one slot with cheese in between (a toaster sandwich)

This is not an end-all list of possible tests, but rather a compilation of answers I have received over the years. I have found that people usually cling to a couple of categories, which is usually indicative of their experience. I then ask the candidate which tests are more important, or which tests can be combined into a single execution. To make things interesting, and to see if people really understand regression testing, I often expand the toaster to a four-slice toaster that has two setting knobs. This is where you start to see people's understanding of testing. Often they will say, "I'd repeat the same tests." What I look for are testers who recognize the new combinations, the expansion of the requirements, and which re-tests matter versus which don't, such as (with the resulting combination explosion sketched in code after this list):

  1. Tests for independent settings between the slot sets
  2. Tests for a single slice in one of the two slots in each slot set
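
Here is that combination-explosion sketch. The model is mine and purely illustrative; no candidate is expected to produce code:

    from itertools import product

    # Hypothetical model of the four-slice toaster: two slot sets,
    # each with its own darkness knob.
    settings = ["light", "medium", "dark"]
    slot_loads = [0, 1, 2]  # empty, one slice, or both slots in a set

    cases = [
        {"left_knob": lk, "right_knob": rk, "left_slices": ls, "right_slices": rs}
        for lk, rk, ls, rs in product(settings, settings, slot_loads, slot_loads)
    ]
    print(len(cases))  # 81 raw combinations for what was a 9-combination toaster

    # The interesting subset from the list above: independent knob settings,
    # and a single slice in one of the two slots of a set.
    interesting = [
        c for c in cases
        if c["left_knob"] != c["right_knob"]
        or 1 in (c["left_slices"], c["right_slices"])
    ]
    print(len(interesting))

"I'd repeat the same tests" leaves most of that space unexamined, which is exactly the gap I am probing for.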

Again, this is not a definitive evaluative technique for testers, but I have found that it is quite beneficial and accurate in categorizing a tester’s knowledge and experience.


One thing I haven't tried yet is bringing a physical object to an interview and asking the candidate to test it. The Easy Button from Staples might be a great option. Then I could observe their behavior instead of analyzing their thoughts.

Friday, May 29, 2009

Software is used by people, not robots

There seems to be a continuous disconnect between the whole notion of software development and real-world use. Software is, and forever will be, a tool used by people. As a community of testers and developers, we cannot ignore this; we should be developing and testing with it in mind. Recently, I have encountered two situations where sterile development and testing of technology and business processes led to an unsatisfactory customer experience. These may seem more business-related than technology-related, but I use them to illustrate my point about operating in sterile environments, with sterile requirements, development, and testing, for anything; not just software.

First, I recently received a new debit card in the mail. Like most things in snail mail, it came with an automated form letter. The letter stated I could activate my debit card in one of two ways, and then proceeded to instruct me on how to activate the card online. Nowhere on the form was the second method of activation listed. I did later learn, via the website for the online activation method, that I could have called to activate my card; but that would have required going to the website to get the phone number. I would like to say someone with common sense should have caught this issue, but there is nothing common about sense. In this case, development of the form letter, creation of the card, and mailing it out all worked as designed. It is just that the design was flawed and no one questioned it.

The second incident involved getting an annual fee waived on an old credit card account and closing the account. This is a card we had forgotten about and have not used in a very long time. (The fee was just charged to the card and we received the statement a few days later; this will be important.) As customers, we have become conditioned to asking for things in a certain order. In this case, my wife called the credit card company and asked for the fee to be waived. The representative stated that they couldn't waive the fee and proceeded to enter into their typical phone script. She then told the representative to just close the account. This, of course, resulted in a different script. The irony is that this new script included a notification that if you were charged a membership fee in the past 30 days it would be refunded. The net result is that we closed the credit card and avoided the fee, which was the main goal of calling anyway.

As software becomes more pervasive in how and why we do things, we must remember the "we" part of the equation. It is not enough to have well-written requirements or low-defect software; a changing society requires that we account for the human factor as well.

Thursday, April 9, 2009

Quality is meeting requirements...

I was at a client site today and saw one of those yellow diamond signs hanging from a suction cup; but this one didn't say "Baby on board", it said "Quality is meeting requirements". This got me thinking: we all talk about quality, but what does it mean in the real world? Here are some quotes from some great thinkers:

An essential requirement of… products is that they meet the needs of those members of society who will actually use them.  This concept of fitness for use is universal…The popular term for fitness for use is quality, and our basic definition becomes quality means fitness for use. - J. M. Juran

What is quality? What would someone mean by the quality of a shoe? Let us suppose that it is a man's shoe that he is asking about. Does he mean by good quality that it wears a long time? Or that it takes a shine well? That it feels comfortable? That it is waterproof? That the price is right in consideration of whatever he considers quality? Put another way, what quality-characteristics are important to the customer? - Deming

Quality is conformance to requirements - P. Crosby

Quality is the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs - ISO Definition of Quality

Narrowly interpreted, quality means quality of product...Broadly interpreted, quality means quality of work, quality of service, quality of information, quality of process, quality of division, quality of people, including workers, engineers, managers, and executives, quality of system, quality of company, quality of objectives, etc.  To control quality in its every manifestation is our basic approach - Ishikawa

What each of these quotes attempts to do is quantify quality as a pass/fail binary system across a range of measures. In the real world this manifests, most notably, in the form of a metric that attempts to convey an analog process. What is missing from each of these statements is where the value to the customer lies, and how it is measured. If I buy a sofa for $600, then intrinsically that sofa is worth $600 to me. If it lasts 2 months, I will feel that it is of low quality and that I paid too much. However, the difference between lasting 6 years and lasting 7 years is virtually indistinguishable from a value-to-me perspective. At some shorter lifespan there will be a break, but the break isn't a binary one. Once I've sunk my costs into the product, I have a vested interest in seeing my value out of it. But where is that break point, and does it change? I say it does; it might change with a competing product or too many additional maintenance costs. This is where traditional quality measures fall short. There is an analog component to quality, and no amount of binary measures can compensate for that quality curve.
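
To make the sofa argument concrete, here is a toy model. The curve and the numbers are invented purely for illustration; the point is the shape, not the values:

    # Perceived value of a $600 sofa as a function of the months it lasts.
    # The curve is analog and flattens: 72 months vs. 84 months barely
    # differs, while 2 months is clearly a loss.
    def perceived_value(months_lasted, price=600, expected_months=72):
        return price * min(1.0, (months_lasted / expected_months) ** 0.5)

    for months in (2, 24, 72, 84):
        print(months, round(perceived_value(months)))  # 100, 346, 600, 600

    # A binary quality gate collapses that whole curve into one bit:
    def meets_requirements(months_lasted):
        return months_lasted >= 72

    print(meets_requirements(71), meets_requirements(72))  # False True

The one month between 71 and 72 flips the binary verdict, while the analog curve correctly shows almost no difference; that difference is the information a pass/fail metric throws away.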

[I do want to point out that at no time do I have to purchase this particular sofa if I do not feel I will get value from it. This is another area where organizations fall short; it is okay to abandon projects whose value cannot be realized effectively.]

Friday, March 27, 2009

The Exception Proves the Rule

I often hear people speaking in idioms, such as “a blessing in disguise” or “it’s raining cats and dogs”, but one idiom that I find interesting is “the exception proves the rule.”

This particular idiom is actually useful in the field of software testing. To understand how it may be useful in our world, it is necessary to understand the etymology of the phrase. [If you want to have a fun discussion with someone, ask them to explain what this idiom means!] The phrase does not mean that an exception proves the validity of a rule/law, or that it proves the need for the rule/law. If you have a rule that states "All cats are black, brown, or orange", then the existence and observation of a white cat cannot possibly prove that the rule is valid.

The phrase actually comes from 17th-century law, where it is written in Latin as "Exceptio probat regulam in casibus non exceptis", which translates into English as "exception confirms the rule in the cases not excepted". More simply, 'the exception proves the rule exists': the fact that certain exceptions are made in a legal document or announcement confirms the rule is in force at all other times. Think of it this way: a sign announcing 'Free parking on Sunday' leads everyone to understand that every other day of the week a fee will be charged for leaving one's car in that spot. Thanks to Snopes.com for having done this research!

Now, in the world of testing this is an important statement. To rephrase for our world: in requirements, the fact that certain exceptions are enumerated helps to define the boundary of the requirement. Sometimes requirements specify both the rule and the exception, and sometimes they specify only the exception, leaving one to use implication to define the requirement. Here is an actual requirements document I found online, laid out in a typical hierarchical outline fashion with requirements having sub-requirements. See here for the full document. In this document, which specifies a meeting scheduler for people and rooms, there is the following requirement branch (paraphrased for clarity):

3.1 Scheduling a meeting
3.1.3 When the system chooses a time for a meeting, it shall send queries to the online calendars for all the rooms in which the meeting could be held to ascertain which rooms are vacant during the selected time.
3.1.3.1 If no rooms are vacant during the selected time, the system shall choose the next feasible time.
3.1.3.2 The system shall choose which room should be the venue for the meeting from the set of rooms free at the selected time by a room-choice algorithm that shall take account of the size of the room given the number of invitees to the meeting, and the convenience of the room for the invitees.

What is interesting about this requirement branch is that nowhere does it explicitly say the system will choose the user's selected time for the meeting if that time is open for all resources and people. However, I implicitly see this requirement, because 3.1.3.1 indicates that should the time not be available, an alternate time will be located and chosen. This implies that if the selected time is available, it will be chosen.
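
Writing the branch out as code makes the implied rule visible. This is only my reading of 3.1.3 and 3.1.3.1; the function shape, and the callables standing in for the online calendars, are mine, not the spec's:

    def choose_meeting_time(selected_time, rooms, is_vacant, next_feasible_time):
        """Sketch of 3.1.3/3.1.3.1; callables stand in for the room calendars."""
        while True:
            vacant = [room for room in rooms if is_vacant(room, selected_time)]
            if vacant:
                # The implied rule, never stated outright: if rooms are free
                # at the selected time, that time is the one chosen.
                return selected_time, vacant
            # 3.1.3.1, the stated exception: no rooms free, so move on to
            # the next feasible time and try again.
            selected_time = next_feasible_time(selected_time)

    # Toy calendars: room "A" is busy at hour 9 but free afterward.
    taken = {("A", 9)}
    print(choose_meeting_time(
        9, ["A"],
        is_vacant=lambda room, t: (room, t) not in taken,
        next_feasible_time=lambda t: t + 1,
    ))  # (10, ['A'])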

There are more complicated examples in the following requirement branch:

3.1.6 If there is at least one feasible time and IMS chooses a time for the meeting, IMS shall display appropriate notification messages to the Initiator and all the invited attendees.
3.1.6.1 The wording of the message shall depend on whether the recipient is

(a) the Initiator (who knows about the meeting and is being informed that it has been scheduled)
(b) an invited participant (who is learning about the meeting for the first time and who is being "strongly" invited to attend)
(c) an invited participant who was not in the subset of people whose schedules were checked when the Initiator chose option (a) above (such invitees being "weakly" invited, as they are known to be busy at the chosen time).

Here are some rules I see based on the exceptions identified:
  • The messaging to strong and weak participants differs based on their categorization
  • Participants are categorized as weak if they are not available or were not part of the original invite
  • Participants are categorized as strong if they were part of the original group and have availability at the selected time
  • However, this leads me to question (something to explore with Rapid Software Testing) whether the messaging gets crossed, since the Initiator should also qualify as a strong participant (as they should have availability); see the sketch after this list
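
Here is that sketch: my reading of 3.1.6.1 reduced to a decision table. The names and the logic are inferred, which is exactly the point; the requirement never states the rule this plainly:

    def message_category(person, initiator, originally_checked, available_at_time):
        # Inferred from 3.1.6.1 (a)-(c); the spec never lays this out directly.
        if person == initiator:
            return "initiator"   # knows about the meeting, told it is scheduled
        if person in originally_checked and person in available_at_time:
            return "strong"      # schedule was checked and the time works
        return "weak"            # not checked originally, so known/assumed busy

    print(message_category("bob", "alice", {"bob"}, {"bob"}))    # strong
    print(message_category("carol", "alice", {"bob"}, {"bob"}))  # weak
    # The open question from my last bullet: "alice" never reaches the
    # strong branch, even though she is presumably available too.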

I could go on for a while about these requirements, but essentially I want to convey that when requirements are not clear, sometimes the best way to clear them up is to question how the exception defines the underlying rule. It is an oracle that all thinking testers should have in their arsenal.

Thursday, March 19, 2009

Quality is Dead... Everywhere - The tale of the Christmas present being 1 month late

A few weeks ago James Bach wrote a post about quality being dead. His hypothesis was stated simply: "a pleasing level of quality for end users has become too hard to achieve while demand for it has simultaneously evaporated and penalties for not achieving it are weak." James even threw out the proverbial "Microsoft releases buggy software" line. His points have merit, and this post is not to discuss them, as I agree with them, especially with respect to the weak penalties for releasing unsatisfactory software. This post is to point out that the phenomenon is not confined to desktop software, where end-user productivity is impacted, or to web-based software, where eCommerce may be affected. On to my experience...

First, I want to point out that my wife and I have 6 children ranging in age from 14 to 20. I mention this because, as James points out in his Rapid Software Testing approach, "Quality is value to some person (who matters)." In this story I think I should matter, but perhaps my ego is too big.

My wife went to a national clothing retail store that caters to teens and young adults, located in our city, to purchase a green stretchy sweater as a Christmas gift for one of our nieces. This is not unlike many people who purchase clothing as Christmas gifts for their family members. In any event, the purchase was made a few weeks before Christmas. At the time, nothing seemed unusual except the register receipt's paper color and texture. It was sort of craft-paperish. We thought it was simply a branding thing, and pretty cool. We hosted Christmas at our house, exchanged presents, and had a good time.

A week later, we got a frantic phone call from my wife's sister. She was at her local store of this national chain, trying to do what so many of us have to do: exchange the present for the correct size. If you think this is simply a story about exchanging presents, just wait for the rest. My sister-in-law had just been accused of stealing the sweater and trying to get a legitimate receipt under the ruse of exchanging the sweater with a fake receipt. Now my wife's sister was angry and venting on my wife, both trying to figure out what the heck was going on. My wife called our local store, only to find out that this particular store was beta testing a new register system. How nice!

Our local store tried to resolve our problem, but their initial solution was to have the sweater exchanged at their store only. After pressing on with the store manager, we were able to get hold of a member of the project team. The problem, they told us, was that the old system could not understand where the transaction number was on the new receipt. They gave us instructions to pass along to the store in order to manually process the exchange. This is called a workaround, but you do the work while going around and around. We called my sister-in-law back and spoke with her and that store's manager. We followed the instructions as laid out and, guess what, they did not work. We gave the store the name and number of the project team member who was helping us; however, when the store tried to call, no one answered. Given the busy exchange season, and the fact that the store still thought someone was trying to scam them, they were not interested in pursuing the matter further.

To wrap up the story, my sister-in-law had to ship the sweater back to us so we could exchange it, and we had to ship the new sweater back to her. We spoke with our local store and the project team member several times, and both refused to reimburse shipping expenses or offer any sort of recompense (other than "we're sorry"). By the way, I agree: that chain is sorry.
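
I can only guess at the actual defect, but "the old system could not understand where the transaction number was on the new receipt" smells like a positional parse. A purely speculative sketch, with invented receipt formats:

    # Invented formats; the real ones are the retailer's. The old exchange
    # routine assumes the transaction number is the third pipe-delimited field.
    OLD_RECEIPT = "STORE 042|2008-12-12|TXN:48213|TOTAL:29.99"
    NEW_RECEIPT = "TXN#48213|STORE 042|2008-12-20|TOTAL:29.99"

    def old_system_transaction_id(receipt):
        fields = receipt.split("|")
        if not fields[2].startswith("TXN:"):
            # The old system has no idea the layout changed, so a valid
            # new-format receipt is indistinguishable from a forged one.
            raise ValueError("invalid receipt")
        return fields[2].removeprefix("TXN:")

    print(old_system_transaction_id(OLD_RECEIPT))  # 48213
    try:
        print(old_system_transaction_id(NEW_RECEIPT))
    except ValueError as err:
        print("new receipt rejected:", err)        # hello, "fake receipt"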

This is where "quality is dead" comes in. Not only was the software not ready for beta testing, but it was dropped on unsuspecting consumers in a business that requires significant consumer spending to stay afloat. There was no concern for issues that consumers might run into, no method for resolving them, and no true means of compensating consumers victimized by bad software and bad practices. Usually a beta community gets the opportunity to engage in the test or opt out. Not in this case; which really makes this a public release and not a beta test. In this situation, I imagine only a handful of people were impacted, so maybe this is blown out of proportion. But as a consumer with a large family in the demographic this store wants to target, I think I matter more than they realize. And this is where the weak penalties come into play: if my family doesn't shop at that store (which is our only recourse left, as two stores and a project team member fully know of our experience), their sales are not materially affected. (By the way, I am not calling for a boycott; let's not get crazy now.) With such weak penalties, it is a wonder this hasn't happened faster and more frequently than it has.

My Comment Policy

Here is my policy for accepting comments that you make on this blog:

  1. I moderate all comments. I accept comments for one or more of the following reasons:
    • The poster offers an interesting point
    • The poster engages in critical and thoughtful debate
    • The poster either clarifies or offers an opportunity to clarify a point
  2. It is okay to question my ethics or competence when proof is provided. Flame wars will not be posted or engaged in. I will not approve a comment that insults me or dismisses my arguments without engaging the points directly. Mutual respect is paramount to ensuring we can have thorough discussions.
  3. If I don’t publish your comment, feel free to ask me why. I promise to explain.
  4. I will not edit or redact a comment that you submit unless I have your permission, with the possible exception of fixing an obvious typo. I may interpolate my replies, however. If you don’t like that, you can email me privately to complain, or you can post your comments about my stuff on your own blog, and then you’ll have total control.
  5. If you want to comment on a reply I made to one of your comments, consider replying to me privately, so we can have the whole conversation. Then when you are ready to make your follow-up comment, I’m more likely to approve it.
  6. By publishing your comment, I am implicitly endorsing it as potentially useful to the audience of this blog.
  7. If you want me to remove or modify an earlier comment of yours, I will do so, provided I can do it with Blogger's tools. (I have not yet checked this ability.)
  8. You retain copyright over your comments.
Comment Policy borrowed from James Bach