Saturday, June 14, 2008

Which errors do you accept?

You should test everything! I don't want any errors in production! The system should be a high-quality system! And it should all be done within time and budget.

These are phrases I often hear when it comes to software testing. But when I ask the questions "What is everything?", "What is a high-quality system?", "Within which timeframe do you not want to see any errors?", the answer is often: "Just like I said, just do it!"

Often these are signals that people don't know exactly what they want. Still, under these circumstances the expectations of testing are high. They expect you to be the solid rock who makes sure nothing was missed. But under these conditions it is hard to test.

I think the wrong question is being asked. Perhaps it helps to ask: "Which errors do you not want to see in production?"
The answer to this question should be translated into risks based on the functionality of the system. And before testing, ask the organization at what level they are willing to accept those risks. This might help to define the test strategy.

Perhaps these answers might be given:
1. "I don't have enough knowledge about the errors I don't want to see!"
2. "These are the specific errors I don't want to see."
3. "Any error is wrong."
4. "You are the expert! You have to tell me."
5. "Don't ask me what is acceptable to go wrong, give me information about what goes right!"
6. "Sorry, I don't have the time to answer your question."
7. "Who are you?"
8. "I do not speak Dutch!"
9. ....

If you look at these answers, you see that in most cases (except answer 2) information is missing. If information is missing, how can you test properly?

If it is clear which errors are not acceptable, you might consider proving that those errors do not happen in the environment, instead of testing everything.
If you are expected to provide information, you might define a strategy based on functionality and complexity, where complex functions/modules with high business gain are tested thoroughly and the others with less effort. And make it visible whether this fits the timeframe of the project. Of course it doesn't. You can now ask the organization which risks they accept and what should be tested less thoroughly.

It might help to make visible what impact functionality has on the testing process. Perhaps the figure below can help you explain. I have to mention that it is not complete. And of course you have to customize it to fit your situation.

Instead of using Technical Impact, you might think of classifying the errors that should not be seen in the system. You might call this "Maturity of Identified Errors". In that case: if errors can be identified specifically, you classify them as high; otherwise as low.

I expect that if the organization is not able to identify the unwanted errors, and they see what you are able to test (error guessing), they might consider spending some time on identification.

Please don't see this approach as the perfect tool. Use it as a "talking picture".

Perhaps this helps you out.
