Saturday, June 14, 2008

Which errors do you accept?

You should test everything! I don't want any errors in production! The system should be a high quality system! And it all should be done within time and budget.

These are phrases I often hear when it comes to software testing. But when I ask questions like: "What is everything?", "What is a high-quality system?", "Within which timeframe don't you want to see any errors?", the answer is often: "Just like I said, just do it!"

Often these are signals that people don't know exactly what they want. Still, in these circumstances the expectations of testing are high. They expect you to be the solid rock that makes sure they didn't miss anything. But under these conditions it is hard to test.

I think the wrong question is being asked. Perhaps it helps to ask: "Which errors do you not want to see in production?"
The answer to this question should be translated into risks, based on the functionality of the system. And before testing, ask the organization to what level they are willing to accept those risks. This might help to define the test strategy.

Perhaps answers like these will be given:
1. "I don't have enough knowledge about the errors I don't want to see!"
2. "These are the specific errors I don't want to see."
3. "Any error is wrong."
4. "You are the expert! You have to tell me."
5. "Don't ask me what is acceptable to go wrong, give me information what goes right!"
6. "Sorry, I don't have the time to answer your question."
7. "Who are you?"
8. "I do not speak Dutch!"
9. ....

If you take a look at these answers, you see that in most cases (except answer 2) information is missing. And if information is missing, how are you able to test properly?

If it is clear which errors are not acceptable, you might consider proving that those errors do not happen in the environment, instead of testing everything.
If you are the one expected to provide that information, you might define a strategy based on functionality and complexity, where complex functions/modules with high business gain are tested thoroughly and others with less effort. Then make it visible whether this fits within the timeframe of the project. Of course it doesn't. You can now ask the organization which risks they accept and what should be tested less.

It might help if you make visible what impact functionality has on the testing process. Perhaps the figure below can help you explain this. I have to mention it is not complete. And of course you have to customize it to fit your situation.


Instead of using Technical Impact, you might think of classifying the errors that should not be seen in the system. You could use "Maturity of Identified Errors". In that case: if errors can be identified specifically you classify them as high, and otherwise as low.
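
To give an idea of how such a talking picture could be used in practice, here is a minimal sketch in Python. The classification values, the module names and the resulting effort levels are only illustrative assumptions, not a fixed rule.

# Sketch of the "talking picture": classify each function/module on business
# gain and technical impact (or "Maturity of Identified Errors") and let that
# classification suggest how thoroughly to test. All values are assumptions.

def suggested_test_effort(business_gain, technical_impact):
    # Both arguments are expected to be "high" or "low".
    if business_gain == "high" and technical_impact == "high":
        return "test thoroughly"
    if business_gain == "high" or technical_impact == "high":
        return "test with average effort"
    return "test lightly, or ask the organization whether this risk is acceptable"

modules = {
    "payment calculation": ("high", "high"),
    "address lookup": ("low", "high"),
    "report layout": ("low", "low"),
}

for name, (gain, impact) in modules.items():
    print(name + ": " + suggested_test_effort(gain, impact))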

I expect that if the organization is not able to identify unwanted errors, and they see what you are able to test (Error Guessing), they might consider spending some time on identification.

Please don't see this approach as the perfect tool. Use it as a "talking picture".

Perhaps this helps you out.

The impact of an obvious small change

A few days ago I heard a commercial on the radio about the new Mercedes. In this commercial they talked about the driver's wish that a certain flap should be used less often. To achieve this, Mercedes adapted their car on 18 points. At first I thought it was about some very technical requirement the driver had. At the end of the commercial it turned out to be about the flap above the rear tire: the gasoline flap.

For me this was a good example of how a requirement can be stated in simple terms, while that wish has a huge technical impact on the car.

Somehow I got a déjà-vu moment. How often are new requirements introduced by the business while you are almost at the end of testing? And isn't the business claiming that it should be very simple to build, and that the system can still go to production on time?

I don't expect that Mercedes tested their car just by measuring how often the flap was used. Isn't it more likely that they tested all those 18 changes individually and integrated? I think so, because they were able to translate the requirement back to the initial problem. It would have been much easier for them to minimize the usage of that flap by gluing it to the car. Or by building a timer on the flap to prevent it from opening too often. Or by hiding the flap. The initial requirement would be satisfied this way: the flap could not be used that often, or at all, by the driver.

In projects such requirements are often introduced. It could be an additional field on a screen or on a report. And most often the business insists that those fields are built. To them it seems very simple: just drag another field onto the screen and the issue is solved. But most of the time they are not informed about the technical impact, and certainly not about the impact on the testing process.

Imagine that a certain field needs other data. Queries have to be rebuilt. Or even worse, a new table has to be introduced. In such a situation you don't only check whether the field is on the screen and contains some data. You also want to test what other impact the introduction of this feature has.
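
To make the hidden impact of such a "simple" extra field a bit more concrete, here is a small sketch; the table, field and queries are invented for illustration. The new field does not exist yet, so a schema change and a rebuilt query are needed, and both are candidates for testing.

import sqlite3

# Invented example: a screen currently shows customer id and name.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("SELECT id, name FROM customer")

# The "simple" change request: also show the customer's segment on the screen.
# The data is not in the table yet, so the schema has to change ...
cur.execute("ALTER TABLE customer ADD COLUMN segment TEXT")
# ... and the query that feeds the screen has to be rebuilt as well.
cur.execute("SELECT id, name, segment FROM customer")

# Each step (migration, changed query, changed screen) needs its own tests,
# plus a regression test on everything else that reads the customer table.
conn.close()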

If organizations have a well-defined process, they write a Change Request. Based on this CR the technical impact is determined. And if the impact is not that huge, a decision is made whether the risk of failure is acceptable against the business value. What I see is that the test impact is often forgotten in those impact analyses. Because of this missing information, the business decides to build the CR.

After the CR is built, the testing part starts. Here are some points where I think things can go wrong. Perhaps you see other points or have even experienced other ones. I am very curious about those.

1. If no attention was paid to the test impact in the impact analysis, this might result in a delay of the project, as testing takes more time than expected. "It was just a small change! Why do you need so much time for testing?";
2. If the impact analysis only covers that specific function/module and not the impact on other main functions/modules, the selection of regression test cases might be wrong. The chance of only testing the gasoline flap is then bigger than the chance of testing all 18 changes in the system, and new errors might be introduced (see the sketch after this list);
3. If the technical impact is low and the business gain is only medium, or even low, why risk the chance of introducing new failures? It might also help to consider the impact on testing here;
4. If there is no good documentation of the system, the technical impact is decided based on lines of code. How can you tell what the integrated impact would be? And just as important: how are you able to select the proper test cases?;
5. If the business gain is high, the technical impact is medium/high and time is short, testing is mostly done in a bad, quick manner. How do you know which risks you accept by introducing this feature into production? Under pressure a basic test set is often chosen, and often this turns out to be too little to measure those accepted risks;
6. If the impact analysis talks about 18 changes, how strong is your development process? Are there not unwritten changes made somewhere to make those 18 work? If you don't know about other small changes or code workarounds, how can the tester know about them?;
7. Another situation that sometimes happens is that the business doesn't know the impact of those changes on their processes, and development doesn't know the technical impact up front. And still the tester is given the order to test everything thoroughly. Is this an order you can give the tester in these circumstances?;
8. What about the situation where only one test environment is available, you are testing a high-impact change, and during testing a production failure has to be tested as well? This might lead to a situation where the fix for that production failure must be delivered to production together with that barely tested change. Make sure you have multiple test environments, or that you are at least in control of your environments;
9....
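
As an illustration of point 2, here is a minimal sketch of regression test selection. The module names and the dependency map are invented; the point is only that selecting tests for the changed module alone misses the modules that depend on it.

# Invented dependency map: which modules use data or functions of which others.
dependencies = {
    "billing": {"customer", "pricing"},
    "reporting": {"billing", "customer"},
    "pricing": set(),
    "customer": set(),
}

def affected_modules(changed):
    # Return the changed module plus every module that directly or
    # indirectly depends on it.
    affected = {changed}
    grew = True
    while grew:
        grew = False
        for module, uses in dependencies.items():
            if module not in affected and uses & affected:
                affected.add(module)
                grew = True
    return affected

# The "small change" only touches the customer module ...
print(affected_modules("customer"))
# ... but billing and reporting also need regression tests:
# {'customer', 'billing', 'reporting'}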


As mentioned earlier, this list is definitely not complete. I hope it gives you a picture that a small change for the business is not necessarily a small change for testing. I think it is important for all parties involved not to make assumptions too quickly, to keep communicating, and to respect each other's arguments. Perhaps we should stop making decisions based on time pressure. Making decisions based on arguments might be much safer and even cheaper.

Sunday, June 1, 2008

A strategy game: Civilization 2

One of my favorite games is Civilization II. And I think it has a lot to do with software testing. Here is a very short explanation relating this game to software testing.

This is a strategy game in which you build a civilization, and you can choose between 3 goals to win the game:
1. Be the first civilization to build and launch a spaceship;
2. Eliminate all other civilizations;
3. Keep playing until the year 2020.

The reasons why I think it is like software testing are:
1. before starting you have to determine what the world should look like. Should there be a large landmass (longer time to play) or a smaller area (shorter time to play);
2. how many opponents you have to beat. If you choose the maximum, there are other circumstances you have to take care of in combination with your strategy to win;
3. what do you focus on initially: building fewer cities with all the improvements, or lots of cities with larger armies and fewer improvements. This decision will influence the rest of the game;
4. do you "cheat", or do you just continue playing when something did not go as you hoped?;
5. when "cheating", which outcome do you accept at that moment.

Ad1: translated to testing: do you choose time boxing, or do you test until you have met all acceptance criteria?
Ad2: if the testing team is larger, you have to communicate more with the team and make decisions that satisfy everyone. The fewer opponents, the easier it might be to satisfy them.
Ad3: an optimally designed city might be translated into an optimally defined test process. But this might prevent you from becoming the first civilization to build and launch a spaceship; it is very good for making it to the year 2020. Focusing on a less defined process by building as many armies as possible (perhaps translated to test scripts) might help you conquer the world and become the strongest.
Ad4: cheating can be done by saving the game, and when you get into battle, restoring the game until you get the desired result. Or, when entering a "hidden" village, accepting the newly gained knowledge, or perhaps the city or the amount of money you found. This can be compared with regression testing: keep retesting until the expected or acceptable result is there, and not continuing with a loss of armies or accepting the horde of barbarians you released.
Ad5: based on the situation and the moment in the game, you restore your saved game and accept, early on, the newly gained knowledge which enables you to build other improvements. Or perhaps you accept the found treasure, which enables you to speed up the building of an army so you are able to beat the opponent the next round.

Besides these examples, this game triggers you to evaluate every situation and keep an open and creative mind. When you play this game more often, you can select different levels of play. Start on the easiest level when playing for the first time and see if you are able to define your strategy when playing on the hardest level. Like in software testing: initially you might be able to coordinate a simple project. Keep learning from those projects, increase your skills and see if you are capable of doing more difficult projects.

Software testing: An organizational approach (1)

After being involved in several projects over the years, I noticed that most of the improvement actions related to software testing are triggered by the testing process itself. Those improvements are made implicitly or explicitly. Implicit improvement is based on the skills of the tester, who introduces techniques and strategies to improve the current test process. Explicit improvement is most of the time done after evaluating the test process and giving advice for the next test project.

I think it is not strange that improvements are triggered by the test process, as the influence of the tester is already there. Sometimes, though, it only reaches the project itself.

I noticed that people sometimes forget to look at the impact of those improvements on other processes or on the organization. Imagine that, as a tester, you ask for better documentation. You even claim that you cannot start testing before you have that documentation. This might result in extra work for the developer or the business to deliver your request. But it will also extend the duration of the project, as the developer cannot code while writing documentation; or, if the developer has already started and the business comes up with new requirements, the project will be delayed as well. In my opinion it is not wrong to ask for better documentation, just keep in mind that the reason for doing this is improving your test process, and make sure the organization accepts the impact of it.

In a previous post I wrote about the focus of testing. See: Do we Test wrong?
The intention of that post was to point out that testing is a process which is part of the organization. Testing should not be a stand-alone goal.

If we look at the test process from an organizational view, you might identify the following approaches an organization can have related to software testing:
1. one overall approach for software testing;
2. a made-to-fit approach for every project;
3. an overall approach for general projects and a made-to-fit approach for small projects;
4. no specific approach for projects; an approach is continued for the next releases;
5. every release has its own approach;
6. software testing is not embedded in the organization.

If you also take these views from a development point of view, you might find out that the chosen development methods differ as well. Assume there are different development methods in the organization: the basic development method is RUP, while some small projects tend to use an Agile approach. You can imagine that this has an impact on the test approach.

And if a test approach is improved only to support one particular test project, the advantages might stay within that project, while the disadvantages for the organization might cost more than those improvements deliver.

In most organizations a test manager is assigned to monitor all those processes. The question, as always, is: are the choices made correctly?
Perhaps another means can be introduced: the Organizational Testing Board (I will call it the OTB). This should be a staff function where the impact of improvements is determined based on direct and indirect costs and benefits. Considering those costs and benefits, the organizational view related to business processes, development methods and test strategies is brought into an improvement plan which is supported by the organization first, instead of only supporting the individual test process.

If an OTB is established, improvement suggestions will initially come from the individual test processes. The OTB will decide, based on the costs and benefits for the organization, whether it is allowed to continue with that improvement. The decision will be based on the short-term and long-term plan for (test) process improvement, considering the chosen development methods in combination with the current business processes, adapted to the test process and the skills of the testers.