The impact of an obvious small change
Saturday, June 14, 2008

A few days ago I heard a radio commercial about the new Mercedes. In the commercial they talked about the driver's wish that a certain flap should be used less often. To achieve this, Mercedes adapted the car on 18 points. At first I thought it was about some very technical requirement the driver had. At the end of the commercial it turned out to be about the flap above the rear tire: the gasoline flap.
For me this was a good example of how a requirement can be stated in simple terms, while that simple wish has a huge technical impact on the car.
Somehow I got a déjà vu moment. How often are new requirements introduced by the business while you are almost at the end of testing? And isn't the business then claiming that it should be very simple to build, and that the system can still be brought to production on time?
I don't expect that Mercedes tested the car just by measuring how often the flap was used. Isn't it more likely that they tested all 18 changes, both individually and integrated? I think so, because they were able to translate the requirement back to the initial problem. It would have been much easier to minimize the usage of the flap by gluing it shut. Or by building a timer on the flap to prevent it from opening too often. Or by hiding the flap. The literal requirement would be met this way: the flap could no longer be used that often, or at all, by the driver.
In projects, such requirements are often introduced. It could be an additional field on a screen or on a report. And most of the time the business insists that those fields are built; to them it seems very simple: just drag another field onto the screen and the issue is solved. Only they are usually not informed about the technical impact, and certainly not about the impact on the testing process.
Imagine that a certain field needs other data. Queries have to be rebuilt. Or even worse, a new table has to be introduced. In such a situation you don't only check whether the field is on the screen and contains some data; you also want to test what other impact the introduction of this feature has.
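To make this concrete, here is a small, made-up sketch (all tables, fields and names are invented for illustration, not taken from a real project). The "simple" extra field forces a new table and a rebuilt query, and a test that only checks whether the field shows data would miss the orders without a discount:

```python
# A made-up example: the "simple" extra discount field does not live in
# the existing orders table, so a new table and a rebuilt query are needed.
# All table and field names here are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);

    -- the "simple" new field forces a new table...
    CREATE TABLE discounts (order_id INTEGER REFERENCES orders(id),
                            percentage REAL);

    INSERT INTO orders VALUES (1, 'Smith', 100.0);
    INSERT INTO orders VALUES (2, 'Jones', 50.0);
    INSERT INTO discounts VALUES (1, 10.0);
""")

# ...and the existing query has to be rebuilt with a join. Checking only
# "does the field show data" would miss order 2, which has no discount row;
# the LEFT JOIN and COALESCE must still return it with a default of 0.
rows = conn.execute("""
    SELECT o.id, o.customer, o.total, COALESCE(d.percentage, 0.0)
    FROM orders o
    LEFT JOIN discounts d ON d.order_id = o.id
    ORDER BY o.id
""").fetchall()
print(rows)  # [(1, 'Smith', 100.0, 10.0), (2, 'Jones', 50.0, 0.0)]
```

One dragged field, and suddenly there is a schema change, a rebuilt query, and a boundary case (no discount) that did not exist before.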
Organizations with a well-defined process write a Change Request. Based on this CR the technical impact is determined, and if that impact is not too big, a decision is made whether the risk of failure is acceptable against the business value. What I often see is that the test impact is forgotten in those impact analyses. Because of this missing information, the business decides to build the CR.
After the CR is built, the testing part starts. Here are some points where I think things can go wrong. Perhaps you see, or have even experienced, other points; I am very curious about those.
1. If in the impact analysis no attention was paid to the test impact, this might result in a delay of the project, as testing takes more time than expected. "It was just a small change! Why does testing need so much time?";
2. If the impact analysis is only done for that specific function/module, and not for the impact on other main functions/modules, the selection of regression test cases might be wrong (see the small sketch after this list). The chance that you only test the gasoline flap, instead of all 18 changes in the system, is then much bigger. New errors might be introduced;
3. If the technical impact is low and the business gain is only medium, or even low, why risk the chance of introducing new failures? It might help to also consider the impact on testing here;
4. If there is no good documentation of the system, the technical impact is determined based only on lines of code. How can you then tell what the integrated impact would be? And, also important, how are you able to select the proper test cases?;
5. If the business gain is high, the technical impact is medium/high, and time is short, testing is mostly done in a quick and dirty manner. How do you know which risks you accept by introducing this feature into production? Under pressure, often a basic test set is chosen, and often this turns out to be too little to measure those accepted risks;
6. If the impact analysis talks about 18 changes, how strong is your development process? Are you sure no undocumented changes were made somewhere to make those 18 work? If you don't know about other small changes or code workarounds, how can the tester know about them?;
7. Another situation that sometimes happens is that the business doesn't know the impact of those changes on their processes, and development doesn't know the technical impact up front. And still the tester is given the order to test everything thoroughly. Is this an order you can give a tester under these circumstances?;
8. What about the situation where only one test environment is available, you are testing a high-impact change, and during testing a fix for a production failure is introduced into that same environment? This might lead to a situation where the production fix must be delivered to production together with that barely tested change. Make sure you have multiple test environments, or that you are at least in control of your environments.
9....
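To make point 2 a bit more tangible, here is a small sketch of impact-based regression test selection. The modules, dependencies and test names are all invented; the point is that if you only look at the changed module itself, you select one test set, while following the dependencies finds three:

```python
# A hypothetical sketch of impact-based regression test selection:
# mapping only the changed module misses tests that cover the modules
# depending on it. All module and test names are invented.
from collections import deque

# Which module uses which (an invented example system).
depends_on = {
    "report": ["orders"],
    "invoice": ["orders", "discounts"],
    "screen": ["orders"],
}

# Which modules each regression test set covers.
tests = {
    "test_orders_crud": {"orders"},
    "test_report_totals": {"report"},
    "test_invoice_discount": {"invoice"},
}

def impacted_modules(changed):
    """Walk the dependency graph upward from the changed modules."""
    impacted, queue = set(changed), deque(changed)
    while queue:
        mod = queue.popleft()
        for user, deps in depends_on.items():
            if mod in deps and user not in impacted:
                impacted.add(user)
                queue.append(user)
    return impacted

# Changing only "orders" also impacts report, invoice and screen,
# so three test sets are selected instead of one.
scope = impacted_modules({"orders"})
selected = [name for name, covers in tests.items() if covers & scope]
print(sorted(selected))
# ['test_invoice_discount', 'test_orders_crud', 'test_report_totals']
```

This is of course simplified; real systems need real dependency information, which is exactly what is missing when the documentation is poor (point 4).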
As mentioned earlier, this list is definitely not complete. I hope it gives you a picture that a small change for the business is not necessarily a small change for testing. I think it is important for all parties involved not to make assumptions too quickly, to keep communicating, and to respect each other's arguments. Perhaps we should stop making decisions based on time pressure. Making decisions based on arguments might be much safer and even cheaper.
From the comments, two more points:

9. The tester is not consulted about the impact of the change request. A small change might have a big impact on the way things are tested, and test scripts may have to be rebuilt, which leads to delays in the delivery of the product.

10. The tester is not consulted, even though he/she might know more about the impact of the change request and about what other parties "forget" to mention...