Flashing barcodes and great participants
This time we had a great session about flashing values and barcodes, again with great participants and good discussions afterwards.
The participants this weekend were:
This time the product was a funny barcode reader:
This app generates barcodes based on input such as gender, country, age, weight and height, and also calculates a bogus price value.
The mission was to find out how the calculation worked and what the highest obtainable value would be, combined with reporting invalid values.
Below you find the summary I gave during the round-up.
First I tried out the app by just pressing the buttons and identifying its behaviour.
I checked whether the values entered are actually used in the calculation. I did this by entering the same values twice to see if some kind of randomizer was active; this was not the case. I also checked changing only the gender. The result matched the diagram as described: only that value changed.
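The two checks above can be sketched in code. Since the real Flash calculation is unknown, `app_price` below is a hypothetical stand-in, not the app's actual formula:

```python
def app_price(gender, age, weight_kg, height_cm):
    # Hypothetical stand-in for the app's hidden calculation;
    # the real formula was not known during the session.
    bmi = weight_kg / (height_cm / 100) ** 2
    return round(bmi + age + (5 if gender == "M" else 0), 2)

case = dict(gender="M", age=30, weight_kg=70, height_cm=175)

# Check 1: feed the same input twice; a differing result would
# indicate some kind of randomizer in the calculation.
assert app_price(**case) == app_price(**case)

# Check 2: change only the gender, holding everything else constant,
# and verify that exactly that change shows up in the result.
other = dict(case, gender="F")
assert app_price(**case) != app_price(**other)
```

The second check is the one-factor-at-a-time idea that came back in the discussion afterwards.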
I then tried the highest and lowest values, which in the end resulted in incorrect values in the Scan report. I also noticed that when using realistic values, the barcode shows some mix-up between the entered values and the presented values.
In the end I stripped part of the URL address and arrived at the actual site. There the FAQ contained some valuable info about the calculation.
Next time I would spend more time checking the logic as described in the FAQ against the outcome of the app. An hour is just too short for me to verify whether the formula they seem to use actually matches the app's outcome. The important value to agree upon here is the perfect BMI.
Some funny issues
Of course it was fun to find some issues. When you test this application with the highest numbers, you will find that the conversion between the measurement systems is not done properly, at least with respect to the offered diagram.
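A common way such conversion bugs arise is premature rounding. A small illustration (the conversion constant is standard; the rounding scenario is my assumption about what the app might be doing):

```python
CM_PER_INCH = 2.54

def cm_to_inch(cm):
    return cm / CM_PER_INCH

def inch_to_cm(inch):
    return inch * CM_PER_INCH

# Rounding to whole inches before converting back loses information:
height_cm = 180
rounded_inches = round(cm_to_inch(height_cm))  # 70.866... becomes 71
round_trip = inch_to_cm(rounded_inches)        # about 180.34, no longer 180
assert round_trip != height_cm
```

Testing at the extremes magnifies exactly this kind of accumulated rounding error, which is why the highest numbers exposed it.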
Testing with the lowest numbers also returns some $NaN tags in the "Scan" list. At least the price value is $0.00.
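The $NaN tags suggest a division by zero somewhere in a BMI-like formula. A sketch of how that symptom appears, and how a guard would yield the $0.00 shown elsewhere in the list (the formula and the "perfect BMI" of 22 are my assumptions, not the app's real calculation):

```python
import math

def bogus_price(weight_kg, height_m):
    # Assumed BMI-style formula; with the lowest inputs the division
    # blows up, which is presumably where the $NaN tags come from.
    if height_m <= 0 or weight_kg <= 0:
        return float("nan")
    bmi = weight_kg / height_m ** 2
    return round(100 / (1 + abs(bmi - 22)), 2)  # 22 taken as the "perfect BMI"

def price_label(price):
    # Without this guard, formatting NaN directly yields "$nan".
    if math.isnan(price):
        return "$0.00"
    return f"${price:.2f}"

print(price_label(bogus_price(0, 0)))  # guarded: $0.00
print(f"${float('nan'):.2f}")          # unguarded symptom: $nan
```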
When navigating back and forward you will notice that the country dropdown is emptied, which also leads to a strange outcome in the "Scan" list.
Initial lessons learned
During the round-up I came up with the following lessons learned.
1- Agree upon the level of detail at which you prepare your model of the app.
By level of detail I mean how deep and how broad you will test, knowing that this decision takes effort and knowledge.
2- Avoid the pitfall of using the app as its own documentation just because the app is simple and no documentation is available; search for other means.
Every time, ask yourself what kind of documentation you need: are you searching for it, or are you using the application itself as a kind of oracle to ask your questions to?
3- Tools to read code might help
If you know about tools that can read the code of Flash applications, perhaps this helps as a documentation source.
4- Translation between measurement systems is often an area for failure. (The Mars Climate Orbiter was lost for this reason, if I'm correct.)
One of my recurring pitfalls is the difference between measurement systems. I should spend some time learning about them and learning to use them, instead of relying on conversion tools.
Lessons during the discussion
Again there was a great discussion afterwards. Thomas came with the suggestion to use iterations for trying out test data; this would force thoughts to focus and defocus.
Michael posted an interesting lead, which reminded me of some of his earlier work:
It seems to me that one of the principal issues that this exercise brings up is the alternation between focusing and defocusing heuristics--varying one factor at a time (OFAT) or varying many factors at a time (MFAT). (There's also another kind of factor-oriented heuristic noted in the book Exploring Science: hold one factor at a time, or HOFAT.) You use OFAT when you're trying to focus on the effect of a particular factor; MFAT when you're seeking to confirm or disconfirm your ideas about factors in combination with each other
Somehow I couldn't find Michael's source. On Wikipedia there is something about the one-factor-at-a-time method, and when googling "varying one factor at a time" I found some interesting documents I have to investigate later on.
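The two heuristics Michael describes can be sketched as test-data generators. The factors and values below are made up for illustration; only the OFAT/MFAT idea itself comes from the discussion:

```python
from itertools import product

def ofat_cases(baseline, factor_values):
    # One Factor At a Time: vary a single factor, hold the rest at baseline.
    for factor, values in factor_values.items():
        for value in values:
            if value != baseline[factor]:
                yield dict(baseline, **{factor: value})

def mfat_cases(factor_values):
    # Many Factors At a Time: the full cross-product of all values.
    keys = list(factor_values)
    for combo in product(*factor_values.values()):
        yield dict(zip(keys, combo))

baseline = {"gender": "F", "age": 30, "height": 175}
values = {"gender": ["F", "M"], "age": [1, 30, 120], "height": [50, 175, 250]}

print(len(list(ofat_cases(baseline, values))))  # 5 cases: 1 + 2 + 2
print(len(list(mfat_cases(values))))            # 18 cases: 2 * 3 * 3
```

OFAT keeps the case count small and attributes any change to one factor; MFAT explodes combinatorially but can catch interactions between factors that OFAT never exercises.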
During the discussion I mentioned the approach called TMap, defined by Sogeti, which is at least well known in The Netherlands and also a standard for approaching test projects.
For me TMap is a strong approach, though it is more process-oriented than oriented towards delivering value to the business (perhaps TMap Next can serve this better). As with every model or method, it must be used with common sense. We should beware of focusing on making the method work; instead we should watch out that we are still able to deliver value to the business. It is so easy to say that we do it as the method tells us, because agreements are made based on the method.
By common sense I mean, as said in the discussion:
"To me the skill of common sense is knowing when you are using a method for the benefit of the actual outcome, and not using a method to prove you are able to use that method and, based on that, claiming you do the right thing: the method is right, you follow the method, therefore you are right.
If you are able to judge your approach against the initial goal you were hired for, then you might be able to get the benefits of an approach like this. Otherwise you are selling something other than what you were hired for."
For those who also want to be challenged by challenging themselves: take part in one of the weekend testing sessions and teach yourselves. Don't hesitate to participate!
For more information see:
Or follow them on Twitter:
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters
Sunday, May 2, 2010