Thursday, August 12, 2010

Geocaching and software testing

Introduction
A few months ago a friend of mine introduced me to the phenomenon called geocaching. Looking at the tagline of the geocaching site, "The Official Global GPS Cache Hunt Site", you notice the word "hunt". To me this sounds a bit like testing: as testers we hunt for bugs instead of caches. Below you will find a short introduction to geocaching; for more information, check out one of the links.

A short introduction to geocaching
The definition of geocaching on the main site is: "Geocaching is a high-tech treasure hunting game played throughout the world by adventure seekers equipped with GPS devices. The basic idea is to locate hidden containers, called geocaches, outdoors and then share your experiences online."

I highlighted a few words which also have similarities with software testing:

High-tech treasure hunting <> searching for bugs in a high-tech environment
adventure seekers <> passionate testers
equipped <> with tools and skills
locate hidden containers <> identify issues
share experiences <> share the value of the tester/system

You might find more similarities in this definition, or you might believe these are not similarities at all. You might be right. In my opinion there are more relations with testing than just those found in the definition.

Recently you see testers I value bringing up all kinds of challenges: challenges to make the tester think. Most of the time these are directly related to testing, like the missions at http://weekendtesting.com/.

Geocaching also brings you this kind of challenge, only not just behind the PC; it takes you into the world outside, as caches are most of the time outdoors. (Yes, some are also located indoors, for example when you search for caches in the centre of Amsterdam.)

Besides the official geocaching site you can also take a look at the Wikipedia page: http://en.wikipedia.org/wiki/Geocaching

Types of caches
There are several types of caches, each with its own purpose and symbol (depending on site and/or tool). Below you see a selection of them:
Traditional: The basic cache type, a traditional cache must include a log book of some sort
Multi-cache: This variation consists of multiple discoveries of one or more intermediate points containing the coordinates for the next stage; the final stage contains the log book and trade items
Mystery/puzzle: This cache requires one to discover information or solve a puzzle to find the cache
Virtual: Caches of this nature are coordinates for a location that does not contain the traditional box, log book, or trade items
Earthcache: A type of virtual cache which is maintained by the Geological Society of America. The cacher usually has to perform a task which teaches him/her an educational lesson about the earth science of the cache area
Event Cache: This is a gathering organized and attended by geocachers.

Types of containers
There are several types of containers which hold the log roll. They range from magnetic to nano, from mini to ammo boxes. Depending on the size, the purpose is defined and valuables can be added.
Example of a nano-container



Example of an ammo box


Example of a reused container (photo container)

An example of a cache hidden in a spot where you normally won't look


A great example of a container can be watched here:
http://www.youtube.com/watch?v=Lu7IysgaZf8

Coins and travel bugs
When you find a cache, depending on its size it might contain valuables. Some valuables are trackable, like geocoins and travel bugs. Others are just playthings (kids like the stuff we don't value any more; at least the kids have a new toy to play with).

Travel bugs and geocoins are meant to travel around, sometimes with a preset goal and sometimes just with the goal of travelling as much as possible.

There are several ways of selecting your caches.

Some similarities in caching/testing
Similarity 1: Safe environment
I started by selecting caches in the neighbourhood. It feels safe when you are in your own environment. Likewise in testing, you try to start in an environment you know, which gives you a safe feeling.
Example of caches in my environment at a certain time



Similarity 2: Learn from environment
During my search in my environment I noticed the different levels of difficulty, terrain and also types of caches. Initially the traditionals were easy; then we started to act as a team (family) and walked a multi-cache in our neighbourhood. We joined our efforts and watched and learned during the walk.
During testing you have to look further than just your cases: learn from your system/environment.

Similarity 3: Remember patterns/hide-outs
While finding traditionals and exploring multi-caches we trained ourselves to identify the types of containers which contain information or the actual caches. In testing you also learn about the hide-outs of bugs, for example based on the technology that is used or the process that is involved. You learn to recognize situations.

Example of a cache hidden in a fence. Notice that it is under a lid (you have to look beyond the black-box vision)


Similarity 4: Touring
After I had been active for some weeks, I made some trips to my assignment and also to family. Before I went to those locations I planned my trip and reserved some additional time to search for some caches. Sometimes they were short tours, picking up just one cache along the route. Other times they were large tours, planning multiple pick-ups during the travel to the north. For this I used the map on geocaching.com to identify possible caches.
Like in testing, you plan and schedule your tour within a certain context.

Example of a map of a long tour (Defocusing)



Similarity 5: Challenges
As said before, there are also mystery caches. These involve some homework: before you are able to find the cache you have to solve puzzles to calculate the coordinates. These also come in different levels of difficulty. Sometimes you can search the internet for answers; sometimes you have to try to think differently. This can be challenging: you have to focus and defocus to find the answers. Sometimes you are not able to find an approach to start with. Either you leave this cache and continue with another, or you ask for help. In testing we also face challenges which we are not able to solve immediately. It is from these mysteries that you learn the most.

Example of a mystery to solve


Example of another mystery to solve





Similarity 6: Pair caching
Recently I cached with a friend of mine. We scheduled a route to pick up as many caches as possible within a certain time frame. The preparation consisted of a list of caches we wanted to find and some spare caches. We created a tour plan and drove away. With two navigation devices, two GPS devices and an internet connection we started the day. While caching we noticed that we could be more productive if we split up tasks. Instead of each logging caches individually, we came up with an approach where I drove the car while he entered the new coordinates and did some pre-work. At the location we both searched for the cache. When it took more than 10 minutes we used both GPS devices. When a cache was found, my friend logged it in our administration and I logged it in the cache log. While I was hiding the cache again he entered the next coordinates and we drove off. We managed to adapt our way of working during our mission.
Like in testing we can benefit from pairing up.

Example: preparing a short tour with several cache types which fit the time frame




Similarity 7: Collecting information and Registration
There are several things to register and collect. I started by printing out all the caches I hunted for. On those printouts I wrote down the date, time and number of the cache found, just for administrative purposes. When needed I also added answers to questions in the cache or the newly calculated coordinates. This is fine when you are dealing with small numbers. By now I have learned that storing that information takes space, it takes time, and sometimes it is not valuable at all; for instance for the caches which are just for picking up. Why store that information offline while it is also available online? It is available at the spot where you found out about the cache: geocaching.com.

Example of using other techniques




As in testing, I learned that registration should have some value. Sometimes information is needed for an undefined period; sometimes you can delete it once it has been used. For example: when a cache is found and no one needs my information, why keep collecting it?
At least I register the founds and the not-founds on the official geocaching site.
There is some value in collecting information when you have solved a puzzle and the way you solved it can be useful for other mysteries in the future.
In testing we do the same: we collect information and we register information. And we also do this too much. A lesson I learned again is to collect just the information which contributes to the value of the object/system/person.

Similarity 8: Find and learn about new spots in the environment
One thing I like about caching is learning about new locations; I was often surprised by the beautiful nature just around the corner. I get a broader vision of the environment I live in. I also identify suspicious bricks and the like in locations where they don't naturally belong. Sometimes you see them everywhere. For me there might now be caches behind them when I am aware of a possible location in that area. There is the thin border between the known knowns and the unknown unknowns. You can explore and take the effort to look below the suspicious brick, or you can save the energy.
In testing it is the same: while testing you learn about new spots in the system. You might see suspicious actions in the system. Sometimes you focus on them and sometimes you leave them as they are.
Example of a cache hide-out in a tree


Similarity 9: Valuing your findings
As mentioned before, caches can contain valuables like travel bugs or geocoins. You can select caches based on the probable chance of them containing those items, as mentioned on the website. You know an item is there if you spot it in the cache. Sometimes another person has already found it and taken it with them.
When you find an item you can decide to take it with you and register it as taken, or to leave it and register it as discovered. When you decide to take it, be aware that those items might have their own goal; sometimes that goal is attached to the item, sometimes you have to read it on the website. Be careful about taking an item when you haven't checked its goal: you might disturb the purpose of the item and disrespect the owner.

I found this in testing too. Sometimes you check for the existence of value in a system based on some information you have. Sometimes you spot items which you were not aware of. You have to be careful whether you call it a bug or an issue. Sometimes you were looking for it, and sometimes it is not reproducible.

Similarity 10: Addictive
Geocaching is addictive. I enjoy solving puzzles, walking in other environments, spotting those caches and learning other techniques.
This is what I also gain from testing. I like to test, learn about applications, learn from other people. I want to keep testing.

Conclusion
There are similarities between testing and geocaching: in both you tour around an environment where you gain knowledge if you are open to it. You get pleasure out of it if you enjoy it. You have to approach systems/environments and people with respect.
There are always other approaches you can learn, even after asking for help, which will guide you in the future.
One of the valuable lessons here is that you have to spend time and energy to make it your own. (Often registration is free and there are no certification programs.)

Wednesday, July 28, 2010

Response: on How many test cases by James Christie

Somehow the question of how many test cases is so important to "important" people, as if they are paid by the numbers instead of valued for the value they deliver. Somehow a false trust is derived from figures. People rely on numbers and assume that the number of test cases represents good quality. It seems such an obvious way of doing things.

The posting by James Christie triggered me to answer via my weblog.

I value the blog posting "But how many test cases?" written by James Christie for its content in relation to his defined context. This context is lost when you translate it to numbers. Below you will find several attempts to make a good posting valued wrongly.

Example 1:
Imagine that your blog posting is rated by the number of letters. In your posting you use about 5534 characters. Telling the same story using Twitter, you would need over 40 tweets. Does this show value? It seems that one blog post provides more value than one tweet, although the tweet which pointed me to that blog was also very valuable. So is 1 more than 40?
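
Just for fun, a minimal sketch of that arithmetic (assuming the classic 140-character tweet limit of 2010 and a naive split into chunks):

    import math

    characters = 5534      # approximate character count of the posting
    tweet_limit = 140      # the classic Twitter limit in 2010

    # Naively splitting the text into 140-character chunks:
    tweets_needed = math.ceil(characters / tweet_limit)
    print(tweets_needed)   # 40, before any "1/40"-style numbering overhead pushes it over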

Example 2:
Or what about the coverage of letters? You used all the letters of the alphabet, which means your coverage is 100%. Does this provide any information about the quality of your posting? What about assigning numbers to it?



Impressive usage of the letter “e”: based on the numbers it is by far the most used letter. Does this provide information? I don’t think so. Perhaps the letter “e” should be used more often, perhaps in relation to other letters. Even this way of thinking is wrong. It doesn’t tell anything about the context.
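
For illustration, a minimal sketch of how such meaningless metrics could be computed (the file name posting.txt is a hypothetical stand-in for the posting's text):

    from collections import Counter
    from string import ascii_lowercase

    def letter_metrics(text):
        # Count only alphabetic characters, case-insensitively
        counts = Counter(c for c in text.lower() if c in ascii_lowercase)
        coverage = len(counts) / len(ascii_lowercase) * 100  # % of alphabet used at least once
        letter, count = counts.most_common(1)[0]             # in English prose usually 'e'
        return coverage, letter, count

    coverage, letter, count = letter_metrics(open("posting.txt").read())
    print(f"coverage: {coverage:.0f}%, most used letter: {letter} ({count} times)")

Precise numbers, produced in seconds, and still telling you nothing about whether the posting is any good.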

Example 3:
What about visualizing the numbers? Below you see a snapshot if you look only at the numbers mentioned in his blog.

I also left some noise in it. The (con)text is now removed.

What is its value now? What information can be obtained? Perhaps the 100,000 mentioned in the text is impressive.
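
Stripping a text down to its numbers is equally mechanical (again using the hypothetical posting.txt):

    import re

    text = open("posting.txt").read()
    numbers_only = re.findall(r"\d[\d.,]*", text)
    print(numbers_only)  # the posting reduced to context-free numbers, noise included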


Conclusion
Is it correct to drive our testing by numbers? Is it useful to explain coverage in terms of test cases executed? Is the weblog of James, based on my examples, valid and good? Are the numbers clear and proven? Are you counting the time?


I compliment James on his blog. A lot of time is spent "proving" something which is explained incorrectly. The wrong questions are being asked.

Tuesday, July 27, 2010

What to learn from puzzles

Here is a brief posting to express my thoughts on what skills can be learned from playing with puzzles. Michel Kraaij triggered me to share my thoughts about this on Twitter, where he is involved in a discussion with James Bach. Somehow there is a 140-character restriction, so for him also this posting.

As I'm not part of their discussion I will not summarize their ideas. The main idea in triggering Michel was to tell him my idea: "ppl become better in solving the puzzle, only they got trained in other skills which helps them solve." http://twitter.com/JeroenRo/status/19645539452

With this statement I intended to express my thought that there are other things people learn from playing with puzzles, and even from repeating them. It is not only the notion of remembering the position of certain pieces.

In my opinion the following things can be learned:
- position of pieces
- shape of pieces
- how pieces of, for instance, jigsaws fit

You can also extend the perception of puzzles. Initially I would think only in terms of jigsaws; this might be a disadvantage of my native language (in the Netherlands I was trained to call them jigsaw puzzles and forgot that not all puzzles are jigsaws).

So what else can be learned from playing with puzzles? To understand this you can look at the outcome: "A puzzle is solved or is not solved."

Not solving it is not a failure; even in the process of playing with the puzzle you might have learned things.
What can be learned?
- new approaches to solve a puzzle
- new languages
- other visions
- different approaches
- looking for patterns in jigsaws
- identifying differences between the puzzle being solved and puzzles previously solved
- awareness that you have gained new information
- the ability to use that new information in different approaches
- a new attitude to approaching things under time pressure, with too little information, etc.

Perhaps the main result of playing with puzzles is the creation of awareness of a person's capability/ability to identify differences in environments and to use different ways of approaching "complex" situations with the available knowledge. The person might teach himself whether his information/skills are sufficient to perform the task, or whether he needs more training/guidance/information. If a person learns when to ask for help, a valuable lesson is learned.

The main idea is that there is more to learn from puzzles than repetition.

@Michel, perhaps we should meet each other again to evolve our thinking about this.

Tuesday, July 20, 2010

Failure is also human behaviour

Did you ever wonder whether a failure could have been avoided if skilled people had participated in your project? Did someone ever doubt whether the developer was able to deliver good code and the tester to provide well-executed test scripts? Was the team you were working in highly motivated, with bugs delivered in real time? Were trust and belief in the application and the people missing, although everyone did a good job, was motivated, made and kept their promises and followed the process, and still issues were found in a system which should be reliable?

Perhaps you have not been in a situation like that.

How often did you sit down in the lunchroom of your company? Do you sit in your own seat? Was the seat in a particular corner of the room? Or did you sit on all the chairs over the years?

I have been in such a place for over two years. There are perhaps over 100 seats, and during the years I sat on almost every one of them. Here is the trick: I’m not able to tell for sure, as I have my favourite spots. It doesn’t matter. The issue here is that I perhaps missed some chairs, or perhaps not, as a result of my behaviour. It is human behaviour to find the safest spots. For some people this is near a window or near an escape door; some people like to sit with their back against the wall. Some people are not aware of the options and others don’t care.

I’m sure there are other behaviours around this. In my opinion it is important to acknowledge that human behaviour influences the outcome. Often the reason behind that behaviour is not noticed or measured. I think it is not mandatory to measure everything. But it is important for a tester to be aware of differences in human behaviour and to learn to defocus, to see better which relations are created between humans and their environment.

Failures are not only technical; therefore the tester needs more skills.

Tuesday, June 15, 2010

EWT22: Test Fishing for bugs and mismanagement

Introduction
This time the famous tester Michael Bolton facilitated the European Weekend Testing session. He provided a link to an audio file: http://www.cbc.ca/ideas/features/science/#episode13. It contains an interview, an episode in a radio series he has been talking about for a long time, called How To Think About Science.

MB stated that it had nothing to do with testing. Or did it?

Compared with other Weekend Testing sessions, this session did not involve testing an application. It was more about listening to another science and taking advantage of that information. I believe it was an interesting session.

Participants
Anna Baik (Facilitator)
Ajay Balamurugadas
Michael Bolton (Guest Facilitator)
Ken De Souza
Markus Gärtner (Facilitator)
Jaswinder Kaur Nagi
Ian McDonald
Thomas Ponnet
Dr. Meeta Prakash
Richard Robinson
Jeroen Rosink
Artyom Silivonchik


Mission
The mission as provided was:
"Listen to the recording. Take notes on what you're hearing, with the goal of capturing lessons from the interview that might usefully influence how we test, and how we think about testing. Stories or patterns from your own testing experience that you can relate to something in the interview are especially valuable."

Context:
"Here's the background to the interview, from the link above: On July 3, 1992, the Canadian Fisheries Minister John Crosbie announced a moratorium on the fishing of northern cod. It was the largest single day lay-off in Canadian history: 30,000 people unemployed at a stroke. The ban was expected to last for two years, after which, it was hoped, the fishery could resume. But the cod have never recovered, and more than 15 years later the moratorium remains in effect. How could a fishery that had been for years under apparently careful scientific management just collapse? David Cayley talks to environmental philosopher Dean Bavington about the role of science in the rise and fall of the cod fishery."

Approach
I believe you can approach listening in different ways. The challenge I noticed here is which type of stakeholder you would identify yourself with.

You might be the tester, seeing the story as a metaphor and trying to identify the circumstances you assume to be useful and valuable for testing.

You can listen to it from a process point of view and call yourself the "manager". Or perhaps you are the historian, trying to find facts and similarities, perhaps useful as lessons to learn. Perhaps you are the student, trying to see with an open mind what can be learned.

You might even come up with other exciting roles. The question remains: how would you approach such a challenge? Which tools do you need? What background is important? How would you focus on the assignment?

For this mission I came up with several options.
1. Would I use the audio file which could be downloaded and was locked by a password?
2. Would I use the audio file provided online?
3. Would I actually care about this challenge?

The pre-process
I had already downloaded the file and had also looked at the website with the audio file. The website provided some information/background which confirmed the content of the interview as Michael had described it. For me this can be important information: checking whether information is what it is supposed to be before actually looking/listening. Perhaps you can place such background investigation under the flag of checking the history.

When opening both files I noticed that the quality online was better than that of the downloaded file. I made the decision to listen online, accepting possible loss of connection etc. I also noticed that the file length was over 55 minutes. This added a condition to my approach, as normally one hour is used for testing and one hour for the round-up. I would not be able to spend too much time listening to particular parts over and again. It also set the context that there was a chance of losing time, which had to be checked against acceptability, as in EWT I was also part of a team. The condition for the team was set to listen for about 1:10 hours.

Before I started I also opened a Word document for additional notes. I allowed myself to make notes starting with the approximate time in the audio file at which I heard something interesting. The note itself could be a remark, a quote or something I had in mind.

While listening
When I started listening I was again forced to make a decision: was the person's tone of voice understandable and acceptable for me to listen to? Was there background noise which could distract me? How would the message be brought and, even more importantly, which message would it be?

After a few seconds I felt comfortable, making notes now and then. And after a few minutes I decided to listen for parallels between fish management and test management. I made some notes where I thought they would be useful when explaining or discussing.

I could have followed the path of using my testing skills to see what I would have tested in the fishing process. I decided to leave this as it was. The message itself, related to the process with respect to history and how people acted, was more interesting to me. Somehow it seems we avoid looking at other disciplines and using the lessons they have already learned. Why are we so eager to create our own lessons learned?

Every now and then I stopped and replayed the audio to capture the context correctly. There were words I didn't understand, and I decided whether it was necessary to understand the meaning of the word or just the context. If not, I continued listening. Some help was also available, as certain words were explained by Michael. After a few of his explanations I got the idea that he was posting those words while listening as well. Somehow it was useful for checking whether I was still on track, looking at the time those words were mentioned in the audio. Of course it was not a reliable check, but at least it gave me some information I could act on.

I started listening more carefully with fewer interruptions, focusing more on the process and how I would translate it to testing.

During the session it was mentioned that after about 52 minutes the interesting part had been told. When the audio file hit 52 minutes I listened a bit further and found out that beyond that 52-minute border some interesting information was still provided. At least I valued it as useful.

The wrap-up was also interesting: some were able to follow the discussion, and some also drew conclusions connecting fishing and testing, whether related to current certification for testing or fishing, or to fishing for bugs.
Others were able to identify test objects with respect to where the process failed, or under which conditions/requirements the fish had to be identified.

My wrap up
I see some comparisons to testing. I didn't take this audio to investigate what was said; I used it to see how I could translate the context to testing. There is a lot to say about this. The major conclusion I see here is that the fishermen learned that people are the target of management (49:xx) instead of the fish. The government provides quotas (or, for testers, certifications) and now owns the fish.

There is misunderstanding in measurement: scientific figures, provided also by commercial parties, tell us that there are fish, yet the fishers cannot find them. Perhaps the analogy here is that there are commercial grounds for believing certification programs over craftsmanship.
They mentioned several mistakes, like dealing with certain assumptions; in the past the answer provided was to stop for a few years and everything would heal. Don't complain now. Perhaps this is also true of testing/certification: the numbers suggest that it is necessary. But is it also good?

I think, as in the audio, we are generalizing the context, believing one solution fits all, and avoiding looking at other relationships.
I believe there would be more to learn if more time were available.

It is amazing how we were able to listen carefully to 55 minutes of audio, each finding our own understanding of it. Imagine you were listening to stakeholders: would you believe you had already missed interesting information? What would you remember from your stakeholders :) or even: what would you miss and still be able to deliver the appropriate value?! :)

Lessons Learned
I believe that you can learn lessons from every process. Lessons learned are something different than learning specific skills. I believe you have to see things in a broader perspective. Related to this session I can come up with the following lessons. Some of them are related to testing, some are related to history, some are related to processes and management. At least they are mine. Perhaps you can learn from them too.

Here are some lessons:
1. Be aware of the audience and also of the role you take in the process
2. You can also learn from other sciences; they can be a useful source
3. Sometimes you have to focus on the objects of testing; sometimes you can listen to/watch the process
4. If it is already hard to listen carefully to a person for about an hour, and you know you miss something due to time pressure, imagine how great the chance of missing issues is when you review documents
5. The perspective on the approach can and might change. You need to be flexible, as long as you keep track of when and why you changed your mind
6. An audio file is also an expression of verbal communication, although the tone of voice is non-verbal. This file provided more information than a written transcript would have; it is important to talk with the stakeholders, not only read their requirements (although SMART) :)
7. Don't take anything for granted: not the scientists, the government, the figures or the so-called facts
8. Certificates (or fishing permits provided by governments) should not be the goal; maintaining the business and supporting it by using the means usefully and wisely, with respect for continuity, is more important than short-term profits
9. As testers we have to continue working on our craftsmanship; certification and skills are not a guarantee of using them properly
10. It was a fun weekend testing session with another mindset

WeekendTesting
For those who also want to be challenged by challenging yourselves, you might take part in one of the weekend testing sessions and teach yourselves!

Don't hesitate to participate!

For more information see:
Website: http://weekendtesting.com/

Or follow them on twitter:
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters

Monday, June 7, 2010

EWT20: Your verdict as bugadvocate

Introduction
Usually I post a blog in the same week I attend a weekend testing session. Unfortunately I was not able to post last week, so this is about the previous week's session. Nevertheless, an interesting session it was.

Participants
Anna Baik (Facilitator)
Ajay Balamurugadas
Tony Bruce
Markus Gärtner (Facilitator)
Jaswinder Kaur Nagi
Phil Kirkham
Mona Mariyappa
Ian McDonald
Thomas Ponnet
Jeroen Rosink (that's me; Twitter: http://twitter.com/JeroenRo)


Product and Mission
Product: OpenOffice Impress
Mission:
You're assigned to the triage meeting of OpenOffice.org Impress. Go through the bug list and make your position clear. You as a team are expected to triage at least half of the bugs today, as we want to ship the product next week.

Approach this time
Personally, I prepared for this session a bit by reading information about bug advocacy; I knew that there is information on the web about this.

This is what I found with a quick search and valued:
Slides Cem Kaner
http://www.kaner.com/pdfs/BugAdvocacy.pdf
http://www.testingeducation.org/BBST/slides/BugAdvocacy2008.pdf
Great video!
http://video.google.com/videoplay?docid=6889335684288708018#

Article from Mike Kelly
http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1319524,00.html

I'm certain there is more valuable information on the web. Drop me a line if you have some good additions related to this topic.

Weekend session EWT20
This time we were asked to take on a role: project manager, programmer, tester, conference presenter (user), CEO (user), student (user). Somehow we managed to stick to it and also lost it, because we made our own assumptions about the meaning of each role. Perhaps something to learn from?

During the session we decided to work as a team. Initially we set out to divide the judging of the tickets based on priority. During the process, based on our performance, the project manager divided the list into batches for us :)

Although we tried to define the meaning of a good bug and how to judge it (for example based on the mnemonic HICCUPPS), we managed to jump straight into the issues.
History: The present version of the system is consistent with past versions of itself.
Image: The system is consistent with an image that the organization wants to project.
Comparable Products: The system is consistent with comparable systems.
Claims: The system is consistent with what important people say it’s supposed to be.
Users’ Expectations: The system is consistent with what users want.
Product: Each element of the system is consistent with comparable elements in the same system.
Purpose: The system is consistent with its purposes, both explicit and implicit.
Statutes: The system is consistent with applicable laws.
That’s the HICCUPPS part. What’s with the (F)? “F” stands for “Familiar problems”:
Familiarity: The system is not consistent with the pattern of any familiar problem.

Although we might each be aware of our individual understanding of good bugs and bug processes, I keep believing that it is important for a team to come to a mutual understanding about the process and about the definition of when a bug is well written and when a bug is a bug.

As usual in the Weekend Testing sessions, the discussion was very useful, also this time. I mentioned the idea of writing some kind of "bug advocacy manifesto"; perhaps this can be part of a session in the near future.

Lessons learned
The following lessons are at least what I learned from the session:

1. When the process changes, monitor whether everyone understands and joins the change
2. When tickets are deferred to a later moment, also define a process for how to continue with them
3. During this hour investigation was done. That should be logged somewhere, preferably in the ticket itself
4. Judging bugs is done in several ways. Sometimes I think the quality of the bug is missed due to focusing on the impact of the issue. Understanding is just one part of the quality of the bug
5. Judging bugs must be done within the proper perspective of version, environment, reproducibility, value, ...
6. Before jumping into a list of issues, agree on how to write down the outcome of the bug advocacy

Some conclusions of this session:
We came up with a list of issues which should be solved to enable a "safer" go-live. Looking back at the transcript I noticed that we accepted each other's judgement. We didn't spend time explaining against which conditions the bugs were judged. Perhaps that should be done next time.

As it seems, we were skilled enough to make some judgements about the bugs which were found. There is still a lot to learn about judging bugs. Be careful about calling yourself a bug advocate!

WeekendTesting
For those who also want to be challenged by challenging yourselves, you might take part in one of the weekend testing sessions and teach yourselves! Don't hesitate to participate!

For more information see:
Website: http://weekendtesting.com/
Or follow them on twitter
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters

Thursday, May 20, 2010

Thinking about testing and learning

A passionate tester
I’m not a scientist, I’m not a historian, I’m not a religious follower and I’m not a native English speaker (bear with me and educate me if I’m wrong). What am I? I am a passionate tester, and I see in other disciplines lessons we can learn for testing.

So I come to this posting. Yesterday I watched a documentary about the beginning of life. This documentary made me think about the discussion which has recently been going on in my world of software testing.

Some references contributing to the discussion:
Stuart Reid: Keynote 3: When Passion Obscures The Facts: The Case for Evidence-Based Testing
Cem Kaner: A new brand of snake oil for software testing
James Bach: Stuart Reid’s Bizarre Plea
Jon Bach: The Truth about Testing?
Nathalie Roosenboom de Vries-van Delft: A lot on my mind…

The documentary
While watching the documentary on the Discovery Channel I was captured by the example of how John Needham (10 September 1713 – 30 December 1781, an English biologist and Roman Catholic priest) performed an experiment to "prove" that life can be created in a "closed" environment. Based on his experiment he believed that a concept of "vital atoms" exists. This concept is about atoms escaping into the soil and being taken up again by plants. Picture this experiment as adding water to a sealed bottle; after a while life was growing in the bottle. As there was nothing in it and it was sealed, there must be something smaller, created from parts of atoms.

If I'm correct, he had quite a few followers and the concept of "vital atoms" became a hype. People seemed to believe what he told them, based on his proof.

Fortunately Louis Pasteur (December 27, 1822 – September 28, 1895, a French chemist and microbiologist born in Dole) proved with his experiment that a mistake had been made. The apparently sealed bottle was not sealed completely from the outer world. Bacteria were able to enter the "isolated room".

The debate about the origin of life occurred later on, triggered by Charles Darwin (12 February 1809 – 19 April 1882, an English naturalist), who wrote On the Origin of Species. With this document a new era started. He didn't write about how life began; he brought biology and chemistry together in explaining how life evolves.
The debate began between a god who created life and life which evolved.

Among the followers of the idea that life evolves, several experiments, hypotheses and theories were developed to prove that under various circumstances life can evolve and be created. For example, the combination of oxygen, carbon and other materials, combined with some source of energy, can result in "life forms". The Oparin-Haldane hypothesis, by Aleksandr Oparin (in 1924) and John Haldane (in 1929, before Oparin's first book was translated into English), defined such a process. In short, I would describe this process in terms of chemical components which were individually present in the sea and were transformed by ultraviolet light or lightning into organic components.
Haldane even called it the 'prebiotic soup'.

Stanley Miller came up with an experiment, the Miller–Urey experiment (conducted in 1952, published in 1953, re-conducted in 1982), to prove that in an isolated world life can be created. This experiment, together with its outcome, became "the standard". They believed that it would be that easy to create life.

This concept also supported the idea that life could be created elsewhere than on earth, also called panspermia. If I remember last night's documentary well, structures have been found in pieces of meteorites older than the earth which resemble the structure of "simple" cells. Combine this with the theory that organic components can also be created in isolated spaces, and there is a chance that life arose from outer space.

Jeffrey Bada also executed the Miller-Urey experiments (see: Primordial Soup's On: Scientists Repeat Evolution's Most Famous Experiment by Douglas Fox) and continued with them, with the difference that he looked at the environment of the earth containing amounts of iron and carbonate minerals. He added these to the experiment and came to a different outcome.

Other scientists followed their road, bringing up hypotheses and research into the options for creating life under extreme conditions, like near volcanoes, in caves, under water without light, etc.

In the documentary more examples were provided which in my opinion can also be translated to testing.

What to do with testing?
Perhaps you wonder what this has to do with testing. Perhaps you have drawn your own conclusion or picture. What I see is a process where evolution is involved, not only the evolution of the human species; you can also see an evolution of human thinking. Based on the known context, John Needham came to his approach and method. He was able to sell it to the crowd and gained followers. Almost a hundred years later a new person, Louis Pasteur, came with his conclusion to prove otherwise. He proved that although the conclusion seemed valid, the environment was not what it was expected to be. Based on his own knowledge John was right; only due to technique and new understanding was humankind able to bring up other methods.

In testing I also see people evolve and continue to challenge "experiments" and "methods", and also people who accept a certain outcome and become followers.

Louis Pasteur did not prove how life was created; he just showed what went wrong in that experiment. In the same era Charles Darwin published his view on evolution. This triggered other scientists from other disciplines to continue the search for how components evolve.
This evolution triggered me to think about how mere "zeros and ones" translate into bugs.

I would say that those zeros and ones alone won't do anything. It is the context that determines how they become visible as functionality, and the environment that determines how they can evolve, whether into new functionality or into flaws of evolution.

As testers we have to be open to other disciplines and the understanding from those disciplines in order to continue. We can learn from them, and we should spend time investigating those disciplines instead of spending time claiming our approach is the only truth. Learning needs an open debate, based on mutual understanding and not solely on the perception of owning the single truth.

Just as Stanley Miller re-conducted his experiment after years, we have to stay alert and keep learning and questioning our approach. It is mandatory to keep an open mindset. Like Jeffrey Bada did, we should also perform our own experiments. They might support or adapt the visions of others, or even your own vision.

Testers should be able to discuss the possibility of their own failure and learn from others who perform similar experiments. In approaches like the schools of software testing there must be space to discuss, challenge, disagree and agree with each other to make evolution in software testing possible.

Do you believe?
What I learned from the documentary is that people who claim to have found the evidence for their hypothesis, and claim that others are wrong, gain followers. To me, it turns out that in history certain failures are made again and again. In the examples above, initially they seemed to be right, although time proved the opposite.

I don't think it should matter who is wrong or who is right; you have to be able to make up your own mind and not follow people just because they claim to have the proof. You might use their thoughts because they help you: they help you in your work, they help you in your own process of learning.

When accepting this, you have to be aware that what you believe in now might be wrong or different later on.

Was John Needham wrong in his assumption? I think not: based on his knowledge and the lack of knowledge of others who lived in his era, he seems to have been justifiably right. Others learned from it. It would have been wrong if he had deliberately misused situations, not presented facts, or ignored others' visions to obtain his proof.

If we follow some school and deliberately ignore others, how would this support the evolution of our profession? How different are we then from Pope Damasus I, who assembled the first books of the Bible at the Council of Rome in AD 382? Imagine how different the world would look if the Bible contained other chapters.

I think we should mutually accept each other's thoughts, learn from them and adapt. The key word here is mutual. Are you making the step with an open mind and supporting mutual learning, or are you staying behind?

Additional reading:
While searching for some background information I stumbled upon this book. I believe it is worth reading:
Thinking about Life: The History and Philosophy of Biology and Other Sciences by Paul S. Agutter and Denys N. Wheatley

Tuesday, May 18, 2010

Testnet Voorjaarsevent 2010

It happened on May 12th, 2010
Last week I attended the TestNet Voorjaarsevent. TestNet is the Dutch association for software testing. Besides several workshops and small events there are two major events: one in the spring (voorjaarsevent) and one in the fall (najaarsevent).

Last week the so-called "Voorjaarsevent 2010" took place. As usual I prepared a bit, selecting the presentations I would like to attend and also those I would not. What I often see is that presentations deal with old concepts in new clothes. This time there were a few which got my attention.

General impression
* Location: very good, as were the main entrance, the route to get there and the overall presentation
* Dinner: good, it was tasty and there was enough
* Drinks: I would like the option of having drinks like coffee or tea while not attending a presentation
* Exhibitions: well arranged off the main entrance; good to see those familiar names again and again
* Presentations: there were a few I wanted to see; what I saw, apart from the first one, was good. Still a lot of old work in new clothes
* Key-notes: I expected more from the first key-note in relation to the main topic: "Secure testing"
* People: open-minded and pleasant

Program (copied from TestNet.org)



Intentions to see:
- Key note from Stuart Reid: "Improving testing - with or without standards"
- a "debate related to ethics and testing" lead by Nathalie Roosenboom de Vries- van Delft & Budimir Hrnjak
- Rudi Niemeijer: "Safety helmet prohibited" (unfortunately this was parallel with the debate and I had to miss this one)
- Jurian van de Laar: "Testers helping developers or vice versa?"
- Menno Loggere & Nora Visser: "Privacy kills quality"
- Anna & Linda Hoff: "The Supertesters"

Besides the presentations, this event is always a joy for meeting "old" and "new" test friends. The setup of this event was a bit different compared with the past. This year they introduced some kind of knowledge tables where exhibiting companies were able to talk with participants about a topic they had chosen. Also new, in my opinion, was the addition of smaller tables which enabled us to sit down with our friends.

The presentations
I arrived just half an hour before the opening, which was followed by the key-note given by Stuart Reid. Arriving in the large hall I noticed it was already quite crowded. During the day I heard that over 450 attendees were joining this conference together with me. Who dares to say that testing is dead? We already have quite a strong testing community in the Netherlands. :)

Together with a fellow tester I know from participation/moderation on testforum.nl, I went to the main hall holding a cup of well-deserved hot coffee, found a spot and sat down. The show started when the chairman opened the conference.

Improving testing - with or without standards
He introduced Stuart Reid. I had never seen a presentation of his before, and he started by introducing himself: his 27 years of experience, tutoring at a university and participating in ISTQB, if I remember well. He then started explaining how our profession as testers is lagging behind in knowledge compared to users and developers. To support this he provided a nice chart with figures from a survey held in 2000. Based on these figures our skills look very bad. Didn't we learn more in the last 10 years? Are these figures still valid? Why are we testers so eager to see figures, even if they come from old surveys, and to rely on them? Or should I accept them and draw my own conclusion that certifications did not add any value to the market, since in all the years certification has existed, we did not add any value to the market? Of course I am wrong again.(?)


Using figures and charts from "old" surveys can be useful. Really, there seem to be people who think using "old" figures can reveal the truth.

I believe when using certain information you have to check how dynamic the environment is.

Somehow the IT environment looks to me very dynamic and fast-growing when it comes to technology, perhaps also a bit in relation to learning skills. If you look at other areas, they are less dynamic. Stuart Reid also provided a nice picture of a model created by Hackman & Oldham related to motivation factors. I think that image had more value for his presentation than all the other figures. Here he made his point that we should not only look at information sources and test techniques; we should also be aware of our soft skills.

Fortunately he was a bit short of time and skipped the slides related to ISTQB etc. In my opinion that preserved the message which is worth more to me: we are behind in our knowledge, or getting behind, and we should adapt now. We have to become professionals with skills. Some testers might believe ISTQB or other certifications are the only true path. For them, those skipped slides might have added something.

I had hoped he would provide some information and guidance so that we actually learned something, instead of scaring us and telling us that having certifications is a must. Perhaps I am wrong.(?)

The debate on the ethics of testing
After the previous session I attended a "new" idea: using a debate to bring testers together. Perhaps people think this is a contradiction. I believe that when people talk to and hear each other, mutual understanding grows. You don't have to agree with the vision of the other, but you have to be aware of it.


During this debate several statements were mentioned and introduced by the hosts. We had to pick a side. A pitfall could be that everyone would agree or disagree. The hosts mentioned that they would divide the audience into two sides.


There were some good statements. Although you might agree with a statement, it can be more fun to come up with valid arguments to challenge it.

examples:

- "A tester should always speak the truth"
- "A tester can be hold responsible for acceptance"
- and more


I believe it is a good habit, when everyone agrees upon something immediately, to question whether it is true. A tester should be capable of coming up with arguments to question any statement.



I heard some very good pro as well as contra arguments. After each discussion the hosts came up with a wrap-up and their own thoughts again.



I see some future for such an approach to discussing topics. You can play this game in different ways and learn a lot from it.

Mystery guest
After the vote on whether testing and ethics can go through one door together, a mystery guest was introduced. They had found a "known" person, Bart Broekman, willing to speak about his vision related to the statements. I think a debate before a "keynote" is a good way to set the mindset. It helps avoid discussion about irrelevant topics. It supports discussion about what a person really has to say.


Winning the TestNet 2010 debate award
Finally, at the end there was a prize to win. I was the lucky one who gained that award. Unfortunately I don't have a picture of me accepting the award, though below you see the evidence. Did I win because I am a good debater? I don't think I'm the best. At least I was standing there with dedication and belief. Perhaps that made me win the award. I'm proud of it. Sometimes an award says more than other valuable things. I know there are some testers who fancy the bottle of wine I won together with this cup. I knew the cup would be valued more when I presented it to my kids.

Super testers or Super Sisters?
The sisters Anna and Linda Hoff from Know It in Sweden gave a tremendous show presenting their vision/act about testing: "The Supertesters - A Slightly True Story". (Also to be seen at EuroStar 2010. Advice: see them!)


During the show I wondered whether they were actors/comedians or testers. Based on how they used the terms and images, they have to be great testers.

In my opinion they were GREAT. It was actually a very good show, in which they acted as a program manager, a tester and a super tester. They performed using several techniques: presentation, discussion, drilling, singing, rehearsing, using pictures.

If you looked and listened carefully you might have heard how they were using the several testing schools, from ISTQB addicts to Bach followers. Their approach of changing the mindset of testers by drilling and insisting that their answer is the only true answer is in my opinion a good example of how we testers are currently forced to think alike.

Some funny moments were added, like hiding bugs, finding them, appreciating them and comparing the moment with "Lord of the Rings", valuing a bug as "my precious".
Also their understanding of testing terms like smoke testing, the V-model (in their vision it is actually a very nice model), load testing and performance testing (I am still wondering whether the sisters used pictures from their own albums).

At the end they showed a mix of lyrics set to known melodies, presenting their message.

I wish more people were able to present their message in a show like this. I believe time spent watching them is time well spent.

At the end

I had reserved some time to see another presentation after dinner, only as usual I didn't make it. I had some interesting discussions with fellow testers afterwards. This is one part I also like about this event: meeting other people. What I understood from them was that some presentations could have been better. Personally I think this was a great event: I learned something personally, had some laughs, got annoyed at the first speaker and went home with a good feeling.

Sunday, May 16, 2010

EWT18: Zoom in or FOCUS and DEFOCUS?

Introduction
Another weekend and another session of European Weekend Testing - EWT18: "Zoom me in"
This time Markus Gärtner did a great job facilitating. Was it different from other times? I believe so. Although the number of participants was low, the team was good. This time a good mission was defined, relating to reporting your test findings to your manager and testing whether a tool is suitable for use during that presentation.

This time the application under test was ZoomIt v4.1. The main objective was to see whether the application was suitable for use during a presentation you have to give for your boss.

Participants were:
Jeroen Rosink
Ashik Elahi
Ajay Balamurugadas
Markus Gärtner

During the roundup:
Pradeep Soundararajan
Michael Bolton

My Approach
In contrast to other sessions I changed my approach a bit. Before I downloaded the tool I first read the website for information and noticed that the tool was one with a small number of functions. After downloading I also checked the developer's website for additional information. Looking at the application itself, it runs without installing.

Basically these were the steps:
- Read about the application
- Check for functionalities while running the application
- Ask to the facilitator questions about the context to test this application
- Define the conditions to test the application
- Defocus and check if there are other ways

Some questions to start with
Do you want an impression of the usability of ZoomIt?
Does the functionality fit, and can it be used during the presentation?
What about the information to use in the presentation?
When will it be suitable and correct to use, and when will the boss be pleased with the presentation?

Confirmation of the approach
Before I actually started testing I tried to confirm the approach. The mission was to see whether the tool ZoomIt is suitable for use during a presentation. It should function under the defined conditions of the presentation.
It should be able to support the following objects of the presentation:
* high-level understanding
* graphs
* details where necessary
* interactive questions

Some Steps to mention
To check whether the application is suitable for use in a presentation, I performed the following steps:
- learn about the tool (documentation and the tool itself)
- check the functionality
- use the tool
a. as given on the screen which is open
b. on a presentation, while not actively shown
c. on an active presentation
d. on a movie
- prepare some kind of matrix with combinations: hot-keys and environment (application vs. video/chart etc.); see the sketch after this list
- use the tool with respect to functionality and usage within a presentation
- on video and chart
- using different options
- check the behaviour of the tool when changing the standard settings
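
As a rough illustration, such a matrix is just the cross product of hot-keys and environments. A minimal sketch (the lists below are my recollection of the session, so treat them as assumptions):

    from itertools import product

    # Hypothetical test dimensions for the matrix
    hotkeys = ["zoom", "live zoom", "draw", "type", "break timer"]
    environments = ["open screen", "inactive presentation", "active presentation", "movie"]

    # One test idea per hot-key/environment combination
    for hotkey, environment in product(hotkeys, environments):
        print(f"try '{hotkey}' on: {environment}")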

Some findings
- CTRL+break with background fade: OK
- it is possible to enter a negative time in the box (copy-paste a negative value: -1)
- boundary values of the break timer are (see the sketch after this list):
o entering: 1 to 99
o pasting: -9 to 99 (pasting 100 results in 10)
- Hot-keys: it responds to the key combination you enter; if you enter CTRL+SHIFT and hit Enter, these values are preserved, while entering them manually is not possible
- Used different font types, also Wingdings; this seems to work, but typing makes the cursor go off the screen, also when using ENTER
- The timer also uses the font as set in the type dialog
- Font size can only be altered to a value between 16 and 16
- When using the tool while a video is running, the video is not shown at all in live zoom
- Mouse behaviour in live zoom is reversed
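
The break-timer boundaries lend themselves to a small table. A sketch of how these observations could be recorded (pure bookkeeping of session notes; ZoomIt has no scripting interface, so nothing here is automated):

    # (action, input, observed behaviour) rows from the session
    observations = [
        ("enter", "1",   "accepted"),    # lower bound when typing
        ("enter", "99",  "accepted"),    # upper bound when typing
        ("paste", "-9",  "accepted"),    # pasting bypasses the input filter
        ("paste", "-1",  "accepted"),
        ("paste", "100", "becomes 10"),  # truncated rather than rejected
    ]

    for action, value, observed in observations:
        print(f"{action:5} {value:>4} -> {observed}")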

Lessons learned
Although some are not new, it is refreshing and valuable to mention them:
- Sometimes it is not clear what a tool must do, only under which conditions, like using it on charts etc.
- With a tool with a small number of functions it is easy to prepare a matrix to test
- Don't make assumptions about environmental conditions; I noticed that all functions worked on my PC (Vista), but this might not be the environment to present on
- Frequently participating in weekend testing trains your mind
- Defocusing brings some peace to the thinking process
- Asking questions first, to make the scope clear for yourself, provides great guidance during a mission
- I should train myself more on the questions; perhaps a “golden” heuristic might help

While discussing
During the discussion some other nice lessons were raised. It is not always obvious that the environment you test on is the same one you will have to present on. Another point was the availability of a beamer and other digital means. This resulted in the suggestion to use a flip-over, whiteboard etc. as an oracle to check whether the application would support the presentation.

Ajay also introduced a term he learned from Pradeep while participating in a Bangalore testers meeting: "gaining the context" instead of "setting the context".
This brought me some ideas and thoughts as well. Somehow I see "gaining" as something you have to earn. In my opinion a context for testing must be gained. The information provided is not always clear; even when you ask the right questions, it depends on the person's attitude and willingness to share information with you.

While we were discussing the differences, Pradeep Soundararajan entered the discussion to support us on this. He challenged us with an alarm clock example. A bit later Michael Bolton also entered the discussion to provide us some guidance.

My first impression is that general approaches give little consideration to human aspects. Michael pointed us towards the mnemonic CIDTESTD from the Heuristic Test Strategy Model. (20100516: changed heuristic into mnemonic)

At the end there was the question why it would be so important to discuss the difference between gaining, exploring, etc.; it could all be the same; it might be just a word game.
I believe there is a difference. As with earning respect, you have to gain knowledge and information. This can be done by using your skills as a person and adapting to the situation. I believe information should not be taken for granted. Or, as Markus mentioned: "don't look where everyone's already pointing". This can also be read as: "don't ask for information others already asked for".

Conclusion
In the end it was a challenging, well-moderated and fun weekend session.
Well done to all.

WeekendTesting
For those who want to challenge themselves: take part in one of the weekend testing sessions and teach yourself! Don't hesitate to participate!

For more information see:
Website: http://weekendtesting.com/
Or follow them on twitter
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters

Monday, May 10, 2010

Rorschach, the power of visualization and software testing?

Introduction
I blogged about my experience in weekend testing where I used Astra Site Manager to create a map: WTANZ02: Same Language, different sites and places. In that post Shrini Kulkarni challenged me to expand on how to use this as a test strategy.

When you look at the images posted there, you might notice that the images look a bit like spots/stains.

Rorschach test
Thinking about spots/stains and deriving information from them immediately reminds me of the Rorschach test.


From Wikipedia: "The Rorschach test (also known as the Rorschach inkblot test or simply the Inkblot test) is a psychological test in which subjects' perceptions of inkblots are recorded and then analyzed using psychological interpretation, complex scientifically derived algorithms, or both."


Below you see an example of a Rorschach image. Are you able to read this picture? Are you able to assign functionality to areas? Do you see bugs?

Image saved from Wikipedia: http://en.wikipedia.org/wiki/File:Rorschach_blot_01.jpg

Based primarily on the perception of these spots, the user is asked what he experiences, how, and why. What does the spot tell you?

Testing spots

Below you see the two images I obtained from "testing" the two websites, as stated in the challenge from WTANZ02: Same Language, different sites and places.


Just tell me: what do you see?

Image 1


Image 2

Depending on how you look at the images, you might identify some shapes. Perhaps you only see dots or animals. Perhaps you see bugs.


The strategy
Defining a strategy is a challenge in itself. Writing about it and sharing your idea is even more of a challenge. Writing about it and trying to come up with a heuristic is more challenging still, as this is quite new to me. So bear with me, support me, and let me teach you, as I can learn from you.


First steps
I suggest first defining the approach based on patterns. Ask what the image itself can tell you and what information you need to define the approach.


Imaging: Create a map of the website/functionality to define a certain landscape (one way to generate such a map is sketched after this list)
Defocus: Don't approach the image as a system; approach it as a painting, approach it differently. What else do you see? Use your imagination.
Interpret: Are you able to tell a story about what you see (colours, lines, drawings, etc.) and argue for it?
Density: Is there a structure representing the first impression you had?
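
For the Imaging step, a map like the ones above could for example be produced with a small crawler. The sketch below uses only the Python standard library; the start URL and page limit are placeholders I made up. It collects pages as nodes and links as edges, which any graph tool can then draw as a "spot":

# Minimal sketch of building a site map: pages as nodes, links as edges.
# Start URL and page limit are placeholders, not from the original post.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href of every anchor tag on the page.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start, max_pages=20):
    """Return a set of (page, link) edges, staying on the same host."""
    host = urlparse(start).netloc
    edges, seen, queue = set(), set(), [start]
    while queue and len(seen) < max_pages:
        page = queue.pop(0)
        if page in seen:
            continue
        seen.add(page)
        try:
            html = urlopen(page).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable page: skip it
        collector = LinkCollector()
        collector.feed(html)
        for link in collector.links:
            target = urljoin(page, link)
            if urlparse(target).netloc == host:
                edges.add((page, target))
                queue.append(target)
    return edges

# edges = crawl("http://example.com/")  # placeholder URL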


Next steps
After you have a general overview of what the system could look like, you might play with the following components (some of which can be read off the map itself, as sketched after this list).


Complexity: Is there some kind of structure? Are there lots of nodes, and are you distracted by them?
Number of objects: Are there too many objects visible, so that you cannot zoom in without missing details?
Environments: Can the map also be used to identify other systems/secure areas?
Risk areas: Are you able to point out areas of risk in the map based on "important" functionality?
Process: Is there an order in the structure which might also support a process?
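
A minimal sketch, building on the edge list from the crawler above, of how complexity and candidate risk areas might be read off such a map. The choice of fan-out as a density measure is my own assumption:

# Summarize a site map given as a set of (page, link) edges.
from collections import Counter

def map_metrics(edges):
    nodes = {n for edge in edges for n in edge}
    fan_out = Counter(src for src, _ in edges)
    # Pages with many outgoing links show up as dense "spots":
    # candidates for complexity, and risk areas worth a closer look.
    return {
        "pages": len(nodes),
        "links": len(edges),
        "densest": fan_out.most_common(5),
    }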


Other steps
Looking at the previous actions, I hope to have provided some additional ideas on how images of a website structure can support defining a test approach. I believe that by looking at images or structures in a different way, you might come up with other concepts and thoughts which support your test approach. The next step could be adapting this newly gained view into your test process. Based on this information you can define alternative test cases or perhaps a product risk analysis.

It might help to bring some creativity back into testing.

EWT17: Rocket science in software testing

Introduction
This weekend I attended another session of European Weekend Testers. This session was facilitated by Thomas Ponnet and took a different approach than in the past: this time we could prepare ourselves a bit, as the tool under test was made available before the session started.

What has this to do with rocket science? It was the plug-in that had to be tested.

The Participants were:
Shruti Gudi,
Jeroen Rosink,
Tony Bruce,
Zeger van Hese,
Catalin Anastasoaie,
Katya Kemeneva,
Dominique Comte,
Pradeep Soundararajan,
Jaswinder Kaur Nagi,
Thomas Ponnet,
Anna Baik,
Markus Gärtner

I have to admit it was a great crowd, a great session and a useful round-up.

What to learn
Last week Andreas Prins posted on his blog the question what can be learned with weekend testing: attitude or methods? In his post he wonders why, when reading articles related to weekend testing, he never sees a reference like: "As per ISTQB chapter X, page xxx, we must do this or that". It is a good remark. In projects I don't refer to pages of ISTQB either, not in this context. When testing in the weekend, or on a project, you refer to your experience, or to sources where people tell about their experience in a certain context. That experience can be based on theory combined with common sense and the situation.

What can you learn in Weekend Testing? I'm not able to tell you what you will learn. You might learn how to look at yourself. You might learn to think beyond the borders of the regular testing projects you are in. You might learn from the approaches of others. You might learn how to learn.

The mission
The mission this week was different from others: this time we had the manager of a band who had a gig that evening and wanted to make sure that the plug-in he found was suitable and stable. If not, would we be able to propose alternatives?

The approach
Basically I looked at the application along the following points:
- try to play a wav while the plug-in is not available
- try to play a wav after the plug-in is selected
- tried to alter the sound of the wav using several preset schemes
- asked the manager when he considers it stable: plug-in/laptop
- asked the manager under which conditions the plug-in would be used
- played with multiple files
- used other wavs
- used wav and midi together; there is no option to mix tunes
- used the keyboard and options while music was playing; it interferes with the output
- the midi/wav player: played a bit with that
- looked at the minihost and used several schemes/presets
- used the buttons on the minihost
- tried to work with multiple minihosts together
- tried to record music
- tried strange actions, like short-keys, to see how the app reacted

Some issues
Below you find the highlights of the issues I found during the session. These were findings from my side.
Error01: message shown when opening the minihost
Sometimes when using the minihost this error is shown; not always reproducible.
Error02: error shown when opening a recorded wav
When opening the “test.wav” file just recorded, this message is shown, although the wav file created/recorded using the mic is 1 kB.
Error03: when opening another own wav file an error is shown, and the wav is not played
When opening another .wav file format the following message is shown and the wav is not played.
Error04: recording not working
When using the recorder it shows that a number of kB is created. Even the file and location are shown correctly, only the actual recording is not made.
Error05: when a new file is played, it is not selected
When a song has ended and a new one in the list is played, the selection is not updated; the original stays “blue”.
Error06: in the global settings window “tempo” is not working
When using the Tempo slider, there is no effect on the wav output.
Error07: multiple files can be selected, but only the last file is played
Error08: buttons are hard to operate; usability is poor
When trying to turn the buttons, they do not follow the direction of the mouse.
Error09: pressing a button on the midi/wav recorder while playing a song interferes with the output
When playing a tune and you press other buttons of this app, the music stops/hangs for a few moments.
Error10: pressing the F3 button while playing a wav makes the music hang
When pressing the F3 button in minihost.exe while playing, the application hangs. No other interaction with the system is possible.

Some Lessons Learned
Of course there are a lot of things you can learn when testing. There are even more things you have already learned. Some of the lessons I learned this weekend are just refreshers or confirmations of other valuable lessons.
1. Although information about a single application is asked for, when it is tested together with other tools the answer is a combined one. You might consider it a single object, but when it is tested together with other tools you have to consider their stability as well.
2. To understand, or be able to test, part of an object you need to know the context; in this case what stability means for the user, not for the tester.
3. You are not in the position to provide advice; see the article from Michael Bolton: http://www.developsense.com/blog/2010/05/when-testers-are-asked-for-a-shipno-ship-opinion/ You can provide information.
4. When you are asked as a team, you have to work as a team. Even after a short introduction it is hard to get everyone’s attention.
5. Domain knowledge is a prerequisite when “the manager” asks for it directly.
6. If the manager is not there, find someone in the team with domain knowledge.
7. Don’t get distracted by crashes; when they are reproducible you can avoid them, and if you are still able to use the functionality you might earn something with your gig.
8. It is easy to forget the lessons learned from previous sessions. The assumption is easily made that everyone knows you and how you think. Information which seems obvious is often forgotten when acting longer in a project. Perhaps recap some questions? Magic words: FOCUS/DEFOCUS.
9. I was reminded of Markus’s post about being blunt or not towards a manager: http://blog.shino.de/2010/04/11/testing-and-management-mistakes-causes/



The discussion part
During the session several questions and suggestions were raised; information was missing or needed. Some were about domain knowledge, like "what is a software compressor for music", "how to communicate with the manager", "acting like a team or not" and so on (you might check the transcript for details).

Also, at the end some valuable remarks were made related to "old experience", "Skype is not a good tool to use", "the manager already checked for a tool, why should we check for more functionality", "if the plug-in was the objective, should it be tested alone?", and "are we able to answer the question and provide advice?"

Looking at the discussion as a process, you can also notice some familiar behaviour. We all had a common goal, yet we acted like individuals. We tried to get information which would be valuable for ourselves at that moment. In my opinion we did not ask what would be valuable for the team. We also tried to do our job well within a minimum of time and therefore focused more on ourselves. When you look carefully, there were some persons who tried to become a group and act like a group. Perhaps due to time, differences in experience, differences in testing approach and differences in objectives, we did not succeed in acting like a team. At the end we were mostly explaining what we had done and what the traps were. The focus lay more on "did we succeed in the mission". I believe we partly missed a good lesson: "what did we learn and was it fun?" and also "which personal lessons can you take to a next session?"

What would be more valuable: to meet the mission as an individual, or to act as a team, learn from each other and perhaps meet the mission's objectives, or perhaps change them during and afterwards?

Conclusion
This weekend session was a great one. A mission with a manager with attitude. A great crowd of testers, a discussion you can learn from. I had a lot of fun and learned old and new lessons.

Friday, May 7, 2010

What can you learn from your kids and yourself?

It is so obvious, I knew it, and although I searched for someone to blame, it is my mistake.
Sure, how easy it would be to accuse my little daughter, who sat behind my PC playing some games on it. How adorable she was when she laughed and called me for help. How priceless her smile was when she was proud to be allowed to play on my PC.

It doesn't matter at all. It is broken and I didn't do it. So I am not to blame. Or am I?

Here is the situation:
I have an external disk drive and I used it as a backup facility. So far so good; why not use an additional drive for backup instead of those disks? (I still have some 3.5 inch disks with stuff on them, and those cannot be used for backup anymore.) It seemed reasonable to use an external HD as a backup facility.

That hard drive was standing on my tower, and as it is a tiny one it had already fallen down quite a few times, and afterwards it worked every time. Amazing how solid that Freecom HD was. Until last weekend. It fell, and no one told me. And today, after some time, I needed that nice solid backup facility. Unfortunately, it was not accessible anymore.

An inaccessible device is nothing new. Sometimes USB ports are just mixed up or deactivated due to some clumsy installation, etc. As I needed the HD, I did some checks.
The initial check was turning on the power. Hey, that is strange: the power was already on, and normally the HD would then be recognized. Hmmm. I turned the device off and on. No result.

I checked the USB cable and tried another port. I restarted the PC. Checked the USB settings. Shouted a bit loudly, just to express some kind of frustration. I checked again whether the blue light of the HD was glowing. I listened whether the HD was running while I restarted it (turned it off and on).
I actually checked if it was recognized in the "safely remove hardware" dialog. I kept refreshing my Explorer window: using the F5 button, the combination SHIFT+F5, and the refresh option in the Explorer menu. I even tried the F9 button, as in MS Excel this sometimes refreshes the results too.

I stood up and connected the external HD to my daughter's PC. Tried all USB ports on that PC as well. I even carried the HD to my son's PC. All with the same results.

While I was walking downstairs my daughter asked me about the status of her new PC. And there something happened: I found another victim. I remembered the situation, finding it strange that the HD had been lying on the ground. Only I didn't suspect anything at that moment.


I asked her if she remembered whether the HD had fallen from my PC. She looked at me with glazed eyes, wondering what I meant. I asked her again whether that grey little box had fallen on the ground, and she remembered that. Somehow I felt even more frustrated; it actually came to my mind to tell her I would work on her PC until a cure was found. But who can be angry at his little princess? Not me. I went downstairs, informed my wife about the situation, made some strange sound to express my frustration and went back upstairs to the PC.

That grey little box: when I shake it I hear some noise. It sounds like something very tiny is broken. As a skilled engineer I looked for a way to open the box. OK, I admit, I'm not skilled with those grey little boxes. Somehow I believe I can become skilled at opening the item now that it is broken. And if it is not broken after all, I hope to learn something too.

Looking at that grey box again, no screws are visible. I asked myself whether it was worth the effort to open the box using brute force. This time I decided not to open the box. Instead I calculated and argued the damage.
I looked at the damage in terms of:
- what is lost?
- is losing it a loss?
- time taken to collect it,
- frequency of usage,
- emotional value,
- options to recover it from other resources,
- time it would take to recover,
- time when the information is needed,
- what have I done to prevent loss,
- what were my intentions to prevent loss,
- what did I not do

Evaluating the process
Looking back and thinking about what I have learned or could have learned, I came up with this blog and the following identifications in the process:
- I found a defect
- I made sure it was broken
- I investigated it in several ways
a. Functional
i. What was the behaviour
ii. What should the behaviour be
iii. Was the drive accessible
b. Technical
i. Was it turned on
ii. Was there power available
iii. Was the power-cable plugged in
iv. Was the power source connected?
c. Hardware
i. Cables working
ii. Light working
iii. Does it make sound
iv. Can it be opened
v. What effort must be used to open it
d. Connections
i. Was there hardware recognition
ii. Did the behaviour occur on all other USB ports
e. Reproducible
i. Own system
ii. Other systems
f. Information
i. What was on it
ii. What is gone
iii. Where are some pieces stored
iv. How old was the data
v. What was the value
g. History
i. What did I remember
ii. What did I do with data
- I checked if others were to blame
- I noticed some strange behaviour as a result of my fury/frustration
- I found someone, only I didn't want to blame her
- I noticed that blaming does not solve the problem
- I searched for arguments not to start blaming
- I tried to evaluate the loss
- I evaluated and wanted to learn from it

Lessons Learned
When looking back at this situation you might notice that there are some similarities between this situation and testing.
How often did you:
- face issues and become frustrated about them?
- look at a system in different ways?
- spend time finding the one to blame?
- try to look at the behaviour of others and yourself?
- learn from that situation?
- aim for value instead of spending time on overly detailed proof?

When I look back at that situation I see I didn't stick to pointing out the problem; even solving the problem was not the issue. I valued the situation and took action. This time that meant apologizing to my daughter, removing the external HD from my PC to avoid further damage, and checking some other old storage to make sure it is accessible. Finally I scheduled some time to check for other hardware and decided it could wait for a month.

I still have the image of her face, sitting proudly behind my PC, smiling at me. That is priceless. In other terms: also valuable. I would say it is even more valuable than the damage/loss.

A valuable lesson I want to give the reader: don't take things for granted. If something happens, look at what you can learn from it and how you can learn from it.