Tuesday, June 15, 2010

EWT22: Test Fishing for bugs and mismanagement

Introduction
This time the famous tester Michael Bolton facilitated the European Weekend Testing session. He provided a link to an audio file: http://www.cbc.ca/ideas/features/science/#episode13. It contains an interview, an episode from a radio series he has been talking about for a long time, called How To Think About Science.

MB stated that it had nothing to do with testing. Or did it?

Compared with other Weekend Testing sessions, this session was not about testing an application. It was more about listening to another discipline and taking advantage of that information. I believe it was an interesting session.

Participants
Anna Baik (Facilitator)
Ajay Balamurugadas
Michael Bolton (Guest Facilitator)
Ken De Souza
Markus Gärtner (Facilitator)
Jaswinder Kaur Nagi
Ian McDonald
Thomas Ponnet
Dr. Meeta Prakash
Richard Robinson
Jeroen Rosink
Artyom Silivonchik


Mission
The mission as provided was:
"Listen to the recording. Take notes on what you're hearing, with the goal of capturing lessons from the interview that might usefully influence how we test, and how we think about testing. Stories or patterns from your own testing experience that you can relate to something in the interview are especially valuable."

Context:
"Here's the background to the interview, from the link above: On July 3, 1992, the Canadian Fisheries Minister John Crosbie announced a moratorium on the fishing of northern cod. It was the largest single day lay-off in Canadian history: 30,000 people unemployed at a stroke. The ban was expected to last for two years, after which, it was hoped, the fishery could resume. But the cod have never recovered, and more than 15 years later the moratorium remains in effect. How could a fishery that had been for years under apparently careful scientific management just collapse? David Cayley talks to environmental philosopher Dean Bavington about the role of science in the rise and fall of the cod fishery."

Approach
I believe you can approach listening in different ways. The challenge I noticed here is deciding which type of stakeholder you identify yourself with.

You might be the tester, seeing the story as a metaphor and trying to identify the circumstances you assume to be useful and valuable for testing.

You can listen to it from a process point of view and call yourself the "manager". Or perhaps you are the historian, trying to find facts and similarities that might be useful as lessons to learn. Perhaps you are the student, trying to see with an open mind what can be learned.

You might even come up with other exciting roles. The question remains: how would you approach such a challenge? Which tools do you need? What background is important? How would you focus on the assignment?

For this mission I came up with several options.
1. Would I use the audio file which could be downloaded and was locked by a password?
2. Would I use the audio file provided online?
3. Would I actually care about this challenge?

The pre-process
I had already downloaded the file and also looked at the website hosting the audio file. The website provided some information and background which confirmed the description of the interview that Michael had given. For me this can be important information: checking whether information is what it is supposed to be before actually looking or listening. Perhaps you can file this kind of background investigation under the flag "checking the history".

When opening both files I noticed that the online quality was better than that of the downloaded file. I decided to listen online, accepting possible connection loss. I also noticed that the file was over 55 minutes long. That length added a condition to my approach, as normally one hour is used for testing and one hour for the round-up. I would not be able to spend much time replaying particular parts again and again. It also meant there was a chance of losing time, which had to be weighed against what was acceptable, since in EWT I was also part of a team. The condition for the team was to listen for about 1 hour and 10 minutes.

Before I started I also opened a Word document to take additional notes. Each note started with the approximate time in the audio file at which I heard something interesting. The note itself could be a remark, a quote or something I had in mind.

While listening
When I started listening I was again forced to make some decisions: Was the speaker's tone of voice understandable and acceptable for me to listen to? Was there background noise that could distract me? How would the message be delivered, and more importantly, which message would be delivered?

After a few seconds I felt comfortable, making notes now and then. And after a few minutes I concluded that I would listen for parallels between fisheries management and test management. I made notes wherever I thought they would be useful for explaining or discussing later.

I could have followed a path of using my testing skills to consider what I would have tested in the fishing process. I decided to leave that as is. The message itself, relating to the process, its history and how people acted, was more interesting to me. Somehow it seems we avoid looking at other disciplines and using the lessons they have already learned. Why are we so eager to produce our own lessons learned?

Now and then I stopped and replayed the audio to capture the context correctly. There were words I didn't understand, and for each I decided whether it was necessary to understand the meaning of the word or just the context. If not, I continued listening. Some help on the side was also available, as Michael explained certain words. After a few of his explanations I got the idea that he was posting those words while listening himself. That made it somewhat useful as a check on whether I was still on track, by looking at the time those words were mentioned in the audio. Of course it was not a reliable check, but at least it gave me some information to act on.

I started listening more carefully, with fewer interruptions, focusing more on the process and how I would translate it to testing.

During the session it was mentioned that the interesting part had been told by about 52 minutes. When the audio file hit 52 minutes I kept listening a bit further and found that some interesting information still came after that 52-minute mark. At least I valued it as useful.

The wrap-up was also interesting. Some were able to follow the discussion, and some also drew parallels between fishing and testing, whether related to current certification for testing or fishing, or simply to fishing for bugs.
Others were able to identify test objects with respect to where the process failed, or the conditions and requirements under which the fish had to be identified.

My wrap up
I see some comparisons to testing. I did not use this audio to investigate what was said; I used it to see how I could translate the context to testing. There is a lot to say about this. The major conclusion I see here is that the fishermen learned that people became the target of management (49:xx) instead of the fish. The government provides quotas (or, for testers, certifications) and now owns the fish.

There is misunderstanding in measurement: scientific figures, also provided by commercial parties, say that there are fish, yet the fishermen cannot find them. Perhaps the analogy here is that there are commercial reasons to believe certification programs over craftsmanship.
They mentioned several mistakes, such as relying on certain assumptions; in the past the answer given was to stop fishing for a few years and everything would heal, so don't complain now. Perhaps this also holds for testing certification: the numbers suggest that it is necessary. But is it also good?

I think that, as in the audio, we generalize the context, believe that one solution fits all, and avoid looking at other relationships.
I believe there would be more to learn if more time were available.

It is amazing that we were able to listen carefully to 55 minutes of audio and each find our own understanding of it. Imagine you were listening to stakeholders: would you believe that you had already missed interesting information? What would you remember from your stakeholders :) or even: what would you miss while still being able to deliver the appropriate value?! :)

Lessons Learned
I believe that you can learn lessons from every process. Lessons learned are something different from learning specific skills; I believe you have to see things in a broader perspective. Related to this session I can come up with the following lessons. Some of them relate to testing, some to history, some to processes and management. At least they are mine. Perhaps you can also learn from them.

Here some lessons:
1. Be aware of the audience and also of which role you take in the process
2. You can also learn from other sciences; they can be a useful source
3. Sometimes you have to focus on the objects of testing, sometimes you can listen to or watch the process
4. If it is already hard to listen carefully to a person for about an hour, and you know you miss things due to time pressure, imagine the chance of missing issues when you review documents
5. The perspective on the approach can and might change. You need to be flexible, as long as you keep track of when and why you changed your mind
6. An audio file is also an expression of verbal communication, although the tone of voice is non-verbal. This file provided more information than a written transcript would; it is important to talk with the stakeholders, not only read their requirements (however SMART they are) :)
7. Don't take anything for granted: not the scientists, the government, the figures or the so-called facts
8. Certificates (or fishing permits provided by governments) should not be the goal; maintaining the business and supporting it by using the available means wisely, with respect for continuity, is more important than short-term profits
9. As testers we have to keep working on our craftsmanship; certification and skills are no guarantee that we use them properly
10. It was a fun Weekend Testing session with a different mindset

WeekendTesting
For those who want to be challenged by challenging yourselves: take part in one of the Weekend Testing sessions and teach yourselves!

Don't hesitate to participate!

For more information see:
Website: http://weekendtesting.com/

Or follow them on twitter:
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters

Monday, June 7, 2010

EWT20: Your verdict as bugadvocate

Introduction
Usually I post a blog the same week I attended the Weekend Testing session. Unfortunately I was not able to post last week, so this is about the previous week's session. Nevertheless, it was an interesting session.

Participants
Anna Baik (Facilitator)
Ajay Balamurugadas
Tony Bruce
Markus Gärtner (Facilitator)
Jaswinder Kaur Nagi
Phil Kirkham
Mona Mariyappa
Ian McDonald
Thomas Ponnet
Jeroen Rosink (that's me; twitter: http://twitter.com/JeroenRo)


Product and Mission
Product: OpenOffice Impress
Mission:
You're assigned to the triage meeting of OpenOffice.org Impress. Go through the bug list and make your position clear. You as a team are expected to triage at least half of the bugs today, as we want to ship the product next week.

Approach this time
Personally, I prepared for this session a bit by reading up on bug advocacy; I knew there was information about it on the web.

This is what I found with a quick search and valued:
Slides by Cem Kaner
http://www.kaner.com/pdfs/BugAdvocacy.pdf
http://www.testingeducation.org/BBST/slides/BugAdvocacy2008.pdf
Great video!
http://video.google.com/videoplay?docid=6889335684288708018#

Article by Mike Kelly
http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1319524,00.html

I'm certain there is more valuable information on the web. Drop me a line if you have some good additions related to this topic.

Weekend session EWT20
This time we were asked to take on a role: project manager, programmer, tester, conference presenter (user), CEO (user), student (user). Somehow we managed to stick to it, and also lost it, because we made our own assumptions about the meaning of the role. Perhaps something to learn from?

During the session we decided to work as a team. Initially we intended to divide the judgement of the tickets based on priority. During the process, based on our performance, the project manager divided the list into batches for us :)

Although we tried to define what makes a good bug and how to judge it (for example based on the mnemonic HICCUPPS), we still managed to jump straight into the issues.
History: The present version of the system is consistent with past versions of itself.
Image: The system is consistent with an image that the organization wants to project.
Comparable Products: The system is consistent with comparable systems.
Claims: The system is consistent with what important people say it’s supposed to be.
Users’ Expectations: The system is consistent with what users want.
Product: Each element of the system is consistent with comparable elements in the same system.
Purpose: The system is consistent with its purposes, both explicit and implicit.
Statutes: The system is consistent with applicable laws.
That’s the HICCUPPS part. What’s with the (F)? “F” stands for “Familiar problems”:
Familiarity: The system is not consistent with the pattern of any familiar problem.

Although we might each be aware of our individual understanding of good bugs and bug processes, I keep believing that it is important for a team to come to a mutual understanding of the process, of when a bug is written well, and of when a bug is a bug.

As usual in Weekend Testing sessions, the discussion was very useful, also this time. I mentioned the idea of writing some kind of "bug advocacy manifesto"; perhaps this can be part of a session in the near future.

Lessons learned
The following lessons I learned at least from the session:

1. When the process changes, monitor whether everyone understands and joins the change
2. When tickets are deferred to a later moment, also define a process for how to continue with them
3. Investigation is done during this hour; it should be logged somewhere, preferably in the ticket itself
4. Judging bugs is done in several ways. Sometimes I think the quality of the bug report is overlooked because of the focus on the impact of the issue; understandability is just one part of the quality of a bug
5. Judging bugs must be done within the proper perspective of version, environment, reproducibility, value, ….
6. Before jumping into a list of issues, agree on how to write down the outcome of the bug advocacy

Some conclusions of this session:
We came up with a list of issues which should be solved to answer the question of a "safer" go-live. Looking back at the transcript I noticed that we accepted each other's judgement. We didn't spend time explaining against which conditions the bugs were judged. Perhaps that should also be done next time.

It seems we were skilled enough to make some judgements about the bugs that were found. There is still a lot to learn about judging bugs. Be careful about calling yourself a bug advocate!

WeekendTesting
For those who want to be challenged by challenging yourselves: take part in one of the Weekend Testing sessions and teach yourselves! Don't hesitate to participate!

For more information see:
Website: http://weekendtesting.com/
Or follow them on twitter:
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters