EWT20: Your verdict as bugadvocate

Introduction
Usually I post a blog in the same week that I attend a weekend testing session. Unfortunately I was not able to post last week, so this post covers the session of the week before. Nevertheless, an interesting session it was.
Participants
Anna Baik (Facilitator)
Ajay Balamurugadas
Tony Bruce
Markus Gärtner (Facilitator)
Jaswinder Kaur Nagi
Phil Kirkham
Mona Mariyappa
Ian McDonald
Thomas Ponnet
Jeroen Rosink (that's me; Twitter: http://twitter.com/JeroenRo)
Product and Mission
Product: OpenOffice Impress
Mission:
You're assigned to the triage meeting of OpenOffice.org Impress. Go through the bug list and make your position clear. You as a team are expected to triage at least half of the bugs today, as we want to ship the product next week.
Approach this time
Personally I prepared for this session a bit by reading up on bug advocacy; I knew there was information about this on the web.
This is what I found with a quick search and found valuable:
Slides by Cem Kaner:
http://www.kaner.com/pdfs/BugAdvocacy.pdf
http://www.testingeducation.org/BBST/slides/BugAdvocacy2008.pdf
Great video!
http://video.google.com/videoplay?docid=6889335684288708018#
Article from Mike Kelly
http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1319524,00.html
I'm certain there is more valuable information on the web. Drop me a line if you have some good additions related to this topic.
Weekend session EWT20
This time we were each asked to take on a role: project manager, programmer, tester, conference presenter (user), CEO (user), or student (user). Somehow we managed to stick to our roles and also lost them, because we each made our own assumptions about what the role meant. Perhaps something to learn from?
During the session we decided to work as a team. Initially we set out to divide the judgement of the tickets based on priority. Along the way, based on our performance, the project manager divided the list into batches :)
Although we tried to define what makes a good bug and how to judge one (for example based on the mnemonic HICCUPPS(F)), we still managed to jump straight into the issues.
History: The present version of the system is consistent with past versions of itself.
Image: The system is consistent with an image that the organization wants to project.
Comparable Products: The system is consistent with comparable systems.
Claims: The system is consistent with what important people say it’s supposed to be.
Users’ Expectations: The system is consistent with what users want.
Product: Each element of the system is consistent with comparable elements in the same system.
Purpose: The system is consistent with its purposes, both explicit and implicit.
Statutes: The system is consistent with applicable laws.
That’s the HICCUPPS part. What’s with the (F)? “F” stands for “Familiar problems”:
Familiarity: The system is not consistent with the pattern of any familiar problem.
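To make the mnemonic a bit more tangible, here is a minimal, hypothetical sketch of how HICCUPPS(F) could serve as a checklist while judging a bug. The data structure and function are my own illustration, not something we used in the session; only the heuristic names and descriptions come from the mnemonic quoted above.

# Hypothetical sketch: using HICCUPPS(F) as an oracle checklist when judging a bug.
# Heuristic names and descriptions follow the mnemonic quoted above; the dict and
# function are my own illustration, not part of the session.

HICCUPPS_F = {
    "History": "consistent with past versions of itself",
    "Image": "consistent with the image the organization wants to project",
    "Comparable Products": "consistent with comparable systems",
    "Claims": "consistent with what important people say it is supposed to be",
    "Users' Expectations": "consistent with what users want",
    "Product": "consistent with comparable elements in the same system",
    "Purpose": "consistent with its explicit and implicit purposes",
    "Statutes": "consistent with applicable laws",
    "Familiarity": "free of the pattern of any familiar problem",
}

def violated_oracles(inconsistent_with):
    """Return the heuristics (with descriptions) that a behaviour violates.

    `inconsistent_with` is the set of heuristic names the tester believes
    the behaviour conflicts with, e.g. {"History", "Users' Expectations"}.
    """
    return {name: desc for name, desc in HICCUPPS_F.items()
            if name in inconsistent_with}

# Example: a (made-up) Impress slide transition that worked in the previous
# release but fails now, and also surprises users.
for name, desc in violated_oracles({"History", "Users' Expectations"}).items():
    print(f"{name}: the system should be {desc}")

Writing down which oracle a bug violates would also answer the "against which conditions was this judged" question that comes back in the conclusions below.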
Although we each might have our own individual understanding of good bugs and bug processes, I keep believing that it is important for a team to come to a mutual understanding about the process, about when a bug is well written, and about when a bug is actually a bug.
As usual in the Weekend Testing sessions, the discussion was very useful, also this time. I mentioned the idea of writing some kind of "bug advocacy manifesto"; perhaps this can be part of a session in the near future.
Lessons learned
These are at least some of the lessons I learned from the session:
1. When the process changes, monitor whether everyone understands and joins the change.
2. When tickets are deferred to a later moment, also define a process for how to continue with them.
3. Investigation is done during this hour; it should be logged somewhere, preferably in the ticket itself.
4. Judging bugs is done in several ways. Sometimes I think the quality of the bug report is missed because of a focus on the impact of the issue; understanding is just one part of the quality of a bug report.
5. Judging bugs must be done from the proper perspective of version, environment, reproducibility, value, …
6. Before jumping into the list of issues, agree on how to write down the outcome of a bug advocacy discussion (a sketch follows after this list).
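Following up on lesson 6, here is a minimal sketch of what a uniformly recorded triage outcome could look like. The field names and the example values are my own assumptions, not something we agreed on in the session:

# Hypothetical sketch of a triage record, so the outcome of a bug advocacy
# discussion is logged in a uniform way (preferably in the ticket itself).
from dataclasses import dataclass, field

@dataclass
class TriageOutcome:
    ticket_id: str                 # reference to the issue in the tracker
    verdict: str                   # e.g. "fix before release", "defer", "not a bug"
    priority: str                  # agreed priority, e.g. "P1".."P4"
    oracles: list = field(default_factory=list)  # HICCUPPS(F) heuristics violated
    rationale: str = ""            # why the team judged it this way
    environment: str = ""          # version / platform the judgement applies to

# Made-up example; the ticket id and values are illustrative only.
outcome = TriageOutcome(
    ticket_id="Impress-12345",
    verdict="fix before release",
    priority="P2",
    oracles=["History", "Users' Expectations"],
    rationale="Slide transitions regressed compared to the previous release.",
    environment="OpenOffice.org Impress 3.2, Windows XP",
)
print(outcome)

Logging such a record in the ticket itself would also cover lesson 3.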
Some conclusions of this session:
We came up with a list of issues that should be solved to answer the question of a "safer" go-live. Looking back at the transcript, I noticed that we simply accepted each other's judgements: we didn't spend time explaining against which conditions the bugs were judged. Perhaps that should also be done next time.
It seems we were skilled enough to make some judgements about the bugs that were found, yet there is still a lot to learn about judging bugs. Be careful before calling yourself a bug advocate!
Weekend Testing
If you also want to be challenged by challenging yourself, take part in one of the weekend testing sessions and teach yourself! Don't hesitate to participate!
For more information see:
Website: http://weekendtesting.com/
Or follow them on Twitter:
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters
Monday, June 7, 2010
Posted by Jeroen Rosink at 10:53 AM
Labels: EWT, Testing in General, Weekend testing
Comments

Nice post Jeroen, the links are really helpful & good :)
Cheers,
Jassi
Thanks Jassi,
Especially the movie was useful.
Cheers!
Jeroen