
Tuesday, June 15, 2010

EWT22: Test Fishing for bugs and mismanagement

Introduction
This time the well-known tester Michael Bolton facilitated the European Weekend Testing session. He provided a link to an audio file: http://www.cbc.ca/ideas/features/science/#episode13. It contains an interview, an episode in a radio series he has been talking about for a long time, called How To Think About Science.

Michael stated that it had nothing to do with testing. Or did it?

Unlike other Weekend Testing sessions, this session was not about testing an application. Instead, it was about listening to another science and taking advantage of that information. I believe it was an interesting session.

Participants
Anna Baik (Facilitator)
Ajay Balamurugadas
Michael Bolton (Guest Facilitator)
Ken De Souza
Markus Gärtner (Facilitator)
Jaswinder Kaur Nagi
Ian McDonald
Thomas Ponnet
Dr. Meeta Prakash
Richard Robinson
Jeroen Rosink
Artyom Silivonchik


Mission
The mission as provided was:
"Listen to the recording. Take notes on what you're hearing, with the goal of capturing lessons from the interview that might usefully influence how we test, and how we think about testing. Stories or patterns from your own testing experience that you can relate to something in the interview are especially valuable."

Context:
"Here's the background to the interview, from the link above: On July 3, 1992, the Canadian Fisheries Minister John Crosbie announced a moratorium on the fishing of northern cod. It was the largest single day lay-off in Canadian history: 30,000 people unemployed at a stroke. The ban was expected to last for two years, after which, it was hoped, the fishery could resume. But the cod have never recovered, and more than 15 years later the moratorium remains in effect. How could a fishery that had been for years under apparently careful scientific management just collapse? David Cayley talks to environmental philosopher Dean Bavington about the role of science in the rise and fall of the cod fishery."

Approach
I believe you can approach listening in different ways. The challenge I noticed here is choosing which type of stakeholder you identify yourself with.

You might be the tester, seeing the story as a metaphor and trying to identify the circumstances you assume to be useful and valuable for testing.

You can listen to it from a process point of view and call yourself the "manager". Or perhaps you are the historian, trying to find facts and similarities that might be useful as lessons to learn. Perhaps you are the student, trying to see with an open mind what can be learned.

You might even come up with other exciting roles. The question remains: how would you approach such a challenge? Which tools do you need? What background is important? How would you focus on the assignment?

For this mission I came up with several options.
1. Would I use the audio file that could be downloaded and was locked with a password?
2. Would I use the audio file provided online?
3. Would I actually care for this challenge?

The pre-process
I had already downloaded the file and also looked at the website with the audio file. The website provided some information and background that confirmed the content of the interview as Michael described it. For me this can be important information: checking whether information is what it is supposed to be before actually looking or listening. Perhaps you can file such background investigation under the flag: checking the history.

When opening both files I noticed that the quality online was better than that of the downloaded file. I decided to listen online, accepting possible loss of connection. I also noticed that the file was more than 55 minutes long. The length added a condition to my approach, as normally one hour is used for testing and one hour for the round-up. I would not be able to spend much time replaying particular parts. It also set the context that losing time was a real risk that had to be weighed against what was acceptable, since in EWT I was also part of a team. The condition for the team was set to listen for about 1:10 hours.

Before I started I also opened a Word document to make additional notes. I allowed myself to start each note with the approximate time in the audio file at which I heard something interesting. The note itself could be a remark, a quote or something I had in mind.

While listening
When I started listening I was again forced to make a decision: was the speaker's tone of voice understandable and acceptable for me to listen to? Was there background noise that could distract me? How would the message be brought, and even more important, which message?

After a few seconds I felt comfortable, making notes now and then. And after a few minutes I decided to listen for parallels between fish management and test management. I made notes wherever I thought they would be useful when explaining or discussing.

I could have followed the path of using my testing skills to see what I would have tested in the fishing process. I decided to leave that aside. The message itself, related to the process, its history and how people acted, was more interesting to me. Somehow it seems we avoid looking at other disciplines and using the lessons they have already learned. Why are we so eager to learn our own lessons the hard way?

Now and then I stopped and replayed the audio to capture the context correctly. There were words I didn't understand, and I had to decide whether it was necessary to understand the meaning of the word or whether the context was enough. If not, I continued listening. Some help was also available, as certain words were explained by Michael in the chat. After a few of his explanations I got the idea that he was posting those words while listening himself. It was useful as a way to check whether I was still on track, by comparing the times those words were mentioned in the audio. Of course it was not a reliable check, but at least it gave me some information I could act on.

I started listening more carefully with fewer interruptions, focusing more on the process and how I would translate it to testing.

At some point during the session it was mentioned that the interesting part came after about 52 minutes. When the audio file hit the 52-minute mark I listened a bit further and found that beyond that border some interesting information was indeed provided. At least I valued it as useful.

The wrap-up was also interesting. Some were able to follow the discussion; some also drew conclusions connecting fishing and testing, whether related to current certification (for testing or fishing) or simply to fishing for bugs.
Others were able to identify test objects with respect to where the process failed, or under which conditions/requirements the fish had to be identified.

My wrap up
I see some comparison to testing. I didn't use this audio to investigate what was said; I used it to see how I could translate the context to testing. There is a lot to say about this. The major conclusion I draw is that the fishermen learned that people became the target of management (49:xx) instead of the fish. The government provides quotas (or, for testers, certifications) and now owns the fish.

There is misunderstanding in measurement: scientific figures, also provided by commercial parties, say there is fish, yet the fishermen cannot find any. Perhaps the analogy here is that commercial interests lead us to believe certification programs over craftsmanship.
Several mistakes were mentioned, like relying on certain assumptions; in the past the answer was to stop fishing for a few years and everything would heal, so don't complain now. Perhaps it is the same with testing certification: the numbers suggest it is necessary. But is it also good?

I think that, as in the audio, we are generalizing the context, believing one solution fits all and avoiding looking at other relationships.
I believe there would be more to learn if more time were available.

It is amazing how we were able to listen carefully to 55 minutes of audio and each find our own understanding of it. Imagine you were listening to stakeholders: would you believe you had already missed interesting information? What would you remember from your stakeholders? :) Or even: what would you miss and still be able to deliver the appropriate value?! :)

Lessons Learned
I believe you can learn lessons from every process. Lessons learned are something different than learning specific skills; I believe you have to see things in a broader perspective. Related to this session I can come up with the following lessons. Some of them are related to testing, some to history, some to processes and management. At least they are mine. Perhaps you can learn from them too.

Here are some lessons:
1. Be aware of the audience and of the role you take in the process.
2. You can also learn from other sciences. They can be a useful source.
3. Sometimes you have to focus on the objects of testing; sometimes you can listen to or watch the process.
4. If it is already hard to listen carefully to a person for about an hour, knowing you will miss something due to time pressure, imagine the chance of missing issues when you review documents.
5. The perspective on the approach can and may change. You need to be flexible, as long as you keep track of when and why you changed your mind.
6. An audio file is also an expression of verbal communication, although the tone of voice is non-verbal. This file provided more information than a written transcript would; it is important to talk with the stakeholders, not only read their requirements (however SMART). :)
7. Don't take anything for granted: not the scientists, the government, the figures or the so-called facts.
8. Certificates (or fishing permits provided by governments) should not be the goal; maintaining the business and supporting it by using the available means wisely, with respect for continuity, is more important than short-term profit.
9. As testers we have to keep working on our craftsmanship; certification and skills are no guarantee of using them properly.
10. It was a fun weekend testing session with a different mindset.

WeekendTesting
For those who also want to be challenged by challenging yourselves: take part in one of the weekend testing sessions and teach yourselves!

Don't hesitate to participate!

For more information see:
Website: http://weekendtesting.com/

Or follow them on twitter:
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters

Monday, June 7, 2010

EWT20: Your verdict as bugadvocate

Introduction
Usually I post a blog the same week I attend a weekend testing session. Unfortunately I was not able to post last week, so this is about the session of the previous week. Nevertheless, it was an interesting session.

Participants
Anna Baik (Facilitator)
Ajay Balamurugadas
Tony Bruce
Markus Gärtner (Facilitator)
Jaswinder Kaur Nagi
Phil Kirkham
Mona Mariyappa
Ian McDonald
Thomas Ponnet
Jeroen Rosink (that's me; Twitter: http://twitter.com/JeroenRo)


Product and Mission
Product: OpenOffice Impress
Mission:
You're assigned to the triage meeting of OpenOffice.org Impress. Go through the bug list and make your position clear. You as a team are expected to triage at least half of the bugs today, as we want to ship the product next week.

Approach this time
Personally I prepared for this session a bit by reading about bug advocacy; I knew there was information on the web about this.

This is what I found with a quick search and found valuable:
Slides Cem Kaner
http://www.kaner.com/pdfs/BugAdvocacy.pdf
http://www.testingeducation.org/BBST/slides/BugAdvocacy2008.pdf
Great video!
http://video.google.com/videoplay?docid=6889335684288708018#

Article from Mike Kelly
http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1319524,00.html

I'm certain there is more valuable information on the web. Drop me a line if you have some good additions related to this topic.

Weekend session EWT20
This time we were asked to take on a role: project manager, programmer, tester, conference presenter (user), CEO (user), student (user). Somehow we managed to stick to our roles, and also lost them, because we made our own assumptions about what each role meant. Perhaps something to learn from?

During the session we decided to work as a team. Initially we set out to divide the tickets to be judged based on priority. During the process, based on our performance, the project manager divided the list into batches. :)

Although we tried to define what makes a good bug and how to judge one (for example based on the mnemonic HICCUPPS), we still managed to jump straight into the issues.
History: The present version of the system is consistent with past versions of itself.
Image: The system is consistent with an image that the organization wants to project.
Comparable Products: The system is consistent with comparable systems.
Claims: The system is consistent with what important people say it’s supposed to be.
Users’ Expectations: The system is consistent with what users want.
Product: Each element of the system is consistent with comparable elements in the same system.
Purpose: The system is consistent with its purposes, both explicit and implicit.
Statutes: The system is consistent with applicable laws.
That’s the HICCUPPS part. What’s with the (F)? “F” stands for “Familiar problems”:
Familiarity: The system is not consistent with the pattern of any familiar problem.
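As an illustration of my own (not part of the session), the mnemonic above can be captured as a simple checklist structure that a triage note could reference; the `triage_notes` helper is hypothetical:

```python
# A minimal sketch (my own illustration): the HICCUPPS(F) consistency
# oracles as a checklist, with descriptions paraphrased from the list above.
HICCUPPS_F = {
    "History": "consistent with past versions of itself",
    "Image": "consistent with an image the organization wants to project",
    "Comparable Products": "consistent with comparable systems",
    "Claims": "consistent with what important people say it is supposed to be",
    "Users' Expectations": "consistent with what users want",
    "Product": "elements consistent with comparable elements in the same system",
    "Purpose": "consistent with its explicit and implicit purposes",
    "Statutes": "consistent with applicable laws",
    "Familiarity": "not consistent with the pattern of any familiar problem",
}

def triage_notes(violated_oracles):
    """Turn a list of violated oracle names into short triage notes."""
    return [f"{name}: expected '{HICCUPPS_F[name]}'" for name in violated_oracles]

print(triage_notes(["Claims", "Users' Expectations"]))
```

A note like this in the ticket itself would make explicit against which consistency oracle a bug was judged, which is exactly what we skipped in the session.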

Although we might each be aware of our individual understanding of good bugs and bug processes, I keep believing it is important for a team to come to a mutual understanding about the process, about when a bug is well written, and about when a bug really is a bug.

As usual in the weekend testing sessions, the discussion was very useful. This time as well. I mentioned the idea of writing some kind of "bug advocacy manifesto"; perhaps this can be part of a session in the near future.

Lessons learned
These are the lessons I took from the session:

1. When the process changes, monitor whether everyone understands and joins the change
2. When tickets are deferred to a later moment, also define a process for how to continue with them
3. Investigation is done within this hour; it should be logged somewhere, preferably in the ticket itself
4. Bugs are judged in several ways. Sometimes I think the quality of the bug report is missed because of the focus on the impact of the issue; understandability is just one part of the quality of a bug
5. Bugs must be judged within the proper perspective of version, environment, reproducibility, value, ...
6. Before jumping into a list of issues, agree on how to write down the outcome of the bug advocacy

Some conclusions of this session:
We came up with a list of issues that should be solved to answer the question of a "safer" go-live. Looking back at the transcript I noticed that we simply accepted each other's judgements. We didn't spend time explaining against which conditions the bugs were judged. Perhaps that should be done next time.

It seems we were skilled enough to make some judgements about the bugs that were found, but there is still a lot to learn about judging bugs. Be careful before calling yourself a bug advocate!

WeekendTesting
For those who also want to be challenged by challenging yourselves: take part in one of the weekend testing sessions and teach yourselves! Don't hesitate to participate!

For more information see:
Website: http://weekendtesting.com/
Or follow them on twitter
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters

Sunday, May 16, 2010

EWT18: Zoom in or FOCUS and DEFOCUS?

Introduction
Another weekend and another session of European Weekend Testing - EWT18: "Zoom me in".
This time Markus Gärtner did a great job facilitating. Was it different than other times? I believe so. Although the number of participants was low, the team was good. This time a good mission was defined: report to your manager about your test findings, while testing whether a tool would be suitable to use during that presentation.

This time the application under test was ZoomIt v4.1. The main objective was to see whether the application was suitable for use during a presentation you have to give for your boss.

Participants were:
Jeroen Rosink,
Ashik Elahi,
Ajay Balamurugadas
Markus Gärtner

During the roundup:
Pradeep Soundararajan
Michael Bolton

My Approach
In contrast to other sessions I changed my approach a bit. Before I downloaded the tool I first read the website for information and noticed that the tool had a small number of functionalities. After downloading I also checked the developer's website for additional information. Looking at the application itself, it runs without installing.

Basically these were the steps:
- Read about the application
- Check the functionalities while running the application
- Ask the facilitator questions about the context for testing this application
- Define the conditions to test the application under
- Defocus and check whether there are other ways

Some questions to start with:
Do you want an impression of the usability of ZoomIt?
Does the functionality fit, and can it be used during the presentation?
What about the information to be used in the presentation?
When will it be suitable and correct to use, and when will the boss be pleased with the presentation?

Confirmation of the approach
Before I actually started testing I tried to confirm the approach. The mission was to see whether the tool ZoomIt is suitable for use during a presentation. It should function under the defined conditions of the presentation.
It should be able to support the following objects of the presentation:
* high-level understanding
* graphs
* details where necessary
* interactive questions

Some Steps to mention
To check whether the application is suitable for use in a presentation I performed the following steps:
- learn about the tool (documentation and the tool itself)
- check the functionality
- use the tool
a. on the screen that is currently open
b. on a presentation that is not actively shown
c. on an active presentation
d. on a movie
- prepare some kind of matrix of combinations: hot-keys and environment (application vs. video/chart etc.)
- use the tool with respect to functionality and usage within a presentation
- on video and chart
- using different options
- check the behaviour of the tool when changing the standard settings
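The matrix of combinations mentioned in the steps above can be generated mechanically; a minimal sketch, where the hot-key labels and environment names are my own placeholders rather than the exact entries from the session:

```python
from itertools import product

# Hypothetical hot-keys and environments for a ZoomIt-style test matrix;
# the real matrix combined hot-keys with the application vs. video/chart
# environments mentioned in the steps above.
hotkeys = ["Ctrl+1 (zoom)", "Ctrl+2 (draw)", "Ctrl+3 (break)", "Ctrl+4 (live zoom)"]
environments = ["static slide", "inactive presentation", "active presentation", "video"]

# Each (hotkey, environment) pair becomes one small test charter to execute.
matrix = [(h, e) for h, e in product(hotkeys, environments)]

for hotkey, env in matrix:
    print(f"Try {hotkey} on {env}")

print(f"{len(matrix)} combinations")
```

With a tool this small, even a full cross product stays manageable, which is exactly why a matrix works well here.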

Some findings
- Ctrl+break plus background fade: OK
- It is possible to enter a negative time in the break box (copy-paste a negative value: -1)
- Boundary values of the break timer are:
o entered: 1 to 99
o pasted: -9 to 99 (pasting 100 results in 10)
- Hotkeys: it responds to the key combination you enter; if you enter CTRL+SHIFT and hit Enter, these values are preserved, which is not possible manually
- Used different font types, including Wingdings; this seems to work, but typing makes the cursor go off the screen, also when using ENTER
- The timer also uses the font as set in the type dialog
- Font size can only be altered to a value between 16 and 16
- When using the tool while a video is running, live zoom does not show the video at all
- Mouse behaviour in live zoom is inverted
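Findings like the break-timer boundaries above come from probing just around the accepted limits; a minimal sketch of how such boundary candidates can be derived (the limits 1 and 99, and -9 and 99, are taken from the findings; the helper itself is illustrative):

```python
def boundary_candidates(low, high):
    """Classic boundary-value candidates around an accepted range [low, high]."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# The break timer reportedly accepts 1..99 when typed in, so probe both edges.
print(boundary_candidates(1, 99))   # [0, 1, 2, 98, 99, 100]

# Pasting reportedly widens the range to -9..99, so probe those edges too.
print(boundary_candidates(-9, 99))  # [-10, -9, -8, 98, 99, 100]
```

The surprise values (the accepted -1, and 100 turning into 10) are exactly the kind of result this probing is meant to surface.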

Lessons learned
Although some are not new, they are refreshing and valuable to mention:
- Sometimes it is not clear what a tool must do, only under which conditions, like using it on charts etc.
- With a tool that has a small number of functions it is easy to prepare a matrix to test
- Do not make assumptions about environmental conditions; I noticed that all functions worked on my PC (Vista), but this might not be the environment to present on
- Frequently participating in weekend testing trains your mind
- Defocusing brings some peace to the thinking process
- Asking questions first, to make the scope clear for yourself, provides great guidance during a mission
- I should train myself more on the questions; perhaps a "golden" heuristic might help

While discussing
During the discussion some other nice lessons were posed. It is not always obvious that the environment you are testing on is the same one you have to use for the presentation. Another consideration was the availability of a projector and other digital means. This resulted in the suggestion to use a flip chart, whiteboard etc. as an oracle to check whether the application would support the presentation.

Ajay also introduced a term he learned from Pradeep while participating in a Bangalore testers meeting: "gaining the context" instead of "setting the context".
This brought me some ideas and thoughts as well. Somehow I see "gaining" as something you have to earn. In my opinion a context for testing must be gained. Information is not always provided clearly; even when you ask the right questions, it depends on the other person's attitude and willingness to share information with you.

While we were discussing the differences, Pradeep Soundararajan entered the discussion to support us. He challenged us with an alarm clock example. A bit later Michael Bolton also entered the discussion to provide us some guidance.

My first impression is that general approaches give little consideration to human aspects. Michael pointed us towards the mnemonic CIDTESTD from the Heuristic Test Strategy Model. (20100516: changed "heuristic" into "mnemonic")

At the end there was the question why it would be so important to discuss the difference between gaining, exploring etc.; it could all be the same, it might be just a word game.
I believe there is a difference. Like earning respect, you have to gain knowledge and information. This can be done by using your skills as a person and adapting to the situation. I believe information should not be taken for granted. Or as Markus put it: "don't look where everyone's already pointing". This can also be read as: "don't ask for information others have already asked for".

Conclusion
In the end it was a challenging, well-moderated and fun weekend session.
Well done to all.

WeekendTesting
For those who also want to be challenged by challenging yourselves: take part in one of the weekend testing sessions and teach yourselves! Don't hesitate to participate!

For more information see:
Website: http://weekendtesting.com/
Or follow them on twitter
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters

Monday, May 10, 2010

EWT17: Rocket science in software testing

Introduction
This weekend I attended another session of European Weekend Testers. This session was facilitated by Thomas Ponnet and took another approach than in the past: this time we could prepare ourselves a bit, as the tool under test was announced before the session started.

What has this to do with rocket science? It was the plug-in which had to be tested.

The participants were:
Shruti Gudi
Jeroen Rosink
Tony Bruce
Zeger van Hese
Catalin Anastasoaie
Katya Kemeneva
Dominique Comte
Pradeep Soundararajan
Jaswinder Kaur Nagi
Thomas Ponnet
Anna Baik
Markus Gärtner

I have to admit it was a great crowd, a great session and a useful round-up.

What to learn
Last week Andreas Prins posted on his blog the question what can be learned with weekend testing: attitude or methods? In his post he wonders why, when reading articles related to weekend testing, he never sees a reference like: "As ISTQB chapter X, page xxx, says, we must do this or that". It is a good observation. In projects I don't refer to pages of ISTQB either, not in this context. When testing in the weekend, or on a project, you refer to your experience, or to sources where people tell about their experience in a certain context. That experience can be based on theory combined with common sense and the situation.

What can you learn in weekend testing? I'm not able to tell you what you will learn. You might learn how to look at yourself. You might learn to think beyond the borders of the regular testing projects you are in. You might learn from the approaches of others. You might learn how to learn.

The mission
The mission this week was different than others: this time we had a manager of a band who had a gig that evening and wanted to make sure that the plug-in he had found was suitable and stable. If not, would we be able to propose alternatives?

The approach
Basically I looked at the application along the following points:
- tried to play a wav while the plug-in was not available
- tried to play a wav after the plug-in was selected
- tried to alter the sound of the wav using several preset schemes
- asked the manager what counts as stable: the plug-in or the laptop
- asked the manager under which conditions the plug-in would be used
- played with multiple files
- used other wavs
- used wav and midi together; no option to mix tunes
- used the keyboard and options while music was playing; it interferes with the output
- played a bit with the midi/wav player
- looked at the minihost and used several schemes/presets
- used the buttons on the minihost
- tried to make multiple minihosts work together
- tried to record music
- tried strange actions, like short-keys, to see how the app reacted

Some issues
Below you find the highlights of the issues I found during the session. These were findings from my side.
Error01: message shown when opening the minihost
Sometimes when using the minihost this error is shown; not always reproducible.
Error02: error shown when opening a recorded wav
When opening the "test.wav" file just recorded, this message is shown, although the wav file created/recorded using the mic is 1 kB.
Error03: opening another of my own wav files shows an error and the wav is not played
When opening another .wav file format the following message is shown and the wav is not played.
Error04: recording not working
When using the recorder it shows that a number of kB is created. Even the file and location are shown correctly; only the actual recording is not made.
Error05: when a new file is played, it is not selected
When a song ends and a new one in the list is played, the selection is not updated; the original stays "blue".
Error06: in the global settings window, "tempo" is not working
Using the Tempo slider has no effect on the wav output.
Error07: multiple files can be selected; only the last file is played
Error08: buttons are hard to operate; usability is poor
When trying to turn the knobs, they do not follow the direction of the mouse.
Error09: pressing a button on the midi/wav recorder while a song is playing interferes with the output
When playing a tune and pressing other buttons of this app, the music/tune stops or hangs for a few moments.
Error10: pressing the F3 button while playing a wav makes the music hang
When pressing F3 in minihost.exe while playing, the application hangs. No other interaction with the system is possible.

Some Lessons Learned
Of course there are a lot of things you can learn when testing. There are even more things you have already learned. Some of the lessons I learned this weekend are just refreshers or confirmations of other valuable lessons.
1. Although information about a single application is asked for, when you test it together with other tools the answer is a combined one. You might consider it a single object, but when it is tested together with other tools you have to consider their stability too.
2. To understand, or be able to test, a part of an object, you need to know the context; in this case what stability means for the user, not for the tester.
3. You are not in the position to provide advice; look at the article by Michael Bolton: http://www.developsense.com/blog/2010/05/when-testers-are-asked-for-a-shipno-ship-opinion/ You can provide information.
4. When you are asked as a team, you have to work as a team. Even after a short introduction it is hard to get everyone's attention.
5. Domain knowledge is a prerequisite when "the manager" asks for it directly.
6. If the manager is not there, find someone in the team with domain knowledge.
7. Don't get distracted by crashes; when they are reproducible you can avoid them, and if you are still able to use the functionality you might still earn something with your gig.
8. It is easy to forget the lessons learned from previous sessions. The assumption is easily made that everyone knows you and how you think. Information that seems obvious is often forgotten when you have been in a project longer. Perhaps recap some questions? Magic words: FOCUS/DEFOCUS.
9. I was reminded of Markus's post about being blunt or not towards a manager: http://blog.shino.de/2010/04/11/testing-and-management-mistakes-causes/



The discussion part
During the session several questions and suggestions were raised; information was missing or needed. Some were about domain knowledge, like "what a software compressor for music is", "how to communicate with the manager", "acting like a team or not" and so on (you might check the transcript for details).

Also at the end some valuable remarks were made, related to "old experience", "Skype is not a good tool to use", "the manager already chose a tool, why should we check for more functionality", "if the plug-in was the objective, should it be tested alone?" and "are we able to answer the question and provide advice?"

Looking at the discussion as a process, you can also notice some familiar behaviour. We all had a common goal, yet we still acted like individuals. We tried to get the information that would be valuable for us at that moment; in my opinion we did not ask what would be valuable for the team. Given the minimum of time we also tried to do our own job well and therefore focused more on ourselves. If you look carefully, there were some people who tried to form a group and act like a group. Perhaps due to time, differences in experience, differences in testing approach and differences in objectives, we did not succeed in acting like a team. At the end we were mostly explaining what we had done and what the traps were. The focus lay more on "did we accomplish the mission?". I believe we partly missed a good lesson: "what did we learn and was it fun?" and also "which personal lessons can you take to a next session?".

What would be more valuable: to meet the mission as an individual, or to act as a team, learn from each other and perhaps meet the mission's objectives, or perhaps change them during and afterwards?
Conclusion
This weekend session was a great one: a mission featuring a manager with attitude, a great crowd of testers and a discussion you can learn from. I had a lot of fun and learned old and new lessons.

Sunday, May 2, 2010

EWT16: What values you in barcode

Flashing barcodes and great participants
This time a great session about flashing value and barcodes, also with great participants and discussions afterwards.

The participants this weekend were:
Anna Baik
Michael Bolton
Stephen Hill
Thomas Ponnet
Ram

This time the product was a funny barcode reader:
Product: http://www.barcodeart.com/artwork/netart/yourself/yourself.swf

This app generates barcodes based on your input (gender, country, age, weight, height) and also calculates a bogus price value.

The mission was to find out how the calculation worked and what the highest obtainable value would be, combined with reporting invalid values.

My approach
Below you find the summary I gave during the round-up.
First I tried out the app by just pressing the buttons and identifying its behaviour.
I checked whether the values entered are used in the calculation. I did this by entering the same values twice, to see whether some kind of randomizer was active; this was not the case. I also checked changing only the gender. It actually matched what was described in the diagram: only that value changed.

I tried the highest and lowest values, which resulted in incorrect values in the Scan report. I also noticed that when using actual values, there is some mix-up in the barcode between entered values and presented values.

At the end I stripped part of the URL and came to the actual site. There the FAQ contained some valuable info about the calculation.
Next time I would spend more time checking the logic as described in the FAQ against the outcome of the app. An hour is just too short for me to check whether the formula it claims to use actually matches the actual outcome. Here the important value to agree upon is the perfect BMI.

Some funny issues
Of course it was fun to find some issues. When you test this application using the highest numbers, you will find that the conversion between the measurement systems is not done properly with respect to the offered diagram.
Testing with the lowest numbers returns some $NaN values in the "Scan" list. At least the price value is $0.00.

When navigating back and forward, you will notice that the country dropdown is emptied, which also leads to a strange outcome in the "Scan" list.

Initial lessons learned
During the round up I came up with the following lessons learned.
1- Agree upon the level of detail in the model you prepare of the app.
By level of detail I mean how deep and how broad you will test, knowing that this decision asks effort and knowledge.
2- If the app is simple and no documentation is available, avoid the pitfall of relying on the app alone; search for other means.
Ask yourself every time what kind of documentation you need: are you searching for it, or are you using the application itself as a kind of oracle to ask your questions?
3- Tools to read code might help
If you know about tools to read code from Flash applications, perhaps this helps as a documentation source.
4- Translation between measurement systems is often an area for failure. (The Mars Climate Orbiter was lost over such a metric/imperial mix-up, if I'm correct.)
One of the recurring pitfalls for me is the difference between measurement systems. I should spend some time learning about them and how to use them, instead of relying on conversion tools.
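Lesson 4 can be made concrete with a small sketch: a round-trip sanity check for unit conversions, the kind of mismatch the barcode app showed between measurement systems. The constants and function names below are my own illustrations, not taken from the app under test.

```python
# Hedged sketch of lesson 4: a round-trip sanity check for unit
# conversions (constants and function names are illustrative,
# not taken from the barcode app under test).
KG_PER_LB = 0.45359237   # exact by definition
CM_PER_INCH = 2.54       # exact by definition

def lb_to_kg(lb): return lb * KG_PER_LB
def kg_to_lb(kg): return kg / KG_PER_LB
def inch_to_cm(i): return i * CM_PER_INCH
def cm_to_inch(c): return c / CM_PER_INCH

def round_trip_ok(value, out, back, tol=1e-9):
    """Converting out and back should reproduce the input within tol."""
    return abs(back(out(value)) - value) <= tol

# Boundary-style inputs: zero, a typical value and an extreme one,
# like the highest/lowest values tried during the session.
for v in (0, 150, 999):
    assert round_trip_ok(v, lb_to_kg, kg_to_lb)
    assert round_trip_ok(v, inch_to_cm, cm_to_inch)
print("round-trip checks passed")
```

A conversion that fails such a round-trip check at the boundaries is exactly the kind of issue the Scan report exposed.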

Lessons during the discussion
Again there was a great discussion afterwards. Thomas came with a suggestion to use iterations for trying out test data, which would force thoughts to focus and defocus.
Michael posted an interesting lead, which reminds me of some earlier work of his:

It seems to me that one of the principal issues that this exercise brings up is the alternation between focusing and defocusing heuristics--varying one factor at a time (OFAT) or varying many factors at a time (MFAT). (There's also another kind of factor-oriented heuristic noted in the book Exploring Science: hold one factor at a time, or HOFAT.) You use OFAT when you're trying to focus on the effect of a particular factor; MFAT when you're seeking to confirm or disconfirm your ideas about factors in combination with each other

Somehow I couldn't find Michael's source; on Wikipedia there is something about it under the One-factor-at-a-time method. When googling "varying one factor at a time" I found some interesting documents I have to investigate later on.
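Michael's OFAT/MFAT distinction can be sketched in a few lines of Python. The factors and values below are made-up examples in the spirit of the barcode app, not the ones used in the session:

```python
from itertools import product

# Illustrative factors (not the session's actual test data).
factors = {
    "gender": ["male", "female"],
    "unit":   ["metric", "imperial"],
    "age":    [0, 35, 120],
}
baseline = {"gender": "male", "unit": "metric", "age": 35}

def ofat(factors, baseline):
    """OFAT: vary one factor at a time, holding the rest at the baseline."""
    for name, values in factors.items():
        for value in values:
            case = dict(baseline)
            case[name] = value
            yield case

def mfat(factors):
    """MFAT: vary many factors at a time -- the full cross product."""
    for combo in product(*factors.values()):
        yield dict(zip(factors.keys(), combo))

print(len(list(ofat(factors, baseline))))  # 2 + 2 + 3 = 7 cases
print(len(list(mfat(factors))))            # 2 * 2 * 3 = 12 cases
```

OFAT isolates the effect of a single factor; MFAT grows combinatorially, which is why you use it deliberately when hunting for interactions between factors.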

During the discussions I mentioned the approach called TMap, defined by Sogeti, which is well known at least in The Netherlands and also a standard for approaching test projects.

For me TMap is a strong approach, though it is more process oriented than aimed at delivering value to the business (perhaps TMap Next can serve this better). As with every model/method, it must be used with common sense. We should be warned not to focus on making the method work; instead, we should watch out that we are still able to deliver value to the business. It is so easy to say that we do it as the method tells us, because agreements are made based on the method.

By common sense I meant, as said in the discussion:
"to me the skill of common sense is knowing when you are using a method for the benefit of the actual outcome, and not using a method to prove you are able to use that method and, based on that, claiming you do the right thing because the method is right: you follow the method, therefore you are right.
If you are able to judge your approach against the initial goal you were hired for, then you might be able to get the benefits of an approach like this. Otherwise you are selling something other than what you were hired for."


WeekendTesting
For those who also want to be challenged by challenging yourselves: take part in one of the weekend testing sessions and teach yourselves! Don't hesitate to participate!

For more information see:
Website: http://weekendtesting.com/
Or follow them on twitter
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters

Sunday, April 4, 2010

EWT12: Mind mapping and testing

Great minds came together on EWT12
This session of European Weekend Testing, EWT12: Mapping the maps, involved using mind mapping to inform the test manager about how to reach a certain coverage for the online map functionality of Bing Maps.

This seemed to be a mission with too many open ends. I keep repeating that the major goals for me in participating are getting to know more passionate testers and learning during the session and also afterwards. I'm not intending to test the software and find all the issues I can find. For me it is also not a game that can be won on the number of issues found, or on who reached the highest defect ratio within the hour.

Passionate testers this time were:
Anna Baik
Tony Bruce
Markus Gärtner
Jeroen Rosink


For this session we were working with either Freemind (an offline tool) or Mindmeister (an online tool that offers 3 maps for free).

Wikipedia offers a list of mind mapping software that might be worth checking.

Lessons Learned
#1 Prepare a practice session on how to use a mind mapping tool
Even when you have heard about mind mapping or practised it somewhat, when a tool is used not only to share your thoughts but also to compare thoughts within a limited timeframe, it is good to first spend some time guiding the participants through the tool.

#2 Mind maps differ in usage, colours and notations
As people think differently, the chance that mind maps differ is huge. I believe we walk a thin line here in how to use an approach like mind mapping. We can demand that participants use certain notations, but this will also limit their thoughts. I suggest showing the participants several ways mind mapping can be used and how icons and colours can be used. Let the participants decide, as long as the decision can be and is explained.

#3 Mind maps are not a standalone means of communication
In my opinion a mind map cannot be used as the sole means of communication. You cannot use it as a test basis when the creator does not provide some explanation. It is a model of the author's mind at that moment under certain conditions. It is often written and said that an image tells more than words. In this case I believe the strength lies in the combination; without words the map is almost meaningless.

#4 If mind maps are used in testing keep it dynamic
I think mind maps can be used in testing, perhaps as a certain touring scheme. When a map is used, it should also be maintained. When it is used in a test project and a certain level of information is presented, it has to become part of the whole team, and the whole team can make changes to it. It could be introduced as an extra output of a stand-up meeting.

#5 Heuristics can be used within a map
You might introduce some structure by using heuristics. I used SFDPOT to play with and continued along that route.
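As a small sketch of this lesson, the SFDPOT heuristic (Structure, Function, Data, Platform, Operations, Time) can seed the first branches of a map. The questions below are my own illustrations, not from the session:

```python
# Sketch: seeding a mind map outline from the SFDPOT heuristic.
# The sample questions are illustrative, not from the EWT12 session.
SFDPOT = {
    "Structure":  ["What parts make up the product?"],
    "Function":   ["What does each feature do?"],
    "Data":       ["What inputs and outputs does it handle?"],
    "Platform":   ["What OS/browser does it depend on?"],
    "Operations": ["How will real users use it?"],
    "Time":       ["What is affected by timing or dates?"],
}

def outline(root, branches):
    """Render the heuristic as an indented text outline for a mind map."""
    lines = [root]
    for branch, questions in branches.items():
        lines.append(f"  {branch}")
        lines.extend(f"    {q}" for q in questions)
    return "\n".join(lines)

print(outline("Bing Maps coverage", SFDPOT))
```

Each participant could then grow their own branches under the shared skeleton, which keeps the maps comparable without restricting anyone's notation.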

#6 Use multiple mind maps within the team
After I compared the different mind maps, I noticed we all have different approaches and different levels of detail in the map. I like to use single words; others like to use notations like test cases, and sometimes routing within the map is visible. The strength here is that different levels of detailed information are presented. I'm convinced that not restricting people's mapping notation will strengthen others by offering them ideas.

#7 Mind maps provide more information
When I compared the mind maps, I saw more information than only what is written. If you look at how it is written, and in which areas more or fewer details are provided, you might come up with questions about importance, or about why certain decisions were made. For this, mind mapping can be a great approach: not only to tell what was done, but also to raise questions about why certain decisions were made.

#8 The mind changes, and so do the maps
I think maps like this should not be used just once; you should generate this type of map frequently. To gain more information from the maps about how your mind/thoughts are evolving, versioning should be used on the maps. On a frequent basis you also have to schedule meetings to explain and investigate how thoughts have changed.

My thoughts, my map
When you look at the picture, you see the result of my trials using the tool and adopting some structure in the map. You also see I tried to use some icons: what I will do and what I won't do, and also what I have done and still have to do. I placed icons where I think issues are, as I noticed strange behaviour. I even tried to see if adding a priority to the items is an option.



Using a tool is great, as you can adapt your thoughts by re-shaping, re-placing or even removing without generating a mess. On paper you have to be very careful, as mistakes cannot easily be undone; you are forced to continue with them. For me, there is more to explore in how to use this tool.


Process of mind mapping
Here is a suggestion on how to use mind mapping. I'm sure there are other ways, and I'm sure there are better ways; this is just how I think it can be of some use.
1. Prepare mind mapping introduction session: exchange knowledge about mind mapping, the tool and experience
2. Assign roles, like tester, analyst, test coordinator/manager, developer, user, etc
3. Agree that no map is wrong or is right, accept there will be differences in level of details
4. When necessary agree on usage of certain icons and colours (not all)
5. Define the mission
6. Execute mission within a defined period (1 hour?)
7. Present and explain the mind maps
8. Adapt your mind map based on changes of mind due to new input
9. Agree on how the mind map will be used: every day during a period, or was this a one-time moment?
10. Maintain mind map
11. Gain information about how the "mind" was working related to mood, level of detail (perhaps the mind was up to something), time behaviour, structure,...

I think it can be used for explaining/presenting and obtaining information about:
- the route for testing
- the areas for testing
- the test cases
- the coverage of execution
- which decisions are made, what to test and what not to (less detail means less important?)
- matching each other's thoughts about the test goal of that day
- the possible risks
- the black spots in testing
- ....


WeekendTesting
For those who also want to be challenged by challenging yourselves: take part in one of the weekend testing sessions and teach yourselves! Don't hesitate to participate!

For more information see:
Website: http://weekendtesting.com/
Or follow them on twitter
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters

Sunday, March 28, 2010

WeekendTesting EWT11: Giving back is not giving up

A new weekend, new chances, new lessons
This weekend I participated in another session of the European Chapter of Weekend Testing. For more details see EWT11: "To secure the area". The objective this time was testing a financial application with a focus on security.

This time we had a good guest facilitator, Anne-Marie Charrett, who guided us through the mission and this weekend event. Part of the fun of weekend testing is that you get to know different people. See it as a first step. Imagine you participate for the first time: you meet people you have perhaps heard of but never worked with. The next time you participate, you remember those names. Imagine what can happen when you meet them in real life, for instance at a conference. Normally they would be just some of those unknown fellow testers sharing the profession. Now you have something else in common, which might make it easier to find each other. I hope to meet them some time, some day; until then, I will meet them at weekend testing. Perhaps you too?

This time the following persons attended the European weekend testing session:
- Anna Baik (facilitator)
- Markus Gärtner (facilitator)
- Anne-Marie Charrett (facilitator)
- Anuradha
- Ajay Balamurugadas
- Markus Deibel
- Jaswinder Kaur Nagi (just found out that this is "Jassi" :) )
- Maik Nogens
- Thomas Ponnet
- Ravisuriya
- Jeroen Rosink (that's me)

Was it Fun?
As every weekend, you have to make the decision: can I participate, will I, and should I? Although I knew I might not be able to attend the whole session, I decided to participate.

This weekend I was not able to make the application work; I didn't succeed in getting past the registration form. Some others did, though, and some shared the same "errors" I faced.

Lessons Learned
#Lesson 1: for me it is an error; for the organization/developer it is a warning message
This is a mistake we often make. We see a message shown to inform us what is wrong and that we cannot continue. Based on all kinds of rules, like business rules, we are not allowed to continue. It actually tells us what to do: contact the helpdesk. This seems to be an informational message.
If you see it with respect to the situation, then it might also be an error message, as that message stopped me from continuing. The situation was that I was allowed to use the software, as I was provided with a license key I was allowed to use. I was also under time pressure: testing within an hour. Another thing: I was not supposed to call the helpdesk, so the message was not for me.

As you see, a message can be more than just informational. It is also an error on multiple levels:
1. it was shown falsely, as I ought to have had a valid license key
2. it bothered me, as I couldn't continue my task
3. the information provided was incorrect, as contacting the helpdesk did not apply to me.

#Lesson 2: Security is not only within the application
One of the objectives for this session was to learn more about the security of the application. Although I didn't get far enough to use the application, the chance is there that the security of my PC (test environment) blocked me from using the app.

Thinking it over, this made me draw the conclusion that we often focus on the security within the Application Under Test (AUT). During testing we should also consider the security settings of the (test) environment. For some testers this has nothing to do with security and more with authorization; in that case, consider the impact of the settings of the users' environment on security. Perhaps add checks in your application to verify that the security settings are correct? Or inform the users/application managers under which conditions the application should be used?

#Lesson 3: Giving back an application is no shame, but it should be done considerately
There is a thin border between giving back and giving up, and that border should be crossed carefully. Giving back an assignment is the last thing you can/should do, as you are admitting you are not able to do the task. Not doing your task can be taken as not skilled enough, not committed enough, not dedicated enough, or just not enough.
So be careful when giving back; come with arguments that show you did not give up.

I gave back the mission after I found arguments for myself to stop. I didn't give up, as I stayed with the team as long as it was possible for me.
Arguments were:
- I tried several options on my PC to get past the registration form, without success
- I tried to obtain help and other license keys, without success
- I tried to get help from fellow testers; their support didn't work out either
- I checked the time left with respect to the mission: even if I got the application working, testing would make less sense with so little time
- I would do better to see what lessons I could learn from this (learning is another mission of weekend testing for me :) ) and spend time on that
- I did not want to spend more of others' valuable time by asking for help.

During the session I formally gave back the mission. I believe it is important to say so instead of keeping silent, so others are aware you stopped. For me the session was not a failure; I learned from it.

# Lesson 4: On every edge there is something to learn
Although I didn't follow the mission as I intended, I took the time to think about the lessons I could learn. This confirmed for me that you can and should keep learning, not only looking at the expected outcome. You might have to take a step back and see what the actual outcome can teach you. If you believe there was nothing to learn, you might spend more time. On every corner, edge and situation there are lessons to learn.

WeekendTesting
For those who also want to be challenged by challenging yourselves: take part in one of the weekend testing sessions and teach yourselves!

For more information see:
Website: http://weekendtesting.com/
Or follow them on twitter
weekend testing: http://twitter.com/weekendtesting
europe weekend testing: http://twitter.com/europetesters

Sunday, March 14, 2010

Weekend testing EWT09: Add value to a mission

Another session of EWT
This weekend I again participated in a weekend testing session, EWT09: The Imperial Strikes Back. You might say that you have something better to do during the weekends. Sure, we all have. For some people it might look like just testing another application in your own free time. Others might see it as a waste of time. There are also people who don't understand it. And fortunately there are also people who share the same interest.


What value EWT has for me
So far I have attended three sessions, and each time I'm having fun and learning. As the name tells you, it is in the weekend, and sometimes it doesn't fit within your daily schedule. This time I tried to make it fit. Next to getting to know more people, I use these sessions to learn more about how they approach testing, how I approach testing and what I can learn from that.

In the last session, M. Bolton pointed me towards building a model before you start testing. Besides the mission of this EWT session, this was a personal mission for me.

The initial mission
Before the start the following mission was stated.
"You are moving from lovely Europe with measurements based on the metrics system to the US with imperial units. Test Converber v2.2.1 (http://www.xyntec.com/converber.htm) for usability in all the situations you may face. Report back test scenarios for usability testing"

Looking at this mission, it was about thinking up scenarios for testing usability. Basically I could fulfil the mission just by writing down scenarios. I have to admit, I didn't add scenarios to the bug repository; I failed there. Was this session of less value to me?

My lessons learned
Of course not. I chose my own way in this EWT and used the mission as a set of boundaries. Boundaries should also be judged. I chose not to write down scenarios up front; I have done that in the past. I chose to build myself a model, define questions and think about what usability means to me.

I know that often we think about usability while actually we are checking functionality, with questions like "Is the currency converted correctly?" instead of "Is the output readable for me?"

Based on the questions, I used the application to provide me some guidelines to define the type of tour I would follow. This resulted in unwritten scenarios.

The questions I started with:
- Is the documentation helpful and readable?
- Is the menu clearly understandable and intuitive?
- Will the application fit the screen?
- Does it deal with international settings?
- Can you get lost in this tool?
- Will it give value to me?

As mentioned, I also made my own mission. I believe I am allowed to, because it is my free time and I still kept to the original mission: define scenarios. For me, the questions mentioned in the discussion were more or less the scenarios. Another way of writing down scenarios were the bugs I added to the repository. Based on the type of issues, you might get a picture of the route I followed through the system.

Add value to a mission
Perhaps one of the main thoughts here is that you have to judge for yourself whether you are skilled enough to fulfil a mission. If not, perhaps you can define your own goals to gain those skills. If you are able to add value to a mission and explain this, you should also be able to add value for a customer. I can say that I learned from this session once again.

Sunday, March 7, 2010

Weekend testing EWT08: Kept sticking in testing

The start of EWT08
This weekend I managed to attend another session, facilitated by Anna and Markus on behalf of the European chapter of Weekend Testing. The application to test this time was Wiki on a Stick. Part of the mission was using SFDPOT.
For more information see:
http://www.satisfice.com/articles/sfdpo.shtml
http://www.developsense.com/articles/2005-10-ElementalModels.pdf

Also check out EWT08: Wiki in your pocket for a transcript of the session.

That leaves me just a bit of advertisement: pointing you towards the weblogs of the testers who attended:
Anna Baik
Ajay Balamurugadas
Michael Bolton
Tony Bruce
Anne-Marie Charrett
Markus Gärtner

The Mission
For me this session was another great one. Up front you never know what you will learn. This time I tried to be better prepared: have the bug repository running, the screen capture tool in place, the connection with my Debian virtual machine running, and information about heuristics ready; I even read up on them a bit again. I tried to prepare questions I could ask with respect to SFDPOT.

My preparation
Like the other sessions I attended online, it started with an introduction and testing in the first hour and a wrap-up in the second hour. This time preparation saved me some time. Unfortunately, and that is how life goes, the start was different than expected. I expected a tool I had to install; instead it was a sort of webpage which could be stored anywhere and used immediately. I was a bit distracted by this behaviour and dropped the idea of testing on 2 platforms.

It took me some time to figure out what type and kind of application it was while "touring" around the application, getting a quick view of where the edges of the application were and what would be interesting to investigate further.

Retrospective
Afterwards I noticed it was hard to stick to an approach with heuristics based on SFDPOT. I noted that next time I should start by defining the questions I want information about. For me this is a different way of thinking than I am used to with an approach conforming to TMap or ISTQB, where most of the time you start with a well-defined process. Keep in mind that that will not work in EWT, as the time is limited to an hour.

I believe that based on defined questions you can explain a bit about coverage. In this case the explanation is more like picturing/telling what you have and have not covered in terms of the questions, instead of how many cases you executed and what that coverage would be.

I explained to the group that it was hard to stick to the initial idea, and that one of the pitfalls is starting to test immediately. Michael explained that it is better to first build a model of the system. I like that idea. You can build a model based on documentation. I think reading the documentation with the heuristic model in mind can help you define your questions.

Value of documentation and model
Building an initial model based on documentation is in my opinion a good way to start. This seems similar to other methods: review intensively, define all objects, prepare test cases, and then start testing.
Of course everything is sort of based on the Deming cycle: Plan-Do-Check-Act. Only a lot of approaches are made so robust and formal that you lose time in documentation, and strive only for perfection instead of a good mix of results within the given conditions.

The goal of documentation is mainly to provide information. Based on this information, developers and testers define their approach for building and testing. If the information is weak, the chance of mistakes is bigger. An option is to formalize a process of reviewing and adjusting the documentation in detail, to narrow the perception and vision of the people who have to build and test it.

Starting with reviewing is an approach which might work when time is unlimited. A mistake can be to collect all information, judge it, derive test cases based on it and sell an advice based on the test results.

How about first building a draft model of the system and the environment? You can document this, make a draft picture of it, or keep it in your mind. Based on this you define the boundaries of where to look. If information is missing, you ask the owners/stakeholders. Keep in mind the time you have. For example: during EWT08 we had the program manager online and only one hour available; instead of claiming and demanding information, we tried to find our own way. We could have asked a lot of questions, which would have extended the hour of testing.
This would be another way of receiving information about the system.

Information about the system
It seems that while information about the system under test is important, the time window is also a leading factor, in combination with the quality of the information. It turned out that the available information for this application was harder to get. Some testers paid thorough attention to that and made a remark about the documentation.

Looking back it turns out that information was available in several ways:
- online manual
- in the tool itself (written information)
- the tool itself (functionality)
- people online (the project manager)
- people online (fellow testers)
- bug repository
- information regarding heuristics like SFDPOT

The information needed differed. My approach focused more on the functionality, and where I found something interesting I tried to find the information within the tool. This approach distracted me, as already mentioned.

A lesson learned
Lesson learned here: create a model of the system and see what kind of borders you find. Check this with respect to the environment. Next time I would explicitly ask myself questions like "how will it work when using a USB stick?" I would build a framework of initial questions using SFDPOT and refine the questions during the tour. Afterwards I would try to validate whether the answers to the questions are enough to make a statement. This can also be a statement that too little information was gained.

I believe we testers should be careful about providing judgements on systems. When do you know you have spent enough time and gained enough information? The challenge here is that we are asked to provide objective advice, although the information is mostly interpreted through subjective experience.

Saturday, February 13, 2010

Weekend testing EWT05: my first attempt

I read about it, heard about it and received the tweets. I was (indirectly) asked to participate. It should be exciting. It must be something new. So I became curious about it.

Initially I started to find excuses not to participate. It is the weekend, why test? I have more important things to do. It is too late or too early. I need to reconfigure my PC. Dinner must be cooked. My batteries are low. I might even have thought about starting to write a left-handed course for software testing, written by a right-handed person. I'm sure that if I had taken more time to think about excuses, I would have been more creative.

Instead I looked up the tweets, visited the website http://weekendtesting.com/, sent a PM to Markus, downloaded and installed Skype, and registered on the site mentioned above. I confirmed the registration mail and was ready for business. Or so I thought. This time, acting to get things moving took less time than finding arguments not to. I was in the race.

For me it was testing multiple applications: not only was I new to the application introduced by EWT, I was also new to Skype and needed to find my way again in the Mantis database :)

Somehow I managed to be there on time. "There" was a place somewhere in the Skype universe. We grouped like a cluster of bugs you sometimes see. Although we seemed unknown to each other, somehow it felt familiar. Amazing how people with the same goal are able to communicate with each other. Like bugs, there were some who were introduced at a later moment. They also made a difference in our quest for fun in the European chapter of weekend testers. (You can follow them on Twitter: http://twitter.com/europetesters)

As you might have read on the website of EWT, the time frame is short; the tour takes just two hours. Like I wrote, I first started testing Skype itself. One lesson here is that under time pressure you focus on the goal: "am I able to use Skype in time, without reading manuals, ready to communicate with the people I need to talk to?" I have to say that I succeeded in that part. OK, I had to close several windows which might be worth reading after all. I searched on the email address for "europetesters" and managed to find a contact in Skype. Yes! I was in business. After a while the crowd came together and formed a group.

The process was quite easy: take the time to get to know each other, spend a bit of that time being polite and saying "hello and welcome", and work on smileys when someone says something nice. We used the text part of Skype, as far as I know. This we did until the first assignment was given: install a tool called Virtual Magnifying Glass 3.3.2.

Now I could exaggerate how complex this tool is. Keep in mind that there was just a small period of time to test the application. Going into retrospective mode, I might say that the testing starts when opening the website for downloading. A few questions arise immediately:
- is this the right site?
- is the tool available?
- is there a tool which can be downloaded? (like I said, I didn't have that much time; it was somehow a kind of challenge for me)
- which version to download? I have a Windows operating system, but also a virtual machine running a Linux distribution. I thought about downloading the Linux version too, but dropped that idea due to time, as the application was unknown to me.

Just before downloading I asked myself a question I automatically ask every time I download something: where to store the application? Normally it would become part of the directory where I store applications. This time I decided it was part of this testing project and stored it in the project folder. Testing starts not only when the application is running; it starts when you first become aware of the questions you ask yourself about the application.

After I confirmed I was able to download the application, I chose not to install it immediately. Installation of an application is also something which can be tested; I placed that item in the scope of my test challenge. So I waited until the first mission was mentioned. It was a very vague mission, and soon it was sharpened with more words, leaving the statement still nothing more than "Test this".

I have heard that testers are willing to neglect such an assignment because it is too vague. I'm sure we could have spent a lot of the available time making the assignment SMART. Wasn't the time that was left more valuable to us? So we started testing, as I did too. In the meantime they sent us a link to the location where to submit the issues found. I was glad to see that they used Mantis, as I have worked with that tool.

As the time for testing had already started, I missed some of it registering myself in that tool. I lost valuable time because the CAPTCHA was too small; it was too hard to see whether a character was an "O" or a "0". After 4 attempts, and actually considering installing the magnifier to help me, I managed to get a CAPTCHA without those nasty symbols. Like I said, I tested multiple applications; I also tested myself.

I managed to subscribe, and even managed to find the proper database. I was so confused and distracted by the subscription issues that an empty database made me think I was in the wrong place. It turned out that no one had entered any issues yet. My perception was distorted by previous experience and pushed by time pressure.

After spending these valuable minutes on important things, I started my tour of the application of the weekend. When starting an application you can ask several questions, like:
- will the executable start an installation script?
- or does the application start directly, because it is a small application?
- will the application start at all?
- what can I expect?
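The last three questions can even be answered with a crude first probe before any deeper testing. Below is a minimal sketch, not part of the original session; the command passed in is a stand-in, since the real tool's name was not mentioned.

```python
import subprocess
import sys
import time

def smoke_start(command, settle_seconds=2):
    """Start an executable and report whether it is still alive shortly
    afterwards: a first, crude answer to 'will the application start?'"""
    process = subprocess.Popen(command)
    time.sleep(settle_seconds)
    alive = process.poll() is None  # None means no exit code yet
    if alive:
        process.terminate()
        process.wait()
    return alive

# Stand-in target: a short-lived Python process instead of the real tool
print(smoke_start([sys.executable, "-c", "import time; time.sleep(10)"]))
```

A `True` result only tells you the process did not crash immediately; everything after that still needs a human tester.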

After double-clicking the executable I noticed that a wizard had started. How beautifully a mind works: at that moment I decided to test the wizard and make screen prints of it, so that if something went wrong I could reproduce it without installing again. Again I was confronted with a lack of good preparation: where was that good screen capture tool I used in the past? On my old PC, of course, which left me with the plain good old CTRL+ALT+PrtSc combination.

The approach to the wizard was quite simple:
- look at the text
- are the buttons working?
- can I install to a selected location?
- what about using the back button?
- is the application installed?
- is the application ready for use?
- what else can I learn from the application?

Sometimes you see references to manuals in the installation wizard. While writing this down I realized I had missed checking this; manuals can be a source of information too. Thanks to the screen prints I can confirm that there is no such reference. (I checked the documentation I made.)

The application could be started! The first thing I learned from it was that my key combination was of little use, because the application works mostly with pop-up menus. I wished I had that other tool back again. There was no time to lose, as while testing we members were also communicating. You might think that the communication was distracting us; sure it was, since every time you need to look at who is replying now. But valuable information was provided too, like: "just 10 minutes left".

This left me just enough time to recheck some functions I had decided to focus on, and to open the folder where the application was installed to check what kinds of files were stored there. I noticed an editable application file: not only the manual, but also the "ini-file"! When time is short and you want to learn about the application in a different way: open the ini-file and start messing around. Change values, raise numbers, check whether the values match what you saw in the application (like controls set to 0 or 1), and keep restarting the application. Stress the application using the means you have; in this case a text editor can be valuable.
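The same messing-around can be scripted once it becomes repetitive. A minimal sketch of the idea, with an invented throwaway ini-file since the real one was not shown:

```python
import configparser
import tempfile

def fuzz_ini(path):
    """Flip 0/1 flags and exaggerate numeric values in an ini-file,
    so the application can be restarted against unexpected settings."""
    config = configparser.ConfigParser()
    config.read(path)
    for section in config.sections():
        for key, value in config[section].items():
            if value in ("0", "1"):
                # toggle boolean-like controls
                config[section][key] = "1" if value == "0" else "0"
            elif value.isdigit():
                # raise numbers well beyond what the UI would allow
                config[section][key] = str(int(value) * 10)
    with open(path, "w") as handle:
        config.write(handle)

# Example with a throwaway file standing in for the application's ini-file
with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as tmp:
    tmp.write("[view]\nshow_toolbar = 1\nfont_size = 12\n")
fuzz_ini(tmp.name)
```

After each such change you would restart the application by hand and watch how it copes, exactly as described above.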

During testing I found several issues, which I submitted to the database. While doing this I noticed that the way I wrote issues down differed from time to time. Initially I forgot to mention my system (PC, OS and installed language), and I also noticed that the expected result was not always stated. I reminded myself to think up front about what to write down and who the audience is that I am writing it for. I did include information like:
- what behaviour was found
- what was expected (after 2 issues)
- which actions were used
- references to screen prints
- when necessary: which data was used
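The fields above can serve as a checklist-as-code. This is my own sketch; the field names are hypothetical and are not the names Mantis uses.

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """The report fields listed above, as a hypothetical template."""
    summary: str
    behaviour_found: str       # what behaviour was found
    expected_result: str       # what was expected
    steps: list = field(default_factory=list)   # which actions were used
    environment: str = "Windows, English"       # PC, OS, installed language
    screen_prints: list = field(default_factory=list)
    test_data: str = ""        # when necessary: which data was used

    def is_complete(self):
        # The two things I tended to forget first
        return bool(self.expected_result and self.environment)

report = BugReport(
    summary="CAPTCHA characters ambiguous",
    behaviour_found="Cannot distinguish O from 0",
    expected_result="CAPTCHA avoids look-alike characters",
    steps=["Open registration page", "Read CAPTCHA"],
)
print(report.is_complete())  # → True
```

A quick completeness check like this catches exactly the omissions I noticed in my own reports.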

Finally the remaining 10 minutes were over. Then the wrap-up took place for another hour, where we came up with some good lessons learned. I believe that next time we could be more active in sharing our experiences. I had difficulty not continuing to test, as the conversation started late :). After a few minutes the structure was there; a strong point was that we listened to each other and, hopefully, learned. I did.

So this was a "short" description of my first experience. In the end I had fun and will surely attend again. Thanks!

For me it came down to the following quote: "Minds are shaped when guided under pressure in a certain direction trying to maintain vision and control."