A passionate tester
I'm not a scientist, I'm not a historian, I'm not a religious follower and I'm not a native English speaker (bear with me and educate me if I'm wrong). What am I? I am a passionate tester, and I see in other disciplines lessons we can learn for testing.
That is how I came to this posting. Yesterday I watched a documentary about the beginning of life, and it made me think about a discussion that is currently going on in my world of software testing.
Some references contributing to the discussion:
Stuart Reid: Keynote 3: When Passion Obscures The Facts: The Case for Evidence-Based Testing
Cem Kaner: A new brand of snake oil for software testing
James Bach: Stuart Reid’s Bizarre Plea
Jon Bach: The Truth about Testing?
Nathalie Roosenboom de Vries- van Delft A lot on my mind…
While watching the documentary on the Discovery Channel I was captured by the example of how John Needham (10 September 1713 – 30 December 1781, an English biologist and Roman Catholic priest) performed an experiment to "prove" that life can be created in a "closed" environment. Based on his experiment he believed in a concept of "vital atoms": atoms escape into the soil and are taken up again by plants. You might picture his experiment as adding water to a sealed bottle; after a while, life was growing in the bottle. As there was nothing in it and it was sealed, there had to be something smaller, created from parts of atoms.
If I'm correct he had quite a few followers, and the concept of "vital atoms" became a hype. People seemed to believe what he told them, based on his proof.
Fortunately Louis Pasteur (December 27, 1822 – September 28, 1895, a French chemist and microbiologist born in Dole) proved with his experiment that a mistake had been made: the supposedly sealed bottle did not seal it completely from the outside world, and bacteria were able to enter the "isolated room".
The debate about the origin of life was triggered later on by Charles Darwin (12 February 1809 – 19 April 1882, an English naturalist), who wrote On the Origin of Species. With this work a new era started. He didn't write about how life began; he brought biology and chemistry together to explain how life evolves.
The debate became one between life created by a god and life that evolved.
Among those who believed that life evolves, several experiments, hypotheses and theories were developed to prove that under various circumstances life can evolve and be created. For example, the combination of oxygen, carbon and other materials with some source of energy can result in "life-forms". The Oparin-Haldane hypothesis, by Aleksandr Oparin (in 1924) and John Haldane (in 1929, before Oparin's first book was translated into English), described such a process. In short, chemical components that were individually present in the sea were transformed by ultraviolet light or lightning into organic components.
Haldane even called it the 'prebiotic soup'.
Stanley Miller devised the Miller–Urey experiment (conducted in 1952, published in 1953, re-conducted in 1982) to prove that in an isolated world life can be created. This experiment, together with its outcome, became "the standard". They believed it would be that easy to create life.
This concept also supported the idea that life could be created elsewhere than on Earth, also called panspermia. If I remember the documentary well, pieces of meteorites older than the Earth have been found in space whose structure resembles that of "simple" cells. Combine this with the theory that organic components can also be created in isolated spaces, and there is a chance that life arose from outer space.
Jeffrey Bada also performed the Miller-Urey experiments (see: Primordial Soup's On: Scientists Repeat Evolution's Most Famous Experiment by Douglas Fox) and built on them, with the difference that he looked at the environment of the early Earth, which contained amounts of iron and carbonate minerals. He added them to the experiment and came to a different outcome.
Other scientists followed this road, bringing up hypotheses and research to explore the options of creating life under extreme conditions: near volcanoes, in caves, under water without light, and so on.
The documentary provided more examples which, in my opinion, can also be translated to testing.
What to do with testing?
Perhaps you wonder what this has to do with testing. Perhaps you have drawn your own conclusion or picture. What I see is a process in which evolution is involved: not only evolution of the human species, but also evolution of human thinking. Based on the context known at the time, John Needham came to his approach and method. He was able to sell it to the crowd and gained followers. Almost a hundred years later a new person, Louis Pasteur, proved otherwise. He showed that although the conclusion seemed valid, the environment was not what it was expected to be. Based on the knowledge of his day, John was right; only through new techniques and new understanding was humankind able to bring up other methods.
In testing I also see people evolve and continue to challenge "experiments" and "methods", as well as people who accept a certain outcome and become followers.
Louis Pasteur did not prove how life was created; he just showed what went wrong in that experiment. In the same era Charles Darwin published his view on evolution. This triggered scientists in other disciplines to continue the search for how components evolve.
This evolution triggered me to think about how mere "zeros and ones" translate into bugs.
I would say that those zeros and ones alone won't do anything. It is the context that makes them visible as functionality, and the environment that lets them evolve, whether into new functionality or into flaws of evolution.
As testers we have to be open to other disciplines and to the understanding those disciplines offer. We can learn from them, and we should spend time investigating them instead of spending time claiming our approach is the only truth. Learning requires an open debate based on mutual understanding, not on the perception of owning the single truth.
Just as Stanley Miller's experiment was re-conducted after years, we have to stay alert and keep learning and questioning our approach. It is mandatory to keep an open mindset. And, like Jeffrey Bada, we should perform our own experiments; they might support or adapt the visions of others, or even your own vision.
Testers should be able to discuss the possibility of their own failure and learn from others who perform similar experiments. In approaches like the schools of software testing there must be space to discuss, challenge, disagree and agree with each other to make evolution in software testing possible.
Do you believe?
What I learned from the documentary is that people follow others who claim to have found the evidence for their hypothesis and to have shown that others are wrong. It seems to me that in history certain mistakes are made again and again. In the example above, they initially seemed to be right, although time proved the opposite.
I don't think it should matter who is wrong or who is right; you have to be able to make up your own mind and not follow people just because they claim to have the proof. You might use their thoughts because they help you: in your work, and in your own process of learning.
When accepting this, you have to be aware that what you believe in now might turn out to be wrong or different later on.
Was John Needham wrong in his assumption? I think not; based on his knowledge, and the lack of knowledge of others who lived in his era, he was arguably right, and others learned from it. It would have been wrong if he had deliberately misused situations, misrepresented facts or ignored others' visions to obtain his proof.
If we follow some school and deliberately ignore others, how would this support the evolution of our profession? How different are we then from Pope Damasus I, who assembled the first books of the Bible at the Council of Rome in AD 382? Imagine how different the world would look if the Bible contained other chapters.
I think we should mutually accept each other's thoughts, learn from them and adapt. The key word here is mutual. Are you making the step with an open mind and supporting mutual learning, or are you staying behind?
Additional reading:
While searching for some background information I stumbled upon this book. I believe it is worth reading:
Thinking about Life: The History and Philosophy of Biology and Other Sciences by Paul S. Agutter and Denys N. Wheatley
Thursday, May 20, 2010
Tuesday, May 18, 2010
It happened on May 12th, 2010
Last week I attended the TestNet voorjaarsevent. TestNet is the Dutch Association for Software Testing. Besides several workshops and small events there are two major events: one in the spring (voorjaarsevent) and one in the fall (najaarsevent).
Last week the so-called "Voorjaarsevent 2010" took place. As usual I prepared a bit, selecting the presentations I would like to attend and also those I would not. What I often see is presentations dealing with old concepts in new clothes. This time there were a few that got my attention.
* Location: very good, as were the main entrance, the route to get there and the overall presentation
* Dinner: good, it was tasty and there was enough
* Drinks: I would like to see the ability to also get drinks like coffee or tea while not attending a presentation
* Exhibitions: well arranged off the main entrance; good to see those familiar names again and again
* Presentations: there were a few I wanted to see, and apart from the first one what I saw was good. Still a lot of old work in new clothes
* Key-notes: expected more from the first key-note related to the main topic, "secure testing"
* People: open-minded and pleasant
Program (copied from TestNet.org)
Intentions to see:
- Key note from Stuart Reid: "Improving testing - with or without standards"
- a "debate related to ethics and testing" led by Nathalie Roosenboom de Vries-van Delft & Budimir Hrnjak
- Rudi Niemeijer: "Safety helmet prohibited" (unfortunately this was parallel with the debate and I had to miss this one)
- Jurian van de Laar: "Testers helping developers or vice versa?"
- Menno Loggere & Nora Visser: "Privacy kills quality"
- Anna & Linda Hoff: "The Supertesters"
Besides the presentations, this event is always a joy for meeting "old" and "new" test friends. The setup of this event was a bit different compared with the past. This year they introduced a kind of knowledge table where exhibiting companies could talk with participants about a topic they had chosen. Also new, in my opinion, were the smaller tables that enabled us to sit down with our friends.
I arrived just half an hour before the opening, which was followed by the key-note given by Stuart Reid. Arriving in the large hall I noticed it was already quite crowded. During the day I heard that over 450 attendees were joining this conference together with me. Who dares to claim that testing is dead? We already have quite a strong testing community in the Netherlands. :)
Together with a fellow tester I know from participation/moderation on testforum.nl, I went to the main hall holding a cup of well-deserved hot coffee, found a spot and sat down. The show started when the chairman opened the conference.
Improving testing - with or without standards
He introduced Stuart Reid. I had never seen a presentation of his before, and he started by introducing himself. Based on his 27 years of experience, tutoring at a university and participating in ISTQB (if I remember well), he started explaining how our profession as testers is lagging behind in knowledge compared to users and developers. To support this he showed a nice chart with figures from a survey held in 2000. Based on these figures our skills look very bad. Didn't we learn more in the last 10 years? Are these figures still valid? Why are we testers so eager to see figures, even from old surveys, and to rely on them? Or should I accept them and draw my own conclusion that certification has not added any value to the market in all the years it has existed? Of course I am wrong again.(?)
Using figures and charts from "old" surveys can be useful. Really, there seem to be people who think using "old" figures can reveal the truth.
I believe when using certain information you have to check how dynamic the environment is.
Somehow the IT environment looks very dynamic and fast-growing to me when it comes to technology, perhaps also somewhat related to learning skills. Other areas are less dynamic. Stuart Reid also showed a nice picture of a model created by Hackman & Oldham related to motivation factors. I think that image had more value for his presentation than all the other figures. Here he made his point that we should not only look at information sources and test techniques; we should also be aware of our soft skills.
Fortunately he was a bit short of time and skipped the slides related to ISTQB etc. In my opinion that preserved the message that is worth more to me: we are behind in our knowledge, or getting behind, and we should adapt now. We have to become professionals with skills. Some testers might believe ISTQB or other certifications are the only true path; for them, those skipped slides might have added something.
I had hoped he would provide some information and guidance so we actually learned something, instead of scaring us and steering us towards certifications as a must. Perhaps I am wrong.(?)
The debate on the ethics of testing
After the previous session I attended a "new" idea: using a debate to bring testers together. Perhaps people think this is a contradiction. I believe that when people talk to and hear each other, mutual understanding grows. You don't have to agree with the other's vision; you do have to be aware of it.
During this debate several statements were introduced by the hosts, and we had to choose a side. A pitfall could be that everyone would agree or everyone would disagree; the hosts mentioned that in that case they would divide the audience into two sides.
There were some good statements. Although you might agree with a statement, it can be more fun to come up with valid arguments to challenge it:
- "A tester should always speak the truth"
- "A tester can be held responsible for acceptance"
- and more
I believe it is a good habit, when everyone agrees on something immediately, to question whether it is true. A tester should be capable of coming up with arguments to question any statement.
I heard some very good pro as well as contra arguments. After each discussion the hosts came up with a wrap-up and their own thoughts.
I see a future for this approach of discussing topics. You can play this game in different ways, and learn a lot from it.
After the vote on whether testing and ethics can go through one door together, a mystery guest was introduced. They had found a "known" person, Bart Broekman, willing to speak about his vision on the statements. I think a debate before a "keynote" is a good way to set the mindset: it helps avoid discussion of irrelevant topics and supports discussion about what the speaker really has to say.
Winning the TestNet 2010 debate award
Finally, at the end there was a prize to win, and I was the lucky one who gained that award. Unfortunately I don't have a picture of me accepting the award, though below you see the evidence. Did I win because I am a good debater? I don't think I'm the best; at least I was standing there with dedication and belief, and perhaps that made me win. I'm proud of it. Sometimes an award says more than other valuable tokens. I know there are some testers who fancy the bottle of wine I won together with this cup; I knew the cup would be valued more when I presented it to my kids.
Super testers or Super Sisters?
The sisters Anna and Linda Hoff from Know IT in Sweden gave a tremendous show presenting their vision/act about testing: "The Supertesters - A Slightly True Story". (Also to be seen at EuroSTAR 2010. Advice: see them!)
During the show I wondered whether they were actors/comedians or testers. Based on how they used the terms and images, they have to be great testers.
In my opinion they were GREAT. It was actually a very good show in which they acted as a program manager, a tester and a super tester, using several techniques: presentation, discussion, drilling, singing, rehearsing and pictures.
If you looked and listened carefully you might have heard how they played with the several testing schools, from ISTQB addicts to Bach followers. Their portrayal of changing testers' mindsets by drilling and forcing the message that their answer is the only true answer is, in my opinion, a good example of how we testers are currently pushed to think alike.
Some funny moments were added, like hiding a bug, finding it, appreciating it and comparing it with the moment from The Lord of the Rings, valuing it as "my precious".
There was also their take on testing concepts like smoke testing, the V-model (in their vision actually a very nice model), load testing and performance testing (I am still wondering whether the sisters used pictures from their own albums).
At the end they showed a mix of lyrics set to known melodies, presenting their message.
At the end
I had reserved some time to see another presentation after dinner, but as usual I didn't make it; I had some interesting discussions with fellow testers instead. That is one part I also like about this event: meeting other people. What I understood from them was that some presentations could have been better. Personally I think this was a great event: I learned some things, had some laughs, got annoyed by the first speaker and went home with a good feeling.
Sunday, May 16, 2010
Another weekend and another session of European Weekend Testing - EWT18: "Zoom me in"
This time Markus Gärtner did a great job facilitating. Was it different from other times? I believe so. Although the number of participants was low, the team was good. This time a good mission was defined, relating the test of the tool to reporting your findings to your manager on whether it is suitable for use during a presentation.
This time the application under test was ZoomIt v4.1. The main objective was to see whether the application is suitable for use during a presentation you have to give for your boss.
During the roundup:
In contrast to other sessions I changed my approach a bit. Before downloading the tool I first read the website for information and noticed that it has only a small number of functionalities. After downloading I also checked the developer's website for additional information. The application itself runs without installing.
Basically, these were the steps:
- Read about the application
- Check for functionalities while running the application
- Ask the facilitator questions about the context for testing this application
- Define the conditions to test the application
- Defocus and check if there are other ways
Some questions to start with
Do you want an impression of the usability of ZoomIt?
Does the functionality fit, and can it be used during the presentation?
What about the information to use in the presentation?
When will it be suitable and correct to use, and when will the boss be pleased with the presentation?
Confirmation of the approach
Before I actually started testing I tried to confirm the approach. The mission was to see whether the tool ZoomIt is suitable for use during a presentation; it should function under the defined conditions of that presentation.
It should be able to support the following objectives of the presentation:
* high-level understanding
* details where necessary
* interactive questions
Some Steps to mention
To check whether the application is suitable for use in a presentation, I performed the following steps:
- learn about the tool (from the documentation and the tool itself)
- check the functionality
- use the tool
a. as given, on the screen that is open
b. on a presentation, while not actively shown
c. on an active presentation
d. on a movie
- prepare a matrix of combinations: hot-keys versus environment (application vs video/chart etc.)
- use the tool with respect to functionality and usage within a presentation
- on video and chart
- using different options
- check the behaviour of the tool when changing the standard settings
- CTRL-break+background fade: ok
- it is possible to enter a negative time in the box (copy-paste a negative value: -1)
- boundary values of break are:
o enter: 1 until 99
o paste: -9 until 99 (pasting 100 results into 10)
- Hotkeys: it responds to the key combination you enter; if you enter CTRL+SHIFT and hit enter, these values are preserved, which is not possible manually
- Used different font types, including Wingdings; they seem to work. Typing makes the cursor go off the screen, also when using ENTER
- The timer also uses the font as set in the type dialog
- Font size can only be altered to a value between 16 and 16
- When using the tool while a video is running, live zoom does not show it at all
- Mouse behaviour in live zoom is inverted
Although some are not new, these lessons are refreshing and valuable to mention:
- Sometimes it is not clear what a tool must do, only under which conditions it will be used, such as on charts
- With a tool that has a small number of functions, it is easy to prepare a matrix to test
- Be aware of environmental conditions and don't make assumptions; all functions worked on my PC (Vista), but that might not be the environment you present on
- Frequently participating in weekend testing trains your mind
- Defocusing brings some peace into the thinking process
- Asking questions first, to make the scope clear for yourself, provides great guidance during a mission
- I should train myself more on the questions; perhaps a "golden" heuristic might help
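One of those lessons, the matrix of hot-keys versus environments, can be sketched in a few lines. This is a minimal illustration only: the key and environment names below are stand-ins, not the actual ZoomIt set from the session.

```python
from itertools import product

# Illustrative values only -- stand-ins for the actual hot-keys
# and presentation environments used in the session.
hotkeys = ["zoom", "draw", "break timer", "live zoom"]
environments = ["static slide", "animated slide", "chart", "video"]

# Pair every hot-key with every environment, so that untested
# combinations are visible at a glance.
matrix = [(key, env) for key, env in product(hotkeys, environments)]

for key, env in matrix:
    print(f"{key:12} on {env}")
print(f"{len(matrix)} combinations to cover")
```

With a tool this small, the full cross product stays manageable; for larger tools you would prune or prioritise the pairs instead.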
During the discussion some other nice lessons were raised. It is not always obvious that the environment you test on is the same one you will present on. Another point was the availability of a beamer and other digital means. This resulted in the suggestion to use a flip chart, whiteboards etc. as an oracle to check whether the application would support the presentation.
Ajay also introduced a term he learned from Pradeep while participating in a Bangalore testing meeting: "gaining the context" instead of "setting the context".
This brought me some ideas and thoughts as well. Somehow I see "gaining" as something you have to earn; in my opinion a context for testing must be gained. The information provided is not always clear, and even when you ask the right questions, it is the other person's attitude and willingness that determine what is shared with you.
While we were discussing the differences, Pradeep Soundararajan entered the discussion to support us. He challenged us with an alarm clock example. A bit later Michael Bolton also joined in to provide some guidance.
My first impression is that general approaches give little consideration to human aspects. Michael pointed us towards the mnemonic CIDTESTD from the Heuristic Test Strategy Model. (20100516: changed "heuristic" into "mnemonic")
At the end there was the question of why it would be so important to discuss the difference between gaining, exploring and so on; it could all be the same, just a word game.
I believe there is a difference. Like earning respect, you have to gain knowledge and information. This can be done by using your skills as a person and adapting to the situation. I believe information should not be taken for granted. Or, as Markus mentioned: "don't look where everyone's already pointing". This can also be read as: "don't ask for information others have already asked for".
In the end it was a challenging, well-moderated and fun weekend session.
Well done to all.
For those who want to be challenged by challenging themselves: take part in one of the weekend testing sessions and teach yourselves! Don't hesitate to participate!
For more information see:
Or follow them on twitter
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters
Monday, May 10, 2010
I blogged about my experience in weekend testing where I used Astra Site Manager to create a map: WTANZ02: Same Language, different sites and places. In that post Shrini Kulkarni challenged me to expand on how to use this as a test strategy.
When you look at the images posted there, you might notice that they look a bit like spots or stains.
Thinking about spots and stains, and deriving information from them, immediately reminds me of the Rorschach test.
From Wikipedia: Rorschach test: "also known as the Rorschach inkblot test or simply the Inkblot test) is a psychological test in which subjects' perceptions of inkblots are recorded and then analyzed using psychological interpretation, complex scientifically derived algorithms, or both."
Below you see an example of a Rorschach image. Are you able to read this picture? Are you able to assign functionality to areas? Do you see bugs?
Image saved from wikipedia http://en.wikipedia.org/wiki/File:Rorschach_blot_01.jpg
Primarily based on his perception of these spots, the user is asked what he experiences, how, and why. What does the spot tell you?
Testing spots
Below you see the two images I obtained from "testing" the two websites, as stated in the challenge from WTANZ02: Same Language, different sites and places.
Just tell me: what do you see?
Depending on how you look at the images, you might identify some shapes. Perhaps you only see dots, or animals. Perhaps you see bugs.
Defining a strategy is a challenge in itself. Writing about it and sharing your idea is even more of a challenge. Trying to turn it into a heuristic is more challenging still, as this is quite new to me. So bear with me, support me, and let me teach you so that I can learn from you.
I suggest first defining the approach based on patterns. Ask what the image itself can tell you and what information you need to define the approach.
Imaging: Create a map of the website/ functionality to define a certain landscape
Defocus: Don’t approach the image as a system, approach it as a painting, approach it different, what else do you see? Use your imagination.
Interpret: Are you able to tell a story about what you see (colours, lines, drawings, etc.) and argue for it?
Density: is there a structure available representing the first impression you had?
After you have a main overview of what the system could look like, you might play with the following components.
Complexity: Is there some kind of structure? Are there lots of nodes, and are you distracted by them?
Number of objects: Are there so many objects visible that you cannot zoom in without missing details?
Environments: Can the map also be used to identify other systems or secure areas?
Risk areas: Are you able to point out areas of risk in the map based on "important" functionality?
Process: Is there an order in the structure which might also reflect a process?
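To make the density and complexity questions a bit more concrete, here is a minimal sketch. The page graph below is entirely made up, standing in for a map such as Astra Site Manager would produce; all page names are hypothetical.

```python
# Hypothetical site map: each page maps to the pages it links to.
site_map = {
    "home":      ["news", "products", "contact"],
    "news":      ["home", "archive"],
    "products":  ["home", "product-a", "product-b", "search"],
    "product-a": ["products", "search"],
    "product-b": ["products"],
    "archive":   ["news"],
    "contact":   ["home"],
    "search":    ["products"],
}

# Density/complexity: pages with many links are the busy "spots"
# in the image and candidates for deeper testing.
degree = {page: len(links) for page, links in site_map.items()}
hotspots = sorted(degree, key=degree.get, reverse=True)[:3]
print("Densest areas of the map:", hotspots)
```

The point is not the code but the reading: the densest nodes in the picture are where attention, and perhaps risk, concentrates first.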
With the previous actions I hope to have provided some ideas of how images of a website structure can support defining a test approach. I believe that by looking at images or structures in a different way, you may come up with other concepts and thoughts that support your test approach. The next step could be adapting the newly gained view into your test process; based on it you can define alternative test cases or perhaps a product risk analysis.
It might help to bring some creativity back into testing.
This weekend I attended another session of European Weekend Testers. This session was facilitated by Thomas Ponnet and took a different approach than in the past: we could prepare ourselves a bit, because the tool under test was announced before the session started.
What has this to do with rocket science? It was the plug-in that had to be tested.
The Participants were:
Zeger van Hese,
Jaswinder Kaur Nagi,
I have to admit it was a great crowd, a great session and a useful round-up.
What to learn
Last week Andreas Prins posted on his blog the question of what can be learned with weekend testing: attitude or methods? In his posting he wonders why, when reading articles related to weekend testing, he never sees a reference like "as per ISTQB chapter X, page xxx, we must do this or that". It is a good remark. In projects I don't refer to pages of ISTQB either, not in this context. When testing in the weekend, or on a project, you refer to your experience, or to sources where people tell about their experience in a certain context. That experience can be based on theory combined with common sense and the situation.
What can you learn in weekend testing? I'm not able to tell you what you will learn. You might learn how to look at yourself. You might learn to think beyond the borders of the regular testing projects you are in. You might learn from the approaches of others. You might learn how to learn.
The mission this week was different from the others: this time we had the manager of a band who had a gig that evening and wanted to make sure that the plug-in he had found was suitable and stable, and, if not, whether we could propose alternatives.
Basically I looked at the application along the following points:
- try to play wav while plug-in is not available
- try to play wav after plug-in is selected
- tried to alter the sound of the wav using several preset schemes.
- asked the manager about when it is stable, plug-in /laptop
- asked the manager about under which condition plug-in was used
- played with multiple files,
- used other wavs
- used wav and midi together; there is no option to mix tunes
- used the keyboard and options while music was playing; it interferes with the output
- the midi/wav player, played a bit with that.
- looked at the minihost and used several schemes/presets
- used the buttons on the minihost
- tried to work together with multiple minihosts
- try to record music
- tried strange actions, like short-keys, to see how the app reacted
Below you find the highlights of the issues I found during the session. These were findings from my side.
Error01: message shown when opening the minihost
Sometimes this error is shown when using the minihost. It is not always reproducible.
Error02: error shown when opening recorded wav
When opening the "test.wav" file just recorded, this message is shown, although the wav file created/recorded using the mic is 1 kB.
Error03: opening another wav file gives an error; wav not played
When opening another .wav file format, the following message is shown and the wav is not played.
Error04: recording not working
When using the recorder it shows that a number of kB is created. Even the file and location are shown correctly; only the actual recording is not made.
Error05: when a new file is played, it is not selected
When a song has ended and a new one in the list is played, the selection does not move; the original stays "blue".
Error06: in the global settings window, "tempo" is not working
When using the tempo slider, there is no effect on the wav output.
Error07: multiple files can be selected; only the last file is played
Error08: buttons not finely controllable; usability is poor
When trying to turn the knobs, they do not follow the direction of the mouse.
Error09: pressing a button on the midi/wav recorder while a song plays interferes with the output
When playing a tune and you press other buttons of this app, the music/tune stops or hangs for a few moments.
Error10: pressing the F3 button while playing a wav makes the music hang
When pressing the F3 button in minihost.exe while playing, the application hangs. No other interaction with the system is possible.
Some Lessons Learned
Of course there are a lot of things you can learn when testing. There are even more things you have already learned. Some of the lessons I learned this weekend are just refreshers or confirmations of other valuable lessons.
1. Although information about a single application is required, when it is tested together with other tools the answer is a combined one. You might consider it a single object, but when it is tested together with other tools you have to consider their stability as well.
2. To understand or be able to test a part of an object, you need to know the context; in this case, what stability means for the user, not for the tester.
3. You are not in the position to provide advice; see the article by Michael Bolton: http://www.developsense.com/blog/2010/05/when-testers-are-asked-for-a-shipno-ship-opinion/ You can provide information.
4. When you are asked as a team, you have to work as a team. Even after a short introduction it is hard to get everyone’s attention.
5. Domain knowledge is a prerequisite when it is directly asked for by “the manager”.
6. If the manager is not there, find someone in the team with domain knowledge.
7. Don’t get distracted by crashes. When they are reproducible you can avoid them, and if you are still able to use the functionality, you might still earn something with your gig.
8. It is easy to forget the lessons learned from previous sessions. The assumption is easily made that everyone knows you and how you think. Information which seems obvious is often forgotten when you have been in a project longer. Perhaps recap some questions? Magic words: FOCUS/DEFOCUS.
9. This reminded me of Markus’s post about being blunt or not towards a manager: http://blog.shino.de/2010/04/11/testing-and-management-mistakes-causes/
The discussion part
During the session several questions and suggestions were raised; information was missing or needed. Some were about domain knowledge, like "what a software compressor for music is", "how to communicate with the manager", "acting like a team or not" and so on (you might check the transcript for details).
Also at the end some valuable remarks were made, related to "old experience", "Skype is not a good tool to use", "the manager already checked for a tool, why should we check for more functionality?", "if the plug-in was the objective, should it be tested alone?", and "are we able to answer the question and provide advice?"
Looking at the discussion as a process, you can also notice some familiar behaviour. We all had a common goal, yet we acted like individuals. We tried to get information which would be valuable for us at that moment. In my opinion we did not ask what would be valuable for the team. We also tried to do our job well in the minimum of time and therefore focussed more on ourselves. When you look carefully, there were some people who tried to become a group and act like a group. Perhaps due to time, differences in experience, differences in testing approach and differences in objectives, we did not succeed in acting like a team. If you look at the end, we were more explaining what we had done and what the traps were. The focus lay more on "did we succeed in the mission?". I believe we partly missed a good lesson: "what did we learn and was it fun?" and also "which personal lessons can you take to a next session?"
What would be more valuable: to meet the mission as an individual, or to act as a team, learn from each other and perhaps meet the mission's objectives, or perhaps change them during and afterwards?
This weekend's session was a great one: a mission with attitude from the manager, a great crowd of testers, and a discussion you can learn from. I had a lot of fun and learned old and new lessons.
Friday, May 7, 2010
It is so obvious. I knew it, and although I searched for someone to blame, it is my mistake.
Sure, how easy it would be to accuse my little daughter, who sat behind my PC playing some games on it. How adorable she was when she laughed and called me for help. How priceless her smile was when she was proud to be allowed to play on my PC.
It doesn't matter at all. It is broken and I didn't do it. So I am not to blame, or am I?
Here is the situation:
I have an external disk drive and I used it as a backup facility. So far so good; why not use an additional drive for backup instead of those disks? (I still have some 3.5 inch disks with stuff on them, and those cannot be used for backup anymore.) It seemed reasonable to use an external HD as a backup facility.
That hard drive was standing on my tower, and as it is a tiny one it had already fallen down quite a few times, and it worked every time afterwards. Amazing how solid that Freecom HD was. Until last weekend. It fell, and no one told me. And today, after some time, I needed that nice solid backup facility. Unfortunately, it was not accessible anymore.
An inaccessible device is nothing new. Sometimes USB ports are just mixed up or deactivated due to some stupid installation, etc. As I needed the HD, I did some checks.
The initial check was turning on the power. Hey, that was strange: the power was already on, and normally the HD would then be recognized. Hmmm. I turned the device off and on. No result.
I checked the USB cable and tried another port. I restarted the PC. Checked the USB settings. Shouted a bit loudly just to express some kind of frustration. I checked again whether the blue light on the HD was glowing. I listened whether the HD was running while I restarted it (turned it off and on).
I also checked whether it was recognized in the "safely remove hardware" dialog. I kept refreshing my explorer window: using the F5 key, the SHIFT+F5 combination, and the refresh option in the explorer menu. I even tried the F9 key, as in MS Excel this sometimes refreshes the results too.
I stood up and connected the external HD to my daughter's PC. Tried all USB ports on that PC as well. I even carried the HD to my son's PC. All with the same results.
While I was walking downstairs my daughter asked me about the status of her new PC. And there something happened: I found another victim. I remembered the situation and found it strange that the HD had been lying on the ground; only I didn't suspect anything at that moment.
I asked her if she remembered whether the HD had fallen down from my PC. She looked at me with glazed eyes, wondering what I meant. I asked her again whether that grey little box had fallen on the ground, and she remembered that. Somehow I felt even more frustrated; it actually came to my mind to tell her I would work on her PC until a cure was found. Only, who can be angry at his little princess? Not me. I went downstairs, informed my wife about the situation, made some strange sound to express my frustration and went back upstairs to the PC.
That grey little box: when I shake it I hear some noise. It sounds like something very tiny is broken. As a skilled engineer I looked for a way to open the box. OK, I admit, I'm not skilled with those grey little boxes. Somehow I believe I can become skilled at opening the item now that it is broken. And even if it were not broken, I hope to learn.
Looking at that grey box again: no screws visible. I asked myself whether it was worth the effort to open the box using brute force. This time I decided not to open the box. Instead I estimated and weighed the damage.
I look at the damages in terms of:
- what is lost?
- is losing it a loss?
- time taken to collect
- frequency of usage
- emotional value
- options to recover from other resources
- time it would take to recover
- time when the information is needed
- what have I done to prevent loss?
- what were my intentions to prevent loss?
- what did I not do?
Evaluating the process
Looking back and thinking about what I have learned or could have learned, I came up with this blog and the following observations in the process:
- I found a defect
- I made sure it was broken
- I investigated it in several ways
i. What was the behaviour
ii. What should the behaviour be
iii. Was the drive accessible
i. Was it turned on
ii. Was there power available
iii. Was the power-cable plugged in
iv. Was the power source connected?
i. Cables working
ii. Light working
iii. Does it make sound
iv. Can it be opened
v. What effort must be used to open it
i. Was there hardware recognition
ii. Did behaviour occur on all other USB-ports
i. Own system
ii. Other systems
i. What was on it
ii. What is gone
iii. Where are some pieces stored
iv. How old was the data
v. What was the value
i. What did I remember
ii. What did I do with the data
- I checked if others were to blame
- I noticed some strange behaviour as a result of my fury/frustration
- I found someone, only I didn’t want to blame her
- I noticed that blaming does not solve the problem
- I searched for arguments not to start blaming
- I tried to evaluate the loss
- I evaluated and wanted to learn from it
When looking back at this situation, you might notice some similarities between this situation and testing.
How often did you:
- face issues and become frustrated about them?
- look at a system in different ways?
- spend time finding the one to blame?
- try to look at the behaviour of others and yourself?
- learn from that situation?
- aim for value instead of spending time on too detailed proof?
When I look back at that situation, I see that I didn't stick to pointing out the problem; even solving the problem was not the issue. I valued the situation and took actions. This time that meant apologizing to my daughter, removing the external HD from my PC to avoid further damage, and checking some other old storage to make sure it is accessible. Finally, I scheduled some time to check for other hardware and decided it could wait for a month.
I still have the image of her face, sitting proudly behind my PC and smiling at me. That is priceless; in other terms, also valuable. I would say it is even more valuable than the damage/loss.
A valuable lesson I want to give the reader: don't take things for granted. If something happened, look at what you can learn from it and how you can learn from it.
Thursday, May 6, 2010
Images above words
How often have you heard that images tell more than words? Of course we all believe that; our project managers, and you too, want fancy colour charts and dynamic results, delivered in real time.
A while ago I planned to make dinner. It is one of the recipes I got from a fellow student; we even named the recipe "Drietdrap" (don't ask me for the explanation). At that time I cooked it for some ladies and they were sold, although it doesn't look good. I will be honest: it looks awful when you see it for the first time, but it tastes even better. Perhaps that is the deception: it tastes better than it looks, so therefore it is good.
I have cooked this meal quite often for friends and family; even my children like it. Every time, I was asked to write down the recipe so they were able to reproduce it. Starting this time, I noticed that I had already written the recipe down multiple times, and depending on the moment it was more or less detailed. This made me think about the job I love: testing. Do you taste the similarity already?
Here is my story for success. Enjoy your dinner!
Would this be enough information to create the meal and also to test the meal?
Words above images
Another approach to tell the same story with words.
Ingredients for 4 persons (looks like requirements?)
- 500 gram minced meat
- 250 gram mushrooms
- 1 or 2 onions
- 2 pieces of garlic
- crème fraiche
- pasta tri-colore
Bake the meat; add pepper and salt to taste. Boil the water; chop the onion, mushrooms and garlic and add them to the meat. Heat up the spinach, and add the pasta to the water when it is boiling. When it is all done, drain the pasta. Add the crème fraiche to the spinach. Add the Boursin to the pasta. Mix it all together, and now you have a lovely meal.
Are you familiar with the recipe? Do you know when you are done?
Mind mapping dinner
Like every meal I cook, I do it mostly using my mind and my sense of taste instead of the actual recipe. To structure the cooking process you can use some kind of mind map; perhaps the one below helps you out?
Do you also see and feel it? Do you feel comfortable now that there is a bit of structure? Is this the best of both worlds: an image and some words?
The story so far
Like always, the result you experienced before never matches the success you gain now. There are several ways to communicate, and there will always be information missing. As a tester you have learned never to make assumptions. And do you only know the results at the end? Sometimes there is a combination of documentation, written words and models. The image below is not a complete way to explain it; for me, at this moment, it is a story I could have told while cooking.
I provided you a few examples of how you could explain cooking this meal. There seems to be not much difference between cooking and testing. You have different roads to approach cooking; you have different ways to tell it. Whether you cook as a profession, are a person who likes to cook for fun, or are a cook who does it because someone has to do it: it is the result which counts. We have to eat to live. Sometimes the looks and the taste don't matter; it is the value of it. Is it an in-between snack, or dinner to survive?
In testing you have these kinds of testers too. You have people who live by the book, and you have testers who act on vision and experience. You have those who are creative and willing to try, accepting that they may fail, as long as they learn from failure and there is time for failure.
In the examples above you see that information can be brought in different ways. People who are familiar with the recipe, or keen on learning, perhaps need less detailed information than others. For others, detailed information is mandatory. It is also not only the recipe that counts; other information relates to expectations. As I said at the start, I once cooked this very successfully. That might be important information: it doesn't add value in preparing the meal, but it colours the expectations.
The same goes for how old and trustworthy the expectations are. What if I tell you that that nice achievement was about 20 years ago? I cooked it more often over the years and was also successful, as friends and family were satisfied. Perhaps there is some kind of additional context, shaped by memories.
There is not one recipe for success; there are more ways to serve. I hope you learned that we should NOT ask for detailed documentation; we should ask for the information needed to add value. In this case the value was when dinner was served and eaten. Less important was the order in which things happened.
Michael Bolton posted an excellent article called When Testers Are Asked For A Ship/No-Ship Opinion, which made me think and respond to it. I started by commenting on his blog, and while doing that I came up with some thoughts I wanted to share.
Reading his story raised some questions for me that I am normally aware of but never asked directly. Perhaps because when I deal with this myself it is too close to me: the project is under stress. I agree with him that we should not make the decision to ship. Here are some questions I have in response to the project manager asking whether to ship or not:
- Where did we miss providing enough information? If we had provided the proper information, she would have been more comfortable.
- Why did she ask that question at the end of the project and not during the project?
- Why did we not guide her to ask “valid” questions?
- What could we do better to avoid discussions and questions like this at the end?
Do you notice that these are questions to myself instead of directly to the project manager? If you have to change, first think about what you can do, what value you can deliver, and also when. Looking at these questions, there is more than just providing test results; you have to communicate on other items as well. In this case: which message will you have to bring, and do you have mutual understanding on this?
I’m sure there are other questions to ask, and even more answers to be provided. In my opinion he posted a basic rule here. Thinking further on this question, to ask or not to ask: testers have to deliver all kinds of documents, metrics and so on, just to “help” the project manager make decisions.
The bright and dark side
The bright side is not having all the information and asking the team; only the moment is “too” late when you get this question at the end. It is a bright situation, since you are not exaggerating the documents you deliver and you have time to adapt to the situation. You must continually check whether you provide value.
I believe there is another dark side. The dark side is asking the team for "all kinds of information, not knowing yet whether it will be valuable or usable, but still needing it just in case I come up with questions afterwards", forgetting that providing information costs time and resources that could have gone into delivering other valuable products.
What I have seen in the past is trying to gain control by collecting all possible information. Sometimes collecting information is not that bad; it becomes bad when you communicate about it while no one is waiting for it and you have to explain why they should be.
If you have to explain the value of information afterwards, then you are too late. You have to guide people and help them understand the information you create/provide. You are also responsible for delivering only the information which is needed or adds value to the product, directly or indirectly. This means you have to communicate and interpret the behaviour of the stakeholder.
To me, testing is more than only finding issues or proving that functionality works. It is also a process of getting the results you find accepted. You must be aware that you have to deliver the information which is requested, and make sure that the vision about responsibility is agreed upon. Perhaps keep checking whether the information you provided is valuable and also understood as you meant it to be understood.
Are you the messenger for the GO/No-Go advice? I believe you are not the decision maker on this. You should provide information so that the decision maker can make that decision. Sure, you should help him or her, but only by providing information within the proper context. You also have to explain and guide how that information can and should be used.
If a question like this comes from the project manager, you might see it as a sign that you did not provide the right information and/or did not guide her or him through the information you provided. Instead of asking questions of the project manager, first start asking them of yourself.
Monday, May 3, 2010
Weekentesting on the other side of the world
At least it is for me, and there were some benefits. The WeekendTesting chapter in Australia and New Zealand (WTANZ) had its second session. As it was raining and still early in the Netherlands (just 8 PM), I asked to participate. As I was almost an hour too late, I had less time available to test the mission as provided.
Here's our mission today: The mission: Exploratory testing of how easy it is to get data in different formats about education in the United States and the United Kingdom from http://data.gov/ and http://data.gov.uk/.
The Participants were:
Marlena Compton (facilitator)
Jaswinder Kaur Nagi (aka Jassi)
As mentioned, I joined too late, so I had another challenge: instead of following the mission, am I able to get enough information to be able to start next time? As in normal life, you are faced with situations where an approach has to be defined while less time and information is available.
As I understood from the discussion and debriefing, the websites ought to be similar and have a similar objective. To understand more about the sites, I came up with the idea of finding out about their objectives and comparing them. I also visually checked the structure of each site based on the menu items.
Next to that, the tone of voice was important for me to learn more about the audience.
While checking, I scrolled a bit through the menus and decided to use an old tool called Astra Site Manager, which was developed by Mercury (now HP). Although this tool is not flawless, it surely provided the information I was looking for: how complex is the site?
Website map of http://www.data.gov/ created with Astra Site Manager
Website map of http://data.gov.uk/ created with Astra Site Manager
If you compare the images, you will notice some differences in structure. I think a map like this is usable to identify and pinpoint areas where risk can be found. If an area contains some risk, you might come up with other exploring questions, such as: "if user data is used, how does it flow through other screens?"
As a result of using this tool I came up with some unreliable metrics, like the number of URLs.
The UK site counted over 5961 URLs and the US site over 4903 URLs.
If I relate these numbers to the goal of the sites, sharing information with the public, then I wonder: how will the public be able to find valuable information if the amount exceeds their expectations? How will the public be able to find the right information? The chance of finding some information is high, due to the high number of links; the chance that that information is the correct information depends on how the search engine works. When will the result be the best and most reliable result?
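For readers who want to reproduce such a link count without a tool like Astra Site Manager, a minimal sketch in Python follows. The HTML snippet is a made-up stand-in; in reality you would fetch the pages of data.gov or data.gov.uk and feed them in the same way.

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Collects the href targets of <a> tags as a crude site-complexity metric."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A tiny stand-in page; the real sites would be fetched and fed in the same way.
page = """<html><body>
<a href="/education">Education</a>
<a href="/data/schools.csv">Schools (CSV)</a>
<a href="/data/schools.csv">Schools, again</a>
</body></html>"""

counter = LinkCounter()
counter.feed(page)
print(len(counter.links))       # total links found: 3
print(len(set(counter.links)))  # unique link targets: 2
```

Comparing the total against the unique count already hints at how navigable a site is: many links to few targets reads differently from few links to many targets.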
Looking at the technology: on the US site they just use the icons for Facebook and Twitter, while on the UK site they explain what they do. Does this mean that the audience is different?
What I also noticed when running the tool is the variety of file formats which can be downloaded: .xls, .csv, .pdf, .txt and .xml. Also, there is no use of naming conventions in the documents, nor in the webpages and directories.
The discussion
The round-up was interesting; everyone shared their experience and wondered whether they had met the mission. Some found their way using Google for information; others came up with a well-spoken approach. I learned from this session as well, and hope others did too.
- When comparing different websites: decide which will be your "oracle" and why
- The tone of voice is different and tells something about the expected audience
- Question the value of information when it is offered in huge numbers, and what the chance is that the right information is found
- Creating a map can be useful to pinpoint risk areas and pinpoint value for the users
- The usage of file names and their similarity can tell something about the quality of the site, or at least the chance of errors
- A huge number of web pages might result in a higher chance of failure; why are these kinds of websites this huge?
Sunday, May 2, 2010
Flashing barcodes and great participants
This time, a great session about flashing value and barcodes, also with great participants and discussions afterwards.
The participants this weekend were:
This time the product was a funny barcode reader:
This app generates barcodes based on your input (gender, country, age, weight, height) and also calculates a bogus price value.
The mission was to find out how the calculation worked and what the highest obtainable value would be, combined with reporting invalid values.
Below you find the summary I gave during the round-up.
First I tried out the app by just pressing the buttons and identifying its behaviour.
I checked whether the values entered are used in the calculation. I did this by using the same values twice, to see if there is some kind of randomizer active; this was not the case. I also checked changing only the gender. It actually matched the diagram as described; only that value changed.
I tried the highest and lowest values, which in the end resulted in incorrect values being shown in the Scan report. I also noticed that when using realistic values, there is some mix-up in the barcode between entered values and presented values.
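The "same values twice" check above is a determinism test, and it can be written down as a tiny script. The formula below is a hypothetical stand-in (the app's real calculation was unknown during the session); the point is the shape of the check, not the numbers.

```python
# Hypothetical stand-in for the barcode app's price calculation;
# the real formula was not known during the session.
def calculate_price(gender, country, age, weight, height_cm):
    base = weight / (height_cm / 100) ** 2        # a BMI-like component
    return round(base + age * 0.5 + (2 if gender == "F" else 1), 2)

inputs = ("F", "NL", 35, 70, 175)

# Same inputs twice: if the results differ, some randomizer is active.
first = calculate_price(*inputs)
second = calculate_price(*inputs)
assert first == second, "output varies: some randomizer is active"

# Change only one factor (gender) and see whether only it affects the result.
other = calculate_price("M", "NL", 35, 70, 175)
print(first != other)  # True: the gender factor has an effect in this stand-in
```

Against the real app the same idea applies manually: enter identical values twice, then vary one field at a time and watch which part of the output moves.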
At the end I stripped part of the URL address and came to the actual site. There, the FAQ contained some valuable info about the calculation.
Next time I would spend more time checking the logic as described in the FAQ against the outcome of the app. An hour is just too short for me to check whether the formula they seem to use actually matches the actual outcome. Here, the important value is to agree upon the perfect BMI.
Some funny issues
Of course it was fun to find some issues. When you test this application with the highest numbers, you will find out that the conversion between the unit systems is not done properly, with respect to the offered diagram.
Also, testing with the lowest numbers returns some $NaN tags in the "Scan" list; at least the price value is $0.00.
When navigating back and forward, you will notice that the country dropdown is emptied, which also leads to a strange outcome on the "scan list".
Initial lessons learned
During the round up I came up with the following lessons learned.
1. Agree upon the level of detail of the model you prepare of the app.
By level of detail I mean how deep and how broad you will test, knowing that this decision requires effort and knowledge.
2. Avoid the pitfall of thinking that because the app is simple and no documentation is available, using the app is enough; search for other means.
Ask yourself every time what kind of documentation you need: are you searching for it, or using the application itself as some kind of oracle to ask your questions of?
3. Tools to read code might help.
If you know about tools to read code from Flash applications, perhaps this helps as a documentation source.
4. Conversion between unit systems is often an area for failure (the Mars Climate Orbiter, for example, was lost because of a metric/imperial mix-up; the Ariane 5 failure was a related numeric-conversion error).
One of my recurring pitfalls is the difference between unit systems. I should spend some time learning about them and learning to use them, instead of relying on tools for conversion.
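Unit-conversion bugs like the one found in this app can be caught with two cheap checks: a round-trip check (convert there and back, expect the start value) and a consistency check (the derived quantity must agree whichever unit system you start from). A minimal sketch, using the standard kg/lb and cm/inch factors and the usual BMI formulas:

```python
LB_PER_KG = 2.20462
CM_PER_INCH = 2.54

def kg_to_lb(kg): return kg * LB_PER_KG
def lb_to_kg(lb): return lb / LB_PER_KG

# Round-trip check: converting there and back should return (almost) the start value.
for kg in (0.5, 70, 200):  # lowest, typical and highest plausible weights
    assert abs(lb_to_kg(kg_to_lb(kg)) - kg) < 1e-9

# Consistency check: BMI must agree whichever unit system is used.
def bmi_metric(kg, cm): return kg / (cm / 100) ** 2
def bmi_imperial(lb, inch): return 703 * lb / inch ** 2  # common imperial formula

kg, cm = 70, 175
lb, inch = kg_to_lb(kg), cm / CM_PER_INCH
print(round(bmi_metric(kg, cm), 1), round(bmi_imperial(lb, inch), 1))  # both about 22.9
```

In the barcode app, feeding the same person in metric and in imperial units and comparing the two barcodes would have exposed the conversion issue immediately.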
Lessons during the discussion
Again there was a great discussion afterwards. Thomas came with a suggestion to use iterations for trying out test data; this would force your thoughts to focus and defocus.
Michael posted an interesting lead, which reminded me of some earlier work of his:
It seems to me that one of the principal issues that this exercise brings up is the alternation between focusing and defocusing heuristics--varying one factor at a time (OFAT) or varying many factors at a time (MFAT). (There's also another kind of factor-oriented heuristic noted in the book Exploring Science: hold one factor at a time, or HOFAT.) You use OFAT when you're trying to focus on the effect of a particular factor; MFAT when you're seeking to confirm or disconfirm your ideas about factors in combination with each other
Somehow I couldn't find Michael's source; on Wikipedia there is something mentioned about the one-factor-at-a-time method. When googling "varying one factor at a time" I found some interesting documents which I have to investigate later on.
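The OFAT/MFAT distinction Michael describes can be sketched as test-data generators. The factors and baseline below are made up for the barcode app; the contrast to notice is how many cases each heuristic produces.

```python
from itertools import product

# Made-up factors for the barcode app; the values are illustrative only.
factors = {
    "gender": ["M", "F"],
    "age":    [1, 35, 120],
    "weight": [30, 80, 200],
}
baseline = {"gender": "M", "age": 35, "weight": 80}

def ofat(factors, baseline):
    """One factor at a time: vary each factor while holding the rest at baseline."""
    for name, values in factors.items():
        for value in values:
            if value != baseline[name]:
                yield {**baseline, name: value}

def mfat(factors):
    """Many factors at a time: the full cross product of all values."""
    for combo in product(*factors.values()):
        yield dict(zip(factors.keys(), combo))

ofat_cases = list(ofat(factors, baseline))
mfat_cases = list(mfat(factors))
print(len(ofat_cases))  # 1 + 2 + 2 = 5 focused cases
print(len(mfat_cases))  # 2 * 3 * 3 = 18 combinations
```

OFAT keeps the case count small and attributes any change to one factor; MFAT grows multiplicatively but can reveal interactions between factors, which is exactly the focus/defocus trade-off from the discussion.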
During the discussion I mentioned the approach called TMap, defined by Sogeti, which is at least well known in the Netherlands and also a standard for approaching test projects.
For me, TMap is a strong approach which is more process oriented than aimed at delivering value to the business (perhaps TMap Next can serve this better). As with every model or method, it must be used with common sense. We should be warned not to focus on making the method work; instead, we should watch out that we remain able to deliver value to the business. It is so easy to say that we do it as the method tells us, because agreements are made based on the method.
By common sense I meant, as said in the discussion:
"To me, the skill of common sense is knowing when you are using a method for the benefit of the actual outcome, and not using a method to prove that you are able to use that method and, based on that, claiming you do the right thing because the method is right: you follow the method, therefore you are right.
If you are able to judge your approach against the initial goal you were hired for, then you might be able to get the benefits of an approach like this. Otherwise you are selling something other than what you were hired for."
For those who want to be challenged by challenging themselves: take part in one of the weekend testing sessions and teach yourselves! Don't hesitate to participate!
For more information see:
Or follow them on twitter
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters