Thursday, May 6, 2010
Response on Go/No-Go to ship
Questions?
Michael Bolton posted an excellent article called When Testers Are Asked For A Ship/No-Ship Opinion, which made me think and respond. I started by commenting on his blog, and while doing so I came up with some thoughts I wanted to share.
Reading this story raised some questions I am normally aware of but never ask directly. Perhaps because when I am dealing with it myself, it is too close to me: the project is under stress. I agree with him that we should not make the shipping decision. Here are some questions I have in response to the project manager's question whether to ship or not:
- Where did we miss providing enough information? If we had provided the proper information, she would have been more comfortable.
- Why did she ask that question at the end of the project and not during the project?
- Why didn't we guide her to ask “valid” questions?
- What could we do better to avoid discussions and questions like this at the end?
Do you notice that these are questions to myself instead of directly to the project manager? If you have to change, first think about what you can do, what value you can deliver, and also when. Looking at these questions, there is more to it than just providing test results. You have to communicate about other items as well. In this case: which message will you have to bring, and do you have mutual understanding about it?
I'm sure there are other questions to ask and even more answers to be provided. In my opinion you posted a basic rule here. Thinking further about this question, to ask or not to ask: testers have to deliver all kinds of documents, metrics and so on, just to “help” the project manager make decisions.
The bright and dark side
The bright side is the project manager not having all the information and asking the team, even if the moment is “too” late when the question comes at the end. It is a bright situation because you are not exaggerating the documents you deliver and you have time to adapt to the situation. You must continually check whether you provide value.
I believe there is also a dark side. The dark side is asking the team for “all kinds of information, not knowing yet whether it will be valuable or usable, but needing it just in case questions come up afterwards, forgetting that providing information costs time and resources that could have delivered other valuable products”.
What I have seen in the past is attempts to gain control by collecting all possible information. Sometimes collecting information is not that bad. It becomes bad when you communicate about it, no one is waiting for it, and you have to explain why they should be.
Awareness
If you have to explain the value of information afterwards, you are too late. You have to guide people and help them understand the information you create and provide. You are also responsible for delivering only that information which is needed and adds value to the product, directly or indirectly. This means you have to communicate and interpret the behaviour of the stakeholder.
To me, testing is more than only finding issues or proving that functionality works. It is also a process of getting the results you find accepted. You must be aware that you have to deliver the information that is requested and make sure that the vision about responsibility is agreed upon. And keep checking whether the information you provided is valuable and understood as you meant it to be understood.
Go/No-go?
Are you the messenger for the Go/No-Go advice? I believe you are not the decision maker here. You should provide information so the decision maker can make that decision. Sure, you should help him or her, but only by providing information within the proper context. You also have to explain and guide how that information can and should be used.
If a question like this comes from the project manager, you might see it as a sign that you did not provide the right information, or did not guide her or him through the information you provided. Instead of asking questions of the project manager, first start asking them of yourself.
Posted by Jeroen Rosink at 10:52 AM | 0 comments
Labels: Michael Bolton, Strategy, Test Management, Testing in General
Monday, May 3, 2010
WTANZ02: Same language, different sites and places
Weekend testing on the other side of the world
At least it was for me, and there were some benefits. The WeekendTesting chapter in Australia and New Zealand (WTANZ) had its second session. As it was raining in the Netherlands and still early (just 8 PM), I asked to participate. As I joined almost an hour late, I had less time available to test the mission as provided.
Here's our mission today: exploratory testing of how easy it is to get data in different formats about education in the United States and the United Kingdom from http://data.gov/ and http://data.gov.uk/.
The participants were:
Marlena Compton (facilitator)
Ajay Balamurugadas
Allmas Mullah
Dhara Sapu
Oliver Erlewein
Jaswinder Kaur Nagi (aka Jassi)
Keis
Approach:
As mentioned, I joined too late, so I had another challenge: instead of following the mission, could I get enough information to be able to start next time? As in normal life, you are faced with situations where an approach has to be defined while less time and information is available.
As I understood from the discussion and debriefing, the websites ought to be similar and have a similar objective. To understand more about the sites, I got the idea to find out their objectives and compare them. I also visually checked the structure of each site based on its menu items.
Next to that, the tone of voice was important for me to learn more about the audience.
While checking I scrolled a bit through the menus and decided to use an old tool called Astra Site Manager, which was developed by Mercury (now HP). Although this tool is not flawless, it surely provided the information I was looking for: how complex is the site?
Some Results
Website map of http://www.data.gov/ created with Astra Site Manager
Website map of http://data.gov.uk/ created with Astra Site Manager
If you compare the images you will notice some differences in structure. I think a map like this is usable to identify and pinpoint areas where risks can be found. If an area contains some risk, you might come up with other exploring questions, such as: "if user data is used, how does it flow through other screens?"
As a result of using this tool I came up with some unreliable metrics, like the number of URLs.
The UK site counted over 5961 URLs and the US site over 4903 URLs.
If I put these numbers next to the goal of the sites, sharing information with the public, then I have to ask: how will the public be able to find valuable information if the amount exceeds their expectations? How will the public find the right information? Because of the high number of links, the chance of finding some information is high; the chance that it is the correct information depends on how the search engine works. When will the result be the best and most reliable result?
Looking at the technology: on the US site they just use the icons for Facebook and Twitter; on the UK site they explain what these do. Does this mean the audience is different?
What I also noticed when running the tool were the differences in the files which can be downloaded: from .xls, .csv, .pdf and .txt to .xml. There is also no use of naming conventions in the documents, nor in the web pages and directories.
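For readers without Astra Site Manager: below is a minimal Python sketch of what such a crawl could look like, counting same-site URLs and tallying the file extensions mentioned above. It assumes the third-party requests and beautifulsoup4 packages and a reachable site; it illustrates the idea, it is not the tool used in the session.

```python
# Minimal same-site crawler: counts URLs and tallies file extensions,
# roughly the information Astra Site Manager reported for these sites.
from collections import Counter
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

PAGE_EXTS = {"", "html", "htm", "php", "aspx"}  # treat these as crawlable pages


def crawl(start_url, max_pages=200):
    """Breadth-first crawl within one host; return seen URLs and extension counts."""
    host = urlparse(start_url).netloc
    seen, queue = {start_url}, [start_url]
    extensions = Counter()
    while queue and len(seen) < max_pages:  # soft cap on the crawl size
        url = queue.pop(0)
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # skip unreachable pages
        if "html" not in response.headers.get("Content-Type", ""):
            continue  # only parse HTML pages for further links
        for anchor in BeautifulSoup(response.text, "html.parser").find_all("a", href=True):
            link = urljoin(url, anchor["href"]).split("#")[0]
            if urlparse(link).netloc != host or link in seen:
                continue  # stay on the same site, visit each URL once
            seen.add(link)
            name = urlparse(link).path.rsplit("/", 1)[-1]
            ext = name.rsplit(".", 1)[-1].lower() if "." in name else ""
            extensions[ext or "(none)"] += 1  # .xls, .csv, .pdf, .txt, .xml, ...
            if ext in PAGE_EXTS:
                queue.append(link)  # follow page-like links for more URLs
    return seen, extensions


urls, ext_counts = crawl("http://data.gov.uk/")
print(len(urls), ext_counts.most_common(10))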
The discussion
The round-up was interesting; everyone shared their experience and wondered if they had met the mission. Some found their way using Google for information; others came up with a well-spoken approach. I learned from this session as well, and I hope others did too.
Lessons learned
- When comparing different websites, decide which will be your "Oracle" and why
- The tone of voice differs and tells something about the expected audience
- Question the value of information when it is offered in huge numbers, and ask what the chance is that the right information is found
- Creating a map can be useful to pinpoint risk areas and pinpoint value for the users
- The use of file names and their similarity can tell something about the quality of the site, or at least the chance of errors
- A huge number of web pages might result in a higher chance of failure; why are these kinds of websites this huge?
For more information see:
Website: http://weekendtesting.com/ or follow them on Twitter
Weekend Testing: http://twitter.com/weekendtesting
Posted by Jeroen Rosink at 10:25 AM | 4 comments
Labels: Testing in General, Weekend testing, WTANZ
Sunday, May 2, 2010
EWT16: What values you in barcode
Flashing barcodes and great participants
This time, a great session about flashing value and barcodes, with great participants and discussions afterwards.
The participants this weekend were:
Anna Baik
Michael Bolton
Stephen Hill
Thomas Ponnet
Ram
This time the product was a funny barcode reader:
Product: http://www.barcodeart.com/artwork/netart/yourself/yourself.swf
This app generates barcodes based on your input (gender, country, age, weight, height) and also calculates a bogus price value.
The mission was to find out how the calculation worked and what the highest obtainable value would be, combined with reporting invalid values.
My approach
Below you find the summary I gave during the round-up.
First I tried out the app by just pressing the buttons and identifying its behaviour.
I checked whether the values entered are used in the calculation. I did this by using the same values twice to see if some kind of randomizer is active; this was not the case. I also checked changing only the gender. The result actually matched the diagram: only that value changed.
I tried the highest and lowest values, which in the end resulted in incorrect values in the Scan report. I also noticed that when using realistic values, there is some mix-up in the barcode between entered values and presented values.
At the end I stripped part of the URL and came to the actual site. There was some valuable info in the FAQ about the calculation.
Next time I would spend more time checking the logic as described in the FAQ against the outcome of the app. An hour is just too short for me to check whether the formula they seem to use actually matches the actual outcome. The important value here is to agree upon the perfect BMI.
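The repeatability check described above, entering the same values twice to detect a randomizer, is a general pattern. Here is a minimal sketch of it; price_for is a hypothetical stand-in for the app's calculation, since the real app was a Flash UI exercised by hand.

```python
# Generic repeatability check: feed identical inputs several times and
# compare outputs. A mismatch would suggest a hidden randomizer.
def is_deterministic(fn, inputs, runs=5):
    """Return True if fn gives the same output for identical inputs every run."""
    first = fn(**inputs)
    return all(fn(**inputs) == first for _ in range(runs - 1))


def price_for(gender, age, weight, height):
    # Placeholder formula purely for demonstration; it ignores age and
    # does NOT claim to be the barcode app's real calculation.
    return round((weight / (height / 100) ** 2) * (1.0 if gender == "male" else 1.1), 2)


sample = {"gender": "male", "age": 35, "weight": 75, "height": 180}
assert is_deterministic(price_for, sample)  # no hidden randomizer in play
```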
Some funny issues
Of course it was fun to find some issues. When you test this application using the highest numbers, you will find out that the conversion between the metric systems is not done properly, at least with respect to the offered diagram.
Testing with the lowest numbers returns some $NaN tags in the "Scan" list; at least the price value is $0.00.
When navigating back and forward, you will notice that the country dropdown is emptied, which also leads to a strange outcome in the "scan" list.
Initial lessons learned
During the round-up I came up with the following lessons learned.
1- Agree upon the level of detail at which you prepare your model of the app.
By level of detail I mean how deep and how broad you will test, knowing that this decision takes effort and knowledge.
2- Avoid the pitfall of only using the app when the app is simple and no documentation is available; search for other means.
Ask yourself every time what kind of documentation you need: are you searching for it, or are you using the application as a kind of oracle to ask your questions to?
3- Tools to read code might help.
If you know about tools to read code from Flash applications, perhaps this can serve as a documentation source.
4- Translation between metric systems is often an area of failure. (The Mars Climate Orbiter was famously lost over a unit mix-up; the Ariane 5 failure was a different kind of conversion error.)
One of my recurring pitfalls is the difference between the metric systems. I should spend some time learning about them and learning to use them instead of relying on tools for conversion.
Lessons during the discussion
Again, this time there was a great discussion afterwards. Thomas came with a suggestion to use iterations for trying out test data; this would force thoughts to focus and defocus.
Michael posted an interesting lead which reminded me of some earlier work of his:
It seems to me that one of the principal issues that this exercise brings up is the alternation between focusing and defocusing heuristics--varying one factor at a time (OFAT) or varying many factors at a time (MFAT). (There's also another kind of factor-oriented heuristic noted in the book Exploring Science: hold one factor at a time, or HOFAT.) You use OFAT when you're trying to focus on the effect of a particular factor; MFAT when you're seeking to confirm or disconfirm your ideas about factors in combination with each other
Somehow I couldn't find Michael's source; on Wikipedia there is something about the one-factor-at-a-time method. When Googling "varying one factor at a time" I found some interesting documents I have to investigate later on.
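To make OFAT versus MFAT concrete, here is a minimal sketch generating both kinds of test data for inputs like the barcode app's. The baseline and factor values are my own assumptions, not the app's real domains.

```python
# OFAT: vary one factor at a time from a baseline, so any behaviour
# change can be attributed to that single factor.
# MFAT: vary many factors at a time (full cross product), to probe
# factors in combination with each other.
import itertools

baseline = {"gender": "male", "country": "NL", "age": 35, "weight": 75, "height": 180}

factors = {  # assumed, boundary-ish values for illustration only
    "gender": ["male", "female"],
    "age": [0, 1, 35, 120],
    "weight": [1, 75, 999],
    "height": [1, 180, 999],
}


def ofat(baseline, factors):
    """Yield inputs that differ from the baseline in exactly one factor."""
    for name, values in factors.items():
        for value in values:
            if value != baseline[name]:
                yield {**baseline, name: value}


def mfat(baseline, factors):
    """Yield the full cross product: many factors varied at a time."""
    names = list(factors)
    for combo in itertools.product(*(factors[n] for n in names)):
        yield {**baseline, **dict(zip(names, combo))}


print(sum(1 for _ in ofat(baseline, factors)))   # 8 focused variations
print(sum(1 for _ in mfat(baseline, factors)))   # 2*4*3*3 = 72 combinations
```

The point of the comparison: the OFAT set stays small enough to study each factor's effect, while the MFAT set grows combinatorially and is better suited for disconfirming ideas about interactions.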
During the discussions I mentioned the approach called TMap, defined by Sogeti, which is at least well known in the Netherlands and a standard for approaching test projects.
For me TMap is a strong approach, though it is more process-oriented than oriented towards delivering value to the business (perhaps TMap Next can serve this better). As with every model or method, it must be used with common sense. We should be warned not to focus on making the method work; instead, we should watch out that we remain able to deliver value to the business. It is so easy to say that we do it as the method tells us, because agreements are made based on the method.
By common sense I mean, as said in the discussion:
"to me the skills for common sense is knowing when you are using a method for the benefit of the actual outcome. And you are not using a method to proof you are able to be able to use that method and based on that claiming you do the right thing as the methods is right, you follow the method, therefore you are right.
If you are able to judge your approach against the initial goal you were hired for then you might be able to get the benefits of an approach like this. Otherwise you are selling other things you are hired for."
WeekendTesting
For those who also want to be challenged by challenging yourselves: take part in one of the weekend testing sessions and teach yourselves! Don't hesitate to participate!
For more information see:
Website: http://weekendtesting.com/
Or follow them on Twitter
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters
Posted by Jeroen Rosink at 11:21 AM | 0 comments
Labels: EWT, Testing in General, Weekend testing
Thursday, April 29, 2010
Is benchmarking test effort useful?
Recently I was asked by another professional about the information which is created during a test process. Which products are delivered? Which information is reported and therefore collected?
I provided him some information with the remark: it depends.
I used some of the phases from TMap® to give him food for thought about the figures and products delivered per phase. This is just the short list which came to mind at that moment; there are items which can and should be added or removed.
Preparation
- Review results – provided to test team
- Overview of progress on preparation of functional designs/use cases – provided to test team/management/project team
- Bi-weekly progress reports – provided to management and project team
- E-mail/ memo instead of escalation to stakeholder
- Test plan, provided to project team and test team
Specification
- Progress on specification of test cases, including the number of test cases, who owns them, etc. – provided to test team and project team
- Bi-weekly progress reports – provided to management and project team
- E-mail/ memo instead of escalation to stakeholder
Execution
- Issue report provided to project team
- Progress report provided to management and project team
- Checklists test scripts provided to test team
- Daily or ad hoc progress report provided to test team or stakeholders
- E-mail/ memo instead of escalation to stakeholder
Completion
- End report
- Issue database
- Delivered value
When I met the professional I asked him about the project he is on. It turned out he was collecting information, based on experience, about how much effort testing costs and how much result is delivered during a period of testing. Based on this information from several different sources, he is working on some general figures to support estimating up front how much testing will cost.
This is a very noble idea. Everyone wants information about how much testing will cost in terms of people, money, resources and skills. Everyone wants to know what the results will be, like the number of test cases, number of checks, number of issues found, coverage of testing and more.
Support people who challenge themselves to think further where others have been, continued or stopped.
I hope the professional I’m talking about will succeed and share his information with us.
To support him, I will share some thoughts of mine which might help. Somehow I doubt it is easily done to collect information from several projects, compare it, and use those results as some kind of benchmark.
I believe this is hard because of differences in:
- Organizations: not only does the business differ, organizations also change in strategy, etc.
- Projects: you have to make a distinction between development approaches
- People: teams are not staffed by the same people with the same commitment
- Technology: it looks like you can compare by counting function points. In my opinion there are differences, as technology is used by humans, and they have different skills and approaches, for example in error handling.
- Test approach and techniques used: these are often adapted based on influences from the items above. This results in numbers with similar names whose values cannot be compared. It would be comparing green apples with red apples: you know the amounts, but the taste might differ, or not.
Basically, it is hard to compare the figures which are collected. What is the value of the number of test cases written in a certain period for a certain project? Can these figures be used to calculate the effort and money it will cost? Perhaps by adding a certain percentage to be sure?
Can the figures for how many test cases, in combination with found issues, be used to tell how many test cases must be executed to provide trust in a system that has not been built yet?
Can the figures for how many testers found how many issues by executing a certain number of test cases be used to predict the needed resources?
Is benchmarking good? Perhaps some people need this kind of trust and use it as experience because they are missing experience. Is the prediction valuable to use?
I wonder when you are ever finished with benchmarking and collecting figures and information to provide a fancy chart which people should trust.
Why should we spend time collecting? I place more value on the skills and experience I have and the ability to compare them to any new project. Sometimes experience in numbers can be used; sometimes I can rely on it; at least it gives me direction. Should I share it with others? Yes, of course, in combination with the context and the lessons learned from it. Can others use it? Only if they are able to judge it against their particular situation and do not use it as commonly proven figures.
Posted by Jeroen Rosink at 10:06 AM | 1 comment
Labels: Metrics, Testing in General
Tuesday, April 27, 2010
Testing on the other side
How often do you see that others believe it is not their failure, it is the others'? How often are you sure it is not because of you, but because of them? How often do you see that you did everything to prevent failure and others are to blame? How often do we all blame the other side?
Perhaps not that often. Still, if it is not happening at our place, then it must be triggered on the other side. If failures are triggered and created on the other side, and solved or prevented on our side, why are we spending time at our side to prove that it works here?
Currently we seem to focus on measuring that everything is OK on our side. We are not to blame when the failure hits the fan, so it must be found elsewhere. The other side!
If we are sure that it is OK on our side, and the failure is on the other side, why aren't we adapting our testing strategy and looking at the other side? The approach would be: create the tests we should create; this will prove we are right. Now reverse the test scripts and test other things. As we reversed our thoughts to blame others, they must be blamed for things we didn't find. That leaves us the parts we are not testing: the parts from the other side.
The thought is acting in opposite. If right is OK and wrong should be found, and we believe everything is right on our side, then the other side is wrong. We should find the failures on the other side. Finding failures is NOT blaming. Finding failures is, in this case, helping the others out. This means working with them on the other side. This means, for example, joining development and becoming a team.
To be able to help, we should force ourselves to think differently. This also means that we should force ourselves to change our minds to create space for other thoughts and ideas. Perhaps we could add a new role to testing: “the devil's advocate tester”. He or she will try to force the team to think in a different way, although in a controlled manner. This role might change every week, just to force us to think beyond boundaries.
This might result in an approach where space is created to think differently and to extend borders within a testing project.
So prepare to test on the other side as well, just for fun!?
Posted by Jeroen Rosink at 9:47 AM | 0 comments
Friday, April 23, 2010
This sounds like testing
This week I already made a post relating music to testing: Patterns in music and software testing. In that post I tried to link the way we value music to testing, based on the identification of certain patterns.
Today I drove my car to work. Normally I listen to the programs on the radio; this time I split that time: I listened to the radio and again to the band Conorach.
Previously I tried to compare their music with other artists I know. As already mentioned, I noticed there were some patterns I recognized in the music, and based on those I compared it with the others. This time I also made a comparison within the music.
Comparing with numbers
It starts with the identification of patterns. If you are aware of patterns, you are willing to look at things in different ways. Instead of trusting the known approaches, you can search for other relationships.
I think there are certain steps you can take. I'm sure there are books which explain this better than I will do now. Only, books are not at hand when the mind is throwing up ideas :)
You can listen to music in different ways. Your perception will be defined by who, how, when, what and why you listen to the music. Are you listening to a song you already heard of, listening to the whole CD, or is it live? Are you in a good mood which fits the music or artist? Do you want to listen because you need to recover from an exhausting testing session? Or do you need the energy to start one?
You can listen in different ways; focus might shift and change.
When I was driving in my car, I shifted my focus from comparing the music and certain pieces with other artists towards comparing parts between the songs on the CD.
It seems that for songs you can use other artists as oracles, and you can also use the songs of that artist itself as an oracle. This sounds familiar, don't you think?
Comparing with testing
Imagine you replace artist with system and songs with functionality. This gives the following sentence: "It seems that for functionality you can use other systems as oracles, and you can also use the functionality of that system itself as an oracle."
To me it seems that music can be used well for defining approaches for testing. Perhaps even better than the traditional methods. Ouch, now I might be walking on thin ice. It is not that I am against the traditional project or test methods and approaches. I have only experienced too often that maintaining the arguments for using methods becomes the goal, instead of the goal you started using those methods for. For me it always helped to change the glasses I was looking through. I tried to look at projects and systems from other points of view. This is necessary as systems are built for humans and organisations; these also differ, which results in differences in perception.
If a human being is able to value music using their mind and is willing to learn from it, why not use music as a metaphor to help test software? If people on a work floor can come together over music, then it must be valuable to use it to come together over using systems too.
Some instruments are easy to play, especially by experienced people. Some words are better to sing and understand. There might be differences in keeping the tune right: some can, some won't. Compare the guitar sound between songs, artists and time. Even an artist grows in a direction.
What I'm trying to say is that if you are following traditional steps, it might be good to make other moves, and learning from music might be such a move. I challenge you to keep learning from situations and not be distracted by ideas from others, like methods, etc. Try to listen to the music of the artist I started with: Conorach. I'm curious what you can learn from it, and how you learn from it.
I noticed some riffs from Joe Satriani, tunes from The Dubliners, voices from The Nits; Iron Maiden is also involved, and somehow the quietness of Pink Floyd and pieces of Bach. I also noticed other artists whose names I forgot. If I try, I can find other songs which seem a bit similar.
I noticed that in a system, functionality can be mapped to instruments: some are solid and some are fragile, some are nice and some are fast. Some are played from paper and some are jammed.
Posted by Jeroen Rosink at 11:57 AM | 0 comments
Labels: Metaphor, Music, Testing in General
Wednesday, April 21, 2010
Patterns in music and software testing
New music
Yesterday I bought a CD from a colleague who plays in a band called Conorach. In the past he had already mentioned this activity, and after the announcement of a new CD release I spent time paying more attention to it and listening to the demo on their site. Somehow I got convinced to learn more about this band and decided to support them by buying their CD. I obtained that CD yesterday and was able to listen to it in the car driving back home.
The ride back home was quite a journey; next to learning from their experience, I also learned about how I look at and listen to things.
Travelling home
Imagine: you are sitting in a car, driving all alone; it is already dark outside and there are just a few people on the road. You listen to music you were curious about and never heard much of. It is not the type of cover band that brings you songs you heard somewhere before. It is a band with their own sound.
I like to drive my car when it is quiet and dark. It enables me to think more about things I usually don't care or think about. The thing is that I like different types of music. If I had to define a range, it would be hard, as the music cannot be compared. It differs from The Dubliners to U2, from Bach to Marillion, from Metallica to De Dijk, from The Baseballs to Pink Floyd, from Fats Domino to Rage Against the Machine. There is no direct similarity between these choices of sound. Though the knowledge and experience I have with these bands shaped my vision of and experience with music.
Try to like it
I started to listen to the music playing on my radio. Let me call it the "new" music. Somehow I heard sounds which were familiar to me; they were even sounds I like. The first thing I noticed is that I started comparing the "new" music with the perception and knowledge I have of my known "old" music. I wondered whether it is human nature that when you approach things with an open vision, you try to compare them with other, similar things you like.
I did. I tried to compare it with music I know and like. I tried to find the best match in my knowledge for the "new" music. I did this for every song and noticed that a song as a whole could not be compared with another artist; parts of it were comparable.
Are there parts I liked?
After a few songs I changed my approach: from comparing whole songs with known artists, to comparing parts of songs and even the use of instruments with artists and other music. I chose to compare in a positive way, with things I like. I did this on purpose, as the trip was not yet at its end. It is better to listen in a good mood than in a negative mood (I assume). Somehow, this approach made me learn more about the music. I was able to extend my vision: not only to recognizable sounds of instruments and patterns of music, but also to combinations of instruments compared with voices. I managed to listen to how the volume was used and value that experience.
Finally
As trips end, a CD also has an ending. Within the hour I learned something about a “new” sound, how things can be compared, how I compare things, and that I had a good feeling about the music, with much more to look at and to check. I managed to value this music and am willing to spend more time listening to it. Listening becomes a journey of its own. For that alone it must be good music, at least parts of it.
Relation with testing?
I know there are texts written which might support these ideas, or parts of them. Recently I read a blog referring to a book by G. Weinberg with respect to holistic thinking and quality assurance. Unfortunately I haven't had the luck to read that book yet. To me it doesn't matter that much. I believe it is worth more when a person is able to come up with their own ideas and thoughts than when they reproduce others'; although it is good to support your thoughts with other ideas :)
Looking back to this experience I believe that in software testing we can experience similar situations.
- compare projects as a whole vs compare projects in parts
- compare with one method vs use more models as comparison
- try to find the best of a situation vs waste energy on the worst parts
- try to challenge your thoughts vs stick to traditional ideas
- see it as a tour of experience vs focus on safe processes
- start with an open mind vs use best practices to define the approach
Personal lesson
Just from listening to new music I learned something about myself. I was reminded to be careful when using best practices. Knowing methods is good; having experience might count. But it only counts when it is used justifiably. My thoughts might be fallible, even my first impressions. I believe in the strength of teaching myself, and sharing these thoughts with you will be valuable.
It is good for people to be aware of the pitfall of stepping into old behaviour and coming up with “old conclusions”.
Another thing I liked was the awareness that you can learn from everything if you open your eyes, ears and mind and try to find relationships. What seems to be different is not necessarily wrong. Changing focus from a negative perception towards a positive attitude might result in new energy, which leads to new areas to explore.
As patterns are recognized based on the view you have, you should be aware of your patterns. If patterns are in music, as in this example, patterns are in testing. If you don't see a pattern, then spend time to focus and defocus.
Awareness can grow if you are open to it and support it with skills and craft.
Posted by Jeroen Rosink at 10:25 AM | 2 comments
Labels: Ideas, Metaphor, Music, Testing in General
Friday, April 9, 2010
Value of certificate or understanding
Today there was a great happening in our family. After weeks of training, my son and daughter were invited to take their karate exam.
More than half a year ago they started attending this course on a weekly basis. Now and then they show the moves they learned in the lessons. They pay attention not only to the moves but also to the words and terms and how they should be used.
They are aware that they are preparing for the next grade. Last time it was the white turtle; this time it was the white crane. Somehow I believe that they understand exactly the meaning of these grades. They know exactly in which order the grades can be obtained. They are also aware of the skills they need to know and be able to show.
They took the test and they succeeded. I am proud of my kids, as they persisted, listened and behaved in the dojo. They had respect for the teachers and the other participants. At the end everyone got a badge, a certificate, a sign in their karate passport and an invitation to attend a meeting with a great master. They also got respect from the teacher and the judges.
It seems so obvious that all those items were important: certificate, badge, etc. Perhaps they are. Somehow, the certificate of my son got lost. It made me proud to see that he did not complain or anything else. He felt a bit sorry and still went back to his place. He accepted it, as the value of respect was worth more than the certificate, knowing that the teacher had told him that being a white crane was more valuable. The certificate was just a paper.
This made me think: he might be right. I'm sure he is right. A certificate is just a paper; the respect he earned made him proud and made him continue. The respect of the teacher let him know that he is now allowed to show others more skills, which he was not yet allowed to do.
A great lesson I see here is that we tend to look at our old heroes, at what they achieved, and accept their claims that we need certificates. Perhaps we also have to watch our young, new heroes and bring everything back into perspective.
After all, a good day, as I learned from my children.
Posted by Jeroen Rosink at 8:09 PM | 1 comment
Labels: Testing in General, Testing Schools
Sunday, April 4, 2010
EWT12: Mind mapping and testing
Great minds came together on EWT12
This session of European Weekend Testing, EWT12: Mapping the maps, involved using mind mapping to inform the test manager about how to reach a certain coverage for the online map functionality of Bing Maps.
This seemed to be a mission with too many open ends. I keep repeating that the major goals for me to participate are getting to know more passionate testers and learning during the session and afterwards. I do not intend to test the software and find all the issues I can find. For me it is also not a game which can be won on the number of issues found, or on who reached the highest defect ratio within that hour.
Passionate testers this time were:
Anna Baik
Tony Bruce
Markus Gärtner
Jeroen Rosink
For this session we were working with either the tool FreeMind (an offline tool) or MindMeister (an online tool with the ability to use 3 maps for free).
Wikipedia offers a list of mind mapping software which might be worth checking.
Lessons Learned
#1 Prepare a practice session on how to use a mind mapping tool
Even when you have heard about mind mapping, or have sort of practiced it, when a tool is used not only to share your thoughts but also to compare thoughts within a limited timeframe, it is good to first spend some time guiding the participants through the tool.
#2 Mind maps differ in usage, colours and notations
As people think differently, the chance that mind maps differ is huge. I believe there is a thin line in how to use an approach like mind mapping. We can demand that the participants use certain notations, but this will also limit their thoughts. I suggest showing the participants several ways mind mapping can be used and how icons and colours can be used. Let the participants decide, as long as the decision can be and is explained.
#3 Mind maps are not a single means of communication
In my opinion a mind map cannot be used as a single means of communication. You cannot use it as a test basis when the creator is not providing some explanation. It is a model of the mind of the author, at that moment, under certain conditions. It is often written and said that an image tells more than words. In this case I believe the strength lies in the combination; without words the map is almost meaningless.
#4 If mind maps are used in testing, keep them dynamic
I think mind maps can be used in testing, perhaps as a certain touring scheme. When a map is used, it should also be maintained. When it is used in a test project and a certain level of information is presented, it has to become part of the whole team, and the whole team should be able to make changes to it. It could be introduced as an extra output of a stand-up meeting.
#5 Heuristics can be used within a map
You might introduce some structure by using heuristics. I used SFDPOT to play with and continued along that route.
#6 Use multiple mind maps within the team
After I compared the different mind maps, I noticed we all have different approaches and different levels of detail in the map. I like to use single words; others like to use notations like test cases, and sometimes the routing within the mind is visible. The strength here is that different levels of detailed information are presented. I'm convinced that not restricting people's mapping notation will strengthen others by offering ideas.
#7 Mind maps provide more information
When I compared the mind maps, I saw more information than only what is written. If you look at how it is written, and in which areas more details are provided and where not, you might come up with questions about importance, or why certain decisions were made. For this, mind mapping can be a great approach: not only to tell what is done, but also to raise questions about why certain decisions were made.
#8 The mind changes, and so do the maps
I think maps like this should not be used just once; you should generate this type of map frequently. To gain more information from the maps about how your mind and thoughts are evolving, versioning should be used on the maps. On a frequent basis you also have to schedule meetings to explain and investigate how thoughts have changed.
My thoughts, my map
When you look at the picture, you see the result of my trials using the tool and applying some structure to the map. You also see I tried to use some icons: what I will do and what I won't do, and also what I have done and still have to do. I played with icons where I think issues are, as I noticed strange behaviour. I even tried to see whether adding a priority to the items is an option.
Using a tool is great, as you can adapt your thoughts by re-shaping, re-placing or even removing without generating a mess. On paper you have to be very careful, as mistakes cannot easily be undone; you are forced to continue with mistakes. For me, there is more to explore in how to use this tool.
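One more thing to explore along these lines: FreeMind stores its maps as plain XML with nested node elements, so a map can be turned into a simple checklist outside the tool. A minimal sketch, assuming a hypothetical session file name:

```python
# Read a FreeMind .mm file (XML with nested <node TEXT="..."> elements)
# and print the branches as an indented test-coverage checklist.
import xml.etree.ElementTree as ET


def walk(node, depth=0):
    """Recursively print each node's text, indented by its depth in the map."""
    print("  " * depth + node.get("TEXT", ""))
    for child in node.findall("node"):
        walk(child, depth + 1)


root = ET.parse("bing_maps_session.mm").getroot()  # hypothetical session file
for top in root.findall("node"):
    walk(top)
```

This could support lessons #7 and #8 above: versioned .mm files can be diffed or re-listed over time to see how the map, and the mind behind it, evolved.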
Process of mind mapping
Here is a suggestion for how to use mind mapping. I'm sure there are other ways, and I'm sure there are better ways. This is how I think it can also be of some use.
1. Prepare mind mapping introduction session: exchange knowledge about mind mapping, the tool and experience
2. Assign roles, like tester, analyst, test coordinator/manager, developer, user, etc
3. Agree that no map is wrong or right; accept that there will be differences in level of detail
4. When necessary, agree on the usage of certain icons and colours (not all)
5. Define the mission
6. Execute the mission within a defined period (1 hour?)
7. Present and explain the mind maps
8. Adapt your mind map based on mind changes due to new input
9. Agree on how the mind map will be used: every day during a period, or was this a one-time moment?
10. Maintain the mind map
11. Gain information about how the "mind" was working, related to mood, level of detail (perhaps the mind was up to something), time behaviour, structure, ...
I think it can be used for explaining, presenting and obtaining information about:
- the route for testing
- the areas for testing
- the test cases
- the coverage of execution
- which decisions are made, what to test and what not to test (less detail means less important?)
- matching each other's thoughts about the test goal of that day
- the possible risks
- the black spots in testing
- ....
WeekendTesting
For those who also want to be challenged by challenging yourselves: take part in one of the weekend testing sessions and teach yourselves! Don't hesitate to participate!
For more information see:
Website: http://weekendtesting.com/
Or follow them on Twitter
Weekend Testing: http://twitter.com/weekendtesting
Europe Weekend Testing: http://twitter.com/europetesters
Posted by Jeroen Rosink at 7:56 AM | 1 comment
Labels: EWT, mind mapping, Testing in General, Weekend testing
Friday, April 2, 2010
First make value then add more value
I recently read another article about measuring more and earlier, and by doing so saving money at the end. It seems so obvious that people are right when we establish a review process before we start testing. People claim that an issue found early in a development phase is also cheaper to solve than when it is found on a production system.
I have heard and read that this is a written truth, and I won't object to the statement. What I worry about is people using the argument at any time: “it is cheaper to establish a review process, or to test earlier, so issues are found early and therefore it will be cheaper.”
In my opinion, you have to be careful claiming this. You first have to earn money before you can calculate the benefits of finding issues earlier, and for that a system must be in use. You cannot spend what you didn't earn; you cannot count in terms of cheaper when it delays productivity, because then cashing in the earnings also starts later.
My advice is:
1. When you start a project, also define budget for reviewing and testing earlier.
2. When you start a project and the budget is limited, make a wise decision, perhaps based on expected value and risks, about where the focus should lie: sometimes reviews, inspections and earlier testing might be cheaper and more beneficial than extensively executing lots of test cases.
3. When the project has already started and the budget is unlimited, you might consider reviews and inspections of code and test cases, testing earlier, etc.
4. When the project has already started and the budget is fixed, be careful demanding and initiating activities which cost extra money and time; the delay is not covered in the project plan and earnings might be delayed.
5. Consider, when the project has ended: check whether the reviews were necessary and whether reviewing would have justified delaying delivery to production.
6. Consider, when the system is delivered: check whether value is delivered and the expected money is earned. Also answer the question: would adding a review process have made the earnings higher?
7. Consider: if earnings would be higher due to minimizing the costs, would minimizing also result in less time to production?
There are enough arguments to start reviewing, inspecting or testing earlier. Keep in mind that you first have to add value to the business through the application, and then try to add more value if that cannot be done immediately. Sometimes this means that you have to go to production with known bugs in order to earn the money to find and solve more bugs.
Posted by Jeroen Rosink at 3:04 PM | 2 comments
Labels: improvement, Testing in General