This week I attended the presentation "Exploratory Test Automation: Investment Modeling as an Example", given by Cem Kaner in the Netherlands as part of the Cem Kaner week.
One of the statements he made was "avoid commodity". This triggered me to reflect on how I see software testing.
In my opinion, Cem Kaner tried to explain to us why commodity is fine for those who are happy being testers with standardized skills and knowledge, who accept being easily replaced and whose expertise can be cheaply outsourced. Those testers are as valuable to the client as all other testers. On the other side, there are testers who are willing to become better; they want to become valuable to the client, and the client experiences that value. I think that for this you have to express yourself, keep learning, adapt to the environment and work hard for it.
What does this have to do with growing bananas?
As C. Kaner mentioned about commodities:
There are green bananas
and ripe bananas
and rotten bananas
and big bananas
and little bananas.
But by and large,
a banana is a banana.
If you look at the phrase above and replace "bananas" with "testers", then testers are, in general, commodity testers. Unless they choose not to be a commodity tester. If that choice is made, then you have to become aware of yourself: what do you value?
As with a banana, the peel has its uses, but only the inside is eaten, and it is on that taste that the banana is valued.
This made me think a bit further. In general, when people think about bananas they visualize the banana including the peel, sometimes with a small piece of the inside visible. People picture a banana as edible once the peel is removed. But there is more to imagine: the structure of the inside of the banana is also a result of the outside structure, of how it was grown, how fast, with which means and under which conditions.
The same can be true for testers: they may have been taught under certain conditions. This does not necessarily lead to wrong results; they remain testers. I can imagine that for certain testers certification might be useful to gain some basic fundamentals about testing. It might be right, it might be wrong; at least a direction can be defined. It is up to the tester whether he or she becomes a commodity tester or is able to add other, specific value to the client. There are more roads that lead to Rome.
If testers tend to become commodity testers, they might be identified as bananas. As said, bananas shouldn't be judged, and neither should testers, just because their belief in methods and approaches differs. I think it counts more what a tester is doing to become of more specific value.
Saturday, October 31, 2009
Growing bananas
Posted by Jeroen Rosink at 9:27 AM | 0 comments | Labels: Testing in General, Testing Schools
Monday, October 5, 2009
Serving the method
In a discussion with a fellow tester we talked about testing effort. The situation was the delivery of certain requests for change. They contained functionality of which the people who wrote the functional designs claimed it didn't have much impact. The users involved also said it could either be tested intensively, spending several days, or just the main things, accepting the risks.
There were also users involved who had their own history with the system. They wanted to test the system based on their experience from the past.
The fellow tester admitted there might be more options to test which the others didn't think of. He based that idea on the method he had been taught, and in theory he might be right. He also used arguments based on the method. One of the main arguments was: the quality must be right and optimal, and to claim this we should spend a lot of test effort.
The discussion was not about who was right and who was wrong; it was about the effort needed and whether it was valid to use a test method as the argument for the direction.
When people discuss, there are several views. As the fellow tester doesn't blog, I can only present my view.
In my opinion the following points were important:
- No discussion was needed, as the changes were important; they were forced by top management;
- The end date was fixed due to a management decision;
- Resources for test execution were available because of the importance;
- Resources, and documentation of sufficient quality to derive test cases using test techniques, were not sufficiently available;
- The system was available;
- Support from development and business was sufficient;
- Solutions would be delivered just in time, right before the deadline;
- There was time to follow the phases from TMap, but no time to identify and cover all risks;
- The knowledge of application managers and the business was more important for selecting test cases than the documentation and test techniques;
- The cost of not delivering was larger than that of solving issues after implementation in production.
Looking at a method like TMap, it wouldn't have been possible to implement a solution on the productive system, as not enough proven information about the risks was provided. No defined and measured coverage was made visible, the requirements were too poor, and none of the test techniques mentioned in the method were selected and used. Still, we managed to deliver successfully to the productive system.
Posted by Jeroen Rosink at 10:00 PM | 0 comments | Labels: Testing in General
Sunday, September 20, 2009
Maintain testing knowledge
I recently stopped smoking. It is about four weeks ago now, and I haven't smoked a single cigarette since. During these weeks I noticed my focus switched to other important things besides testing, work and sitting behind the computer writing my blog or participating in forums related to testing. Somehow I needed all my attention for what was important at that moment: continuing not to smoke.
The worst part of these weeks was the emotions involved. I can tell you, those were hard. Sometimes they still are. I stopped cold turkey, which means: no aids, just attitude, and immediately.
Of course, there are courses which help you stop smoking. There are several methods which might help. There are tools and medicines which might help you stop. I didn't try them, as I wasn't convinced they would help.
This made me wonder about testing. If emotions are involved in a normal process, why not in testing? I'm certain the emotions are there; only, can you show me a book related to testing which deals with emotions? If books don't write about them, how can methods be successful? Emotions should not be neglected.
Another thing that crossed my mind: why spend so much time investigating the proper method, choosing a method and implementing a method? Once the decision is made, it might cost too much time and money to convince people why the chosen method is good and working. Perhaps just starting with testing is already sufficient, combined with an open mindset for whatever crosses your road.
I can imagine that when no borders are created by methods, creativity will grow, and so will the ability to adapt. This will help you learn faster and provide the means to reach your goal.
Another thing is learned behaviour. Some people say smoking is a bad habit. I did it for 17 years, and in those years I felt quite good about it. Only now I have to learn to live without it. What about testers? If newbie testers learn the trade through certifications, is that as bad as smoking? For some people it seems bad, for others not. The same goes for testers who learned certain behaviour in testing: they were taught to think in a certain way and became specialists in that way of thinking. Is that good or bad? Will they have a similar route to go when they stop thinking that way?
Which tools, methods and people are available to help and guide them?
I wonder which people are able to tell that they are right and others are wrong. When this happens, emotions are involved too, emotions which are not written about in books. What can be learned, and how can testing skills be maintained? I noticed that in these four weeks I neglected some part of my learning.
Posted by Jeroen Rosink at 9:03 AM | 2 comments | Labels: Ideas
Sunday, August 23, 2009
Investigating details, a waste of time?
I had a great holiday this year. With my family I lived in a tent for two weeks. Every day was fun and gave me the opportunity to have some quality time with my family. Of course there were some moments which could have been better. It was great because the big picture fit.
Sitting in my chair in front of the tent, I had some time to think. Clearing my head of thoughts, I filled it with new ideas. I needed something to do and took some pictures from close distance. This gave me the idea that although the pictures are nice, they don't tell the whole story of my holiday. It made me wonder why we focus on details in testing. Why do we, for example, insist on unit tests? One of the reasons is to find issues sooner. But should this always be done, when in a huge system there might be too many details? For example, if I photographed every detail of my holiday, I would have to spend more time on defining details and taking pictures than on enjoying the holiday. Because of this I would miss a lot of joy.
Below you find some pictures I shot. With these examples I will try to explain why focusing on details is not always the right way.
Foot-picture:
My son asked me to take a funny picture of his foot. As you see, you can notice some details. But is this enough? At least you can check that there are no wounds on it. Ask yourself: if there was a wound and it should be nursed, what sense would it make? Is it necessary for the whole picture? The grass behind the foot gives an indication that there is more space behind that foot. Currently the foot is the main object. If I zoomed out, it would become part of a bigger picture. In that big picture a lot of other things are happening: children playing, parents sitting, tents standing, trees growing. The foot is part of that bigger picture. I would ask you: if the foot has a wound, what influence would that wound have on the bigger picture? Should it be nursed? Was the time spent on this detail valuable for the bigger picture?
The grass-picture:
Looking behind the foot, you see grass growing. When you are at a camping site, there is lots of grass. If I took pictures of everyday circumstances, they would certainly show some pieces of grass. Is it necessary to investigate its details every time? With the picture below I could ask several questions about the colour and the structure. How is a single blade of grass growing? Are there different types of grass? Is the ground well covered? Are there dangerous insects living there? Are there pieces of glass lying there which might hurt the children? I wonder: is it necessary to ask these questions every time to value the bigger picture? Should I, in testing, spend effort all the time on similar objects, only in different places? Should all details be covered?
The wooden stick-pictures:
Even when there is a need for investigating details, is it done right? In the example below, there would be a need to investigate the wooden stick. As you see, the wooden stick can be tested in similar ways in both pictures; only there is a slight difference between them. So how do you know when you are using the correct view? How can you tell from which angle you should test? If you focus only on the stick, what can you tell about its size? Perhaps it is not a stick but a tree. Due to its size it might hurt me and have impact on my holiday. As you see, to check the size you have to focus on other details than the stick: you have to look at the position of the stick with respect to its size and the environment. Focusing on the details of the stick alone will cost too much time, and wrong conclusions might be drawn.
The ash-tray picture:
Sometimes the detailed image looks so dirty you can hardly continue looking at it. For instance, the code is very sloppy. You get distracted from the bigger picture. Perhaps this is a result which supports other issues. In the ash-tray you see a lot of dirt. It is unhealthy, etc. Still, it is functional: the ash is stored in some kind of container and can do less harm when there is no wind. If the ash-tray were not there, the output of other habits would be stored elsewhere, in an uncontrolled environment. Is it necessary to check in detail how it looks?
Beer-can picture:
This picture is one I like; it shows some important information from the beer-can in detail. But I only know it is a beer-can because I drank from it. You only see the top of it and can perform some tests on it. How would you know if that is sufficient? When are details good enough? If these questions cannot be answered, what sense does it make to perform detailed tests?
Food-picture:
Sometimes the details look like a mess; you have to focus more to see what is in it. After focusing you will see objects which can lead to further testing. Would it make sense? The picture below is from a dinner we had; although it looks awful, it tasted very good. And it was exactly the combination which made it taste good. The taste was the main objective of the food; it should fulfil one of our needs, dinner, and we should be able to eat it while it was hot. For those objectives it was not necessary to have each piece lying in a specific order.
Towel-pictures:
In both pictures of the towels you see details of the structure. The first is taken from a larger distance than the second one. How would you know that the detail you look at is sufficient? Wasn't it good enough to know that towels were available and to trust their purpose, drying your hands? Looking at the towels, you also see that the sun is shining. Sun was an important factor in the success of our holiday. Do we need this picture to draw that conclusion, or should I have taken another picture with the towels?


Conclusion
With this story I didn't want to tell you that spending time on details is useless or unnecessary. It should make you think about whether it is useful to look at details every time. Is it mandatory to perform unit tests? Of course, people might claim that, according to Boehm's law, issues found at an early stage are much cheaper to solve. You have to ask yourself every time: is it needed to find issues in those areas? I hope the examples above made you think a bit about the meaning of details and focusing on details. Of course I could have taken other pictures of the holiday as well, and there too the question can be asked: is it detailed enough or not? That is what should be done: keep asking these questions.
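The unit-test question can be made concrete with a small, entirely hypothetical sketch (the functions, names and numbers below are my own invention, not from this post). A close-up test pins down a single detail of one function; a wider test checks the picture the user actually sees. Neither is wrong; the question raised above is how many close-ups are worth their cost.

```python
# Hypothetical example: a detail-level check versus a bigger-picture check.

def price_with_tax(net, rate=0.19):
    """Add tax to a net price and round to cents."""
    return round(net * (1 + rate), 2)

def order_total(prices, rate=0.19):
    """The 'bigger picture': the amount the customer actually pays."""
    return round(sum(price_with_tax(p, rate) for p in prices), 2)

# Close-up picture: one detail of one function.
print(price_with_tax(100.00))        # 119.0

# Wider picture: the behaviour the user cares about.
print(order_total([100.00, 50.00]))  # 178.5
```

Both checks are cheap here, but in a large system every extra close-up is another picture to take and maintain, which is exactly the trade-off the holiday photos illustrate.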
Posted by Jeroen Rosink at 10:57 AM | 0 comments | Labels: Metaphor
Tuesday, August 4, 2009
Magazine: MacroTesting

Posted by Jeroen Rosink at 10:40 PM | 0 comments | Labels: Magazine
Friday, July 31, 2009
Why only focussing on bugs and quality?
Reading all kinds of blogs, books and articles, we all keep focusing on bugs and quality. There are people who claim that finding bugs is the main goal for testers. Other people mention that we have to advise on quality.
Focus
Sometimes they write about skills other than testing skills, like domain knowledge, technical skills, communication skills and more. And more often than not, these skills are claimed to be needed to find more bugs, or to get better metrics with which to present quality.
There are also people who use words like "added value for the customer" and try to prove this based on the number of bugs they prevent from going into the productive system. The pitfall I see in these approaches is that the view of the tester is narrowed from the start. They focus on how to find bugs fast or with a certain coverage; they try to convince the customer about the quality of a product. While focusing, they use their best practices and lessons learned based on these principles.
This raises the question for me: are we doing the right thing, and can we do better? Obviously we do, as the customer is happy and the product went live without major problems. Or did it?
Doing right?
I often hear that if we have to go from A to B, we have to know where we are, where A is and how we can get to A. Then we use our knowledge to get to B. During the journey towards B we adapt our approach. Another thing I hear often is: why reinvent the wheel?
For this we use methods and approaches to start from A. We start testing and find the bugs, as without bugs we are, in other people's view, not doing the right thing. Because we are always under time pressure, we tend to adapt our route to B to make sure that "our" process will not be the project's failure. So we are not adjusting our approach to add value; we are merely adjusting our approach to hide the failure of our lessons learned and best practices. If this is the case, are we doing the right thing? Are we focusing on the items which need focus?
Multiple processes
When hiring testers, the skills asked for often relate to: domain knowledge, branch knowledge, programming knowledge, test tools, testing certifications, communication skills, knowledge of programming methods and sometimes testing skills (something different from testing certifications). Indirectly these might support the test process as defined. Is this still a guarantee that bugs are found, that quality is proven?
If you answer this question with "yes": why is it that the benefits of this knowledge are not measured? As far as I know, you have to pay for every bit of extra knowledge and skill. It seems that there must be some indirect value for the test process hidden behind this knowledge.
Re-focussing
This indirect value can only be a result of the involvement of other processes. These processes provide information which is valuable for the test process. It is important to know what kind of impact this information has on the test process. For example: if branch knowledge is important, then we should not only define a test process which uses this knowledge; the process also has to take care of gaining, interpreting and processing information from and towards the people who work in that company. This is other information than the number of bugs or a visualization of quality.
I think we should also try to turn this indirect value of information from other processes into direct value. We should also focus on processes that generate information which has impact on the test process. This might result in managers who are able to value information from those processes and to add value to them with respect to bugs and quality. It might lead to a situation where a product is shipped to production with unknown bugs and poor quality in certain areas, because shipping it now has more value for the processes than waiting until everything is uncovered and tested.
Testing becomes more
With this idea, testing becomes more than finding bugs or providing information about quality; it also processes information from other processes and values the product and the skills of team members against those processes. The focus of testing becomes broader than just the test process: the test process shall also value the information with respect to the other processes. Pieces will fall together.
Posted by Jeroen Rosink at 8:08 AM | 0 comments | Labels: Testing in General
Monday, July 27, 2009
10 Lessons when waiting for a fix
Our dishwasher is broken. It bleeps, and the light flashes quickly four times. The water is not pumped away, and after trying to reset the machine, it keeps taking in water. Don't try this at home, because after a few attempts the floor gets wet.
As always, when something breaks, the time is never right. Dishwashers break when you need them, don't they? In this case we noticed it wasn't working because there were some dishes to clean. When the machine stops, it really stops. The process towards a solution might be quite interesting.
When the machine told us there was something wrong, I went to it and investigated which symptoms there were:
- Water in the machine
- Clean dishes (it stopped at the end of a cycle)
- Four short bleeps and a blinking light
- The reset button functions
- A restart shows the same behaviour after a few minutes
- Water was supplied to the machine
- No other noises
With this information I went to the internet and searched based on the symptoms and the brand of the machine. Somehow I didn't have information available about the specific type. I found several threads mentioning several options:
1. Clean the machine using some special cleaning stuff
2. Hire a mechanic
3. Do it yourself (some "detailed" information on how to do this was also offered, in terms like: remove screws, check, be careful)
As it was the weekend when this behaviour first started, I couldn't do anything at that moment. This bothered me and made me feel bad, since we had got used to this delightful machine.
As a good husband, I tried to fix it myself the easiest way: telling my wife to clean the machine by buying the tablets which should do the trick. As the machine was still under guarantee, she called the store and heard that if the machine itself is the problem, only the call-out charges have to be paid. If the problem is a blocked filter, then we would also have to pay the hourly fee.
After a short discussion with me, my wife bought and used the tablets, and it worked for a while. Only now it was broken again. Again we tried to clean the machine, which didn't work. It seems that pollution wasn't the problem after all.
Lesson 1: If the symptoms are gone after trying a solution, this doesn't mean it was the proper solution; it can be coincidence.
Together we decided to call in the mechanic. They responded very quickly: within three days he would come and check, under the same conditions. We accepted the risk that the problem might not be the machine and that we might also have to pay the hourly fee. We could have avoided this by calling in a plumber first, only that would also cost money, and no solution is guaranteed.
Lesson 2: When deciding who should help, call in the proper person based on your own investigation and that of others.
Today is the day the mechanic will come. The waiting has started, as the store mentioned he would be here between 8:00 AM and 6:00 PM. "Fortunately" he will call just half an hour before he arrives. I have to admit, once the solution enters a certain time window, uncertainty is more annoying when counted in hours than in days. You have to adapt your schedule to this uncertainty.
Lesson 3: Communicate when an agreement is made on delivering solutions. Time schedules have to become more detailed as the moment approaches.
While I was writing, the mechanic called. He will be here within 20 minutes. Now the moment of truth comes closer: what will we have to pay after all? And will a solution be provided? I had already prepared myself by listing what I had done, to avoid a discussion about whether it was really the machine, and to make clear that we urgently need a solution for this problem.
Lesson 4: Be prepared when a solution is offered; communicating your arguments becomes more important, otherwise you might end up paying for false reasons.
Of course we have a workaround: doing the dishes manually. Only this is not a proper solution, as the dishwasher takes up valuable space in the kitchen which would otherwise become useless.
Lesson 5: Workarounds should be temporary, as there are other costs besides the missing functionality.
The mechanic just left. As the outcome was not clear at the beginning, it seemed important to me to assist him by removing stuff which stood in the way of the investigation. I could have left it to him, only then I would have spent valuable time of his, and since the verdict had not yet been made, I would probably have had to pay for it. During the investigation and problem solving I was around to assist and answer questions, trying not to be in the way.
Lesson 6: Provide support within your own area; avoid discussions, certainly those outside your field of expertise.
Within minutes he found the problem. He solved it and also made some adjustments to the "infrastructure": he couldn't find the cause, only noticed that something was wrong which could easily be fixed and which might prevent similar problems in the future.
Lesson 7: When the problem is solved, also take some time for the surrounding situation.
After everything was put back in its proper place, like the front panel of the dishwasher, he performed some small tests. He turned on the machine and observed its behaviour. He also listened to whether it sounded right and whether the problem was actually gone.
Lesson 8: Let the "problem-shooter" convince himself that the problem is gone. This can be done by demonstration.
Now there was some time to offer him coffee. The mechanic is also human, and this was a good opportunity to show respect and recognition. During these few minutes (he was a fast coffee drinker) he explained more about the situation, how it could have happened, and also a bit about his work.
Lesson 9: Show respect and recognition in a proper way; you may need him again. Don't exaggerate.
In the end, the bill still had to be paid. Fortunately, we only had to pay the call-out charges. The mechanic explained that it fell under the guarantee conditions, as he couldn't find the exact cause. He also provided information about situations which would not be covered by the guarantee.
Lesson 10: Ask for information about the situation, so that in the future you are able to narrow down your initial decision about what to do.
Posted by Jeroen Rosink at 9:40 AM | 0 comments | Labels: Metaphor
Sunday, July 26, 2009
What is the leading source?
Ever wondered which data you should rely on? Are we monitoring the test scripts, or are we monitoring the issue database? Based on which information are you able to give advice about quality?
It seems obvious to claim that both sources are important and necessary. Both sources appear on the metrics dashboards that are often used, which means that the information shown there is used to give detailed advice on whether to go into production or not.
Only, what do you do when they are not in sync? This should not happen? Of course it does!
Situation A:
All test cases are executed and all issues are closed. Based on which source is the advice given? Does it matter? Yes: who guarantees that you found all issues? Which source gives an indication of quality? It seems that the list of pending issues provides sufficient arguments towards management to go live. In this case, the issue database gets the benefit, supported by the number and type of tests executed. The issue database is the leading source.
Situation B:
All test cases are executed and some failed; for these, issues are registered and still open. It seems that the test scripts and the issue database are in sync, providing the same information.
Are you using information from the issue database or from the test cases? In this situation the issue database tells you nothing about what you tested; the test scripts should. If all important/critical tests are performed, then you might be able to give advice based on this information and identify possible risks when going live with known issues. The test scripts are the leading source.
Situation C:
All test cases are performed and passed; only there are still some issues open in the issue database. Those were found during testing but are not related to any test case.
There might be situations where test cases are based on designs, and designs don't tell everything. During testing, new issues are found which are not related to any test case. Another situation can be that issues are found which also exist in production. In these cases the test scripts cannot be used as the source for advice, as according to them there are no issues. The issue database will be used, and advice will be given based on the impact of the open issues. The issue database is the leading source.
Situation D:
All issues are closed, but not all test cases are executed. Under time pressure you are not able to execute all test cases, though all mandatory cases are executed. Based on the information about the test cases, advice can be given. The test scripts are now the leading source.
Situation E:
All issues are closed, only the test scripts show that there are issues pending. According to the issue database there is no risk anymore. According to the test scripts there is some risk: although all cases are executed, there are issues with pending status in the scripts. An approach would be to perform a recheck of the issue database. The question here is: does the issue database provide enough information to change the status in the test script? Or are you executing that test case again? What is the leading source now? If the issue database contains test results which prove the issue is solved, and the time stamp corresponds with the time stamp in the test script, the issue database is the leading source. If insufficient information is available, the test script becomes the leading source.
Situation F:
Not all issues are closed and not all test cases are executed. This is a tricky one. It might be easy to give a negative advice, as not everything is done: test cases and issues are open. But what if those are not that important? What is the leading source then? I suggest that both sources can provide some information, only not enough to become the leading source. Another source should be consulted: a requirements list, a risk list, the manager, the business. In this case a combination of sources should be used.
Situation G:
No test cases are executed, and some issues are still open. It seems odd to go live with this information. But what about an update of patches? Sometimes tests are performed without writing them down. In this case the only source you can initially rely on is the issue database, in combination with the knowledge of people. Let's say that the issue database is the leading source.
Of course there are other situations as well. The intention of this post is to show that, depending on the situation, the source that provides the information for decisions differs. You should be aware of whether you go into production based on the number of test cases or the number of issues, with respect to their status.
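As a hypothetical sketch only (the statuses, the function name and the return strings are my own invention, not from this post), the situations above can be written down as one decision rule, which also makes it easy to see which combinations are covered:

```python
def leading_source(tests_run, scripts_show_pending, issues_open):
    """Pick the leading source for go-live advice (situations A-G).

    tests_run:            "all", "some" or "none"
    scripts_show_pending: test scripts still show failed/pending results
    issues_open:          the issue database still has open issues
    """
    if tests_run == "all":
        if not scripts_show_pending and not issues_open:
            return "issue database"                        # A
        if scripts_show_pending and issues_open:
            return "test scripts"                          # B
        if not scripts_show_pending and issues_open:
            return "issue database"                        # C: issues not tied to a case
        # E: scripts pending, issues closed; recheck whether the database holds proof
        return "issue database if it holds proof, else test scripts"
    if tests_run == "some":
        if not issues_open:
            return "test scripts"                          # D: mandatory cases decide
        return "combination of sources"                    # F: neither is enough alone
    return "issue database plus people's knowledge"        # G

print(leading_source("all", False, True))   # situation C
```

A real project would map its own test-management and issue-tracker statuses onto these three inputs; the point is merely that the leading source is a function of the situation, not a fixed choice.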
Posted by Jeroen Rosink at 10:07 AM | 0 comments | Labels: Metrics, Testing in General
Saturday, July 25, 2009
Bounded by thinking in methods and processes
It is now almost a week ago that I had a challenge with Matt Heusser. One of the interesting things I learned is that approaches differ on some points. Over the years I was taught to think in processes like TMap, ISTQB, ITIL and PRINCE2. A good habit of mine is to investigate the borders of those methods.
When I was at school, I noticed I drove my teachers and fellow students crazy by questioning methods. Not to prove they won't work or aren't true; just to investigate the boundaries of methods, so that I know that when I cross those boundaries, I need to be creative.
During the challenge I forced myself to set aside what I had learned and to re-shape my knowledge based on the questions I got. This helped me find some answers and miss others. Of course this is not bad. Only there was one obvious test I forgot. Knowing it was an obvious one, I forced myself to think and started using all kinds of methods, approaches, etc. I noticed this brought me further from the actual solution. At the end I mentioned it, only as a side note and not as the main thing. You might say I was a bit blinded by methods.
I learned that obvious things are often missed.
In the projects I worked in, methods were pre-defined. In the Netherlands a method like TMap is very common. It helps you create structure in the test process. In almost every project it becomes a main goal to work according to this method, as it was already sold as the way to create structure. When other actions are needed, they are first checked against the method: do they fit in? Sometimes the approach is changed; sometimes the actions are not done, because they were not agreed upon and the test plan is leading. This leads to actions being skipped based on "false" arguments.
This brings me back to a phrase a teacher once told me: "Methods are tools to model your situation; keep in mind that models are just a simplification of reality."
Based on this statement, methods, and the processes defined as a result of them, are fallible. You can never capture reality in a model. You will always miss items. Only afterwards might you notice the importance of the missed items.
When obvious things are missed by using methods and following processes, then thinking in methods and processes leads to obvious blindness. The question is: how large will this obvious blindness be? I'm sure a lot of methodologists will claim this is untrue, that there is space in methods/processes to prevent blindness. Perhaps there is. But is this so for everyone?
If everything can be dealt with by a method, what level of experience and skill in that method does one need to prevent obvious blindness? And is it acceptable to demand that level of skill from everyone? Is this reality?
A new question is: Are we dealing with reality or methods?
What would you do?
1. Accept obvious blindness and deal with it later, when it fits the method/process
2. Reject methods/processes and define your own world
3. Define space in methods/processes to allow time to investigate obvious blindness holes
4. Teach project members the methods to the highest detail and claim there cannot be any obvious things that are overlooked
Posted by Jeroen Rosink at 9:48 AM | 0 comments
Labels: Test Methods, Testing in General
Saturday, July 11, 2009
Whisky or whiskey and testing
Some people call it whisky and some name it whiskey. The notation depends on its origin. In Ireland and the USA they use "whiskey"; in Canada and Scotland they use "whisky".
In the Netherlands we tend to use "whisky".
With this liquor you already have differences in tastes, flavors, ages, colors, and experiences. I can imagine that you also have differences in belief and understanding. Most people can name at least three different brands of whisky and tell you which is better. Far fewer of them have actually tried that whisky.
You have these kinds of differences within testing as well. People think they are talking about the same thing when referring to testing an application, only the approach is different. As with drinking whisky, people have different understandings and believe they are talking about the same drink, telling you it is the best whisky. It is quality. They make this claim because of their experiences.
If this is true, then quality is just a result of the experience something gives you. This has quite some impact on testing, as people tend to gain experience by using specific approaches.
For example, the view of testing in the USA is a bit different than in Europe (I'm generalizing a bit). In the Netherlands we tend to test according to procedures, guides, phases, project plans, steering groups, etc., as in test approaches like TMap and ISTQB. In the USA they tend to test based on technique, their understanding of technique, heuristic methods, etc.
Because of these differences the experience will also differ. Therefore the perception of quality will not be equal. As a result, testing is not equal.
The pitfall here is that we are trying to teach each other about testing, sharing ideas and learning from each other based on different levels of understanding, while the information might not fit the experience. Wise lessons are misunderstood and misused.
A few days ago I noticed on Twitter how a fellow tester was going to enjoy a 16-year-old whisky called Lagavulin. I had never heard of this brand, but I believe him when he claims it is a good, quality whisky. For myself, I like the 18-year-old Highland Park, and in certain situations the 12-year-old and the 15-year-old are also good. It depends on the situation and the mood I'm in. This is an example of differences in understanding quality: although we are both talking about the same topic, there are differences in experience and understanding. I could try the whisky he is drinking and offer him some of mine to get a better understanding of each other, perhaps when the occasion is there. For the moment I stick to testing.
Knowing about differences in testing, I try to understand more about other ways of testing. You can read about it, but experience is more valuable. For this I was lucky to visit the Miagi-do of Matthew Heusser as a result of my search for a mentor in software testing. During these visits I received a black-belt challenge, which I accepted. I will not share the details of this challenge, as it would then no longer be a challenge for you. That challenge was tough. It was a brain teaser and breaker. It was fun to do. I think the strength of the challenge was Matt's capability to adapt the challenge based on the information I gave him.
At the end I earned the brown belt, and I'm proud of it. Not because I got a brown belt, but because Matt made me learn more about myself: how I approach testing, what differences there are in points of view, where I rely on my own experience, and what the pitfalls are when doing so. He also strengthened my idea that there are more views on testing and that to understand those I need to learn more. As I knew this already, I am more proud of gaining the brown belt than I would have been of the black belt. It makes the brown belt more valuable, because I'm aware of my knowledge, my willingness to learn, and my drive for testing.
Like drinking whisky, testing is also a matter of perception. Perceptions may differ based on experience, without claiming what is good or bad. For this you have to be able to communicate at a proper level of understanding and to share thoughts and ideas.
Posted by Jeroen Rosink at 10:54 PM | 0 comments
Labels: Metaphor, Testing in General