Friday, July 31, 2009
Why only focussing on bugs and quality?
Reading all kinds of blogs, books and articles, we keep focussing on bugs and quality. Some people claim that finding bugs is the main goal for testers. Others mention that we have to advise on quality.
Focus
Sometimes they write about skills other than testing skills, like domain knowledge, technical skills, communication skills and more. More often than not, these skills are claimed to be needed to find more bugs or to get better metrics for presenting quality.
There are also people who use words like "added value for the customer" and try to prove this based on the number of bugs they prevent from reaching the production system. The pitfall I see in these approaches is that the tester's view is narrowed from the start. They focus on how to find bugs fast or with a certain coverage; they try to convince the customer about the quality of a product. While focusing, they apply their best practices and lessons learned based on these principles.
This raises a question for me: are we doing the right thing, and can we do better? Obviously we are, as the customer is happy and the product went live without major problems. Or are we?
Doing right?
I often hear that if we have to go from A to B, we have to know where we are, where A is and how we can get to A. Then we use our knowledge to get to B. During the journey towards B we adapt our approach. Another thing I often hear is: why reinvent the wheel?
For this we use methods and approaches to start from A. We start testing and find bugs, because without bugs, in other people's view, we are not doing the right thing. Because we are always under time pressure, we tend to adapt our route to B to make sure that "our" process will not be the project's failure. So we are not adjusting our approach to add value; we are merely adjusting it to hide the failure of our lessons learned and best practices. If this is the case, are we doing the right thing? Are we focussing on the items that need focus?
Multiple processes
When hiring testers, the skills asked for are often related to domain knowledge, branch (industry) knowledge, programming knowledge, test tools, testing certifications, communication skills, knowledge of development methods and sometimes testing skills (something different than testing certifications). Indirectly these might support the test process as defined. Is this a guarantee that bugs are found, that quality is proven?
If you answer this question with "yes": why is it that the benefits of this knowledge are not measured? As far as I know, you have to pay for every bit of extra knowledge and skill. It seems there must be some indirect value hidden behind this knowledge for the test process.
Re-focussing
This indirect value can only be a result of the involvement of other processes. These processes provide information that is valuable for the test process. It is important to know what kind of impact this information has on the test process. For example: if branch knowledge is important, then we should not only define a test process that uses this knowledge; it also has to take care of gaining, interpreting and processing information from and towards the people working in that company. This is different information than the number of bugs or a visualization of quality.
I think we should also try to turn the indirect value of information from other processes into direct value. We should also focus on processes that generate information which has an impact on the test process. This might result in managers who are able to value information from those processes and add value to them with respect to bugs and quality. It might also lead to a situation where a product is shipped to production with unknown bugs and poor quality in certain areas, because shipping it now has more value for those processes than waiting until everything is uncovered and tested.
Testing becomes more
With this idea, testing becomes more than finding bugs or providing information about quality; it also processes information from other processes and values the product and the skills of team members against those processes. The focus of testing becomes broader than just the test process. The test process shall also value the information with respect to the other processes. Pieces will fall together.
Posted by Jeroen Rosink at 8:08 AM 0 comments
Labels: Testing in General
Monday, July 27, 2009
10 Lessons when waiting for a fix
Our dishwasher is broken. It bleeps and its light flashes quickly four times. The water is not pumped away, and after trying to reset the machine, it keeps taking in water. Don't try this at home, because after a few attempts the floor gets wet.
As always, when something breaks the timing is never right. Dishwashers break when you need them, don't they? In this case we noticed it wasn't working because there were dishes to clean. When the machine stops, it really stops. The process of finding a solution turned out to be quite interesting.
When the machine told us something was wrong, I went to it and investigated the symptoms:
- Water in the machine
- Clean dishes (it stopped at the end of a cycle)
- 4 Short bleeps and blinking light
- Reset button functions
- Restart shows same behaviour after a few minutes
- Water was being supplied to the machine
- No other noises
With this information I went to the internet and searched based on the symptoms and the brand of the machine. Somehow I didn't have information about the specific model available. I found several threads mentioning several options:
1. Clean the machine using some special cleaning stuff
2. Hire a mechanic
3. Do it yourself (some “detailed” information on how to do this was also offered, in terms like: remove screws, check, be careful)
As it was the weekend when this behaviour initially started, I couldn't do anything at that moment. This bothered me and made me feel bad, since we had got used to this delightful machine.
As a good husband I tried to fix it myself the easiest way: telling my wife to clean the machine by buying the tablets that should do the trick. As the machine was still under guarantee, she called the store and heard that if the machine itself is the problem, only the call-out charge has to be paid. If the problem is a blocked filter, then we would also have to pay the hourly fee.
After a short discussion with me, my wife bought and used the tablets, and it worked for a while. Only now it is broken again. Again we tried to clean the machine, which didn't work. It seems that dirt wasn't the problem after all.
Lesson 1: If the symptoms are gone after trying a solution, this doesn't mean it was the proper solution; it can be coincidence.
Together we decided to call in the mechanic. They responded very quickly: within 3 days he would come and check under the same conditions. We accepted the risk that the problem might not be the machine itself, in which case we might also have to pay the hourly fee. We could have avoided this by calling in a plumber, only that would also cost money and no solution is guaranteed.
Lesson 2: When deciding who should help, call in the proper person based on your own investigation and that of others.
Today is the day the mechanic will come. The waiting has started, as the store mentioned that he would be here between 8:00 AM and 6:00 PM. "Fortunately" he will call just half an hour before he arrives. I have to admit, once a solution enters a certain time window, the uncertainty is more annoying when it is measured in hours than in days. You have to adapt your schedule to this uncertainty.
Lesson 3: Communicate when an agreement is made on delivering solutions. Time schedules have to become more detailed as the moment of delivery comes closer.
While I was writing, the mechanic called. He will be here within 20 minutes. Now the moment of truth is getting closer: what will we have to pay after all? And will a solution be provided? I had already prepared myself by being able to explain what I had done, to avoid a discussion about whether it is the machine at all, and to make clear that we urgently need a solution for this problem.
Lesson 4: Be prepared when a solution is offered; communicating your arguments becomes more important, otherwise you might end up paying for the wrong reasons.
Of course, we have a workaround: doing the dishes manually. Only this is not a proper solution, as the dishwasher is taking up valuable space in the kitchen which would otherwise become useless.
Lesson 5: Workarounds should be temporary, as there are other costs than just the loss of the missing functionality.
The mechanic has just left. As the outcome was not yet clear at the beginning, it seemed important to me to assist him by removing the stuff that stood in the way of his investigation. I could have let him do it himself, only that would have cost him valuable time, and since the verdict was not yet made I probably would have had to pay for it. During the investigation and problem solving I was around to assist and answer questions, trying not to be in the way.
Lesson 6: Provide support that is within your own area; avoid discussions, certainly those outside your field of expertise.
Within minutes he found the problem. He solved it and also made some adjustments to the "infrastructure"; he couldn't find the root cause, but noticed something wrong that could easily be fixed and that might prevent similar problems in the future.
Lesson 7: If the problem is solved, also take some time to look at the wider situation.
After everything was put back in its proper place, like the front panel of the dishwasher, he performed a small test. He turned on the machine and observed the behaviour. He also listened to check whether it sounded right and whether the problem was actually gone.
Lesson 8: Let the "troubleshooter" convince himself that the problem is gone. This can be done by demonstration.
Now there was some time to offer him coffee. The mechanic is also human, and this was a good opportunity to show respect and recognition. During these few minutes (he was a fast coffee drinker) he explained more about the situation, how it could have happened, and also a bit about his work.
Lesson 9: Show respect and recognition in a proper way. You perhaps need him again. Don't exaggerate.
In the end, the bill still had to be paid. Fortunately, we only had to pay the call-out charge. The mechanic explained that it fell under the guarantee conditions, as he couldn't find the exact cause. He also provided information about situations that wouldn't be covered by the guarantee.
Lesson 10: Ask for information about the situation, so that in the future you are able to narrow down your initial decision about what to do.
Posted by Jeroen Rosink at 9:40 AM 0 comments
Labels: Metaphor
Sunday, July 26, 2009
What is the leading source?
Ever wondered which data you should rely on? Are we monitoring the test scripts, or are we monitoring the issue database? Based on which information are you able to give advice about quality?
It seems obvious to claim that both sources are important and necessary. Both sources appear on the metrics dashboards that are often used, which means that the information shown there is used to provide detailed advice on whether to go into production or not.
Only, what do you do when they are not in sync? That should not happen? Of course it does!
Situation A:
All test cases are executed and all issues are closed. Based on which source is the advice given? Does it matter? Yes: who guarantees that you found all issues? Which source gives an indication of quality? It seems that the list of pending issues provides sufficient arguments towards management to go live. In this case, the issue database gets the benefit, supported by the number and type of tests executed. The issue database is the leading source.
Situation B:
All test cases are executed and some failed; for these, issues are registered and still open. It seems that the test scripts and the issue database are in sync, providing the same information.
Are you using the information from the issue database or from the test cases? In this situation it seems that the issue database tells nothing about what you tested. The test scripts should. If all important/critical tests are performed, then you might be able to give advice based on this information and identify possible risks when going live with known issues. The test scripts are the leading source.
Situation C:
All test cases are performed and passed; only there are still some issues open in the issue database. Those were found during testing but are not related to any test case.
There might be situations where test cases are based on designs that do not tell everything. During testing, new issues are found which are not related to any test case. Another situation can be that issues are found which also exist in production. In these cases the test scripts cannot be used as the source for advice, as according to them there are no issues. The issue database will be used and, based on the impact of the open issues, advice will be given. The issue database is the leading source.
Situation D:
All issues are closed and not all test cases are executed. Under time pressure you are not able to execute all test cases, though all mandatory cases are executed. Based on information about the test cases, advice can be given. The test scripts are now the leading source.
Situation E:
All issues are closed, only the test scripts show that there are still issues pending. According to the issue database there is no risk anymore. According to the test scripts there is some risk: although all cases are executed, there are issues with pending status in the scripts. An approach would be to recheck the issue database. The question here is: does the issue database provide enough information to change the status in the test script? Or do you execute that test case again? What is now the leading source? If the issue database contains test results which prove the issue is solved, and the timestamp corresponds with the timestamp in the test script, the issue database is the leading source. If insufficient information is available, the test script becomes the leading source.
Situation F:
Not all issues are closed and not all test cases are executed. This is a tricky one. It might be easy to give negative advice, as not everything is done; test cases and issues are open. But what if those are not that important? What is the leading source then? I suggest that both sources can provide some information, only not enough to become the leading source. Another source should be consulted. This can be a requirements list, a risk list, the manager, the business. In this case a combination of sources should be used.
Situation G:
No test cases are executed, and some issues are still open. It seems odd to go live with this information. But what about an update or a patch? Sometimes tests are performed without writing them down. In this case the only source you can initially rely on is the issue database, in combination with the knowledge of people. Let's say that the issue database is the leading source.
Of course there are other situations as well. The intention of this post is to visualize that, depending on the situation, the source that provides the information to make decisions differs. You should be aware of whether you are going into production based on the number of test cases or the number of issues, with respect to their status.
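To make the pattern behind situations A to G a bit more tangible, here is a minimal sketch in Python of how such a "leading source" decision could be expressed. It only illustrates the reasoning above; the Snapshot structure and its field names are my own assumptions, not part of any real test management tool.

```python
# Minimal sketch of the "leading source" reasoning from situations A-G.
# All field names are illustrative assumptions, not a real tool's API.
from dataclasses import dataclass


@dataclass
class Snapshot:
    tests_total: int            # total number of test cases
    tests_executed: int         # test cases executed so far
    tests_failed: int           # executed test cases that failed
    open_issues: int            # issues still open in the issue database
    scripts_show_pending: bool = False   # scripts still flag pending issues (situation E)
    issue_db_proves_fixes: bool = False  # results plus matching timestamps (situation E)


def leading_source(s: Snapshot) -> str:
    all_executed = s.tests_executed == s.tests_total

    if s.tests_executed == 0:                               # situation G
        return "issue database (plus people's knowledge)"

    if all_executed:
        if s.open_issues == 0 and not s.scripts_show_pending:
            return "issue database"                         # situation A
        if s.open_issues == 0 and s.scripts_show_pending:   # situation E
            return "issue database" if s.issue_db_proves_fixes else "test scripts"
        if s.tests_failed > 0:                              # situation B
            return "test scripts"
        return "issue database"                             # situation C

    if s.open_issues == 0:                                  # situation D
        return "test scripts"
    return "combination of sources (requirements, risks, business)"  # situation F


# Example: situation B - all executed, some failed, issues still open
print(leading_source(Snapshot(tests_total=100, tests_executed=100,
                              tests_failed=5, open_issues=5)))
```

Of course, real advice is never this mechanical; the sketch is only meant to show that the leading source shifts with the situation.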
Posted by Jeroen Rosink at 10:07 AM 0 comments
Labels: Metrics, Testing in General
Saturday, July 25, 2009
Bounded by thinking in methods and processes
It is now almost a week ago that I had a challenge with Matt Heusser. One of the interesting things I learned is that approaches differ on some points. Over the years I was taught to think in processes like TMap, ISTQB, ITIL and PRINCE2. A good habit of mine is to investigate the borders of those methods.
When I was at school I noticed I drove my teachers and fellow students crazy by questioning methods. Not to prove that they would not work or were not true, just to investigate the boundaries of methods, so that I would know when I crossed those boundaries and needed to be creative.
During the challenge I forced myself to set aside what I had learned and reshape my knowledge based on the questions I got. This helped me find some answers and miss others. Of course this is not bad. Only there was one obvious test I forgot. Knowing it was an obvious one, I forced myself into thinking and started using all kinds of methods, approaches, etc. I noticed this brought me further from the actual solution. In the end I mentioned it, only as a side note and not as the main thing. You might say I was a bit blinded by methods.
I learned that obvious things are often missed.
In the projects I worked in, methods were pre-defined. In the Netherlands a method like TMap is very common. It helps you to create structure in the test process. In almost every project it becomes a main goal to work according to this method, as it has already been sold as the way to create structure. When other actions are needed, they are first checked against the method: do they fit in? Sometimes the approach is changed; sometimes the actions are not done because they were not agreed upon and the test plan is leading. This leads to actions being skipped based on "false" arguments.
This brings me back to a phrase a teacher once told me: "Methods are tools to model your situation; keep in mind that models are just a simplification of reality."
Based on this statement, methods, and the processes defined as a result of them, are fallible. You can never capture reality in a model. You will always miss items. Only afterwards might you notice the importance of the missed items.
When obvious things are missed because we use methods and follow processes, then thinking in methods and processes leads to obvious blindness. The question is: how big will this obvious blindness be? I'm sure a lot of methodologists will claim this is untrue, that there is space in methods/processes to prevent blindness. Perhaps there is. Only, is this true for everyone?
If everything can be dealt with by a method, what level of experience and skill in that method does one need to prevent this obvious blindness? And is it acceptable to demand that level of skill from everyone? Is this reality?
A new question is: Are we dealing with reality or methods?
What would you do?
1. Accept obvious blindness and deal with it later when it fits the methods/process
2. Reject methods/processes and define your own world
3. Define space in methods/processes to allow time to investigate obvious blind spots
4. Teach project members the methods down to the finest detail and claim there cannot be any obvious things that are overlooked
Posted by Jeroen Rosink at 9:48 AM 0 comments
Labels: Test Methods, Testing in General
Saturday, July 11, 2009
Whisky or whiskey and testing
Some people call it whisky and some call it whiskey. The spelling depends on its origin. In Ireland and the USA they use whiskey. In Canada and Scotland they call it whisky.
In the Netherlands we tend to use Whisky.
With this liquor you already have differences in taste, flavor, age, color, experience. I can imagine that you also have differences in belief and understanding. Most people can name at least three different brands of whisky and tell you which is better. Far fewer of them have actually tried that whisky.
You have these kinds of differences within testing as well. People think they are talking about the same thing when referring to testing an application, only the approach is different. As with drinking whisky, people have a different understanding and believe they are talking about the same drink, telling you it is the best whisky. It is quality. They make this claim because of their experiences.
If this is true, then quality is just a result of the experience something gives you. This has quite an impact on testing, as people tend to gain experience by using specific approaches.
For example, the view of testing in the USA is a bit different than in Europe (I'm generalizing a bit). In the Netherlands we tend to test according to procedures, guides, phases, project plans, steering groups, etc., as in the test approaches called TMap and ISTQB. In the USA they tend to test based on technique, their understanding of technique, heuristic methods, etc.
Because of these differences, the experience will also differ. Therefore the perception of quality will not be equal. As a result, testing is not equal.
The pitfall here is that we are trying to teach each other about testing, sharing ideas and learning from each other based on different levels of understanding, while the information might not fit the experience. Wise lessons are misunderstood and misused.
A few days ago I noticed on Twitter that a fellow tester was going to enjoy a 16-year-old whisky called Lagavulin. I had never heard of this brand and believe him when he claims it is a good, quality whisky. Myself, I like the 18-year-old Highland Park, and in certain situations the 12-year-old and the 15-year-old are also good. It depends on the situation and the mood I'm in. This is an example of differences in understanding quality: although we are both talking about the same topic, there are differences in experience and understanding. I could try the whisky he is drinking and offer him some of mine to get a better understanding of each other, perhaps when the occasion is there. For the moment I stick to testing.
Knowing about differences in testing, I try to understand more about other ways of testing. You can read about it, but experience is more valuable. For this I was lucky to visit the Miagi-do school of Matthew Heusser, as a result of my search for a Mentor, Mentor in software testing. During these visits I received a black-belt challenge, which I accepted. I will not share the details of this challenge, as it would then no longer be a challenge for you. That challenge was tough. It was a brain teaser and breaker. It was fun to do. I think the strength of the challenge was Matt's ability to adapt it based on the information I gave him.
In the end I earned the brown belt and I'm proud of it. Not because I got a brown belt, but because Matt made me learn more about myself: how I approach testing, what differences there are in points of view, where I rely on my own experience and what the pitfalls are when doing this. He also strengthened my idea that there are more views on testing, and that to understand those I need to learn more. As I knew this already, I am more proud of gaining the brown belt than I would have been of the black belt. It makes the brown belt more valuable, because I'm aware of my knowledge, my willingness to learn and my drive for testing.
Like drinking whisky, testing is also a matter of perception. Perceptions may differ based on experience, without anyone being able to claim what is good or bad. For this you have to be able to communicate at the proper level of understanding and be able to share thoughts and ideas.
Posted by Jeroen Rosink at 10:54 PM 0 comments
Labels: Metaphor, Testing in General
Questions for reviews
Are you also a person who is able to find holes in functional/technical designs and other documents? When reading a document, do you sometimes have the feeling something is missing?
I was triggered by this question because recently I was able to tell people that I missed something in a functional design. Not that it was a bad design; how could it be, as developers were able to write a technical design based on the demanded functionality, and testers were also able to test the functionality. So was it actually that bad? No, we managed to reach our goal.
Reviews are introduced to avoid errors and changes in a later stage of development. Lots of books have been written to explain which types of review can be done and also how to do them. I think certain issues in DUR (Documents Under Review) can also be avoided at an earlier stage. In some organizations this is mainly done by using writing guidelines and templates. Only this doesn't guarantee that no information is missing from those documents.
Before a document is handed over for review, you can also perform some initial checks. For this purpose I post here an idea I have about this; perhaps it might help you too.
Obviously a checklist would be appropriate. The disadvantage I see in using a checklist is that people stop thinking. Under time pressure they might even answer every check with "YES" so the process can continue.
Instead of using checks, you might use questions to gain information about the document and how it came into being. The following points can be used for questioning functional designs. I think they might be used for other documents as well. As so often, a list can never be complete and should never be used as the complete truth; the following points should be used as a guideline, perhaps as a Quick Reference Card.
History
- Is this document adding new value to other written documents related to the same functionality/processes?
- What is the level of knowledge others have about these documents?
- Is a reading instruction needed for those documents?
- Have the requirements been discussed with the customer, including how to write them down in the document?
- Did previous changes on similar functionality lead to discussions/questions/meetings and how can this be avoided in this document?
Document Structure
- Did the writer have to adapt the content to make it fit the template?
- When did this happen: in the beginning of writing the document or later on?
- Where did this happen in the document?
- Did the writer struggle during writing the document to define functionality on the proper place of the document?
- When did this happen: in the beginning of writing the document or later on?
- Where did this happen in the document?
- How did the writer manage to use the structure and make the content fit?
Usage of words
- How do the examples used support the explicitly defined situations?
- How is it made clear that the examples used are related to situations defined in other documents?
- If a translation had to be made from one language to another, are the key words of the requirements used in the correct context?
- In which way are variables/conditions from the business rules made explicit, and was this needed because of common knowledge or specific knowledge?
- When and where did the writer struggle to find the proper word to describe the functionality and match the requirement?
Explanation of functionality
- How did the writer make the relation between the functionality and the business rules?
- How did the writer explain unusual situations of the defined functionality?
- What must be done to translate the written functionality into an image like a process flow, time axis, etc.?
- When and where are other chapters or paragraphs needed to support the explanation of the functionality?
Purpose
- Is there enough information in the document that the writer can hand it over to a colleague, and he/she is able to explain it to the developer?
- Is the writer able to explain the relation between business rules and the functionality based on the information in the document?
- What arguments can be thought of to support the size of the document with respect to the impact and value of the requirement?
As so often, more questions can be asked/defined. In my opinion the questions mentioned above can be used to question your document. I tried to write them in a way that triggers you to ask more questions. I believe that if you are not triggered to ask more questions, you might want to spend more time on your document and with your colleagues, talking about the functionality.
Posted by Jeroen Rosink at 10:18 AM 2 comments
Labels: Ideas, Testing in General
Wednesday, July 8, 2009
Improving seems so obvious
I have mentioned this before: as testers we tend to continuously improve our work, make our lives easier and hope to help the customer with those improvements.
In Quality Matters issue Q1-2009 I already wrote about avoiding improvements, in the article called "Be innovative! Stop improving your test process".
The basic idea behind avoiding improvements is that they might hurt other processes. An improvement might benefit your process as you get in control, only when you ask other project members to help you, for example with better technical designs, someone has to write them. Often these are the very people who are currently needed to deliver the code you need.
Recently I was in the fortunate situation that a tester told me things could be improved. An interesting discussion took place.
Q: Why is improvement needed? The initial answer was: to optimize the process.
A: By optimizing, the speed of execution of certain tests will be increased.
Q: Why should speed be increased, as you managed everything in time?
A: We can perform more tests and also perform some regression tests as functionality was added during the acceptance phase.
Q: What is your idea about the needed improvements?
A: We could do some clustering of test cases, we can change the templates, we could communicate better, daily kick-off meetings will help, data preparation should start earlier, maintaining metrics can be optimized....
It seems he had given this some thought, and obviously those thoughts seem to be valid. Are they? I told him that perhaps improvements could be made, and I asked him to write those improvements down and explain them to me. I also asked him to think about what the benefits are, the disadvantages, how much it will cost and, besides that, what value it will have. After a day I received a small document with suggested changes, very well structured. Although the document clearly described the context as he saw it, his activities, and how they could be improved, it was still missing something. It was written for an "ideal" world. His "ideal" world. Together we went through this document and started talking about the necessity of those improvements, how they would add value to the project/business, and whether his project context matched the overall project context.
It was an interesting conversation, as I learned more about the struggles he faced while performing the tests. I was also reminded that providing only one solution is never enough. It reminded me of the words a teacher once told us: "Always provide 3 options, let the people decide. One option is no option."
At the end of the meeting we came up with the following options:
1. Do nothing
2. Do something which is not optimal
3. Do the optimal solution
This reminded me of a posting I made here in the past: on July 7th 2008 I posted an article here, The Power of Three, about the fact that there are always at least three options.
In this case it would also help if three options were offered. Only, just providing more options is not sufficient. Those options should also be placed in the proper context. They should not only help you and benefit your team; they should also be valuable for the customer.
In my article in Quality Matters I explained about internal and external improvements, with respect to direct and indirect improvements. I believe you have to be careful with improving. If you can do better, why not start? If you ask others to do better, consider that their time and resources are also valuable. If you are asking them to change or do more, it will cost something in other areas. If people spend less time on those areas, those areas become worse. So it is important to place this in your defined context too. What is more valuable: their current activities for the organization, or your new activities?
So again, the power of three is involved here as well:
1. Does it help you?
2. Does it help the project?
3. Does it help the organization?
For each of these questions, add the value towards the organization. I would suggest translating the benefits and disadvantages into value for the product, the organization and the team.
At the end of the meeting we came to the conclusion that, although most of the suggestions were as valid as can be, they would cost more at other levels. Does this mean that those suggestions for improvement are useless? I would certainly say no. When people are able to come up with improvements, there must be signals that something is not going smoothly. These signals should not be ignored. As they are all written down, it will help when improvements are made in a structured way; they can become part of a total improvement plan. Or they can be used as signals that we should think about improvements.
Posted by Jeroen Rosink at 6:48 PM 0 comments
Labels: improvement, Testing in General
Sunday, July 5, 2009
Swine Flu: adapt your processes or ignorance and denial?
Over the last few days I have been wondering why organizations are not busy adapting their development/test processes so they can continue when the "swine flu" explodes. Is this ignorance or denial? Am I the only one worrying about this?
Currently the prediction in the Netherlands is that this flu will explode in the last months of this year. According to this article (translated from Dutch using the Google translation service):
Mexican flu peak may be in August: thousands of people per day will catch this flu in August, and the English government expects over 100,000 patients per day in Britain.
I'm sure there are more accurate figures on the internet.
I also heard that in the Netherlands they expect between 20% and 30% of the people to become ill. When this happens, the schools might also be closed. Imagine the impact of this.
Currently ICT companies are dealing with the recession, with a certain number of employees on the bench. I cannot imagine that companies can simply designate those as the victims of the flu, as if everyone who is on an assignment is not allowed to become ill.
Let's assume that 20% of the people become ill.
As a general figure, this also applies to about 20% of the project members. This will reduce the skills within the project team and result in a delay in the delivery of functionality and solutions.
Next: when schools are closed, and let's assume that in about 30% of families both parents are working, someone has to take care of the children, so resources are reduced again. What do you think happens when this hits another 10% of your project members?
What happens when your project resources are decreased by about 30%? What is the impact for you? And what happens if you also depend on other parties which supply support for environments, tools, food, etc.?
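To put a rough number on the reasoning above, here is a small back-of-the-envelope sketch. The percentages are simply the ones assumed in this post (20% ill, another 10% staying home for childcare), and the team size is just an example.

```python
# Back-of-the-envelope capacity sketch using the percentages assumed in this post.
team_size = 50          # hypothetical project team size

ill_rate = 0.20         # members who become ill themselves
childcare_rate = 0.10   # additional members staying home to care for children

unavailable = round(team_size * (ill_rate + childcare_rate))
print(f"Unavailable: {unavailable} of {team_size} members "
      f"({ill_rate + childcare_rate:.0%})")
print(f"Remaining capacity: {team_size - unavailable} members")
```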
Will it still be in time to act when August arrives?
Currently we are already facing a recession. I believe a global outbreak at a heavier level than it is now will strengthen the recession. So more people will face uncertain conditions, and more organizations will have to act on the next level of recession.
Are you already adapting your development processes to this situation? Will there be a shift in processes?
Perhaps the world Matthew Heusser invented will become true and needed: The Boutique Tester
Perhaps the isolated world I was thinking of will become true: Testing in an Isolated World
Have you already started thinking about possible solutions? At least I'm already doing this, and now wondering how we can take it to the next level. I hope my predictions don't come true and we all keep working nicely together.
Posted by Jeroen Rosink at 10:03 AM 0 comments
Labels: Ideas
Friday, July 3, 2009
Magazine: Quality Matters: Skill Based Testing
Today I'm proud to mention that my second article has been published in the free testing magazine Quality Matters.
This time a story about Skill Based Testing: The Life Cycle of a Tester.
I have already written on this subject:
December 8th 2008: Life Cycle of a Tester
July 1st 2009: Testing in an Isolated World
Posted by Jeroen Rosink at 6:21 PM 0 comments
Labels: Magazine
Wednesday, July 1, 2009
Testing in an Isolated World
One of my favorite quotes comes from John Maynard Keynes (English economist, journalist, and financier, 1883-1946): "Ideas shape the course of history!"
Matthew Heusser has an idea. On his blog he posted an interesting concept The Boutique Tester: A tester as a craftsperson who adds value to the customer through evolution. He stated: "To compete as a craftsperson, the tester role will have to evolve. He'll have to be smarter, sharper, faster. In the boutique world, he will have to explain his services to people who are skeptical of such services and believe they can do it themselves."
Matt is "inventing" a future. He hopes there is a place for the boutique tester and that craftsmanship will return. At the same time he thinks there is no room for it.
So I will try to help him. For this I am creating the idea that testers are needed because of an isolated world. This seems quite contradictory to the globalization of development and testing. We continue outsourcing, on- and off-shoring, etc. Mainly this is how organizations move their processes, with the result that the activities are moved as well. Only this is not their main objective; it is a consequence.
The isolated world I'm talking about can be caused by global circumstances while the business needs to continue. For example: when a pandemic like the "swine flu" gets worse and people have to stay at home, is it really an option for businesses to just stop producing? Of course there are other companies willing to take over this task, claiming they are able to continue delivery. Are they honest about this?
This week I read in the newspaper that if the Dutch government didn't buy vaccines against the flu and people were not using them, there would probably be a chance of more than 30% of the people getting the flu. I'm sure your boss cannot select which people may get the flu and which may not, because it would endanger productivity.
Perhaps in certain situations activities can easily be handed over to other people. I don't think this is completely possible in software testing. So there must be people who are skilled and who have created an environment where they are able to do the job and help customers, although the "environment" is not available.
James Bach also replied to this idea in Have Internet, Will Test, where he gives a good example of what is already possible. As he lives on an island, it would be cheaper if he could express his craftsmanship from there. I think he would be able to get the same results if the necessary tools were available.
Another idea I had for (this type of) tester(s) is to accept that not all knowledge and skills might be available in just one person. Defining the need for value is important; you might pick your tester based on the value needed and accept that there is also some kind of life cycle for testers.
A posting I wrote on December 8th, 2008, Life Cycle of a Tester
Thinking about such a world is one thing; creating it on paper is another. Still, the world is not there yet. Some people might turn to Second Life and use that world to communicate. Virtualization might be one option.
Here are some points I could think of when creating this world; you have to:
1. Find a reason for existence
2. Find a way to communicate
3. Find/create the necessary tools.
4. Find people who are able and willing to live in this world
5. Find other people and make people find you
6. Create mutual trust
7. Prove added value
8. Keep communicating and preserve transparency
9. Find time
10. ... (10 is a nice number, as most lists end with 10 items; I leave this one open for other thoughts)
Most of the points are already there. You just have to prepare and exploit them.
Take point 1: a pandemic is a valid reason; business must go on, and people are not yet that able to communicate and work with innovative technologies.
2. As we can communicate by phone, video and Twitter, there also seem to be tools which enable you to share ideas, language, pictures, desktops, etc.
3. If you are missing tools, find them; the internet is full of them. Ask colleagues, or ask the boutique developer.
4. Share the idea and promote your world.
5. There are already people who are visible on the internet; are you too? When it happens, this might be a very important channel of communication.
6. You have to be able to show your integrity; it is not about the money, it is about the value.
7. Be able to tell what you have done for the customer: not which activities you performed, but what made the product more acceptable to the customer.
8. Be able to express what you are doing, thinking and needing. Don't confuse this with nice fancy words; in this world there is less time to convince people.
9. Spend the time the customer needs and you need.
10. ....
Think about how it would be if you were forced to live in an isolated world. Why not be able to add value from your kind of world to others?
Posted by Jeroen Rosink at 9:10 PM 0 comments
Labels: Ideas, Testing in General