Although it is my holiday I cannot stop thinking about testing. Perhaps it is good to continue thinking about testing even when you don't have to work. Currently my thoughts are about the projects of the last few years. Somehow, during the last days of the year, people think over their good, bad and ugly moments.
This year I was able to attend several conferences: the Software & Systems Quality Conference in Dusseldorf, Germany; EuroSTAR 2008 in The Hague, The Netherlands; and the TestNet Najaarsevent in Nieuwegein, The Netherlands. The main subject of those conferences was the future of software testing. And that future can be shaped by Agile approaches, Model Based Testing, more tools, Business Chain Testing, Security Testing, new test methods and much more.
During this year I also read more weblogs, like those from James Bach, Paul Gerrard, Michael Bolton, Matt Heusser, Elisabeth Hendrickson and many others. I also spent time reading books related to software testing, like: Testing SAP Solutions, TMap Next, TMap Next BDTM, The Complexity of Chain Testing, Software testen in Nederland, SmarTest, Business Process Validation, Kwaliteit in ontwikkeling, Testen 2.0. And some are still lying there, waiting to be read.
Occasionally I spent time on the web searching for more information, in magazines and articles as well as in submitted videos.
Did all this make me a better tester? Do I want to be a better tester? Did it help? I like the phrase I learned in the Certified Scrum Master workshop: "It depends".
And that dependency kept me thinking about software testing although it is my holiday. Only this time not about new ideas: it made me think about this past year. In 2008 I posted articles on my blog to write down thoughts which might help me and others look at software testing from another perspective. Below you see a short retrospective of my year 2008.
One of the thoughts I had was using the congruence model, related to open systems thinking, to identify other risks and objectives in the test process. I described this model at macro, meso and micro level. Although it is not yet completely finished, it led to the following postings.
Open System Thinking and Software Testing (1)
Open System Thinking and Software Testing (2)
Open System Thinking and Software Testing (3)
Open System Thinking and Software Testing (4)
Open System Thinking and Software Testing (5)
Open System Thinking and Software Testing (6)
Open System Thinking and Software Testing (7)
Open System Thinking and Software Testing (8)
As I like to play with other models to see whether they can be used in the testing profession, I also wrote postings like one using the ServQual model: Service Quality Model (SQM) and Software Testing
Or one summarizing the impact of development models on the test process: Development Models and the impact on Testing.
I even used the Cultural Union in an article: Success of improving your Test Process depends on culture. One of my recent postings used the room and pillar mining method in software testing: Room and Pillar approach for software testing
I also used this weblog to express some of my ideas related to a new quality attribute called "Ecoliability":
The green part of development and software testing: "Ecologiability"
Introduction of the quality attribute: Ecoliability
How to deal with "Ecoliability"
or terms like Request For Knowledge (RFK):
Request For Knowledge (RFK)
The RFK process explained
RFK and knowledge processing
or the Organizational Testing Board (OTB), and some other life cycles:
Software testing: An organizational approach (1)
Life Cycle of a Tester
Product User Life Cycle (PULC)
And for some fun I tried to use some metaphors to explain a bit of testing in postings like:
Beware of the auto-pilot
Software testing and Astronomy
Mikado Management or Test Management
Software testing compared with making music
Testing your software like building a house
Tetris as Test Management tool
Flying in a hot air balloon
Shuffleboard
Did I like writing those postings? YES! And I will continue next year. Perhaps other people are willing to help me in 2009 by responding to my postings and discussing the topics. I hope I can continue learning and expressing my ideas to shape my thoughts. For those who read my blog occasionally: I hope you enjoyed these postings. This will be my last posting this year. I wish you all the best for 2009!
Saturday, December 27, 2008
Keep thinking about testing
Posted by Jeroen Rosink at 11:00 AM 0 comments
Labels: Testing in General
Wednesday, December 24, 2008
Open source book: Software Testing Guide Book
Usually you see information in PDF format, so it is sometimes hard to reuse. By accident I ran into a project where writers from all over the world are creating a Software Testing Guide Book.
Download the Open Office Document version here.
I have to admit I did not read the whole document, and therefore I'm not able to say I endorse its content. I do think it is a project worth mentioning.
Posted by Jeroen Rosink at 5:50 AM 0 comments
Labels: Books
Tuesday, December 23, 2008
Magazine: Professional Tester is Back!
One of the first magazines related to software testing that I subscribed to was Professional Tester.
I enjoyed reading this magazine. Only after a while I stopped receiving issues.
I noticed that the website still exists, so now and then I take a look at it. To my surprise and joy I saw that the site has changed and that there was a banner: Professional Tester is back!
They haven't published the editorial calendar yet, so I will keep monitoring the site while waiting for the next issue.
Posted by Jeroen Rosink at 6:23 AM 1 comments
Monday, December 22, 2008
Magazine: What is testing.com
During a short search on the internet I ran into this magazine: WhatIsTesting.com
The first issue was launched in April 2008. And after taking the free subscription I noticed that no other issues have been released yet.
I'm not quite sure what the value of this magazine is, as the same site links to that other magazine, Experience Tester, which has already published its 4th issue.
Perhaps they have combined forces; otherwise WhatIsTesting.com might be another magazine for us testers to get information from.
Posted by Jeroen Rosink at 9:33 PM 0 comments
If something cannot be tested
Every now and then, when I ask testers or the business to test something, I hear that they are not able to test it.
Somehow this sounds very strange to me. I believe that everything which can be built can also be tested. If the business was able to find an error, and the developer is able to solve that error, then we should be able to test the solution. So who is right and who is wrong?
I believe in my testers. My testers are not unwilling to test. They are just not able to perform the tests. And I just want to know why.
Some reasons they gave are:
1. The test system doesn't contain that situation and I'm not able to recreate that situation;
2. I don't have sufficient rights to create useful data for the test;
3. The solution is too technical and therefore it is hard to understand the problem;
4. We don't have that problem anymore;
5. I don't have time to test and this is a minor issue;
6. I know that the solution is not what I need and therefore I'm not spending any time testing it;
7. This problem is not my problem; I'm not able to help.
8. Who are you?
I also believe that I should convince them to test, support them before and during testing, facilitate them to test, and make the business accept the risk when they are not able to test.
If they are not able to create the data, I should see whether others are able to. Mostly this is because of restrictions in the system. If they can be given authorizations in the system to create useful data without messing up existing data, then this should be the solution.
If they don't have the technical knowledge needed to test, I should pair them with a person who has that knowledge, to teach them or help them during the test.
If they know that the given solution is not what they need, I can bring them into contact with the developer, and I also have to identify how this misfit occurred.
If they don't have the time, or it is not that important anymore, I should search for other ways. A solution might be embedding the test in another test case.
If they don't know me, I might have the wrong person and should look for another tester.
Even after all kinds of actions, the tester might not be able to check whether the problem is solved. Does this mean that they should not test? NO! They should at least check whether the functionality connected to the solution is still working. Should I close the ticket after those tests? NO, not immediately: first I have to get the business to accept the risk that the problem might not be solved, and convince them that at least the proposed solution will not harm other processes.
Posted by Jeroen Rosink at 6:42 AM 0 comments
Labels: Process, Testing in General
Sunday, December 21, 2008
Article: The Economic Impacts of Inadequate Infrastructure for Software Testing
Every now and then I find an interesting article on the internet. This is one of them:
Planning Report 02-3: The Economic Impacts of Inadequate Infrastructure for Software Testing
Prepared by: RTI for National Institute of Standards & Technology Program Office Strategic Planning and Economic Analysis Group, May 2002
This document gives some nice examples and goes into depth on software testing.
Posted by Jeroen Rosink at 10:28 AM 0 comments
Labels: Articles
Service Quality Model (SQM) and Software Testing
ServQual model
One of the models I cannot forget is the ServQual model (*1). It was taught to me at school, where the focus lay more on measuring the quality of service delivery. In my opinion, one of the strengths of this model is that it visualizes the possible gaps which might endanger the quality of delivered services.
Wikipedia: SERVQUAL-model : "It measures the gap between customer expectations and experience."
The Five Gaps explained
GAP 1 – difference between consumer expectations or quality determinants, and management’s perception of such consumer expectations
GAP 2 – difference between management’s perceived quality determinants and service specifications (i.e., the critical-to-quality specifications)
GAP 3 – difference between quality specifications and actual service delivery
GAP 4 – difference between actual service delivery and the company’s external communications about services delivery (e.g., word of mouth, past experience, promises, reputation, standard of care)
GAP 5 – Difference between expected service and perceived service
The difference between perceived service and expected service is a function of four different gaps: GAP5 = f(GAP1, GAP2, GAP3, GAP4).
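To make this function concrete, here is a minimal sketch of how gap scores could be combined. The weighted sum and all the weights are my own assumptions for illustration; the model does not prescribe the form of f, so a real instrument would have to be calibrated per organization.

# A minimal sketch: score gaps 1-4 on a 0..1 scale, then estimate gap 5.
# The weighted sum and the weights below are assumptions, not part of ServQual.
GAP_WEIGHTS = {
    "gap1_expectations_vs_perception": 0.3,
    "gap2_perception_vs_specification": 0.2,
    "gap3_specification_vs_delivery": 0.3,
    "gap4_delivery_vs_communication": 0.2,
}

def gap5(scores: dict[str, float]) -> float:
    """Estimate gap 5 (expected vs perceived service) from gaps 1-4."""
    return sum(GAP_WEIGHTS[name] * scores[name] for name in GAP_WEIGHTS)

print(round(gap5({
    "gap1_expectations_vs_perception": 0.2,
    "gap2_perception_vs_specification": 0.1,
    "gap3_specification_vs_delivery": 0.8,   # large spec-vs-delivery gap
    "gap4_delivery_vs_communication": 0.3,
}), 2))  # -> 0.38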
ServQual model in Software testing
To me this seems similar to what we are doing during testing. Normally in software testing we refer to development models to fit our process to the development process. If we approach software testing as a service too, and tune the model a bit towards system delivery, we might come up with a model like the one shown below.
The five gaps for software testing
Gap 1: User Expectations - Management Perceptions Gap
Gap 2: Management Perceptions - System Quality Specifications Gap
Gap 3: System Quality Specifications - System Delivery Gap
Gap 4: System Delivery - External Communications Gap
Gap 5: Expected functionality - Perceived functionality Gap (or the System Performance Gap)
Challenging the gaps
As in software development, we already have methods and tools to measure and control these gaps. So there is nothing new here. What might be new is that instead of keeping the information within the project, you bring it to the business. You make your project more transparent by presenting development as a service provided by the project.
Assuming that these gaps cannot be avoided, transparency can help to increase the level of acceptance by the business.
Examples of activities measuring the gaps
Gap 1: As the distance between the starting and ending point is large, customer involvement should be allowed and increased. This will help to identify differences between the expectations of users and project management. Also, a process of defining SMART requirements which fits the development approach should be established.
Gap 2: Measuring the translation of requirements into written functionality should be one of the basic activities of testing. Only most of the time we skip the reviewing process of requirements and designs. If these processes are in place, we log issues when there is no match or the functionality is ambiguous.
Gap 3: Measuring requirements against delivered functionality is a basic activity of software testing. Here we actually execute test cases on the system to check whether it matches the written expectations.
Gap 4: This is one of the gaps we usually forget to manage. Yet it might be one of the most important ones when you check its relations to the other activities in the model.
Gap 5: Often we see this gap being controlled by performing the user acceptance test. In my opinion these activities are strongly influenced by the other activities in the model, so this alone might not be sufficient.
Lessons we can learn from this model
1. Business involvement is very important and should exist during the project
2. Requirements should be reviewed, and omissions should be communicated not only to project members; if some requirements will not be met, also inform the business so they can adjust their expectations.
3. If issues are found, be honest about this towards the business. Explain what you are going to do about them and when. Every system has its failures; the business can understand this. You can use your approach to solving them to build the confidence of the users. They will be able to match the delivered functionality to their needs. It will also help to manage their perception of the received functionality, and by being transparent their trust might grow and they might forget negative experiences from the past.
4. Transparency should be there. Don't undervalue the power of word-of-mouth communication when users are involved in the project and are not being heard.
5. Keep track of changes in functionality and also of communication. Perhaps also start managing your workarounds (see: Start managing your Workaround's!)
For sure there are other things we can learn. Perhaps you can come up with some things. Just keep in mind that this model has its strengths and weaknesses.
(*1) Parasuraman, A., Valarie A. Zeithaml and Leonard L. Berry (1985), "A Conceptual Model of Service Quality and Its Implications for Future Research", Journal of Marketing, Volume 49, Number 3 (Fall 1985), pp. 41-50
Posted by Jeroen Rosink at 9:29 AM 0 comments
Labels: Ideas, Testing in General
Saturday, December 20, 2008
Are you ready to deliver?
I don't think this situation is different from others. There is a business that wants a solution, there is a manager who needs the business to have this solution, there is another manager who is responsible for getting a good solution into production, there are business users who are made responsible for test execution, and there is you. You are responsible for the test process.
What are you going to do when the solution must be in production tonight?
Some people spend time identifying the possible risks. Others spend time convincing certain managers that it won't happen. Perhaps there are testers who just start testing.
I'm sure that all of these approaches will fail. Normally we start identifying risks related to what should not happen in production, and we try to verify that those situations will not occur. This identification process takes too much of the 4 hours you have left. You will have the risks identified and just a couple of hours to check whether they are covered. And afterwards you haven't filled in the complete picture, although you created the expectation that after testing all risks are measured and identified.
Convincing the managers that you are not able to test would be another approach. Only this will not help the business.
Just starting to test saves some time, only how will you know you did the right thing?
A previous test manager of mine told me that if the business is not able to tell what should work or which risks should be avoided, ask them which errors they don't want to see.
I can imagine that you identify together with the business what value the new solution will have.
For example:
- The new screen enables the user to access information faster, which helps the user process customer information more easily, and therefore the service level increases.
I can imagine that translating this into errors the business wouldn't like is hard. You might enter the zone of negative testing.
You might start with the primary functions in the process like:
- Printing an annual bill;
- Sending a confirmation letter;
- Correct storage of data;
- ...
You might think of all kinds of variations on printing a bill. What about proving that the user is able to print the annual bill for that client, and is not able to disturb printing for other clients using that screen?
This could be translated in errors to avoid.
I don't want to see that:
- The annual bill for this group of clients is not printed
- The confirmation letters are sent more than once
- The confirmation letter is sent to the wrong people
- The data which was changed or created cannot be seen after a next login
- The user spends more time figuring out what is happening on the new screen
Will this avoid all risks? I don't think so. Will this help? I believe so, as you minimize the number of test cases just by defining together with the business what should not happen. I strongly believe that in a half-hour discussion the business can tell you which disasters must not happen, as they already have experience with them. It would be much harder to identify all the items which should work.
Will this lead to solid advice on quality? Surely it will not. It will give the business an indication of the quality. If you tell them that you are not able to cover all points within the given timeframe to give solid advice, but that you can give them an indication of what might happen and explain what this indication is based on, they might accept this approach. And afterwards they are able to tell whether they accept the remaining risks when you are finished. This might lead to a situation where you are finished in time and the business has their solution.
I suggest having a quick-action force available when things are put into production, monitoring that the defined errors are not happening.
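As a small illustration, the "errors we don't want to see" can even be encoded as a quick automated smoke check for that action force to run after go-live. This is only a sketch; every function and value in it is a hypothetical placeholder for hooks into the real system.

# A sketch: turn "errors we don't want to see" into a smoke suite.
def annual_bill_printed(client_group: str) -> bool:
    # Hypothetical hook: query the print spool / output management system.
    return True   # stub so the sketch runs

def confirmation_letters_sent(client: str) -> int:
    # Hypothetical hook: count letters sent to one client.
    return 1      # stub so the sketch runs

DISASTERS = [
    ("annual bill for a client group is not printed",
     lambda: annual_bill_printed("group-A")),
    ("confirmation letter is sent more than once",
     lambda: confirmation_letters_sent("client-1") <= 1),
]

for description, still_ok in DISASTERS:
    print("ok  " if still_ok() else "FAIL", "-", description)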
Posted by Jeroen Rosink at 7:54 AM 0 comments
Labels: Testing in General
Thursday, December 18, 2008
Beware of the auto-pilot
Today I drove to work, and during the trip I received a text message from a friend warning me about a traffic jam. Based on this information I decided to take another road to avoid it. Instead of sending him a message back, I picked up the phone and called him to thank him for the warning. The coincidence was that just 500 meters before the point where I had to drive straight ahead, I took the turn I usually take. After just 25 meters I noticed the mistake I had made. Only there was no way to turn back. I had to continue and was stopped by an eleven-kilometer traffic jam. This cost me about 40% extra travel time.
The best part was that it made me think about this situation. I realized that this could also happen to us. When we are testing systems we know very well, we also switch on our auto-pilot. The good part is that in normal situations we are able to perform our tests much faster than we did initially. The bad part is that when we get distracted, we might take the usual route instead of the intended route based on new information.
I think the lesson learned is to focus even harder on the decision points in the system under test when things become routine.
Posted by Jeroen Rosink at 8:13 PM 0 comments
Labels: Metaphor
Tuesday, December 16, 2008
Testing Games
This morning started well. I posted an article about a possible relation between room and pillar mining and software testing, found some time to play a game on the Wii, and am now having a short break. With my thoughts partly on testing and partly on playing games, I was brought to Google to look for games related to software testing. Here are some I found:
Bug Fix Bingo by K. J. Ross & Associates
Planning poker by Mountain Goat Software
Non-approved test methods by Charles K. Kincaid
Perhaps you can extend that list of test methods.
While writing this blog I hoped I would find more examples. Unfortunately, I didn't. So here are some thoughts of mine for when you want to do something with your time and there is nothing left: hold a competition with your colleagues to see:
1. who is able to find a bug in already tested software;
2. who can perform a number of test cases from beginning to end the fastest;
3. who finds the most issues and presents them in a fancy chart;
4. who is able to construct a data set such that your colleague must use functionality to correct the data in order to finish the test case;
5. who makes the funniest photos of colleagues impersonating test methods.
Posted by Jeroen Rosink at 6:38 AM 0 comments
Labels: Fun
Monday, December 15, 2008
Article: Acceptance Test Driven Development (ATDD)
Elisabeth Hendrickson posted on her weblog Test Obsessed an article related to Acceptance Test Driven Development: Acceptance Test Driven Development (ATDD): an Overview.
There she links to an article she wrote about this topic, which she presented at STANZ 2008 and STARWest 2008: Driving Development with Tests: ATDD and TDD. In my opinion it is a good overview and example of how this could work.
Posted by Jeroen Rosink at 9:50 AM 0 comments
Labels: Articles
Sunday, December 14, 2008
Room and Pillar approach for software testing
A journey underground
At the beginning of this month I went with a group of colleagues to "the caves" of Maastricht in The Netherlands. During this trip into the dungeons of the earth I learned that they are the result of mining, not a natural phenomenon. What I found impressive was the way they cut the stones out of the earth. To me it looks a bit like "negative building": you build something by taking material away.
They started at the top and dug their way down. Only it was not just a matter of digging: they first searched for the flint. If they found it, they knew at what level the ceiling would be. This was an approach they used for centuries. In those days you owned the ground below your property, which meant that boundaries underground were set based on the size of the property above ground.
As more people were mining in that area, the chance of meeting your neighbor underground was accepted. To prevent others from digging in your area, marks were set on the walls. As the mine extended, the risk of collapse increased. To avoid this they came up with a method to dig away enough stone while still maintaining the supporting structure: leaving pieces of stone behind to act as pillars. The result is an area with pillars and rooms you can safely walk through from one side to the other.
What is Room and Pillar mining?
According to Wikipedia: Room and pillar: "Room and pillar is a mining system in which the mined material is extracted across a horizontal plane while leaving "pillars" of untouched material to support the overburden leaving open areas or "rooms" underground. It is usually used for relatively flat-lying deposits, such as those that follow a particular stratum"
William A Hustrulid and Richard L. Bullock explained the method as: "If one pillar fails and surrounding pillars are unable to support the area previously supported by the failed pillar they may in turn fail. This could lead to the collapse of the whole mine. To prevent this the mine is divided up into areas or panels" Note:*1
Mining and software testing
Somehow I see some similarity between this kind of mining and software testing. We too are trying to dig our way through the system. In my opinion we are also performing some negative building: although we are not building the software ourselves, we try to find errors which will lead to changes in what is built. We also start from the surface and go downwards into the system until we are called to stop. If we look at the explanation of Hustrulid and Bullock, we might also face a collapse of the system if we test too deep or wrongly. For example, we perform wrong actions which stop further testing of that test case, or data might get corrupted.
Pillar and Rooms in Organizations
Perhaps this is a bit far-fetched, but processes in organizations can also be seen as pillars and rooms. If you take for granted that organizations perform activities, and that those activities are assigned to processes, you can create a similar map, like the one shown in the figure below.
In Level 1 you can define the departments, in Level 2 the main processes, and in Level 3 their sub-processes. Each block in a level is a room where activities are performed and where information is handed over to another room. To transfer this information, agreements on communication and format must be defined. These can be seen as the pillars of the organization.
Mapping your test cases
To find your way through the mines you need a map and a guide. To find your way through the system and check whether it supports the business processes, you can use a similar map of processes.
This map can be used to make the paths of test scenarios visible. Those paths can be used to present a certain coverage of the system in terms of the business processes. Initially you can show that, based on the selected test cases, you cover every business process at least once. Another benefit is the transparency it creates about whether the system is able to process data in support of the processes.
A secondary use of this map is as a storyboard. Let the business explain what they are doing in their processes and what their understanding is of the impact of their data changes on subsequent processes, and let them prioritize their activities. These priorities can be used to add depth to the test cases.
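To sketch how such a map could drive test selection, here is a minimal example. The processes and scenarios are hypothetical; the only point is the check that every room is visited at least once.

# A minimal sketch (all process names hypothetical) of a process map used
# to check that the selected test scenarios visit every room at least once.
PROCESS_MAP = {                     # Level 2 process -> Level 3 sub-processes
    "order intake":  ["create order", "validate order"],
    "billing":       ["create invoice", "print annual bill"],
    "customer care": ["update customer", "send confirmation letter"],
}

SCENARIOS = {                       # scenario -> path of rooms it walks through
    "happy flow":   ["create order", "validate order", "create invoice"],
    "year closing": ["print annual bill", "send confirmation letter"],
}

all_rooms = {room for subs in PROCESS_MAP.values() for room in subs}
visited = {room for path in SCENARIOS.values() for room in path}

print("uncovered rooms:", sorted(all_rooms - visited))
# -> uncovered rooms: ['update customer']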
Test coordinator as guide
Even if you have a map of the mine, it is still hard to find your way through the system. To speed up your journey through those mines, to make sure you see the important things and find your way back to the exit, it is recommended to get a guide who leads you. In the system, you can assign this task to the test coordinator. He uses the map to lead you through the system and, based on this map, explains in terms you know (processes and activities) what deserves attention. Just as the gas lamp, an additional flashlight and his experience of the environment help a guide in the mines make the journey pleasant, the test coordinator should bring in his experience of the system, the test process and additional tools to make it worthwhile.
Creating test depth in the room
I already mentioned defining test cases based on using every process at least once. This can be done with test cases proving the system supports the Level 2 processes. I suggest going through every Level 3 process at least once.
As one of the main borders is activities, and every process can consist of more than one activity, you can use the prioritization to decide whether a certain path should be performed multiple times, or whether to combine activities, as if staying in a room for a longer period. As an example, a process can consist of creating, maintaining and deleting data.
You could derive three similar paths from this, or you can combine them in one path. If you combine them in the same flow, you have to be sure that the next process is not of the same priority and that the mutation of data has only a minor effect on that process.
Mining the Business Process Chain Testing
In my opinion the approach of pillars and rooms is very useful for defining a set of test cases to support Business Process Chain Testing (BPCT). You approach the system from a business point of view and measure whether the system supports the organizational processes. Based on importance and risks, you can define the depth of the test cases. Using a structure of pillars and rooms, you are also able to tell the business what kind of test approach you have and how you make sure the important things are done. Based on the activities which have to be performed, you can also tell which resources you need from the business to perform the test cases.
Note: *1 William A Hustrulid and Richard L. Bullock (2001). Underground Mining Methods: Engineering Fundamentals and International Case Studies. Society of Mining Engineers, p 493-494. ISBN 0873351932.
Posted by Jeroen Rosink at 10:35 AM 0 comments
Monday, December 8, 2008
Life Cycle of a Tester
Somehow we seem to think that for every project a tester should be available. Somehow we expect that tester to have all the experience and skills. Somehow we expect that tester to be the sheep with five legs (I wonder whether this is also an English expression).
I think you need to select the tester you need for that particular moment. With this statement I assume that, next to a product life cycle and a development life cycle, there is also a tester's life cycle.
The basic idea for this thought is:
There are not always testers available who can help you for a whole project. You have to make concessions on the skills of hired testers. This involves a certain risk, which can be avoided by education or by hiring more testers.
You can also accept the life cycle of a tester and acknowledge that a tester need only be available for a period of time. Some simple steps could be:
1. Divide your test process in several phases;
2. Define for every phase the skills and expertise you need;
3. Prepare a time frame when you need those skills;
4. Start searching for those skills;
5. Find the testers who are able and willing to help you;
6. Adapt a process of monitoring the needed skills and needed testers;
7. Prepare some overlap and knowledge sharing.
Benefits:
As you are able to find those different testers, you are able to charge or pay different rates. Another benefit is that you have the tester you need at that moment. You save time on education and you don't make concessions on quality.
I can imagine that you have several processes like:
1. Test management
2. Test preparation
3. Test specification
4. Test Execution
For these processes you might have different persons with different skills.
Of course there are other "somehows". There are other processes, and sometimes the testers are simply not available. It is just a thought. Feel free to leave a comment; perhaps we can share thoughts.
Posted by Jeroen Rosink at 8:26 PM 0 comments
Labels: Ideas, Testing in General
Wednesday, December 3, 2008
Link: Testing challenges by Matt Heusser
In a previous post I already mentioned some good questions asked by Michael Bolton on Matt Heusser's weblog Creative Chaos.
Matthew has created a testing challenge which is worth paying attention to.
For more info see:
A new testing challenge
New testing challenge - II
New testing challenge - III
New testing challenge - IV
New testing challenge - V
Perhaps you can put some thought in the basket?
Posted by Jeroen Rosink at 7:59 PM 0 comments
Labels: Fun, Testing in General
Sunday, November 30, 2008
Recession in projects
Somehow things are not going that well in the global economy. Some people blame it on the financial institutes. Some groups point to shareholders who only want to make profit without accepting the risks. I can imagine there is also a minority who think this is the result of globalization. And I'm afraid there are also people who keep asking questions like: Is something wrong? What is happening? What are you afraid of? Did I miss something?
Of course I cannot deny something is happening. Only it happens all the time, everywhere. It also happens in our projects.
We too have stakeholders who ask for profit and don't accept risks. We too have different groups playing different tunes. We too rely on our issue registration tool as if it were our personal Wall Street. And if something goes wrong, we too spend more time finding the person to blame instead of working together towards a solution.
At the global level we see financial institutes combining forces to stop the trend from getting worse. Will this work out? I'm not sure. At least some decisions are being made.
And I like it when decisions are made. Only then can you judge whether they were valid or invalid. In both cases you can continue working based on those decisions. We have to keep in mind that behind every decision a risk is hiding. We should either try to control that risk or accept it.
Still, there is one risk I think we should not accept: the risk of the voices of strong people who set the tone of how a situation feels.
I might be wrong when I say there are people who claim it can only go downhill from this point on. If those voices are too loud, it will go wrong. Or what about voices claiming it can only get better? If we ignore the signals, it might get even worse than it already is.
The same can happen in projects. Things can go well, or the other way around. I think that if there is a euphoric tone, we need people who question that feeling. They should not be ignored or blamed for their opposing sound; they should be heard. Otherwise our goal becomes a recession in our test process to clear the road for improvements once the signals have become obvious.
On the other side, negative signals should be weighed against the benefits of a project.
I can imagine that if we keep relying on metrics which support those feelings, and ignore judgments based on experience, we might be ignoring risks. Ignoring these risks withholds you from explicitly accepting or declining them.
Initially this sounds good, as a decision is made. And making decisions makes the project move somewhere. If this is done, we should be aware that we are accepting the unknown unknowns. As long as we are aware of this, we can adapt the context of our thinking about the project and improve next time, by monitoring which unknowns become known and what their impact is.
I suggest we beware of noises in projects, avoid a recession, make decisions even if we don't know which unknown risks there are, stop blaming people for not doing their job right, and use that energy to move the project in a controlled direction.
Posted by Jeroen Rosink at 8:09 AM 0 comments
Labels: Metaphor
Saturday, November 29, 2008
Open System Thinking and Software Testing (8)
This is a continuation of the postings in the category Open System Thinking and Software Testing. For the previous post you might check out: Open System Thinking and Software Testing (7)
For defining the items and investigating how those items relate to each other, I'm still working on the micro level: the test project. (See Open System Thinking and Software Testing (1))
As mentioned in the previous posting, the same steps can be followed:
1. Define the general meanings of the categories on meso level;
2. Identify the items per category: Goals, Technology, Culture and Structure;
3. Fill in the quadrants;
4. Define how the items are weakening or supporting each other;
5. Define sentences describing how this empowering/weakening is done;
6. Define possible solutions for how to monitor, or define new improvement suggestions.
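As a minimal sketch of steps 2 to 5, the items per category and their mutual relations could be captured as simple data so that the strongest tensions surface first. Every item and weight below is a hypothetical illustration, not part of the model itself.

# A minimal sketch: items per category (step 2) and their supporting (+1)
# or weakening (-1) relations (step 4); all content here is hypothetical.
CATEGORIES = {
    "Goals":      ["release on a fixed date"],
    "Technology": ["new test tool"],
    "Culture":    ["blame-free reporting"],
    "Structure":  ["distributed test team"],
}

RELATIONS = {  # (from item, to item): effect
    ("new test tool", "release on a fixed date"): -1,   # learning curve costs time
    ("blame-free reporting", "distributed test team"): +1,
}

# Step 5: turn each relation into a sentence, weakening ones first.
for (source, target), effect in sorted(RELATIONS.items(), key=lambda kv: kv[1]):
    verb = "weakens" if effect < 0 else "supports"
    print(f"'{source}' {verb} '{target}'")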
To define the focus of general meanings of categories you have to consider the basis of an organization: Vision, Mission, Strategy and Objectives.
Based on the vision of one or more persons, a mission is defined. That mission is the goal of an organization. To comply with this mission, a strategy is developed. To meet that strategy, objectives are defined. If one of these objectives is not yet met, it is often captured in a business case, which becomes the basis of a project.
I think we need to make a decision here. To use this model you can take two approaches:
1. Define the basic context (general meanings) and, after performing the defined steps, create heuristics which you want to act on;
2. Start by defining heuristics and use them as a basis to perform the steps.
For more information related to heuristics, I refer to documentation and weblogs from James Bach, Michael Bolton, Cem Kaner and Brett Pettichord.
Documentation:
Brett Pettichord: Schools of Software Testing
The Seven Basic Principles of the Context-Driven School
James Bach, Rapid Software Testing
James Bach, Rapid Software Testing Appendices
Posted by Jeroen Rosink at 10:08 AM 0 comments
Labels: Ideas, Open System Thinking and Testing
What questions to ask when starting testing?
This morning I ran into A New Testing Challenge related to software testing on Matt Heusser's weblog Creative Chaos.
In his log he challenges testers to solve a testing problem based on types of products and some existing laws. Of course this is a nice challenge to work on, on a Saturday morning when the kids are asleep. Only one of the best parts of this posting is the comment Michael Bolton gave: instead of solving the problem, he asked whether he was allowed to ask some questions.
To me those questions can be used every time before starting testing.
The questions Michael Bolton asked are:
0) Is it okay if I ask you some questions? Assuming Yes,
1) Do you want a quick, deep, or a practical answer to the question, "How would you test this?"
2) Has anyone else tested this?
3) What's my timeframe?
4) Is it Sunday? When will it next be Sunday?
5) What are, in your estimation, the most serious risks?
6) What resources are available to me?
7) Who else can I talk to about this? Clerks? Customers?
8) If I see violations of laws other than the ones you've set out, are you interested?
9) What are my references for correct prices, categories, sale items, and so on?
10) What do you want my report to look like?
What I liked about these questions is how they reflect the test process. With 10 questions Michael Bolton defines the boundaries: how much effort and depth should be taken into consideration, whether experience with the product is already available, what the customer values in terms of risks to avoid, and how the customer expects the tester to act.
In my opinion, one of the strengths of these questions is that every answer from the customer can trigger you to ask another question.
I can imagine that based on these 10 questions, and some supporting questions, you are able to get enough information within an hour.
Posted by Jeroen Rosink at 8:54 AM 2 comments
Labels: Testing in General
Monday, November 17, 2008
Roles in Software Testing
Perhaps I'm the only one who wonders why we are making testing so complex. Whenever a new approach is defined, new roles are introduced. When new books are written, new roles are explained. Even when people describe themselves on a networking site like LinkedIn, they express themselves in different roles.
When I started in the software testing business I was aware of the following roles:
- Test Analyst
- Test coordinator
- Test manager
- Test tool specialist
Over the years I also noticed that in TestFrame you had roles like:
- Navigator
- Analyst
Other roles I was aware of are:
- Test consultant
- Test Professional
- Test tool consultant
- Test tool developer
- Test team leader
In TMap Next they introduced:
- Test project administrator
- Test Infrastructure coordinator
In a presentation about Model Based Testing which I recently attended, they introduced the role:
- Test constructor
And there are roles all over the place like:
- Agile Tester
- Security Tester
- Usability Tester
- Test Oracle
- Software Tester
- Requirement Tester
- Test Architect
All these roles leave me a bit confused. What should I best call myself?
It sounds a bit stupid to me when I introduce myself as:
Jeroen Rosink: Test coordinator, agile tester, test architect, test consultant, test analyst, Software tester, Test Idiot.
I think other people would have lost me by then. Perhaps we have to go back to basics.
1. Tell people what you are: e.g. Software tester
If there is room for further clarification you can continue explaining:
2. What your specialties are: e.g. coordination in Agile projects
3. What your knowledge is about: e.g. test strategies, test techniques, functional testing
4. Skills: e.g. TMap, ISTQB, Context Driven Testing, embedded environments, web based applications, and so on.
Only now I wonder: is going back to these basics enough, sufficient? Does it tell enough? Does it bring us what we need, or better, what the customer needs?
If we go down this road we are on the edge of introducing more certifications, as everyone wants to be recognized for their specialism. At least it might simplify the way you introduce yourself. It could be: Jeroen Rosink: 10 out of 21 certifications (assuming there are already that many certifications related to software testing).
Only, does this number tell anything? Imagine you have to explain to an outsider what you do for a living. I always tell people proudly that I'm a software tester and I test software. Although people don't understand that, I'm sure they can picture it. At least it is better understood than Test Analyst or Test Constructor.
So perhaps you, reader, can explain to me why we insist on making things so difficult? Or do you have other examples of roles in our field of expertise?
Posted by Jeroen Rosink at 9:05 PM 0 comments
Labels: Process, Testing in General
Thursday, November 13, 2008
Yesterday, I met some of my heroes
On November 12th 2008 I attended the EuroStar 2008 conference.
Of course I went to visit some presentations and gathered some information at the stands. Also important was seeing current and former colleagues.
As some or perhaps most of you know, the Netherlands is a small country. When it is rainy, which it currently is at this time of year, traffic jams are something you can count on. I did count on them. Only I should have counted on more than just half an hour.
I calculated half an hour of additional time for traffic jams, 5 minutes for registration, 15 minutes for a coffee break including showing my face at the stand of my employer Squerist, and then straight on to the first keynote.
Only I needed all this calculated time just to arrive at the location. And on the quest for some coffee I came into the room where the keynote had already started.
The keynote given by Randall Rice was about Trends That May Shape Software Testing, which gave me some ideas about how the future may relate to SOA, energy, etc.
Almost at the end (I skipped the outsourcing part) my need for coffee beat my interest in outsourcing. Only the coffee bar was closed, so I had to get some from the restaurant. (I expected coffee to be free the whole day; how wrong I was.)
Drinking my coffee within 3 minutes, I walked to the stand of Squerist. On the way I saw a familiar face: Egbert Bouman, author of a book related to software testing, SmarTest. I like this book because it presents another approach to software testing, in which the business takes a central place, and it compares this approach with TMap, ISTQB, TestFrame and TestGoal without claiming they are right or wrong; he simply explains how they do it. He was the first hero I met that day. Why is he one of my heroes? He has something to say and is willing to share it with others, he listens to you, and he still recognizes you after you have talked to him.
Arriving at the stand I saw some of my colleagues. When they read this, they will know that I count them among my personal heroes too, as they are giving me the chance to become what I'd like to be :) (I'm thinking this is getting a bit mushy.) So I ran to the presentation about model based testing by Elise Greveraars: Tester Needed? No Thanks, We Use MBT!
It was quite a good presentation, only I now have some doubts and questions about Model Based Testing. Why do we need yet another new role for a tester? Why is Model Based Testing an answer if we can't use the models the code is created from, because if they contain errors we recreate those errors in our test cases? Why do we always need tools? Why not start thinking first and keep it simple, so everyone can understand it?
After this presentation there was a short break. I went again to the dungeons of the building. Yes, literally to the dungeons, as that is where the exhibitor section was. And there can be fun in the dungeons: there were some former colleagues I had worked with and some more new people to talk to. Even a long-lost former colleague who went to Finland to seek his fortune. (Rolf, it was good to see you again!)
I had to hurry to be on time for a workshop given by Michael Bolton: Heuristics: Solving Problems Rapidly
He is good!!!! As he writes on his blog about the "Heuristics Art Show, EuroSTAR 2008": it was a show. And I'm glad I didn't miss it. Michael has the gift of making the crowd feel happy with his examples and his enthusiasm. He has the gift of turning that happy feeling into interaction and understanding. At least he provided me with some guiding directions to think, read and discuss more about heuristics. Michael: thank you for this show. At that point you were already well on your way to becoming one of my heroes.
I had intended to meet him over lunch. Somehow I wasn't the only one in the building with that idea. So I didn't skip lunch, and ate my sandwich alone :)
During lunch I spoke briefly with him, only he wanted to attend another presentation, with the promise that we would meet each other later on. I forgot that I had planned to go to the presentation by Graham Freeburn: Make Your Testing Smarter - Know Your Context! And so I also forgot to go.
So I spent my time meeting other people, among them a person I had also met earlier this year: Derk-Jan De Grood. He is the author of a book called TestGoal. This is another great book, which is also available in English. It does not offer a new method or technique; it provides guidelines for how testers can contribute in their job, and a framework to structure that contribution. (I couldn't resist looking at the demo and will mail you my findings.)
After a while Michael came down the stairs, entering the dungeon together with Graham Freeburn and another person whose name I forget. Michael and Graham took the time to explain the meaning of certification, why testing the tester is more useful, and how to ask correct questions, giving us a context to think in.
Graham, thanks for that hour (or was it more?). To you, Mr. Michael Bolton, thanks for taking the time to gently push me in a direction to think more, and further, about software testing and its context. I recommend others to follow his blog too.
Almost at the end of the day, with no point in driving home as there would certainly be traffic jams all over the country, I attended the presentation of a book related to Agile testing by Anko Tijman and Eric Jimmink: Test2.0. Besides the presentation there was also food: plain good "snert" (Dutch pea soup) with bread.
Posted by Jeroen Rosink at 10:06 PM 0 comments
Labels: Conferences
Monday, November 10, 2008
"Schools of testing" are evolving
The first time I read about "schools of testing" I already posted comments about it:
Wednesday, February 20th 2008: Schools of testing, can you decide?
Sunday, February 24th 2008: Testing Schools and Organizational Schools
To understand this topic you might view the presentation of Bret Pettichord: Schools of Software Testing
Currently the discussion is still going on about this topic on several weblogs:
Cem Kaner:
Friday, December 22nd, 2006: Schools of software testing
Paul Gerrard:
Thursday, February 21st 2008: School's Out!
Monday, February 25th 2008: Clients, Contexts and Schools
Tuesday, November 4th 2008, Schools of Testing - Go Away
Thursday November 6th 2008: Labels, Stereotypes and Schools
Friday, November 7 th2008: I'm Not Ready for School... Yet
James Bach:
Wednesday, February 20th 2008: The Gerrard School of Testing
Wednesday, November 5th 2008 : Schools of Testing… Here to Stay.
Michael Bolton:
Thursday, November 06th, 2008: Schools can go away... when we all think alike
Monday, November 10th 2008
In very general terms these articles are about the need for schools of testing. Do they exist? Do we need them? Or can we solve it using the correct heuristics and axioms?
In my posting related to organizational schools I already mentioned the history of such schools. Those schools also evolved over time: we kept learning, adapting to the current situation, and finding new approaches. This didn't mean that the earlier approach was wrong or useless; I think there was another situation which needed another solution.
In history we see more of these examples. You can take examples from psychology (History of psychology), sociology (History of sociology) and also politics (List of forms of government).
So my thought is that we cannot avoid "schools of testing". It is in human nature to create groups.
From Wikipedia:
Group:
In sociology, a group can be defined as two or more humans that interact with one another, accept expectations and obligations as members of the group, and share a common identity
The main reasons for the creation of groups are the need for interaction and the sharing of a common identity.
I think we as testers have done nothing else over the last decades. We also want to interact with other people; not only with people from our own group, but also with people from other groups like developers, managers, users, etc.
As those other groups also have their own behavior, testers adapted their approach towards those groups based on their needs.
In some organizations a Quality approach is more appropriate than an Agile approach.
To support these approaches we have methods like TMap, TestFrame, ISTQB and SCRUM.
This raises the following question for me: are we capable of supporting an organization properly when we are experts in one of these methods and the need is for another "school approach"?
This made me think about where to position the Context-Driven approach. As far as I understand it, it looks for the context which suits the customer best within defined boundaries.
As it might be impossible to be an expert in all methods, thinking within the right context might be an answer. The organization might have chosen a method. Because this method is carried by several people (read: a group), they all want to make it a success. Using this as context, you would be able to be successful even though you are not an expert in Agile or TMap or whichever method it is. You are able to adapt yourself to the needs of the organization. Perhaps I'm wrong here.
To get this picture clearer, I hope to meet Michael Bolton at EuroStar next Wednesday; he might explain it to me.
Posted by Jeroen Rosink at 8:28 PM 0 comments
Labels: Testing Schools
Tuesday, November 4, 2008
SCRUM: definition "DONE" and the influence of a tester
What I hear and read about the impact of SCRUM on testing ranges from success stories to fear of using SCRUM. When using SCRUM, a tester might have to change his perception of testing. In these kinds of projects requirements are not that solid and documentation is not that final. How are we able to test? How can we, as testers, influence a development process like this?
There are some ways we can go; here are some of my ideas.
I think the power lies in the concept that the team is responsible for the delivery of a potentially shippable product. The first thing that comes to my mind is: make sure you are on that team. Become one of the responsible persons yourself. Don't be the outsider test coordinator who sits down and waits until everything is delivered.
When you are on the team you can discuss one of the concepts of SCRUM: the definition of "DONE". If it is important that documentation is available and conforms to certain standards, you have to bring this up for discussion and see whether spending time on good documentation fits the goals of the organization. You can do the same with requirements.
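As a minimal sketch of such an agreement, with hypothetical criteria, the definition of "DONE" can be written down as an explicit checklist that the team evaluates per backlog item:

# A minimal sketch (hypothetical criteria) of a definition of "DONE" the
# tester negotiated into the team agreement, checked per backlog item.
DEFINITION_OF_DONE = [
    "code reviewed",
    "unit tests pass",
    "acceptance criteria tested",
    "documentation updated to agreed standard",
]

def is_done(completed: set[str]) -> bool:
    """A story is DONE only when every agreed criterion is met."""
    return all(criterion in completed for criterion in DEFINITION_OF_DONE)

print(is_done({"code reviewed", "unit tests pass"}))  # False: testing missing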
I'm sure there are more opportunities to help the team, the organization and yourself. I suggest starting with these two basic steps first.
Posted by Jeroen Rosink at 6:02 AM 1 comments
Labels: Agile, SCRUM, Test Methods
Monday, November 3, 2008
How to deal with "Ecoliability"
Do you like the idea?
I recently started to write about "Ecoliability" as a new quality attribute. Although I'm certainly not the person who is able to dictate what is right or wrong, at least I can use my weblog to share my thoughts. Perhaps there are others around the world who also like the idea of this new quality attribute.
Why the need for this attribute?
In previous postings I already referred to the trend of focusing on data storage to save energy, and to special forums explaining the benefits of virtual machines. See:
04-07-2008: The green part of development and software testing: "Ecologiability"
11-02-2008: Introduction of the quality attribute: "Ecoliability"
Currently environmental awareness is already getting much attention. Discussions are held about how to store data or how to use machines efficiently. Why not make this explicitly important and measure the outcome by assigning those figures to an individual attribute?
Why not use an existing quality attribute?
In my opinion you could capture this green aspect under several other attributes, like:
Efficiency: time behavior and resource utilization: e.g. how much CPU do you need?
Portability: replaceability: e.g. how easily can an old machine be replaced?
Only: is this enough? If we capture the eco aspect under those attributes, it will just become a part of the whole picture. If we make a single attribute for it, we can measure it as part of the requirements. It might help in making decisions, and it also makes it visible that an organization cares. As an example, an organization can demand that the new application reduce data storage by 10%.
What to measure?
I think there are enough objects we can measure. Only we have to measure with sense. And as technology is evolving all the time, and ecological common sense with it, I suggest making this attribute the first dynamic quality attribute. It should be dynamic within borders, though; otherwise we get a situation like we now have with the name Agile development: as long as it fits our picture, we call it Agile.
As said before, objects to measure can be:
- batch jobs are scheduled on a need-for-information basis, not only because we just like them to run
- data is stored because we need that data now, not because we possibly might in an unknown future
- database tables are defined based on need, not on ease of use or as a workaround
- environments can be mirrored in a virtual environment
I can imagine that after making this explicit in the requirements, they would look like:
- Data storage is reduced by 10%
- CPU usage is reduced by 15% on machine "XYZ"
- The environment should consist of a maximum of x machines
- Backups are created based on business-critical data only
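As a sketch of how one such requirement could be checked during the test phase, assuming the psutil library and a hypothetical hook that starts the system under test (the baseline and threshold values are assumptions as well):

# A sketch: check the requirement "CPU usage is reduced by 15%".
import psutil

def start_nightly_batch() -> None:
    # Hypothetical placeholder: kick off the batch under test.
    pass

BASELINE_CPU_PERCENT = 40.0   # measured on the previous release (assumed)
REQUIRED_REDUCTION = 0.15     # requirement: reduce CPU usage by 15%

start_nightly_batch()
measured = psutil.cpu_percent(interval=5)   # average CPU load over 5 seconds

limit = BASELINE_CPU_PERCENT * (1 - REQUIRED_REDUCTION)
print(f"measured {measured}% against limit {limit}%:",
      "pass" if measured <= limit else "fail")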
Which Possible Dynamic Borders?
Perhaps the borders can be related to:
- Infrastructure: e.g. which machines are we using and how are the connections defined?
- Datastorage: e.g. what is the lifecycle of data? What is the level of redundancy?
- Functionality: e.g. how does the functionality support the infrastructure and data storage borders?
When to measure?
I think it can be measured in all stages, from unit test to user acceptance test.
The question is: when do we start measuring?
I dare you, reader, to think with me about how this could work.
Posted by Jeroen Rosink at 9:12 PM 0 comments
Labels: Ecoliability, Ideas, Testing in General
Sunday, November 2, 2008
Introduction of the quality attribute: Ecoliability
In a previous post (7 April 2008, The green part of development and software testing: "Ecologiability") I already spoke about introducing a new quality attribute called "Ecologiability".
In this post I want to change the term: as a non-native English speaker I have difficulties pronouncing that word. Instead I would introduce "Ecoliability".
I still believe that in software testing we should consider this an area to focus on, as recently more and more articles are being written about green computing: servers which use less energy, choices to use virtual environments, etc.
http://www.greenercomputing.com/current
There are even discussions about introducing a badge for power efficient servers.
Energy Star for green servers to come this year
If the green part of applications is worth discussing, then it should also be measured. And if it can be measured, why not make a quality attribute of it?
You can use this attribute for testing infrastructure, and perhaps in the near future applications will also be designed to use fewer resources.
Therefore I suggest: if we want to be greener, name it and make it measurable. An approach for this is defining the quality attribute "Ecoliability".
I can imagine there are arguments for measuring this under other quality attributes like Efficiency. Measured there, it just becomes part of an attribute. I think our environment is too important to be just a part of something. Let us make it explicit!
Posted by Jeroen Rosink at 8:29 AM 0 comments
Labels: Ecoliability, Ideas, Testing in General
Saturday, November 1, 2008
Destructive vs Non-destructive testing
Do you have a destructive mind?
Now and then I have discussions about the purpose of software testing, and mostly I get the feedback that testing is all about finding defects. Not just finding defects in a controlled way: finding defects in every corner. Without a found issue, you didn't test well enough.
The game is then about having a mind that enables you to find those defects which "destruct" the application and/or process. The conclusion of this way of testing: if a defect is found, you cannot use the application.
Proof that it works
Another way of testing is to prove that the functionality works. In this approach you are not focused on finding defects; the focus lies on proving that, for that part of the functionality and under those conditions, no defects are found. The conclusion of this approach: if no defects are found, you can use the application under those conditions.
Which one is wrong?
The definition of testing according to the ISTQB glossary is:
Testing: The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
Based on this definition there is room for both:
1. demonstrate fit for purpose: prove it works
2. detect defects: find those defects
In my opinion, demonstrating fitness for purpose can only be done by non-destructive testing; using a destructive approach you are more likely proving it doesn't fit. Detecting defects can be done with both approaches.
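To make the distinction concrete, here is a minimal sketch in Python; the withdraw function and its limits are hypothetical. The non-destructive test proves the function works under the defined conditions, the destructive test pushes beyond them.

    # Hypothetical function: withdrawals from 0.01 up to a 1000.00 limit.
    def withdraw(balance: float, amount: float) -> float:
        if not 0.01 <= amount <= 1000.00:
            raise ValueError("amount out of range")
        if amount > balance:
            raise ValueError("insufficient balance")
        return balance - amount

    # Non-destructive: prove it works under the defined conditions.
    assert withdraw(500.00, 100.00) == 400.00

    # Destructive: push far beyond the defined conditions and expect the
    # application to defend itself rather than misbehave.
    try:
        withdraw(500.00, 1_000_000.00)
        raise AssertionError("no defect found by the destructive test")
    except ValueError:
        pass  # rejected as expected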
Nondestructive testing vs destructive testing
Wikipedia also mentions these definitions, though not focused on software testing:
Nondestructive testing
Destructive testing
The latter explanation gives conditions for when a destructive testing approach is suitable:
Destructive testing is most suitable, and economic, for objects which will be mass produced, as the cost of destroying a small number of specimens is negligible. It is usually not economic to do destructive testing where only one or very few items are to be produced (for example, in the case of a building).
Is it economical to destruct your application?
It depends.
According to the definition above, you have to decide whether there are many pieces or just a few, and whether destruction is economically justifiable.
Example: you are buying a car with the purpose of driving from A to B. Are you a driver who tries to drive 120 km/hour using only first gear? You already know this will stress the engine. But is the car built to be driven that way? I don't think so. You use the car as you intend to use it and check whether it behaves as expected.
Will a destructive approach help you buy the car? At least after such an approach, no one will buy that car for that price.
Example: if you want to buy a tent, one of its purposes is to give shelter under certain conditions. You don't bring a bucket of water to prove it is water resistant; you trust that it is, as this is one of the main functions of a tent. What you can do with the means at hand is pull the frame to simulate a "storm". If it stands, you will be happy. If not, you continue searching for another tent.
Will a non-destructive approach help you here? Yes, as you can check for space, color and usability.
Will a destructive approach help you here? Yes, only you are using controlled force, based on the expectations you have of that tent.
In software development we often don't have multiple pieces, though we are able to create more, or to correct the damage. But this costs extra money.
Should we avoid a destructive approach to testing?
As destruction costs money, we should decide up front whether we have the budget to recover the damage. And that means not only recovering during development (the project), but also recovering once we are already in production.
When used well and wisely, a non-destructive approach already provides enough information that the chance of errors in production is limited, as long as the system is used properly or as defined. In that sense a destructive approach is not really necessary. But normally the time for testing is limited, so the chance that everything is tested as defined is small. This might be the main reason to also use a destructive approach.
If destructive testing applies a certain amount of force, we should apply it in a controlled way, or not at all.
Approaches
As the goal and perception of testing differ, you should explicitly mention your approach in the test plan. Based on this approach you can define your strategy. Here are some approaches:
1. testing using a non-destructive approach
2. testing using a destructive approach
3. testing mainly focused on non-destructive testing, with a destructive focus in parallel
4. testing using a non-destructive approach and, when a certain coverage is not reached, switching to a destructive approach to prove that, despite the positive results so far, the system is not yet mature
5. testing using a destructive approach and, when a certain maturity of the system is reached, switching to a non-destructive approach.
Based on the chosen approach you can decide which techniques fit best and which metrics you steer on.
Which one to use?
There are arguments for each approach. I think it is important to inform the organization which approach you want to use based on arguments, not just because you like that approach.
Posted by Jeroen Rosink at 8:26 AM 0 comments
Labels: Metaphor, Testing in General
Saturday, October 25, 2008
If tools are the answer for everything
Managers often claim they want to automate the testing process or the test execution.
Lately, at seminars and conferences, tool suppliers have been showing their solution to all kinds of questions.
Sometimes testers are able to convince them that using tools can only be fruitful when the process is in place.
Occasionally I wonder whether the cost pays off.
Are tools indeed the answer for everything?
When I started working in the software testing business, there were only a handful of test tools available. In certain situations they actually worked. In most cases they became shelf-ware.
If you start searching for tools to support software testing now, you need more than two hands to count them. This raises a question: is there no tool which can help us out? Normally I would say: it depends. In this case I tend more to the statement: if there are so many tools available, then there is apparently a market for them. And if there is a market, it has to be based on the principle "my tool is better than yours", while every other supplier says the same about your own tool. With such numbers, the chance that you find a tool that fits your needs shrinks, as the chance of distraction increases. This distraction might result in a wrong tool selection, which results in creating shelf-ware, or in an uncontrolled tool implementation.
There is no way back
Perhaps I already mentioned this statement before, but I keep asking myself this question every time automation comes up. A former teacher told me that "once you start automating there is no way back."
If you select a tool and start using it, you should have defined a plan for its ROI. And even if you manage to embed the tool successfully in your organization for that moment, you become limited in your flexibility to improve your processes, as they have to fit your tool choices.
Because of this, the tools in use might withhold you from improving further.
An option for the future?
What I would like to see is tools becoming more flexible, perhaps more SaaS-like.
Give the organization the option to choose:
- which UI to use
- which functions to use
- making the application able to use every function you need.
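As a thought experiment, such a function-based tool could look like a small plugin registry in which the customer activates only the functions that are licensed. A minimal sketch; everything here, names included, is hypothetical.

    # Hypothetical sketch of a function-based test tool: the customer
    # activates only the functions they pay for; others can be added later.
    from typing import Callable

    registry: dict[str, Callable[[], None]] = {}

    def function(name: str):
        """Register a tool function under a name the customer can license."""
        def wrap(fn: Callable[[], None]) -> Callable[[], None]:
            registry[name] = fn
            return fn
        return wrap

    @function("test_management")
    def test_management() -> None:
        print("planning and tracking test cases")

    @function("defect_tracking")
    def defect_tracking() -> None:
        print("registering and reporting defects")

    licensed = ["test_management"]   # the functions this organization pays for
    for name in licensed:
        registry[name]()             # only licensed functions are activated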
Possible benefits
As organizations don't need all the functions in current test tools, the tools can be cheaper: the customer pays only for those functions he actually needs. The need for functions can also be incorporated in an improvement plan, which helps to plan the ROI over the years. More importantly, the customer can decide what the lifetime of certain functions is, and if the testing process changes, they can use the same tool with different functions.
Perhaps tool vendors should answer the question of how they can support future benefits instead of only the current need, and how they are working on supporting an organization in using not only their tools but also other solutions. Tools should answer a need and support an organization in reaching its goals, not only its current processes.
So, tool vendors: support us with tools that give us the ability to choose functionality, whether it is yours or someone else's, to make us successful with our solution.
Posted by Jeroen Rosink at 10:45 AM 0 comments
Labels: Tools
Saturday, October 4, 2008
Ignore priorities: Transport Driven Testing
I think we are used to working with priorities for what to test and what not to test. Based on priorities we also demand delivery of functionality by development.
Business priorities are also used to schedule which functionality development should deliver first, to make sure the items with the greatest business gains are delivered first.
Sometimes functionality is also delivered simply because it was easy to fix.
In SAP the term "transports" is used instead of terms like "deployment", "delivery", "shipment", etc.
To control and schedule transports in SAP you can use SAP Solution Manager. A benefit here is that, based on a status, you are able to import your transports from the test environment to the production environment.
To build a solution in a transport, you attach it to a Solution Manager ticket. A ticket can contain one or more transport numbers. With these transports you make modifications to SAP objects. Each newer ticket might contain modifications to the same objects, so a newer version of those changed objects then exists in the library.
Here comes the difficulty:
Imagine that ticket "A" contains a small modification with lower business priority that was quick to fix, and ticket "E" contains a high-priority solution.
What if no tester is available to test ticket "A", while you are already testing ticket "E", which came later in the transport list and contains modifications to the same objects? If you confirm that ticket "E" is correct, ship it to production using an emergency transport, and later import ticket "A", then the newer version of the objects is overwritten by the older one. This might result in a situation where the solution in ticket "E" no longer applies.
To prevent this from happening, you have to force the tester to also test ticket "A" before an import of objects is made. You are now in a situation where imports are no longer based on business need but on transport need. In this case the ticket defines the priority for testing. In other words: the location of a transport in the transport list, and no longer the business need, defines which ticket has the highest testing priority.
In SAP Solution Manager you can use the Import All List to monitor which transports should be tested first. Based on the status of the ticket, like "Tested, OK", you can define which transports can be imported without major risk. If there are transports which don't have this status, you can place a STOPMARK before the first of them; all transports after that STOPMARK have to be imported on manual request. This introduces the risk that when, during the next test phase, the business wants those transports and they have to be imported using an emergency transport, you might have to import other objects as well.
In this situation, where the release content is based on business priority, you have to make sure you don't also pull in the easy things that are quick to fix but hard to find a tester for. When the release content is fixed, you have to test not only by complexity and business gain; you also have to test so that the STOPMARK can be placed as low in the list as possible. This means working FIFO (first in, first out): first test and import the objects that entered the list first, as the next one might contain a newer version of the same object.
This means you have to drive test execution from the transport list, so you might call it transport-driven testing.
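As an illustration of this FIFO rule, here is a minimal sketch; the data structures are hypothetical and this is not the Solution Manager API. It computes where the STOPMARK would fall and which tickets must be tested next.

    # Minimal sketch of transport-driven test ordering (hypothetical data
    # structures, not the SAP Solution Manager API).
    from dataclasses import dataclass

    @dataclass
    class Transport:
        number: str       # transport number, in import-list (FIFO) order
        ticket: str       # Solution Manager ticket it belongs to
        tested_ok: bool   # ticket has status "Tested, OK"

    def stopmark_position(import_list: list[Transport]) -> int:
        """Index of the first transport that is not 'Tested, OK'.
        Everything before it can be imported without major risk."""
        for i, t in enumerate(import_list):
            if not t.tested_ok:
                return i
        return len(import_list)

    imports = [
        Transport("T001", "A", tested_ok=False),  # low priority, quick fix
        Transport("T002", "B", tested_ok=True),
        Transport("T003", "E", tested_ok=True),   # high business priority
    ]

    pos = stopmark_position(imports)
    print("STOPMARK before:", imports[pos].number if pos < len(imports) else "none")
    print("Test next (FIFO):", [t.ticket for t in imports[pos:] if not t.tested_ok])
    # Ticket "A" blocks the import of the high-priority ticket "E": the
    # transport list, not the business, now drives the test order.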
Posted by Jeroen Rosink at 8:35 AM 0 comments
Labels: Process
Sunday, September 14, 2008
Magazine: Software Test & Performance
While searching the internet for information I ran into this magazine: Software Test & Performance. I had heard about it somewhere before, but forgot it over time. Although the magazine contains quite a few advertisements, it also contains some good articles. And that is what it is all about.
The current issue can be downloaded from:
http://www.stpmag.com/retrieve/stp-0809.htm
You can order the magazine and have a printed copy sent to your address, or you can download it in PDF format for FREE.
One of their good services is that previous issues are also available for download.
Posted by Jeroen Rosink at 8:59 AM 0 comments
Monday, September 8, 2008
SmarTEST: a newer version
Some weeks ago I read a new book related to software testing called SmarTEST, written by Egbert Bouman from Valori. Though it is written in Dutch, I think it would be worth translating into English as well.
One of the strengths of this book is that it is based on a business approach in which three models lead within the IPS model (Information, Processes, Systems):
- For Information, the IDQ model (Information and Data Quality) is used
- For Processes, the POQ model (Processes and Organization) is used
- For Systems, ISO 9126 is used.
Based on these three models their approach is explained, sometimes as an addition to methods already known in the Netherlands like TMap, TestFrame, TestGoal and ISTQB, in other situations as a new view on our business of software testing.
Another strength is the overviews, such as stakeholders and possible acceptance criteria, or the differences in terminology between TMap, ISTQB and SmarTEST.
Though the business-view approach is the basis of the book, in my opinion they might have paid more attention to the full integration of the quality models and their impact on software testing. Still, an experienced software tester will be able to play with them himself.
Does this make the book bad? I strongly say NO! It will be an addition to your collection, as it gives you enough food for thought. Like: how many W-models are there, and which W-model is good enough? What is your perception of using Agile as a development method? Can other methods be a child of this method, or are they all part of the iterative (evolutionary) approach?
Do we need more books related to software testing? I think so: as our world changes, the approaches change too. Do we need this book? If you are able to read Dutch, I recommend it.
For more information see their site: http://www.smartest.nl/
Details:
Price: € 39,95
Book, paperback / 280 pages / Dutch
Academic Service / 2nd edition / published 2008
ISBN-13: 9789012125970
Posted by Jeroen Rosink at 8:07 PM 0 comments
Labels: Books
Wednesday, August 27, 2008
Reverse Business Intelligence testing
Currently Business Intelligence is quite hot. Lately there have been some shifts among BI-specialized companies, as shown in the following article:
IBM, Cognos, and the End of Best-of-Breed
Perhaps this will make the life of a tester easier, as components will be more integrated. On the other hand it might challenge us more in the near future, as those products might be integrated and placed on the market too swiftly, so that the technology only proves itself once organizations are using the total solutions as they are offered.
What I have learned is that organizations always need their own customizations. Sometimes these are forced by habits in how the business works; sometimes they are forced by law.
Of course there are several ways of testing these systems. Here is an approach which might help to identify whether the reporting tool is able to create the necessary reports based on existing data. (Keep in mind that this is just a simple description of a possible approach.)
Start testing from the information needed, and work backwards through the system to identify your starting point.
This might be reached by the following steps:
1. Identify the KPI's
2. Identify the reports these KPI's are involved in
3. Prioritize these KPI's
4. Start with the most important KPI
5. Identify the data each KPI consists of
6. Identify the source of that data
7. Identify the quality (appearance) of already existing data related to these KPI's
8. Create a test data set based on the appearance of this data
9. Run the reports
The challenge here is that when errors occur, it is hard to say where the error lies: in the reporting tool, in the data warehouse or in the source data. At least we know there are errors giving wrong information about the KPI's. If the location of the error is found and it is related to the (historical) data, it might be better to correct that data instead of continuing with the next KPI, as those errors might also influence other KPI's.
If there are no unacceptable errors, you might continue with the next KPI.
This approach will not cover errors in data created by the system itself, though. For this we might have to extend the test approach with the following steps:
1. Identify the functions of the main system which create or alter this data
2. Identify the data model and architecture related to this data and these functions
3. Define your test cases based on these functions; with this you make sure that data creation is of a certain quality
And continue with testing the next KPI.
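As a rough illustration of steps 1 to 9, here is a minimal sketch; the KPI names, reports and sources are invented, and a real BI landscape would query the report catalog and warehouse metadata instead.

    # Minimal sketch of the KPI-backward walk (hypothetical structures).
    kpis = {
        "on-time delivery": {"priority": 1,
                             "reports": ["logistics dashboard"],
                             "sources": ["orders", "shipments"]},
        "stock turnover":   {"priority": 2,
                             "reports": ["inventory report"],
                             "sources": ["stock levels", "sales"]},
    }

    # Steps 3-9: start with the most important KPI, trace its data back
    # to the sources, derive test data from existing appearances, run the
    # reports, and only then continue with the next KPI.
    for name, kpi in sorted(kpis.items(), key=lambda kv: kv[1]["priority"]):
        print(f"KPI: {name}")
        print("  reports to run:", kpi["reports"])
        print("  sources to sample for test data:", kpi["sources"])
        # run_reports(kpi["reports"]) would go here; on errors, first locate
        # the fault (tool, warehouse or source data) before moving on.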
Posted by Jeroen Rosink at 7:33 PM 0 comments
Labels: Ideas
Monday, August 25, 2008
How much don't we test?
Every time I get involved in a test project there are questions like: how many test cases do you have? Do we test enough? Why do you need that much time for testing? You have to test everything!
To me, those questions are always asked too soon. At the end of a project I can tell for sure how many test cases I have; it is just a matter of counting them. Whether we tested enough can be answered by whether the organization is still happy after going live. Why I need that much time for testing can be explained from the decisions made about the amount of available Money, the needed Quality and the given Time.
And as always it should be:
- Money: Cheaper
- Quality: Better
- Time: Faster
What information would it give up front if we answered the questions about the number of test cases, whether we test enough, and why we need that much time?
I think a more interesting question to answer is what we did not or will not test. If we provide that in terms the business knows, they are able to pick one of those three options.
Posted by Jeroen Rosink at 7:24 PM 0 comments
Labels: Testing in General
Sunday, August 24, 2008
What planet are you from?
Did you ever come into a situation where someone started talking about testing and you asked yourself: "What planet are you from?" Or did someone else ask this question about you?
Perhaps you have the same feeling when you are reading my postings on this blog.
Still, this is quite an interesting question. Once you start wondering about planets, you accept the idea of a universe, and in a universe there are more planets. Perhaps I'm just sitting on another planet than the one you are living on. Still, this doesn't mean we don't have any interaction or need for each other. The moon, for example, has a role in keeping the earth rotating in a steady way.
Wikipedia says the following about Universe: "The Universe is most commonly defined as everything that physically exists: the entirety of space and time, all forms of matter, energy and momentum, and the physical laws and constants that govern them. However, the term "universe" may be used in slightly different contextual senses, denoting such concepts as the cosmos, the world or Nature"
In projects there is sometimes a need for testers, test coordinators, or test management. And often they advertise for those people in general terms: the person should be skilled in a test method like TMap; the person should be ISTQB certified; the person should be a team player; and so on.
As test professionals we present ourselves as experts based on our knowledge and experience in a certain area. Sometimes, perhaps too often, other project members don't understand what we are talking about, though they know we exist and that we also have a contribution to make to the project. It is our task and duty to make them understand how we can influence project success.
This can be a hard task, as our position in projects differs. Sometimes we are at another location, far away from project management and developers; sometimes we work next to each other. In all situations we influence each other's activities. Therefore it is important to know what relation we have to each other. We have to describe the project universe and explain our roles and positions. Perhaps more importantly, we don't need to understand each other in full detail; we have to accept each other's existence and the need for success.
Let us be the moon and the project the earth. Without us the earth would not rotate in a stable way. The earth doesn't need to know how we do it, as long as we can convince them we are in control.
Perhaps the answer to the question "What planet are you from?" is no longer important. We don't need to understand each other in full detail as long as our position is clear. If they insist that you answer this question, you can always say: I'm not from another planet. I'm just part of your universe. I'm the moon who shines bright white light and makes sure your planet rotates steadily.
Posted by Jeroen Rosink at 9:38 AM 0 comments
Saturday, August 23, 2008
RFK and knowledge processing
After writing the posts The RFK process explained and Request For Knowledge (RFK), somehow I got the feeling that I could never be the first with this concept. This triggered me to search for the term I thought I had invented: Request For Knowledge.
Fortunately for me, I got many hits and came upon several interesting documents. I say fortunately, as these articles and presentations confirmed to me that here is another challenge for us testers, or even business analysts, to continue working on my initial idea: the challenge of collecting information in a structured way, where we might define the process to get a grip on risks based on initially missing knowledge.
Although there is lots of information out there, here are some of the documents I found.
On Identifying Knowledge Processing Requirements by Alain Léger, Lyndon J.B. Nixon and Pavel Shvaiko, May 2005
In short, it presents a first step towards a successful transfer of knowledge-based technology from academia to industry.
I think the relevance for us testers here is the usage of use cases to identify knowledge requirements.
Some Applications of Conceptual Knowledge Processing by Prof. Dr. Gerd Stumme
In this presentation a short overview is given of how ontologies are related to tasks, with examples.
What I liked here was the introduction of the term Ontology.
Ontology: "An Ontology is a formal specification of a shared conceptualisation of a domain of interest." T. Gruber, 1993.
Ontologies support, among others, the following knowledge management tasks:
• Acquiring Knowledge
• Organizing Knowledge
• Retrieving Knowledge
BOVIS LEND LEASE ikonnect Facilitated Knowledge Sharing by http://www.knowledgestreet.com/ September 2005
One of the interesting points here is that they specified several roles, such as Seekers and Facilitators.
Another interesting posting I found was the iWorkshop on [a-i-a.com]: iWorkshop™ Knowledge Management and Collaborative Work.
If we think a defined RFK process is needed to get more insight into the risks to the software, then knowledge processing might be a way to go. To me this seems a very complex process, though that should not withhold us from collecting the information and translating it into possible risks.
In one of my articles I showed an example of how RFK's in one area might impact the testing effort in another. To improve this process, it might help to work on a certain ontology, as the timing of the tasks mentioned can influence the way we have to define our strategy. We have to learn to adapt instead of forcing our process onto the organization.
As for the roles we can take, I think we will be both seekers and facilitators.
I know I'm not unique with the term RFK. Still, I believe there are gains to be made by also focusing on this process. It might help us define improvement suggestions for organizations, both before a project and during its evaluation.
Posted by Jeroen Rosink at 8:59 AM 0 comments
Labels: Process
Sunday, August 17, 2008
The RFK process explained
In an earlier post I introduced the idea of RFK's (Requests For Knowledge). The basic idea of monitoring the RFK process is measuring where formal requests are made and how and when they are answered.
In software development processes there are a number of methods and techniques that deal with part of this, like inspections, reviews and walkthroughs. Depending on their character, formal or informal, notes are sometimes made when there is a failure in documentation or a question because something is unclear. Basically this covers the monitoring of knowledge requests only for the part where documentation or requirements are already available.
In my opinion you can start the RFK process much earlier: when the project is initiated. I say "can start" intentionally, as I strongly believe you should not change your development process to fit this RFK process. It should be the other way around: you should define your RFK process based on your development process and your organization.
Basically the steps of the RFK process can be defined as follows:
1. Define a Knowledge-need-landscape;
2. Name the development method;
3. Identify participants and stakeholders;
4. Define the areas where information is needed related to the system;
5. Define the risks and impact of missing knowledge within the method;
6. Define the phases of monitoring the RFK's;
7. Select your toolset for monitoring RFK's;
8. Report on Impact and status of RFK's;
9. Get agreement how to deal with this impact;
10. Adapt your processes based on made decisions related to RFK's;
1. Define a Knowledge-need-landscape
To adapt your RFK process to the organization, you have to identify the locations where a possible need for information exists. To get a clear picture of this, it is important to know what your development process will be. Often you will see that a specific development method is chosen.
2. Name the development method
If project management has not named the development method, you have your first item for the RFK list. In my opinion it is important to make a statement about the chosen development method, as every method has its strengths and weaknesses, and these have to be translated into their impact on knowledge gathering. Often you will see that the weaknesses of a method are compensated for with extra steps, which makes the method a customization. This creates new possibilities for information needs.
3. Identify participants and stakeholders
If the method is known, you will be able to identify the participants of the project. Participants are potential persons to ask for information. There are also stakeholders. If you have identified both, you can to a large extent tell where information should come from. Just keep in mind that stakeholders can also ask for information.
It is important that mutual agreements are made about providing information: how it should be delivered and when.
4. Define the areas where information is needed related to the system
Based on the RFK landscape you are able to tell where information is needed. You might start mapping this to the requirements of the new system. The importance of a requirement can also be used to classify the importance of an RFK; a possible method for this is the MoSCoW method. Next to linking RFK's to requirements, you have to link them to the processes and sub-processes of the system, as I mentioned in my previous post: for some processes they are not that important, for others they are most important.
5. Define the risks and impact of missing knowledge within the method
It is essential to define the risks and impact of missing knowledge. This will help you define the acceptance criteria. If some risks are unacceptable, strong agreements have to be made here, and monitoring should be more firmly in place.
6. Define the phases of monitoring the RFK's
Not all projects need full monitoring; you have to adapt it to the organization as well. If an organization decides not to spend a lot of time on the requirements phase to get requirements defined in detail, you might decide not to monitor requirements during this phase, but only in the next phase, when they become detailed enough. Also keep in mind that besides developing the system, a parallel project is often started to create manuals and courses. Take this into consideration too, as those information requests should match the RFK's in the development cycle. Perhaps new RFK's are initiated there; these can be asked of the system under construction as well.
7. Select your toolset for monitoring RFK's
The toolset is used to register the RFK's. This can be done in a home-made tool, as I don't think there are tools for RFK's yet. You might also create a simple list using a spreadsheet tool. It is important that the tool is flexible, as the need for information based on RFK's might grow during the project.
Another tool might be a flip chart on the wall where you place a sticky note for every request.
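As an illustration of such a home-made register, here is a minimal sketch; all field names are my own assumptions, to be adapted to your development process and organization.

    # Minimal sketch of a home-made RFK register (field names are
    # assumptions; adapt them to your process and organization).
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class RFK:
        question: str
        area: str                   # system area or process it relates to
        priority: str               # e.g. MoSCoW: Must/Should/Could/Won't
        raised_on: date
        answered_on: date | None = None
        risk_if_missing: str = ""   # impact of missing knowledge (step 5)

    register: list[RFK] = [
        RFK("Which interfaces feed module X?", area="interfaces",
            priority="Must", raised_on=date(2008, 8, 1),
            risk_if_missing="integration tests cannot be designed"),
    ]

    # Step 8: report on the impact and status of open RFK's.
    for r in (r for r in register if r.answered_on is None):
        print(f"[{r.priority}] {r.area}: {r.question} -> {r.risk_if_missing}")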
8. Report on Impact and status of RFK's
Besides registering RFK's you also have to report on them. And more importantly, people have to act on the information you provide.
9. Get agreement how to deal with this impact
To make people act on the provided information, it is important to have defined the roles and responsibilities for acting on the RFK impact analysis. If not acting would have a negative influence on the project, you should be able to point people to their responsibilities.
10. Adapt your processes based on made decisions related to RFK's
When RFK's are raised, they can be denied or granted. Either way this has impact on all processes in a development cycle. For instance, if an RFK is granted, this leads to new information. For the testing process you have to identify what impact that has on your test cases. If a certain number of RFK's are raised for a specific area, you have to check what your test effort is in that area. If it was low, perhaps you have to increase it, as that area is a potential source of errors, since information was incomplete or changed over time.
This also means that you might have to change your acceptance criteria during the testing phases.
When I started writing this article, I thought about painting some kind of picture of how this process would look. But that would reduce its adaptability to all kinds of development methods, so I kept it to defining some steps. I hope these help you, and have convinced you (perhaps just in some small way) that RFK's are important too.
Posted by Jeroen Rosink at 10:21 AM 0 comments