Showing posts with label Process. Show all posts

Monday, May 10, 2010

Rorschach, the power of visualization and software testing?

Introduction
I blogged about my experience in Weekendtesting, where I used Astra Site Manager to create a map: WTANZ02: Same Language, different sites and places. In that post, Shrini Kulkarni challenged me to expand on how to use this as a test strategy.

When you look at the images posted there, you might notice that the images look a bit like spots/stains.

Rorschach test
Thinking about spots/stains and deriving information from them immediately reminds me of the Rorschach test.


From Wikipedia: Rorschach test: "(also known as the Rorschach inkblot test or simply the Inkblot test) is a psychological test in which subjects' perceptions of inkblots are recorded and then analyzed using psychological interpretation, complex scientifically derived algorithms, or both."


Below you see an example of a Rorschach image. Are you able to read this picture? Are you able to assign functionality to areas? Do you see bugs?

Image saved from wikipedia http://en.wikipedia.org/wiki/File:Rorschach_blot_01.jpg

Based primarily on their perception of these spots, subjects are asked what they experience, how, and why. What does the spot tell you?

Testing spots

Below you see the two images I obtained from "testing" the two websites, as stated in the challenge from WTANZ02: Same Language, different sites and places.


Just tell me: what do you see?

Image 1


Image 2

Depending on how you look at the images, you might identify some shapes. Perhaps you only see dots or animals. Perhaps you see bugs.


The strategy
Defining a strategy is a challenge in itself. Writing about it and sharing your idea is even more of a challenge. Trying to turn it into a heuristic is more challenging still, as this is quite new to me. So bear with me, support me, and let me teach you so that I can also learn from you.


First steps
I suggest first defining the approach based on patterns. Ask what the image itself can tell you, and what information you need to define the approach.


Imaging: Create a map of the website/functionality to define a certain landscape.
Defocus: Don't approach the image as a system; approach it as a painting, approach it differently. What else do you see? Use your imagination.
Interpret: Are you able to tell a story about what you see (colours, lines, drawings, etc.) and argue for it?
Density: Is there a structure available representing the first impression you had?
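The "Imaging" and "Density" steps can be sketched in a few lines of code. The site map below is entirely made up for illustration; in practice the map would come from a crawling tool such as Astra Site Manager.

```python
# A minimal sketch of "Imaging" and "Density": represent the crawled pages
# of a (hypothetical) website as a graph and look at its shape in numbers.
site_map = {
    "home":      ["products", "about", "contact"],
    "products":  ["home", "product-a", "product-b"],
    "about":     ["home", "contact"],
    "contact":   ["home"],
    "product-a": ["products"],
    "product-b": ["products"],
}

nodes = len(site_map)
edges = sum(len(links) for links in site_map.values())

# Pages with many outgoing links stand out in the picture,
# like the dense areas in an inkblot.
densest = max(site_map, key=lambda page: len(site_map[page]))

print(nodes, edges, densest)  # the densest node hints where to look first
```

Such numbers do not replace the visual impression; they only help you argue about the story the image tells you.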


Next steps
Once you have an overview of what the system could look like, you can play with the following components.


Complexity: Is there some kind of structure? Are there lots of nodes, and are you distracted by them?
Number of objects: Are there so many objects visible that you cannot zoom in without missing details?
Environments: Can the map also be used to identify other systems/secure areas?
Risk areas: Are you able to point out areas of risk in the map, based on "important" functionality?
Process: Is there an order in the structure which might also support a process?


Other steps
With the previous actions, I hope to have provided some additional ideas on how images of a website structure can support defining a test approach. I believe that by looking at images or structures in a different way, you might come up with other concepts and thoughts that support your test approach. The next step could be adopting the newly gained view into your test process. Based on this information you can define alternative test cases, or perhaps a product risk analysis.

It might help to bring some creativity back into testing.

Tuesday, February 3, 2009

Classification of the "Unknown Unknowns"

In my previous post, Schedule the Unknown Unknowns, I talked about Catherine Powell's good post and classification of possible problems from systems in the test schedule: Problem Types. She wrote a follow-up on this, Uncertain of Your Uncertainty, and asked for ideas on the topic. This triggered me to dive into the subject of "Unknown Unknowns".

Although a lot of articles can be found on the internet, the one by Roger Nunn, The "Unknown Unknowns" of Plain English, helped me shed a bit of light in the darkness. There he posted "Rumsfeld's categories: the states of knowledge during communication":

1. Known knowns:
* Things I assume you know - so I do not need to mention them.
2. Known unknowns:
* Things I know you do not know - so perhaps I will tell you, if I want you to know.
* Things you know I do not know - but do you want me to know?
3. Unknown unknowns:
* Things I do not know that you do not know or that you need to know - so I do not think of telling you.
* Things I am not aware of myself - although it might be useful for you to know.
* Things you do not tell me, because you are not aware of them yourself - although I might need to know.

I think the main topic of his article is "communication". He refers to Rumsfeld: "Rumsfeld's winning enigma uses combinations of only two very common words that would fit into anyone's list of plain English lexis."

An article by C. David Brown, Ph.D., PE, Testing Unmanned Systems, triggered me to think in two types of systems: manned and unmanned. Why is this important? Acknowledging the existence of unmanned systems can help you remember to ask questions of those systems as well. Although I have never worked with unmanned systems, I have worked on projects with modules that operated autonomously within the whole system. So there is a good chance that it contains my "unknown unknowns".

C. David Brown writes that unmanned systems can generally be classified into three categories:
- remotely operated;
- fully autonomous;
- leader-follower.

Combining both stories, you have to deal with human communication and system communication to find out about the "unknown unknowns". I think it should be possible to capture those uncertainties in a time schedule by identifying which questions to ask, at what moment, to whom or what, and for which purpose. Of course this sounds obvious. Still, it is hard to tell how to fill in those gaps.

What I often read on James Bach's and Michael Bolton's weblogs is to question the system and question the business; to test the tester instead of certifying the tester. If I'm correct, these questions are raised to obtain mutual understanding and exchange knowledge. The pitfall here is that questions are not asked, because of one of the three reasons mentioned under "unknown unknowns" above.

Perhaps the nature of the system, and its complexity, can help us here.
If a system/function runs fully autonomously, asking the system is much harder, and getting answers from it is harder still. Human interaction has to take place, which introduces another variant of those three unknown unknowns.

Perhaps you can now continue in at least three directions:
1. Keep asking the system and the business, continue learning from each other, and start adapting your test process based on this uncertainty;
2. Define an approach where you are not measuring how much time has already been used, but how much time you need to get things Done (a Scrum approach could help);
3. Another option is reviewing. I think reviewing can decrease the occurrence of unknown unknowns, as the individual expert is no longer the only person asking questions. With reviews you might build up a set of numbers which can help you identify the areas where you have less knowledge, based on the number of questions raised or major issues found.
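As a rough sketch of the third option, you could tally the questions raised per area during reviews; the area names and counts below are made up purely for illustration.

```python
# Tally review questions per functional area: areas that attract many
# questions are candidates for hiding unknown unknowns.
from collections import Counter

# One entry per question raised during reviews (hypothetical data).
review_questions = [
    "billing", "billing", "interfaces", "billing",
    "reporting", "interfaces", "billing",
]

questions_per_area = Counter(review_questions)

# Rank areas by the number of questions raised: the more questions,
# the less shared knowledge we apparently have there.
for area, count in questions_per_area.most_common():
    print(f"{area}: {count} questions")
```

The ranking itself proves nothing; it only points at areas where extra review effort or schedule reserve might be justified.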

For the first two options, I can imagine that it will be hard to take this into your time schedule. If you use the last option, you might take some numbers from the Boehm curve and play with them. The (Dutch) book "Reviews in de praktijk" by J.J. Cannegieter, E. van Veenendaal, E. van de Vliet and M. van der Zwan contains a table defining a simplified model of Boehm's law:

Phase: simplified Boehm-law factor
Requirements: 1
Functional Specifications: 2
Technical Specification: 4
Realization: 8
Unit/Module test: 16
System/Acceptance test: 32
Production: 64

Now you can start playing with these figures. Keep in mind that this will not lead to reliable figures; still, some figures are more useful than no figures.

First: flip them around
Phase: simplified Boehm-law factor
Requirements: 64
Functional Specifications: 32
Technical Specification: 16
Realization: 8
Unit/Module test: 4
System/Acceptance test: 2
Production: 1

Second: if 1 = 1 hour, you might calculate 20% for possible unknown unknowns. In the requirements phase this adds 64 * 20% = 12.8 hrs to the time schedule.

I think you can use this kind of calculation because at the beginning of a project the chance of unknown unknowns is much higher than at the end.

In this situation you accept that every unknown unknown will have the same impact.
You can add a third step to it:
Third: add a classification based on the MoSCoW principle:
Must haves = 100%
Should haves = 80%
Could haves = 40%
Would haves = 20%

With 10 M, 4 S, 25 C and 25 W requirements, this leads in the requirements phase to:
10 * (64 * 20%) * 100% = 128 hrs
4 * (64 * 20%) * 80% = 40.96 hrs
25 * (64 * 20%) * 40% = 128 hrs
25 * (64 * 20%) * 20% = 64 hrs
Total: 360.96 hrs
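The calculation can be sketched in a few lines of code. The flipped Boehm factors, the 20% reserve and the MoSCoW weights are the ones from the text; 1 unit = 1 hour is the same assumption made there.

```python
# Reserve time for unknown unknowns per phase, weighted by MoSCoW class.
boehm_flipped = {
    "Requirements": 64,
    "Functional Specifications": 32,
    "Technical Specification": 16,
    "Realization": 8,
    "Unit/Module test": 4,
    "System/Acceptance test": 2,
    "Production": 1,
}

UNKNOWN_UNKNOWN_RESERVE = 0.20  # the initial 20% from the text
moscow_weights = {"M": 1.0, "S": 0.8, "C": 0.4, "W": 0.2}
requirement_counts = {"M": 10, "S": 4, "C": 25, "W": 25}

def reserve_hours(phase: str) -> float:
    """Extra hours to reserve for unknown unknowns in a given phase."""
    base = boehm_flipped[phase] * UNKNOWN_UNKNOWN_RESERVE
    return sum(count * base * moscow_weights[cls]
               for cls, count in requirement_counts.items())

print(round(reserve_hours("Requirements"), 2))  # 360.96, matching the text
```

Because the factors shrink towards the end of the project, the same function yields a much smaller reserve for, say, the system/acceptance test phase.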

Fourth: There are still a lot of assumptions here. Based on history and complexity, the percentages should be adapted. The initial 20% should be defined based on historical figures. The percentages used in the MoSCoW classification can be defined based on technical complexity combined with the available information. I assume that if a lot of questions are raised, the chance that something is missing will be higher, provided reviews are executed in a proven way.

Fifth: The numbers from the simplified Boehm law can be adapted based on the level of clear communication between people. Perhaps the level of interaction between people can be used to minimize the chance that information is not shared at an early stage.

Sunday, February 1, 2009

Schedule the Unknown Unknowns

Catherine Powell posted a very good and clear article about dealing with possible problems from systems in the test schedule: Problem Types.

She classifies them into:
- Fully understood problems;
- Partially understood problems that can be worked in parallel;
- Partially understood problems that are linear.

What I missed here is consideration of the risk of problems which we are not yet aware of: the so-called unknown unknowns.

Donald Rumsfeld said: "There are known knowns. There are things we know that we know. There are known unknowns. That is to say, there are things that we now know we don’t know. But there are also unknown unknowns. There are things we do not know we don’t know."

I think Catherine's classification covers almost the whole of D. Rumsfeld's saying:
* Fully understood problems -> things we know that we know
* Partially understood problems -> the known unknowns (the things we now know we don't know)

As she mentioned, this can be used to refine the time schedule: the known knowns are already in place, and the fully and partially understood problems have to be considered in order to estimate some additional time for them.

The unknown unknowns are still missing from the classification. If I understood her correctly, she advises to keep investigating and then add them to one of the classifications.

I can imagine that you already reserve time for this in the schedule. For a time reservation you need some solid ground. As we are talking about unknown unknowns, this is certainly missing. Perhaps you can create that solid ground by investigating how complex the system is and how many questions you might still ask of the system. Based on this you can visualize the field of uncertainty.

Explicitly reserving time for this can help project management identify the risk that the schedule will be exceeded. The big challenge is not only to move the unknown unknowns towards the known unknowns section; it will also be a challenge to get this activity accepted by project management.

Thursday, January 29, 2009

What can we learn from history?

When I read books or articles related to software testing, they mostly describe how the testing process should be approached. Others explain how techniques should be used, how test processes can be improved, and how tools can be embedded or used. They mostly approach the testing profession from their own point of view, from the inside of testing. I can imagine this happens because testers have the most influence on just their own area.

What I always try to do is check whether there are other topics to combine with our profession. The purpose of this is not only to deepen my knowledge, but also to extend it. During the last weeks I kept wondering whether something is missing, and whether there is a reason for it.

This triggered me to look for dates and events from history until now, and to check whether there is some relation between them.

In the topics below I will try to see whether there are patterns in history that can be linked to software testing. For this I searched for dates related to science, computing, development methods, the testing eras and management.

History of our society
When I take a look at our history, I found these milestones on Wikipedia that helped build our society:
- The history of mathematics starts ca. 1900 BC;
- Philosophy is first noted ca. 585 BC;
- Sociology: the term "sociology" was introduced in 1780 AD;
- Psychology began as an independent experimental field of study in 1879 AD, though the use of psychological experimentation dates back to Alhazen's Book of Optics in 1021.



I think you can classify this timeline by how we humans approach our environment, as follows:
Mathematics is a technical approach: investigating how items work and how we can improve those objects to improve our living.
Philosophy taught us that there must be value in using objects. Based on that value there must be a dependency between people, as it didn't always work. The culture and social being of people also have an impact on and contribute to the usage of objects and their value. I want to capture this in the term sociology. After a period, psychology was introduced to understand how the human way of thinking impacts the use of objects within the mindset of groups.

Ages of computing
You see a partially similar timeline in the inventions and usage of computers and in development. In the figure below I captured some events from the Timelines of computing on Wikipedia *1 which I think are important and symbolize this process.



Perhaps I'm making a wrong assumption, but when combining the periods of human skills with the speed of invention in the computing area, I see a similarity. First we tend to approach computing in a technical way. After a period it becomes part of our way of thinking. Then technology becomes part of our social being, and currently we are thinking about how technologies can support our way of thinking and how our way of thinking can be adapted into technologies.

Development methods
It seems that computing is quite old: the first humanoid robot was created ca. 250 BC *6.
The first programmable humanoid robot was Al-Jazari's programmable automata in 1206. In those days we were not working with development methods like waterfall, Scrum, RUP or agile.



It surprised me that development methods which empower business involvement have already existed for some time, and still we tend to use the more waterfall-like approaches and thinking.

Software testing eras
When talking about software development nowadays, you also think about testing that software. D. Gelperin and B. Hetzel *3 classified the phases of software testing as follows:
Until 1956: the debugging-oriented period;
1957-1978: the demonstration-oriented period, in which debugging and testing were now distinguished;
1979-1982: the destruction-oriented period, where the goal was to find errors;
1983-1987: the evaluation-oriented period;
From 1988: the prevention-oriented period.



Unfortunately this timeline might not represent the current status. I think the next cycle in testing could be called "adaptation", and based on the dates of DSDM, Scrum and the Agile Manifesto I would say this era started ca. 1995. I make this assumption because within those methods business involvement started to grow.



If you compare this timeline with the development timeline, you will see that most development methods were "invented" in the prevention period. Comparing this with the actual inventions made, you will see that most of the inventions impacting our current test environment were also made in this period.

History of management
When you visualize the history of management, you will notice that human interaction has been important since the 1930s. *8



If I'm correct, you see here that in management, human attention and involvement have been a topic since the 1930s. In software development it started much later; I would say that from 1983, the evaluation period, onwards it became more important in software testing. This made me wonder whether we are still running behind the facts and should make the next step in software testing. That next step could be, instead of only improving our test methods and best practices, also learning from the past and from other processes.

In organizations, influences from the other management schools are still noticeable. As these influences also affect culture, they will affect projects too. If organizations are a mix of management schools and styles, then so is software development. Therefore it will be hard to use only one testing approach. It becomes important to adapt your approach to the business. This adaptation can be done by focusing on value for the business instead of functionality for the business.

In a previous post I already made a link between organizational schools and testing schools: 2008-02-24: Testing Schools and Organizational Schools *4. In that article I tried to show that we should not ignore that testing schools might exist, and that we have to evolve as well. Accepting a testing-school structure can help you identify your position and check whether the thinking fits the project.

I believe that if value is involved, we should not only measure the economic gains; it should also fit the organizational culture.

Organizational culture
Charles Handy *2 classified 4 organizational cultures:
- Power Culture: This concentrates power among a few;
- Role Culture: People have clearly delegated authorities within a highly defined structure;
- Task Culture: Teams are formed to solve particular problems;
- Person Culture: All individuals believe themselves superior to the organization.

When defining the strategy for testing, you also have to consider the type of culture. I can imagine that introducing an agile approach in an organization with a "power culture" will involve some struggle to get commitment, involvement and understanding from the business, as the business here is not all users, just those few people with power.

System approach or contingency approach
One nice short example I found is on Systems Approach *7. There the distinction is made that a systems approach deals with an organization as a black box. The organization is a more static environment. In this approach all kinds of test methods might fit in well, as the borders are well defined. The focus lies more on efficiency and effectiveness.

The contingency approach sees an organization more as a dynamic environment; here, it depends. The dependencies are undefined and still to be explored. I like to see organizations, and therefore projects, more as an organic being. This belief empowers me to think that there is more we have to focus on than only improving our current way of working; we also have to explore our boundaries and keep asking what we can learn from others and how we can adapt.

The next steps
It would be wrong to ignore developments that can be made from a testing perspective. What I tried to do in this post is open eyes to other approaches. Perhaps some understanding can be found in our history. Sometimes ideas seem wrong because they don't match the tester's frame of reference. If adaptation to the situation is expected and answers are sought in the technical area, try to consider the impact on business, management and cultural areas as well.

I believe we still have fields to explore in software testing, and a new test method, development method or technical solution is not always the only answer. They might solve problems temporarily and might be suitable for the current project. We can still learn from our past and from our gurus.

*1. Timeline of computing 2400 BC–1949: http://en.wikipedia.org/wiki/Timeline_of_computing_2400_BC%E2%80%931949
Timeline of computing 1950–1979: http://en.wikipedia.org/wiki/Timeline_of_computing_1950%E2%80%931979
Timeline of computing 1980–1989: http://en.wikipedia.org/wiki/Timeline_of_computing_1980%E2%80%931989
Timeline of computing 1990–present: http://en.wikipedia.org/wiki/Timeline_of_computing_1990%E2%80%93present
*2. Handy, C.B. (1985): Understanding Organizations, 3rd edn, Harmondsworth, Penguin Books.
*3. Gelperin, D., Hetzel, B.: The Growth of Software Testing. CACM, Vol. 31, No. 6, 1988, ISSN 0001-0782.
*4. Rosink, J. (2008): Testing Schools and Organizational Schools: http://testconsultant.blogspot.com/2008/02/testing-schools-and-organizational.html
*5. Software development methodology: http://en.wikipedia.org/wiki/System_Development_Methodology
*6. Humanoid robot, timeline of developments: the Lie Zi described an automaton, ca. 250 BC.
*7. Systems Approach: http://www.mgmtguru.com/mgt301/301_Lecture1Page14.htm
*8. History of Management theory: http://web.njit.edu/~turoff/coursenotes/IS679/679newset2/sld040.htm

Saturday, January 10, 2009

It is snowing (:) decisions

During the last week it snowed in the Netherlands. While most of the roads were covered with snow, I had to get to work. I already knew this would be a task involving some risk, because the weather news had warned me about the situation.

Normally there are several roads I can take from my home to my workplace. The decision for one of the options is based on the traffic on those roads, current traffic jams and the chance of traffic jams. These possibilities also depend on the moment in time and the possible delays each option brings.

In this situation I also had to take into account whether those roads had been cleared of snow. On those days I had to choose the main road, as the risk of incidents would be much smaller. Alternative roads were not an acceptable option in this case.

This is similar to system development. Under certain conditions it is better to use only the main functionality to get from the start of the system to the end. Alternative scenarios should be avoided. For testing, this implies testing only those situations that can be defined as intended usage.

Before I stepped into my car and drove away I had to do some preparation: first, making my windows snow-free. I also noticed that the street I live in had not been cleared. These were the decisions I made before starting the journey to work:
1. Do I have to go to work at that moment and take some risks?
2. Am I able to drive?
3. Which risks are there in the area I can oversee?
4. What actions can I do?

ad 1: Yes, there was a production release scheduled and there were still some open tests which had to be performed so the production release could continue.
ad 2: Yes, I have a driving license, though no winter tires on my car, and I could start the car's engine.
ad 3: There was snow on my car; if it could be removed I would have clear vision. There was snow in the street; if I could cross that area and reach a street with less or no snow, I would minimize the risk of accidents. There were cars parked in my street which might be possible targets to hit due to the bad weather; if I hit those, the journey would be stopped or delayed, depending on the damage.
ad 4: I had to start the car's engine, remove the snow from my car, drive carefully to avoid the nearby objects, and reach the first go/no-go point (the snow-free street).

You can compare these actions with an intake test: am I able to start testing and reach the defined goal?
At this moment I made the important decision to start driving using the main roads, and I defined a go/no-go moment.

Having managed to reach the moment at which I decided to continue, I knew I had to account for delay. Whereas under normal conditions I would average 70 km/hour, I noticed after covering one fifth of the main roads that my average speed was just 32 km/hour.

With this information I introduced a new go/no-go moment. I asked myself whether it was still possible to make the production release available, even with a delay. Will I have enough time left to finish testing, as the testing window shrank due to the delay? Will others, whose support I need for testing, be available, or did they make the same decision about going to work?
Is it all worth getting there, as I also have to go back at the end of the day? Can I speed up to increase my average speed? Is there other information available about traffic jams, weather conditions and road conditions?

I started by gathering information: I listened to the radio, checked the internet for traffic jams and called a colleague who would be heading in the same direction.

At that moment I accepted the risk that the people and information I needed might not be available. I chose not to speed up, as the chance of being involved in an accident would be larger due to the weather and road conditions, combined with the tool I had: a car without winter tires.

In terms of software testing, you can translate this to the possibility of reaching the goal within the minimized time, clarifying the risks at that moment and making those risks acceptable.

After I made that decision I watched more carefully and kept to the defined travel schedule. I acted on every situation. If I felt my car slipping a bit, I decreased my speed. If conditions on the road were better, I chose to drive in the left lane instead of the right. You have to know that driving in the right lane introduces additional risk, as most of the cars were driving on the left side of the road, where the level of snow is lower than on the right side. I accepted that risk, knowing that the chance of an accident is smaller when you keep paying attention and control your speed, and that if something happened, help would be available fast enough for me still to reach my goal at work.

Almost reaching work, I came to an area where the roads had not been cleared yet. I suddenly realized that I had already crossed the point of no return. I started listening more carefully to the car, focused more on the road and the conditions, and lowered my speed to almost 10 km/hour, knowing this would reduce my average speed. I also paid more attention to the circumstances, as the weather forecast for the following days was similar and I would have to reproduce the steps I had taken to reach my goal.

Finally, sitting behind my desk, I realized I had performed a test process during those exciting hours. In this case the process did not cover all circumstances, as I used only one road to get to work; I didn't monitor the outcome of the other roads I could have taken. What I did was:
1. Identify the goal: get from A to B;
2. Identify the need: release to production;
3. Identify the risks and define the first go/no-go moment: continue based on intended usage;
4. Monitor the chosen route and act on changing circumstances: am I still acting on intended usage?;
5. Define another go/no-go moment: am I able to reach my goal under the changed conditions?;
6. Cross the point of no return and remember the actions taken, to make the defined route reproducible.

The final outcome was a proven and defined route which enabled me to use my environment (the roads) to achieve my goal: get from A to B. I think this can also be done in software testing: you don't have to cover everything. You don't even need to know what kind of coverage you have. If you can use the system based on intended usage, and you accept the risk that you didn't test everything, you only have to make sure that users use the system based on the intended usage. Tell them what they are allowed to do and what they shouldn't use. And in case users don't listen, make sure there is a quick-reaction force available to recover the damage. Is this a nice way of testing? It depends on the circumstances you are in. At least a goal-driven testing approach might work for you.

Monday, December 22, 2008

If something cannot be tested

Every now and then, when I ask testers or the business to test something, I hear that they are not able to test it.

Somehow this sounds very strange to me. I believe that everything which can be built can also be tested. If the business was able to find an error and the developer is able to solve that error, then we should be able to test the solution. So who is right and who is wrong?

I believe in my testers. My testers are not unwilling to test. They are just not able to perform the tests. And I just want to know why.

Some reasons they gave are:
1. The test system doesn't contain that situation and I'm not able to recreate that situation;
2. I don't have sufficient rights to create useful data for the test;
3. The solution is too technical and therefore it is hard to understand the problem;
4. We don't have that problem anymore;
5. I don't have time to test and this is a minor issue;
6. I know that the solution is not what I need and therefore I'm not spending any time testing it;
7. This problem is not my problem; I'm not able to help.
8. Who are you?

I also believe that I should convince them to test, support and facilitate them during testing, and make the business accept the risk when they are not able to test.

If they are not able to create the data, I should see whether others are. Mostly this is because of restrictions in the system. If giving them authorizations in the system lets them create useful data without messing up existing data, then that should be the solution.
If they don't have the necessary technical knowledge to test, I should pair them with a person who has that knowledge, and teach or help them during the test.
If they know that the given solution is not what they need, I can bring them into contact with the developer, and I also have to identify how this misfit occurred.
If they don't have the time, or it is not that important anymore, I should search for other ways. A solution might be embedding the test in another test case.
If they don't know me, I might have the wrong person and should look for another tester.

Even after all these actions the tester might not be able to check whether the problem is solved. Does this mean they should not test? No! They should at least check whether the functionality connected to the solution still works. Should I close the ticket after those tests? No, not immediately: first I have to get the business to accept the risk that the problem might not be solved, and convince them that at least the proposed solution will not harm other processes.

Monday, November 17, 2008

Roles in Software Testing

Perhaps I'm the only one wondering why we make testing so complex. Whenever a new approach is defined, new roles are introduced. When new books are written, new roles are explained. Even when people describe themselves on a networking site like LinkedIn, they express themselves in different roles.

When I started in the software testing business I was aware of the following roles:
- Test Analyst
- Test coordinator
- Test manager
- Test tool specialist

Over the years I also noticed that in TestFrame you had roles like:
- Navigator
- Analyst

Other roles I was aware of are:
- Test consultant
- Test Professional
- Test tool consultant
- Test tool developer
- Test team leader

In TMap next they introduced:
- Test project administrator
- Test Infrastructure coordinator

In a presentation I recently attended about Model Based Testing, they introduced the role:
- Test constructor

And there are roles all over the place like:
- Agile Tester
- Security Tester
- Usability Tester
- Test Oracle
- Software Tester
- Requirement Tester
- Test Architect

All these roles left me a bit confused. What is the best way to describe myself?
It sounds a bit silly when I introduce myself as:
Jeroen Rosink: Test coordinator, agile tester, test architect, test consultant, test analyst, Software tester, Test Idiot.

I think other people would have lost me by then. Perhaps we have to go back to basics.
1. Tell people what you are: e.g. Software tester
If there is room for further clarification you can continue explaining:

2. What your specialties are: e.g. Coordination in Agile project
3. What your knowledge is about: e.g. Test strategies, Test techniques, Functional testing
4. Skills: e.g. TMap, ISTQB, Context Driven Testing, Embedded environments, Web based applications, and so on.

Now I wonder whether going back to these basics is sufficient. Does it tell enough? Does it bring us what we need, or better, what the customer needs?

If we go along this way we are on the edge of introducing more certifications, as everyone wants to be recognized for their specialism. At least it might simplify the way you introduce yourself. It could be: Jeroen Rosink: 10 out of 21 certifications (assuming there are already that many certifications related to software testing).

But does this number tell you anything? Imagine you have to explain what you do for a living to an outsider. I always tell proudly that I'm a software tester and I test software. Although people don't fully understand that, I'm sure they can picture it. At least it is better understood than Test Analyst or Test constructor.

So perhaps you, reader, can explain to me why we insist on making things so difficult? Or do you have other examples of roles in our field of expertise?

Saturday, October 4, 2008

Ignore priorities: Transport Driven Testing

I think we are used to working with priorities for what to test and what not to test. Based on priorities we also demand delivery of functionality by development.
Business priorities are also used to schedule which functionality development should deliver first, to make sure the items that give the business the most gain are delivered first.

Sometimes functionality is also delivered simply because it was easy to fix.

In SAP you use the term "transports" instead of terms like "deployment", "delivery" or "shipment".

To control and schedule transports in SAP you can use SAP Solution Manager. A benefit here is that, based on a status, you are able to import your transports from the test environment to the production environment.

To build a solution into a transport you have to attach it to a Solution Manager ticket. A ticket can contain one or more transport numbers. With these transports you made modifications to SAP objects. Each newer ticket might contain modifications to the same objects; a newer version of those changed objects then exists in the library.

Here comes the difficulty:
Imagine that ticket "A" contains a small modification with lower business priority that was only included because it was quick to fix, and ticket "E" contains a high-priority solution.

What if there is no tester available to test ticket "A", and you are already testing ticket "E", which came later in the transport list and contains modifications to the same objects? If you confirm ticket "E" is correct, it is shipped to production via an emergency transport, and you later import ticket "A", then the newer versions of the objects are overwritten by the older ones. This might mean the solution in ticket "E" no longer applies.

To prevent this from happening you have to force the tester to test ticket "A" as well, before any import of the objects is made. You are now in a situation where imports are no longer based on business need; they are based on transport need. In this case a ticket defines the priority for testing. In other words: the location of a transport in the transport list, and no longer the business need, defines which ticket has the highest priority for testing.

In SAP Solution Manager you can use the Import All list to monitor which transports should be tested first. Based on the status of a ticket, like "Tested, OK", you can define which transports can be imported without major risks. If there are transports that don't have this status, you can place a STOPMARK before them. All transports after that STOPMARK have to be imported on manual request. This introduces the risk that when, during the next test phase, those transports are desired by the business and have to be imported via an emergency transport, you might have to import other objects as well.

When the release content is based on business priority, you have to make sure you don't also pull in the items that were easy to fix but hard to find a tester for. When the release content is fixed, you have to test not only based on complexity and business gain; you also have to test so that the STOPMARK can be set as low as possible. This means working FIFO (first in, first out): first import the objects that entered the list first, as the next ones might contain newer versions of the same objects.
This means the transport list forces the order of test execution, so you might call it transport driven testing.
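The FIFO rule can be sketched in a few lines. This is a simplified illustration of the idea only, not Solution Manager's actual behavior or API; the ticket names follow the example above:

```python
# Simplified STOPMARK illustration: transports listed in import (FIFO)
# order, each tied to a ticket and that ticket's test status.
transports = [
    ("TR001", "A", "Open"),        # quick fix, no tester available yet
    ("TR002", "E", "Tested, OK"),  # high-priority fix on the same objects
]

def stopmark_index(transports):
    """The STOPMARK goes before the first transport not 'Tested, OK'."""
    for i, (_, _, status) in enumerate(transports):
        if status != "Tested, OK":
            return i
    return len(transports)

pos = stopmark_index(transports)
auto_import = transports[:pos]  # safe for the import-all run
manual_only = transports[pos:]  # manual request / emergency transport only
```

Because the untested ticket "A" sits first in the list, even the tested ticket "E" behind it ends up after the STOPMARK; testing "A" first is the only way to move the mark down, which is exactly how the transport list ends up dictating test priority.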

Sunday, August 24, 2008

What planet are you from?

Did you ever come into a situation where someone started talking about testing and you asked yourself: "What planet are you from?" Or did someone else have this question about you?

Perhaps you have the same feeling when you are reading my postings on this blog.

Still, this is quite an interesting question. Once you start wondering about planets, you accept the idea of a universe, and a universe holds more than one planet. Perhaps I'm just sitting on another planet than the one you are living on. Still, this doesn't mean that we don't have any interaction or need for each other. For example, the moon plays a role in keeping the earth's rotation steady.

Wikipedia says the following about Universe: "The Universe is most commonly defined as everything that physically exists: the entirety of space and time, all forms of matter, energy and momentum, and the physical laws and constants that govern them. However, the term "universe" may be used in slightly different contextual senses, denoting such concepts as the cosmos, the world or Nature"

In projects there is sometimes a need for testers, test coordinators or test management. Often these roles are advertised in general terms: the person should be skilled in a test method like TMap, should be ISTQB certified, should be a team player, and so on.

As test professionals we claim to be experts based on our knowledge and experience in a certain area. Sometimes, perhaps too often, other project members don't understand what we are talking about. Still, they know we exist and that we too have a contribution to make to the project. It is our task and duty to make them understand how we can influence the project's success.

This can be a hard task, as our position in projects differs. Sometimes we are at another location, far away from project management and developers; sometimes we work next to each other. In all situations we influence each other's activities. Therefore it is important that we know our relation to each other. We have to describe the project universe and explain our roles and positions. Perhaps more importantly, we don't need to understand each other in full detail; we have to accept each other's existence and the shared need for success.

Let us be the moon and the project the earth. Without us the earth would not rotate in a stable way. The earth doesn't need to know how we do it, as long as we can convince them we are in control.

Perhaps the answer to the question "What planet are you from?" is no longer important. We don't need to understand each other in full detail as long as our position is clear. If they insist that you answer this question, you can always say: I'm not from another planet. I'm just part of your universe. I'm the moon that shines bright white light and makes sure your planet rotates steadily.

Saturday, August 23, 2008

RFK and knowledge processing

After writing the blogs The RFK process explained and Request For Knowledge (RFK), somehow I got the feeling that I could never be the first with this concept. This triggered me to search for the term I thought I had invented: Request For Knowledge.

Fortunately for me, I got many hits and came upon several interesting documents. I say fortunately because these articles and presentations confirmed that here is another challenge for us testers, or even business analysts, to continue working on my initial idea: the challenge of collecting information in a structured way, where we might define a process to get a grip on risks arising from initially missing knowledge.

There is lots of information out there; here are some of the documents I found.

On Identifying Knowledge Processing Requirements by Alain Léger, Lyndon J.B. Nixon and Pavel Shvaiko, May 2005
In short, it presents a first step towards a successful transfer of knowledge-based technology from academia to industry.
I think the relation to us testers here can be the usage of use cases to identify knowledge requirements.

Some Applications of Conceptual Knowledge Processing by Prof. Dr. Gerd Stumme
This presentation gives a short overview of how ontologies relate to tasks, with examples.

What I liked here was the introduction of the term: Ontology.

Ontology: "An Ontology is a formal specification of a shared conceptualisation of a domain of interest." T. Gruber, 1993.

Ontologies support, among others, the following tasks of knowledge management:
• Acquiring Knowledge
• Organizing Knowledge
• Retrieving Knowledge

BOVIS LEND LEASE ikonnect Facilitated Knowledge Sharing by http://www.knowledgestreet.com/ September 2005

One of the interesting points here is that they specified several roles, such as Seekers and Facilitators.

Another interesting posting I found was the iWorkshop on [a-i-a.com] iWorkshop™ Knowledge Management and Collaborative Work

If we think a defined RFK process is needed to get more insight into the risks to the software, then knowledge processing might be a way to go. To me this seems a very complex process. Still, this should not withhold us from collecting and translating the information into possible risks.

In one of my articles I showed an example of how RFK's in one area might impact the testing effort in another area. To improve this process, it might help to work on a certain ontology, as the timing of the tasks mentioned can influence the way we have to define our strategy. We have to learn to adapt instead of forcing our process onto the organization.

As for the roles we can take: I think we will be seekers as well as facilitators.

I know I'm not unique with the term RFK. Still, I believe there are gains to be had from focusing on this process as well. It might help us define improvement suggestions for organizations, both at the start of a project and during its evaluation.

Sunday, August 17, 2008

The RFK process explained

In an earlier post I submitted the idea of RFK's (Requests For Knowledge). The basic idea of monitoring the RFK process is measuring where formal requests are made, and how and when they are answered.

In software development processes there are a number of methods and techniques that deal with part of this, such as inspections, reviews and walkthroughs. Depending on their character (formal/informal), notes are sometimes made when there is a failure in the documentation, or a question when something is unclear. Basically this covers monitoring knowledge requests only for the part where documentation or requirements are already available.

In my opinion you can start the RFK process much earlier: when the project is initiated. I say "can start" intentionally, as I strongly believe you should not change your development process to fit this RFK process. It should be the other way around: you should define your RFK process based on your development process and your organization.

Basically the steps for the RFK process can be defined as following:
1. Define a Knowledge-need-landscape;
2. Name the development method;
3. Identify participants and stakeholders;
4. Define the areas where information is needed related to the system;
5. Define the risks and impact of missing knowledge within the method;
6. Define the phases of monitoring the RFK's;
7. Select your toolset how to monitor RFK's;
8. Report on Impact and status of RFK's;
9. Get agreement how to deal with this impact;
10. Adapt your processes based on made decisions related to RFK's;

1. Define a Knowledge-need-landscape
To adapt your RFK process to the organization, you have to identify the locations where a possible need for information exists. To get a clear picture of this, it is important to know what your development process will look like. Often a specific development method is chosen.

2. Name the development method
If project management has not named the development method, you have your first item for the RFK list. In my opinion it is important to make a statement about the chosen development method, as every method has its strengths and weaknesses. These strengths and weaknesses have to be translated into their impact on knowledge gathering. Often the weaknesses of a method are compensated with extra steps, which turns the method into a customization; this in turn creates new information needs.

3. Identify participants and stakeholders
Once the method is known, you will be able to identify the participants of the project. Participants are potential persons to ask for information. There are also stakeholders. Once you have identified these, you are largely able to tell where information should come from. Keep in mind that stakeholders can ask for information too.
It is important that mutual agreements are made about providing information: how it should be delivered, and when.

4. Define the areas where information is needed related to the system
Based on the RFK landscape you are able to tell where information is needed. You might start by mapping this to the requirements of the new system. The importance of requirements can also be used to classify the importance of an RFK; a possible method for this is MoSCoW. Next to linking RFK's to requirements, you have to link them to the processes and sub-processes of the system. As I mentioned in my previous post, for some processes they are not that important, for others they are most important.

5. Define the risks and impact of missing knowledge within the method
It is essential to define the risks and impact of missing knowledge. This will help you define the acceptance criteria. If some risks are unacceptable, strong agreements have to be made here, and more monitoring should be in place.

6. Define the phases of monitoring the RFK's
Not all projects need full monitoring; you have to adapt this to the organization as well. If an organization decides not to spend much time getting the requirements defined in detail, you might decide not to monitor requirements during that phase, but only in the next phase, when the requirements become detailed enough. Also consider that besides developing the system, a parallel project is often started to create manuals and courses. Take this into account too, as those information requests should match the RFK's in the development cycle. Perhaps new RFK's are initiated; these can be asked of the system under construction as well.

7. Select your toolset how to monitor RFK's
The toolset is used to register the RFK's. This can be done in a home-made tool, as I don't think there are tools for RFK's yet. You might also keep a simple list in a spreadsheet. It is important that the tool is flexible, as the need for information based on RFK's might grow during the project.
Another tool might be a flip-over on the wall where you place a sticky note for every request.
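As a toy example of such a home-made registry (the field names and sample questions are my own invention, not from any existing tool), the list could be as simple as:

```python
from dataclasses import dataclass

# Toy RFK registry sketch; each request carries a status, owner and result
# so it can be reported on and handed over between phases.
@dataclass
class RFK:
    question: str
    area: str
    priority: str          # MoSCoW: Must / Should / Could / Won't
    owner: str = "unassigned"
    status: str = "open"   # open -> answered
    result: str = ""

backlog = [
    RFK("Which currencies must invoicing support?", "Billing", "Must"),
    RFK("Is the report layout already fixed?", "Reporting", "Could"),
]

# A minimal report: open 'Must' questions deserve attention first.
open_musts = [r for r in backlog if r.status == "open" and r.priority == "Must"]
```

Even a list this simple supports the later steps: filtering it gives the impact report of step 8, and the whole backlog can be handed over at a phase transition.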

8. Report on Impact and status of RFK's
Besides registering RFK's you also have to report on them. More importantly, people have to act on the information you provide.

9. Get agreement how to deal with this impact
To make people act on the provided information, it is important to define the roles and responsibilities for acting on the RFK impact analysis. If this is not done and the failure to act has a negative influence on the project, you should be able to point people to their responsibilities.

10. Adapt your processes based on made decisions related to RFK's
When RFK's are raised they can be neglected or granted. Either way this has an impact on all processes in a development cycle. For instance, if an RFK is granted, this leads to new information. For the testing process, you have to identify what impact this has on your test cases. If a certain number of RFK's is raised for a specific area, you have to check what your test effort is in that area. If it was low, perhaps you have to increase it, as that area is a potential source of errors because information was incomplete or changed over time.
This also means that you might have to change your acceptance criteria during the testing phases.

When I started writing this article I thought about painting some kind of picture of what this process would look like. But that would reduce its adaptability to all kinds of development methods, so I kept to defining the steps. I hope they give you some help and convinced you (perhaps just in some small way) that RFK's are important too.

Sunday, August 10, 2008

A project model from Pettigrew in testing

One of my former teachers taught me a simple model about the success of a project. I think it can also be used for the testing process.

1 = Experience the need (as well the internal as external needs)
2 = Clear vision
3 = Capability for change
4 = Good start, (Executable, plan of approach)

I shall not explain much about this model now. Just take your time and think your recent projects over; see how they fit this model. If you saw one of these 4 situations occur, you might now be able to find the actual cause of the result.

For the future, it might help you identify the direction of your testing process much faster. And perhaps you still have some time to act so that you fill in the question marks and prevent your process from getting out of control.

Success of improving your Test Process depends on culture

Have you ever been in a situation where you knew, based on facts and methods, how you wanted to improve your test process, and still it failed?
Have you been in a situation where you did everything right and still did not gain the expected result?
Did you have the feeling afterwards that the organization was not yet mature enough to deal with your improvement suggestions?

Although there are enough challenges to conquer, I think one of the most important ones is the culture of an organization. And I think this is a hard one to beat, because organizations are based on human activities, and humans think and act differently. Still, it should not be impossible, as organizations consist of humans who think they have a common bond, and this common bond must be based on something.

Geert Hofstede defined a model sometimes called the Onion Model.


Symbols: These can be words, statues, images or other important signs which express what is important and meaningful for an organization. Examples: language, humor, clothing, logo, etc.

Every organization is using them intentionally or unintentionally.

Heroes: In organizations, heroes are normally role models: persons who people think are important for the organization, based on their behavior, their contribution to the organization, or their profession. Normally these are the founders or other important leading persons. Often there are several heroes in an organization, both small and big ones.

Rituals: These are social actions or habits which are found important in an organization. These rituals often show some kind of pattern. Examples: ways of greeting, eating and drinking together, the Friday drink, monthly meetings, ways of using communication like email, phone, etc.

Values: Those things people approve of and try to comply with, like loyalty and solidarity.

It is hard to measure deeper values like fundamental principles. They often deal with visions and presuppositions about how things actually work. If there is consensus within the organization about the fundamental principles, no discussions have to be held, and they also give direction to possible solutions.

As with an onion, you have to remove shell after shell to get to the center.
Sometimes in software testing you are a pioneer. Then it is hard to find examples for these shells, though you have a world of experience to take examples from. Perhaps here we also make a mistake: we take examples from other projects and books and try to place them in the organization we are working for. When you are aware you are doing this, please be careful, because you are now unintentionally trying to change the culture. I think if you are dealing with organizational aspects of culture, you should do it intentionally.

If you relate this to testing then the first shell can be easily defined.

As a symbol you can take the name of the test method, like TMap, or terms like test script, test case, test plan, ISTQB certification. (I tried to find an example of humor; though our job is a lot of fun and there is humor enough, it is hard to give examples. Perhaps this one works: "It is not a bug, it is a self-developed feature!")

About heroes: you might take your test manager as a hero. We also have some good names in the world, like those who wrote books and express our profession all over the world, such as J. Bach, P. Gerrard and more.

When coming to rituals and values, you get closer to the heart of the organization. This is also the area you have to pay attention to, as these will greatly influence your success in improving the test process.

I think it is a waste of time to force an organization to improve if your improvement suggestions don't fit its culture. There are lots of books about what should be improved and how to do it. But do they also mention paying attention to culture? Or was it a case of the goal justifying the means?

I think that when improvements have to be made, we testers have to pay attention to the culture of the organization. We might start by identifying the shells of the cultural onion, and try to translate the improvements into terms that fit the content of these shells.

Saturday, August 9, 2008

Development Models and the impact on Testing

During my career as a tester I learned that development methods have an impact on testing. And when a tester works according to a test method, that in turn has an impact on the success of the development method.

Books on software testing often define an approach based on the Waterfall method. Newer books adjust that method to make it fit incremental and iterative methods; sometimes this is done merely by mentioning that it also works for other approaches.

In this blog I'm trying to give some information about 3 development streams. It might not be completely perfect, but I think it is useful to give readers a picture of how I see things. I already know this article will be much longer than I normally write; to avoid losing information, I decided not to split the topic over several postings. Therefore I'll give you a table of contents:
- 3 approaches of development
- How development methods are linked to these models
- Short overview benefits and disadvantages per development method
- Overview how other models are related to these methods
- Impact towards software testing.

3 approaches of development
In general there are 3 development approaches: Waterfall, Incremental and Iterative (Evolutionary).
For more information you might check out the links which refer to description on Wikipedia.

In short these approaches can be described as following:
Waterfall approach: The whole system is implemented at once;
Incremental approach: The system is divided into several steps, which are implemented in that order. Once a step is implemented, you cannot go back to a previous step to change functionality or do some "refactoring";
Iterative approach (Evolutionary): An approach where you continue development towards a final system. You are able to go back and change the code of a previous iteration. Every iteration delivers a better system, even without adding new functionality.



How development methods are linked to these models
Several types of development are often called variants of Agile development; I prefer to keep speaking in terms of approaches. In this scheme, Agile belongs to the more evolutionary approach. Whether a method counts as incremental or iterative sometimes depends on its actual implementation. In that case you should not look only at the name of the development method, like RUP or DSDM; you also have to investigate how that method is implemented. Perhaps the following picture gives some overview.


Short overview benefits and disadvantages per development method
In the last few years we tend to neglect Waterfall approaches, claiming that methods such as RUP, XP and Agile are always better. But in some situations it is better to play safe than to take the risk of using methods you don't control at the necessary level: mostly systems with a long lifetime, like aerospace or life-care systems, or systems where you are a strong market leader and which don't change that much. For these systems it could be wise to use a waterfall approach. The figures below give an overview of the benefits and disadvantages of the 3 approaches. I know they are not complete; still, they can serve as a guideline.







Overview how other models are related to these methods
To understand the impact of these approaches on software testing, you have to understand how an approach affects the organization. To get a short overview, I took several other models and views which can represent an organization.
Models:
- Organizational Model: Documentation about the organizational processes, procedures and activities;
- Information Model: Documentation how the organizational processes are supported/executed;
- Data Model: Documentation about the kind and location of information used;
- Process Model: Documentation how data is processed to support the previous models.

Views:
- Conceptual/Logical View: High level view of the organization;
- Physical/Technical View: View of the architectural and informational representation of the system;
- Implementation View: How the system should be implemented in the organization.

In the tables below I tried to translate the development approaches towards the impact on documentation using several other models and views. To visualize this I used several colors.

When a cell is colored green, that documentation will not change much during the project once it is finished. When the color is yellow, there is a small chance it will change. Orange represents an obvious chance of change, and red means there will be changes.

The point here is that when a change occurs or is made, the "defined" steady objects might also change as a result of new insights. When this happens, the test cases change too, because they are built on this documentation. Because changing documentation from previous steps takes time, this gives an impression of how steady the development process, and therefore the test process, will be. On the other hand, it also gives a feeling for your ability to adapt your test process to those changes.

Views for a Waterfall-approach


Views for an Incremental-approach

Views for an Iterative-approach



Impact towards software testing.
Below you find a short overview of the characteristics of the development approaches, their impact on the project development process, and the impact on the test process. Keep in mind that these items are not complete; they just serve as a guideline. There is always a dependency on the actual implementation of the development method.



I hope this blog gives you more feeling for how development processes impact the test process. Bear with me if I made some misinterpretations; it never came to my mind to make a statement about which approach is better than another. It should help you as a guideline to identify the impact on your own process, and thereby help to identify and control risks.

Sunday, August 3, 2008

Request For Knowledge (RFK)

Terms like RFC or CR are well known in development processes. They are often raised during the testing phase, when some new feature has to be developed. Most of the time this has an impact on the testing process: it is hard to test them within the same testing phase.

Mainly these are created based on new knowledge of the capabilities of the technology, the system, or even the process. Prevention can be attempted by performing formal reviews or inspections, and their outcome is registered in some way. But these are not directly linked to testing strategies where additional risk might be involved.

For example: if formal reviews are executed, errors are identified and registered in a review log. The information about how many documentation defects were found in certain areas is not used, as those issues were already solved. So the risk of errors in the system is reduced, but the chance of RFC's is not implicitly reduced.

Perhaps we should introduce the term Request For Knowledge (RFK) and also make a formal process of it.

If I have to think about a definition for this it could be something like this:
An RFK is a formal request for additional information about a system, architecture, process, data or requirement.

The process could be something like this:
1. Create and maintain an RFK backlog; every person in the project can add items to it;
2. Assign an RFK backlog owner;
3. Start monitoring during the requirement/specification phase;
4. Every RFK should have a status, owner and result;
5. For every phase transition, also hand over the RFK backlog;
6. Before the handover, identify the impact of outstanding RFK's on the next phase;
7. Take counter actions for that impact.

The reason to do this is to create a process for gaining new information, which could help reduce the RFC's that have to be delivered right away during the testing phase.
I think monitoring every area / subsystem / process where questions are raised can give us information about where functionality might change.

If over a period of time a certain area keeps requesting knowledge, the chance is high that the functionality doesn't meet the actual desired requirements, and so the chance is also high that RFC's will be introduced at a later stage.
On the other hand, if certain areas have no RFK's, perhaps this is a signal that the requirements there are not thoroughly defined either.

If you prioritize the areas and combine that with the number of RFKs, you are able to identify risk areas earlier.



Assume, based on the table above, that "Process A" had an unexpectedly high number of RFKs for "Sub System 2", and that "Process A" is also used in "Sub System 1". Changes in "Sub System 2", when the requested information only arrives during the testing phase, bring high potential risk for "Sub System 1". If we have this information before testing starts, the test strategy can be adapted to cover this risk.
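A sketch of this prioritization, with made-up figures since the original table is an image: multiply each area's business priority by its RFK count to rank the risk areas.

```python
# Hypothetical figures: business priority (1 = low, 3 = high) and RFK count
areas = {
    ("Process A", "Sub System 1"): {"priority": 3, "rfks": 2},
    ("Process A", "Sub System 2"): {"priority": 2, "rfks": 9},  # unexpectedly high
    ("Process B", "Sub System 1"): {"priority": 1, "rfks": 0},  # no RFKs at all?
}

# A simple risk score: priority times number of RFKs.
ranked = sorted(areas.items(),
                key=lambda kv: kv[1]["priority"] * kv[1]["rfks"],
                reverse=True)

for (process, subsystem), figures in ranked:
    score = figures["priority"] * figures["rfks"]
    print(f"{process} / {subsystem}: risk score {score}")
```

With these numbers, "Process A / Sub System 2" ends up on top of the risk list, while the area with zero RFKs sinks to the bottom, which, as noted above, may itself be a signal worth investigating.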

These figures can also be used to identify business involvement. In projects, some departments are fully involved right from the beginning, while others only get involved at the end of the project, which increases the chance of RFCs being raised.
The figures can also signal a training need: if certain areas have a lot of RFKs, instruct the users comprehensively with more detailed manuals and training, as the chance that they misuse the system is increasing.

Perhaps you, reader, can come up with other uses for the RFKs.

Monday, July 7, 2008

The power of Three

If you take some time and step aside from your daily activities, you might notice that there is some kind of structure in the activities you are doing and in the environment you're in. With enough imagination, you keep coming back to the number 3.

In my school days I learned the term "Trias Politica". Some wiki-searching taught me that the original thoughts come from Charles de Secondat, baron de Montesquieu, who described in his tripartite system the "division of political power among an executive, a legislature, and a judiciary. He based this model on the British constitutional system, in which he perceived a separation of powers among the monarch, Parliament, and the courts of law."

Of course you might come up with more examples built around the number 3. Thinking about it, you can come up with examples like these:
a. test manager - test coordinator - tester
b. time - money - quality
c. system under test - test environment - testware
d. requirement - functionality - test script
e. errors - faults - failures
f. development - testing - implementation
g. plan - do - check
h. junior - medior - senior
i. test preparation - test specification - test execution
j. ....

Looking at these items you will see that you have to deal with them during your job. If you can only think of 2 items, you might spend some time identifying the third. If you come up with more items, it might trigger you to separate them until you have the items that are in perspective to each other. To get there, keep in mind which of these they belong to: executive, legislature, judiciary.

I think this is important, as these items will influence your daily work. They can support your activities or endanger the things you are doing. I think it is important to know who your "allies" and "foes" are, and to know what you miss or where you perhaps made a "wrong" linkage to one of the 3 items of the "Trias Politica".

This might help you better understand your work, your role and your activities. Based on this you can improve yourself and the process.

Of course there is more to say about this topic. Perhaps you think it doesn't work in your situation. I would like to hear about it and learn from you, fellow testers.

Saturday, June 14, 2008

Which errors do you accept?

You should test everything! I don't want any errors in production! The system should be a high quality system! And it all should be done within time and budget.

These are some phrases I often hear when it comes to software testing. But when asking the questions "What is everything?", "What is a high quality system?", "Within which timeframe do you not want to see any errors?", the answer is often: "Just like I said, just do it!"

Often these are signals that people don't exactly know what they want. Still, in these circumstances the expectations of testing are high. They expect you to be the solid rock who makes sure they didn't miss anything. Only under these conditions it is hard to test.

I think the question asked is wrong. Perhaps it helps to ask: "Which errors do you not want to see in production?"
The answer to this question should be translated into risks based on the functionality of the system. And before testing, ask the organization at what level they are willing to accept those risks. This might help to define the test strategy.

Perhaps these answers might be given:
1. "I have not enough knowledge about the errors I don't want to see!"
2. "These specific errors I don't want to see."
3. "Any error is wrong."
4. "You are the expert! You have to tell me."
5. "Don't ask me what is acceptable to go wrong, give me information what goes right!"
6. "Sorry, I don't have the time to answer your question."
7. "Who are you?"
8. "I do not speak Dutch!"
9. ....

If you take a look at these answers, you see that in most cases (except answer 2) information is missing. If information is missing, how are you able to test properly?

If it is clear which errors are not acceptable, you might consider proving that those errors do not happen in the environment, instead of testing everything.
If you are in the lead to provide information, you might define, based on functionality and complexity, a strategy where complex functions/modules with high business gain are tested thoroughly and others with less effort. And make visible whether it fits the timeframe of the project. Of course it doesn't. You can now ask the organization which risks they accept and what should be tested less.

It might help if you make visible what impact functionality has on the testing process. Perhaps the figure below can help you explain it. I have to mention that it is not complete. And of course you have to customize it to fit your situation.


Instead of using Technical Impact, you might think of classifying the errors that should not be seen in the system. You might call this "Maturity of Identified Errors": if errors can be identified specifically, you classify them as high, and otherwise as low.

I expect that if the organization is not able to identify the unwanted errors, and they see what you are then able to test (error guessing), they might consider spending some time on identification.

Please don't see this approach as the perfect tool. Use it as a "talking picture".

Perhaps this helps you out.

The impact of an obvious small change

Some days ago I heard a commercial on the radio about the new Mercedes. In this commercial they talked about the driver's wish that a certain flap should be used less often. To achieve this, Mercedes adapted their car on 18 points. At first I thought it was about some very technical requirement the driver had. At the end of the commercial it turned out to be all about the flap above the rear tire: the gasoline flap.

For me this was a good example of how a requirement can be stated in simple terms while that wish has a huge technical impact on the car.

Somehow I got a déjà-vu moment. How often during testing are new requirements introduced by the business while you're almost at the end of testing? Isn't the business claiming that it should be very simple to build and that the system should still go to production on time?

I don't expect that Mercedes tested their car just by measuring how many times the flap was used. Isn't it more obvious that they tested all those 18 changes individually and integrated? I think so, because they were able to translate the requirement back to the initial problem. It would have been much easier for them to minimize the usage of that flap by gluing it to the car, or building a timer on the flap to prevent it from opening too often, or hiding the flap. The initial requirement would be solved that way too: the flap could not be used that often, or at all, by the driver.

In projects such requirements are often introduced. It could be some additional field on the screen or on a report. And most often the business insists that those fields are built. To them it seems very simple: just drag another field onto the screen and the issue is solved. Only they are most of the time not informed about the technical impact, and certainly not about the impact on the testing process.

Imagine that a certain field needs other data. Queries have to be rebuilt, or even worse, a new table has to be introduced. In such a situation you don't only check whether the field is on the screen and contains some data. You also want to test what other impact the introduction of this feature has.

If organizations have a well-defined process, they write a Change Request. Based on this CR the technical impact is determined, and if the impact is not that huge, a decision is made whether the risk of failure is acceptable against the business value. What I see is that often the test impact is forgotten in those impact analyses. Because of this missing information the business decides to build the CR.

After the CR is built, the testing part starts. Here are some points of what I think can go wrong. Perhaps you see other points, or even experience other points. I am very curious about those.

1. If no attention was paid to the test impact in the impact analysis, this might result in a delayed project, as testing takes more time than expected: "It was just a small change! Why does testing need so much time?";
2. If the impact analysis is only done for that particular function/module and not in relation to other main functions/modules, the selection of regression test cases might be wrong. The chance of testing only the gasoline flap is bigger than that of testing all 18 changes in the system. New errors might be introduced;
3. If the technical impact is low and the business gain is only medium, or even low, why risk the introduction of new failures at all? It might help to consider the impact on testing here as well;
4. If there is no good documentation of the system, the technical impact is decided based on lines of code. How can you tell what the integrated impact would be? And, just as important, how are you able to select the proper test cases?;
5. If the business gain is high, the technical impact is medium/high and time is short, testing is mostly done in a bad, quick manner. How do you know which risks you accept by introducing this feature to production? Under pressure, often a basic test set is chosen, and often that is too little to measure those accepted risks;
6. If the impact analysis talks about 18 changes, how strong is your development process that there are not some unwritten changes made somewhere to make those 18 work? If you don't know about the other small changes or code workarounds, how can the tester know about them?;
7. Another situation that sometimes happens is that the business doesn't know the impact of the changes on their processes, and development doesn't know the technical impact up front. And still the tester is ordered to test everything thoroughly. Is this an order you can give a tester under these circumstances?;
8. What about the situation where only one test environment is available, you are testing a high-impact change, and during testing a production failure arrives that must be tested as well? This might lead to a situation where the fix for the production failure must be delivered to production together with that barely tested change. Make sure you have multiple test environments, or that you are in control of your environments;
9. ...


As mentioned earlier, this list is definitely not complete. I hope it gives you the picture that a small change for the business is not obviously a small change for testing. I think it is important for all parties involved not to make assumptions too quickly, to keep communicating, and to respect each other's arguments. Perhaps we should stop making decisions based on time pressure. Making decisions based on arguments might be much safer and even cheaper.

Sunday, June 1, 2008

Software testing: An organizational approach (1)

After being involved in several projects over the years, I have noticed that most improvement actions related to software testing are triggered by the testing process itself. Those improvements are made implicitly or explicitly. Implicit improvement is based on the skills of the tester, who introduces techniques and strategies to improve the current test process. Explicit improvement is most of the time done after evaluating the test process and giving advice for the next test project.

I think it is not strange that improvements are triggered by the test process, as the influence of the tester is already there. Sometimes, though, it only reaches the project itself.

I have noticed that sometimes it is forgotten to look at the impact of those improvements on other processes or on the organization. Imagine that as a tester you ask for better documentation. You even claim that you cannot start testing before you have it. This might result in extra work for the developer or the business to deliver your request. It will also extend the duration of the project, as the developer cannot code while writing documentation; and if the developer has already started and the business comes up with new requirements, the project will be delayed as well. In my opinion it is not wrong to ask for better documentation, only keep in mind that the reason for doing so is improving your test process, and that the organization accepts the impact of it.

In a previous post I wrote about the focus of testing. See: Do we Test wrong?
The intention of that post was to point out that testing is a process that is part of the organization. Testing should not be a stand-alone goal.

If we look at the test process from an organizational view, you might identify the following approaches to software testing an organization can have:
1. one overall approach for software testing;
2. a made-to-fit approach for every project;
3. an overall approach for general projects and a made-to-fit approach for small projects;
4. no specific approach per project, but continuation of one approach across releases;
5. every release has its own approach;
6. software testing is not embedded in the organization.

If you also take those views from a development point of view, you might find that the chosen development methods differ as well. Assume there are different development methods in the organization: the basic development method is RUP, and some small projects tend to use an Agile approach. You can imagine that this has some impact on the test approach.

And if that test approach is improved only to support that particular test project, the advantages might lie within that project, while the disadvantages for the organization might cost more than the improvements were meant to bring.

In most organizations a test manager is assigned to monitor all those processes. The question is, as always: are the choices made correctly?
Perhaps another means can be introduced: the Organizational Testing Board (I will call it the OTB). This should be a staff function where the impact of improvements is determined based on direct and indirect costs and benefits. Considering those costs and benefits, the organizational view on business processes, development methods and test strategies feeds an improvement plan that is supported by the organization first, instead of only supporting the individual test process.

If an OTB is established, improvement suggestions initially come from the individual test process. The OTB then decides, based on the costs and benefits for the organization, whether that improvement may continue. The decision is based on the short-term and long-term (test) process improvement plan, considering the place of the chosen development methods in combination with current business processes, adapted to the test process and the skills of the testers.

Saturday, May 24, 2008

Mikado Management or Test Management

Ever been in a situation where you did not have much control over your test environment? Have you been in a situation where the developers deployed their solutions whenever they thought they were ready? Or have you been in a situation where, coming back to work, you found new deployments had been done?

To me it seems that in those kinds of situations all the testing work you did before is useless, when your goal is to prove quality instead of finding as many issues in the code as there are.

You might compare it with the game: Mikado


In this game every stick has its own color and every color has its own value. When starting, you bundle the sticks and let them fall carefully. Your aim is to pick up the most valuable sticks without disturbing the other sticks. If you translate this to testing: every piece of functionality has its own value, and you want to see how it works and how it is integrated. Does it disturb other functionality when you use it?

But what happens when new functionality or new solutions are delivered? All sticks are picked up and thrown again. The whole stick pile is transformed and you have to start over. This results in a new situation: sticks which in an earlier phase had no direct interaction with other sticks might have it now.

But do you have the time to start testing everything again? Did you plan some kind of regression testing? I think that if you keep on testing without any additional effort, you are performing some kind of Mikado Management. You are no longer aiming to gain as many points as possible within your time window; you are aiming to pick up as many sticks as possible within your time window. Your test strategy has shifted from proving the quality of the main and most risky functionality towards finding as many issues as possible in the top shelf of your system.

To deal with situations like this, perhaps the following actions can be taken:
1. I would start by making sure that you have control over your system under test: you give approval before code can move into your environment. A disadvantage of this is that some necessary code is not there for you to test;
2. Another option is Continuous Integration: it allows all those transports, and delivery time is decreased. Only when this continuous integration also includes automatic quality measurement by development, or better, by the team, the sticks are not picked up again: there is proof of the impact of the new code on the system;
3. Or define a regression test set and execute it after every delivery moment. You can decide, based on the technical impact of the new functionality or solution, whether you run it immediately or only after several deliveries. Or, when deliveries are done on a daily basis, you can start the morning by executing that basic set of test cases. Keep in mind that these sets can be dynamic, based on the area where solutions are provided and on time pressure;
4. Another option could be automating your regression set, so you can start it whenever a deployment is done.
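Options 3 and 4 combined could look like the sketch below: a dynamic regression set selected by the areas a delivery touches. The areas and test names are hypothetical, and in practice the returned list would be fed to a test runner:

```python
# Regression tests grouped by functional area (hypothetical names)
REGRESSION_SETS = {
    "payments":  ["test_payment_happy_flow", "test_payment_rollback"],
    "reporting": ["test_monthly_report_totals"],
    "core":      ["test_login", "test_navigation"],
}

def select_regression(changed_areas, time_pressure: bool):
    """Pick the regression tests to run after a deployment.

    Under time pressure only the changed areas plus the core set are run;
    otherwise everything is run, because the whole stick pile may have shifted.
    """
    if time_pressure:
        areas = set(changed_areas) | {"core"}
    else:
        areas = REGRESSION_SETS.keys()
    return sorted(t for area in areas for t in REGRESSION_SETS.get(area, []))

# A delivery touched only the payments area, and the clock is ticking:
print(select_regression(["payments"], time_pressure=True))
```

When time allows, calling `select_regression([], time_pressure=False)` returns the full set, so the trade-off between coverage and speed is at least an explicit decision rather than Mikado Management.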

I think the best situation here would be having the continuous integration mindset, where in the optimal case the regression set of functional tests is also embedded in the integration and executed automatically.

But not every organization or project is mature enough yet to deal with such an approach. In that case I suggest preventing Mikado Management and keeping, or gaining, control over your test environment. If you have to decide what the quality of the system is, you should be responsible for the state of the system; therefore you decide when deployments are done. Important in such a case is that management accepts the risk that the test process can take longer because deliveries arrive later.

And for the daredevils: if you cannot get control over the test environment and transports are made whenever, take the Mikado game with you and start playing that game instead of testing. Think about it: what sense does it make to keep on testing if you never get time to perform some regression testing? You lose the picture of quality. In that case, playing Mikado at least gives you some fun. Playing it with your manager might help make them understand the situation you are in.