Saturday, January 31, 2009

Shooting with hail or just one test case?

Almost everyone wants to set up a controlled test process.

It is as if we are playing the song "Proud Mary" by Ike & Tina Turner: "Take the beginning of this song and do it easy. Then we're gonna do the finish rough."

We always start simply, with a controlled set of test cases. Often these test cases are based on the outcome of applying test techniques. So far so good.

Still, we might become lazy over time. If errors are found in production, we re-test the solutions and add the test cases to the test set. There is little time left to question the system and our test process: what went wrong and what can be done better?

In manual testing you will see after a short time that continually adding test cases to the test set makes our process "rough", as we no longer have time to execute all test cases or to select a common-sense subset. The business is already relying on us and demanding that everything gets done.

In automated testing this time pressure is largely removed. We can keep adding new cases, let the number grow into the thousands, and still continue and deliver.
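To make this concrete: below is a minimal sketch in Python of how such a large automated set could be kept under control by selecting a focused subset per run, based on risk and recent failures, instead of always executing everything. The TestCase structure, the scoring weights and the example data are my own invented illustration, not an existing tool.

    # A minimal sketch (not a real tool): pick a focused subset from a large,
    # ever-growing automated test set instead of "shooting with hail".
    # The TestCase fields, weights and example data are all hypothetical.
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        name: str
        risk: int            # business/technical risk of the covered function (1-5)
        failure_rate: float  # share of recent runs in which this case failed (0.0-1.0)
        minutes: int         # execution time

    def select_subset(cases, time_budget_minutes):
        """Order cases by a simple risk score and fill the available time budget."""
        scored = sorted(cases, key=lambda c: c.risk + 5 * c.failure_rate, reverse=True)
        chosen, used = [], 0
        for case in scored:
            if used + case.minutes <= time_budget_minutes:
                chosen.append(case)
                used += case.minutes
        return chosen

    suite = [
        TestCase("login", risk=5, failure_rate=0.1, minutes=3),
        TestCase("payment", risk=5, failure_rate=0.3, minutes=8),
        TestCase("export report", risk=2, failure_rate=0.0, minutes=10),
        TestCase("change avatar", risk=1, failure_rate=0.0, minutes=2),
    ]
    for case in select_subset(suite, time_budget_minutes=15):
        print(case.name)

The point is not the exact formula, but that a deliberate, small selection keeps the set under control, while the full set can still be run occasionally, for example in a nightly run.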

This reminds me of the phrase: "A bird in the hand is worth two in the bush."
If we accept the meaning of this phrase, shouldn't we keep taking care of the bird in our hand instead of growing a bush of our own?
It is easier to take care of a bird we know than to nurse the birds in the bush. If we feel we need another bird, set the current one free and pick another out of the bush. And not just the first one that falls into your arms: go into the bush and see which one sounds best for your needs.

Keep it simple and easy, enjoy the roughness, and avoid getting stuck on numbers. Don't create a process where you are shooting hail at the system.

Friday, January 30, 2009

Just one development method

Over the years I have learned that most companies choose a single development method. In the Netherlands it depends on the area you work in. Most of the organizations I saw or heard about used RUP as their development method; in SAP-related projects they claimed they were working according to ASAP.

How often is a tester not asked whether he or she has experience with that particular development method? At the very least it is considered a plus to have some experience. The conclusion is often drawn that if a person has experience working with a certain development method, he or she will be able to translate that experience into testing on such projects.

Besides experience with the development method, testers are also asked about their experience with testing methods and their ability to adapt those methods to the development method. In theory it is all possible, and it is also described in several books.

Most of the time those organizations are not able to use the development method as it was intended. They also do not write down or monitor which adaptations they have made. The risk here is that it becomes even harder to make the test method fit the process. We should never forget that testing is just one part of the development process; if that process is not clear, the chance that the test process is optimal is very small, and improvements can only be made within the testing process itself. A strong link in a weak chain is created.

I wonder why organizations keep sticking to one development method even though they already know it doesn't fit all projects, and still demand that everyone work accordingly because it is the default standard.

What I suggest is to evaluate what needs your organization has and how those needs can be turned into value. Sometimes you have large projects where RUP can work very well; sometimes you have smaller projects where an Agile approach fits better. The main thing is to define multiple approaches, select one based on its benefits, and check whether its disadvantages are acceptable. Also keep in mind whether the chosen approach fits the business culture and supports the organization's vision, strategy and goals.

If the development method is clear, a test method can be fitted to it much better, the benefits of both approaches can be realized, and the disadvantages can be better controlled.

Some bugs missed

In testing we try to avoid bugs being shipped to the production environment. One of our tools is the test case. The good thing about using test cases is that you can control the depth of testing: if a tester is less experienced you can write detailed test scripts, and if the tester is very experienced and skilled you can leave out some details. Using these test scripts, some nice metrics can be created, like how many cases were executed in which time period and whether that is according to plan, schedule and expectations.
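As an illustration, here is a small Python sketch of the kind of progress metrics you can derive from such scripted test cases, comparing executed and passed cases per period against the plan. The weekly figures are invented, just to show the calculation.

    # Minimal sketch of test-progress metrics per period; all figures are invented.
    planned_per_week = {"week 1": 40, "week 2": 40, "week 3": 20}
    executed_per_week = {"week 1": 35, "week 2": 30, "week 3": 18}
    passed_per_week = {"week 1": 30, "week 2": 27, "week 3": 17}

    for week, planned in planned_per_week.items():
        executed = executed_per_week[week]
        passed = passed_per_week[week]
        progress = executed / planned * 100   # are we on schedule?
        pass_rate = passed / executed * 100   # how healthy is what we executed?
        print(f"{week}: {progress:.0f}% of plan executed, {pass_rate:.0f}% passed")

Numbers like these tell you whether you are on schedule, but, as described below, not whether you are finding the defects that matter.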

A bad thing is that we often instruct the less experienced testers to test only those things that are written down. And we give the same instruction to the experienced testers, because the schedule must not slip.

What I have seen is that for small parts this is not yet an issue, as test cases are defined using test techniques and this gives us the ability to measure some coverage. In large tests like chain tests, there is a risk that defects are missed because the focus is defined at a high level: perform certain actions. Those defects apparently don't hurt the system and the process immediately, and are therefore accepted as intended functionality.

If the testers are also the key users, there is a chance that this "wrong" behavior is explained to other users and documented in work instructions and manuals. People will continue working with the process in production, ignoring these errors. Sometimes you will see that these actions unintentionally become part of a workaround, without anyone knowing it is a workaround. It can also happen that those errors simply go unnoticed.

Since testers try to re-use testware, the next time they will perform the same actions. Meanwhile, through their usage in production, their knowledge of the system has increased.
In the past I have seen situations where this gained knowledge led to finding errors during testing that "previously" were not there.

This will lead to a discussion about whether the functionality is as expected: in the past it worked that way, so why should it be an error now? Should it be registered as an error in the "new" functionality, or should it be logged as a production error?

Logging it as an error in the test environment will have an impact on the release advice. If it is a production error, it has to be decided when to solve it. Either way, a situation is created where the error is allowed to disturb the project or to keep disturbing production.

What will you decide and for what reason?

Thursday, January 29, 2009

What can we learn from history?

When I read books or articles related to software testing, most of them describe an approach to how the testing process should be. Others explain how techniques should be used, how test processes should be improved, and some how tools can be embedded and used. They mostly approach the profession of testing from their own point of view, from the inside of testing. I can imagine this happens because testers have the most influence on just their own area.

What I always try to do is check whether there are other topics to combine with our profession. The purpose of this is not only to deepen my knowledge, but also to extend it. During the last few weeks I kept wondering whether something is missing and whether there is a reason for it.

This triggered me to see whether I could find some dates and events from history until now and check whether there is some relation between them.

In the topics below I will try to see whether there are patterns in history that can be linked to software testing. For this I searched for dates related to the sciences, computing, development methods, testing eras and management.

History of our society
When I take a look at our history, I found these milestones on Wikipedia that helped build our society:
The history of mathematics starts around 1900 BC;
Philosophy is first noted around 585 BC;
The term "sociology" was introduced in 1780 AD;
Psychology as an independent experimental field of study began in 1879 AD, though the use of psychological experimentation dates back to Alhazen's Book of Optics in 1021.



I think you can classify this timeline according to how we humans approach our environment, as follows:
Mathematics is a technical approach: investigating how things work and how we can improve those objects to improve our lives.
Philosophy taught us that there must be value in using objects. Based on that value there must be dependencies between people, as it didn't always work. The culture and social nature of people also have an impact on and contribute to the usage of objects and their value; I want to capture this under the term sociology. After a while, psychology was introduced to understand how human ways of thinking affect the use of objects within the mindset of a group.

Ages of computing
You see a partially similar timeline in the invention and usage of computers. In the figure below I captured some events from the timelines of computing on Wikipedia *1 which I think are important and which symbolize this process.



Perhaps I'm making a wrong assumption, but when I combine the periods of human skills with the speed of invention in the computing area, I see a similarity. First we tend to approach computing in a technical way. After a period it becomes part of our way of thinking. Then technology becomes part of our social being, and currently we are thinking about how technologies can support our way of thinking and how our way of thinking can be adapted to technologies.

Development methods
It seems that computing is quite old: the first humanoid robot was created around 250 BC *6, and the first programmable humanoid robot was Al-Jazari's programmable automaton in 1206. In those days we were not working with development methods like waterfall, Scrum, RUP or Agile.



It surprised me that development methods which empower business involvement have already existed for quite some time, and still we tend to use the more waterfall-like approaches and thinking.

Software testing eras
When talking about software development nowadays, you also think about testing that software. D. Gelperin and B. Hetzel *3 classified the phases of software testing as follows:
Until 1956: the debugging-oriented period;
1957-1978: the demonstration-oriented period, in which debugging and testing were now distinguished;
1979-1982: the destruction-oriented period, where the goal was to find errors;
1983-1987: the evaluation-oriented period;
From 1988: the prevention-oriented period.



Unfortunately this timeline might not represent the current status. I think the next cycle in testing could be called "adaptation", and based on the dates of DSDM, Scrum and the Agile Manifesto I would say that this era started around 1995. I make this assumption because within those methods business involvement started to grow.



If you compare this timeline with the development timeline, you will see that most of the development methods were "invented" in the prevention era. Comparing this with actual inventions, you will see that most of the inventions that impact our current test environment were also made in this period.

History of management
Visualizing the history of management, you will notice that human interaction has been considered important since the 1930s. *8



If I'm correct, you can see that in management, human attention and involvement have been a topic since the 1930s. In software development it started much later; I would say that from 1983, the evaluation period, it became more important in software testing. This made me wonder whether we are still lagging behind the facts and should make the next step in software testing. That next step could be, instead of only improving our test methods and best practices, also learning from the past and from other processes.

In organizations, influences from the other management schools are still noticeable. As these influences also affect the culture, they will affect projects as well. If organizations are a mix of management schools and styles, then software development is too. It will therefore be hard to use only one testing approach; it becomes important to adapt your approach to the business. This adaptation can be done by focusing on value for the business instead of functionality for the business.

In a previous posting I already made a link between organizational schools and testing schools: 2008-02-24: Testing Schools and Organizational Schools *4. In that article I tried to show that we should not ignore that testing schools might exist and that we have to evolve too. Accepting a testing-school structure can help you identify your position and check whether that thinking fits the project.

I believe that if value is involved, we should not only measure the economic gains; the value should also fit the organizational culture.

Organizational culture
Charles Handy *2 classified four organizational cultures:
- Power Culture: This concentrates power among a few;
- Role Culture: People have clearly delegated authorities within a highly defined structure;
- Task Culture: Teams are formed to solve particular problems;
- Person Culture: All individuals believe themselves superior to the organization.

When defining the test strategy you also have to consider the type of culture. I can imagine that introducing an Agile approach in an organization with a "power culture" will involve some struggle to get commitment, involvement and understanding from the business, as "the business" here is not all users but just those few people with power.

Systems approach or contingency approach
One nice, short example I found is about the systems approach *7. Here the distinction is made that a systems approach treats an organization as a black box: the organization is more of a static environment. In this approach all kinds of test methods might fit in well, as the borders are well defined. The focus lies more on efficiency and effectiveness.

The contingency approach sees an organization more as a dynamic environment: here, "it depends". The dependencies are undefined and still to be explored. I like to see organizations, and therefore projects, more as organic beings. This belief strengthens my view that there is more to focus on than only improving our current way of working; we also have to explore our boundaries and keep asking what we can learn from others and how we can adapt.

The next steps
It would be wrong to start ignoring developments that can be made from a testing perspective. What I have tried to do in this blog is open eyes to other approaches. Perhaps some understanding can be found in our history. Sometimes ideas seem wrong only because they don't match the tester's frame of reference. If adaptation to the situation is expected and answers are sought in the technical area, also consider the impact on the business, management and cultural areas.

I believe we still have some fields to explore in software testing, and a new test method, development method or technical solution is not always the only answer. They might solve problems temporarily and might be suitable for the current project. We can still learn from our past and from our gurus.

*1. Timelines of computing:
http://en.wikipedia.org/wiki/Timeline_of_computing_2400_BC%E2%80%931949
http://en.wikipedia.org/wiki/Timeline_of_computing_1950%E2%80%931979
http://en.wikipedia.org/wiki/Timeline_of_computing_1980%E2%80%931989
http://en.wikipedia.org/wiki/Timeline_of_computing_1990%E2%80%93present
*2. Handy, C.B. (1985) Understanding Organizations, 3rd Edn, Harmondsworth, Penguin Books
*3. D. Gelperin, B. Hetzel: The Growth of Software Testing. CACM, Vol. 31, No. 6, 1988, ISSN 0001-0782.
*4. Rosink. J (2008): Testing Schools and Organizational Schools: http://testconsultant.blogspot.com/2008/02/testing-schools-and-organizational.html
*5. Software development methodology: http://en.wikipedia.org/wiki/System_Development_Methodology
*6. Humanoid robot, timeline of developments: the Lie Zi described an automaton, ca. 250 BC.
*7. Systems Approach: http://www.mgmtguru.com/mgt301/301_Lecture1Page14.htm
*8. History of Management theory: http://web.njit.edu/~turoff/coursenotes/IS679/679newset2/sld040.htm

Wednesday, January 28, 2009

9 reasons why agile testing could fail

Agile development is hot. Where there is development involved, there should also be testing involved. I think it might be hard for a traditional tester to be involved in an agile project. Here are some thoughts of mine on why agile testing might fail:

1. The tester might be outnumbered by the developers, and therefore their voice about testing will be ignored, or the tester might even ignore himself.
2. Only "done" functionality is shown in the demo, but the undone items are also there. If the business decides to take the product into production, they accept the risk that untested, undone items also reach the production line.
3. In demos only the working items are shown; wouldn't it be good to also explain what is not yet working?
4. If Agile is new to the business, how can equal trust exist? Understanding is a basis for trust; if something is new you cannot expect the business to understand what the team is doing or trying to do. If trust is not there in the beginning, delays will start and adaptation and learning will be much harder.
5. If trust from the business is based on traditional methods, then part of this trust is also based on traditional testing. A traditional approach might endanger the benefits of an agile testing approach: the schedule might slip, and so might confidence in functionality and value.
6. Often you hear that the first two or three sprints are used to define structure, process, communication and mutual understanding. On the other hand, you hear that an agile approach is also very suitable for small projects (not only large ones) that need quick delivery. If two or three sprints are already lost, nothing will be delivered within a short period of three months, even though the business needs the functionality now. This can turn into mistrust, and the project will fail after all or cost more money.
7. Experienced testers focus on traditional testing approaches and techniques and demand traditional documentation. If this is not part of the definition of done, they might spend time obtaining that information instead of supporting the team and adapting their approach to identify the value that is delivered.
8. Agile developers and traditional testers might not be able to adapt themselves to the team goals, as there is a natural human suspicion towards new and old approaches. They are not looking at how they can support each other; instead they spend time trying to convince each other.
9. When the agile approach is new to the business, the team values proving that their approach works above the transparency of their work. Metrics are tuned to this proving activity. As testing is often used to prove the quality of products, it is often also used to prove the process. The metrics then present a more ideal picture of the project instead of the value for the business.

I'm sure there are more reasons why agile might fail, and even more reasons why it will become a success. The points mentioned above are just some of the thoughts I can come up with on why it might fail after all.

Tuesday, January 27, 2009

Not on time

I was thinking about which would be worse:
1. Finding a bug not on time;
2. Offering a solution on time;
3. Testing a solution on time;
4. Delivering a solution on time;

In all cases "time" is the key word. The reason is always the cost of money or the prevention of higher costs. If I'm correct, "on time" means there is an agreed point in time. An alternative is "in time". When speaking in terms of "in time" we are almost too late.

What I see is that projects are often "in time" or even too late. The intention is to deliver found bugs, or solutions for those bugs, "on time", and resources are put in place to meet that intention. As long as we are still "in time" nothing really happens, as the feeling that we can make it is still there. Actually, no one gets nervous about this, as projects are just done this way.

As long as we don't "value" the bugs or solutions, we will keep planning "on time" and trying to deliver "in time". Bugs become defect IDs, and solutions an addendum to that ID or new requirement IDs. In my opinion IDs are good for traceability, but they are bad at expressing value.

Based on this, we should try to plan value instead of IDs. If value is involved, people are involved, and if people are involved they can be moved much more easily because understanding increases. Mutual understanding is the basis of communication, and communication is a major factor in success.

Instead of speaking in terms of time, we should start speaking in terms of value:
1. Find "value";
2. Deliver "value";
3. Use "value";
4. Confirm "value".

Sunday, January 18, 2009

Birthday present with too many functions

I'm not that good at buying birthday presents, especially for my mother. Every year I ask what she would like for her birthday, and every year I expect her to say that having me at her birthday is present enough. Until this year: she really wanted a small handheld vacuum cleaner.

This week I went to a shop to buy such an item. In the store I realized I didn't have any knowledge of handheld vacuum cleaners, so I asked for advice. As a professional tester I noticed the differences between all those items. I asked questions about the functions and what they are used for. I got an explanation of the differences in power and performance, and a usability demo taught me more about those things.

And here I made my mistake: I bought the best handheld vacuum cleaner available in the store for an acceptable price. It had everything on it, it was easy to use and maintain, powerful enough to clean up even the tiniest dust, and it also looked beautiful.
The only disadvantage it had, like all the other machines: it was not small.

I'm sure we do this all the time in development and testing as well. We start with the basic needs and requirements, and during the project we find more requirements and test them as we need them. We judge them and tell the business that if they work well, the product can be shipped to the production environment. The business gets overwhelmed by those extra nice and really usable functions. Only the main requirement is forgotten: it should be small, because the main purpose was a specific intended usage; for the heavy jobs the organization already has a real vacuum cleaner. Perhaps you can compare it with asking for an Excel sheet to use as a small database and getting a web-based, SQL-driven database with fancy windows.

In traditional projects and development methods this is mostly avoided, as requirements are fixed and a design is available; the system is tested against those requirements. In Agile projects this is, in my opinion, a risk. The requirements are not always fixed and grow during the project because business involvement is huge. Those changes in requirements are not only triggered by the business changing its mind; they are also triggered by the team, who want to deliver something good and beautiful. They want all the best for the business. I can imagine functions that initially were nice-to-haves becoming must-haves.

This could be avoided by continually asking the right questions: What do you need? Why do you need it? How do you intend to use it? What if you get less, and what if you get more? Do you still want it even if it does more? Is that acceptable?

Often we think less is bad; I think more is even worse. When a system contains more functionality than needed, it also has an impact on the business processes: procedures have to be adapted, people need training to use that additional functionality, and so on.

Fortunately for me, my mother liked the handheld vacuum cleaner, and she loved it the first time she used it. The initial requirement was basically met, and she agreed with me that these machines are not as small as they used to be because they have become better. I know that although she will come up with disadvantages in the future, she will never tell me that the machine was too big after all. She will perhaps just use it less than intended.

In this case a sharply priced machine with too many functions, which didn't completely meet the main requirement, turns out to be too expensive an investment. This makes me think about the situation in our field: are we responsible for giving advice when the initial requirements are not met, even though the user is more than happy? Are we responsible for providing information about this during the process? What tools and procedures do we have to uphold this responsibility?

Saturday, January 10, 2009

Magazine: Quality Matters

Today I received a notification via LinkedIn that a new magazine, Quality Matters, has been launched.
Quoted from the website: "Quality Matters is the first magazine in the area of Software Testing and Software Quality based in South East Europe".

Subscription is free and the magazine is delivered in a digital format. For more information, see the link mentioned above.


It is snowing (:) decisions

During the last week it snowed in the Netherlands. While most of the roads were covered with snow, I had to go to work. I already knew that this would be a task involving some risks, because the weather forecast had already warned me about the situation.

Normally there are several roads I can take to get from my home to my workplace. The decision to choose one of the options is based on the traffic on those roads, current traffic jams and the chance of traffic jams. These probabilities also depend on the moment in time and the possible delays each option will give.

In this situation I also had to take into account the chance that those roads had been cleared of snow. On those days I had to choose the main road, as the risk of incidents would be much smaller. Alternative roads were not an acceptable option in this case.

It is similar in system development. Under certain conditions it is better to use only the main functionality to get from the start of the system to the end. Alternative scenarios should be avoided. For testing this implies testing only those situations that can be defined as intended usage.

Before I stepped into my car and drove away, I had to make some preparations: first, make my windows snow-free. I also noticed that the street I live in had not been cleared. So there were some decisions to make before I started the journey to work:
1. Do I have to go to work at this moment and take some risks?
2. Am I able to drive?
3. Which risks are there in the area I can oversee?
4. What actions can I take?

Ad 1: Yes, a production release was scheduled and there were still some open tests that had to be performed before the production release could continue.
Ad 2: Yes, I have a driving license, though no winter tires on my car, and I could turn the car's engine on.
Ad 3: There was snow on my car; if it could be removed, I would have clear vision. There was snow in the street; if I was able to cross that area and reach a street with less or no snow, I would minimize the risk of accidents. There were cars parked in my street that I might hit due to the bad weather; if I hit one, the journey would be stopped or delayed, depending on the damage.
Ad 4: I had to turn on my car's engine, remove the snow from my car, drive safely to avoid the nearby objects, and reach the first go/no-go point (the snow-free street).

You can compare these actions with an intake test: am I able to start testing and reach the defined goal?
At this moment I made the important decision to start driving using the main roads, and I defined a go/no-go moment.

When I reached the point at which I had decided to continue, I knew I had to allow for delay. Whereas under normal conditions I would reach an average speed of 70 km/h, I noticed after covering one fifth of the main roads that my average speed was just 32 km/h.

With this information I introduced a new go/no-go moment. I asked myself: is it still possible to make the production release available even though I will be delayed? Will I have enough time left to finish testing, now that the delay has shrunk the testing window? Will the others whose support I need for testing be available, because they made the same decision to go to work?
Is it worth getting there at all, since I also have to go back at the end of the day? Can I speed up to increase my average speed? Is there other information available about traffic jams, weather conditions and road conditions?

I first started gathering information: I listened to the radio, checked the internet for traffic jams and called a colleague who would be heading in the same direction.

At that moment I accepted the risk that neither people nor information would be available. I chose not to speed up, as the chance of being involved in an accident would be larger due to the weather and road conditions in combination with the tool I had: a car without winter tires.

In terms of software testing, you can translate this into the possibility of reaching the goal within a reduced amount of time, clarifying the risks at that moment, and making those risks acceptable.

After I made that decision, I watched more carefully and kept to the defined travel schedule. I acted on every situation: if I felt my car slipping a bit, I decreased speed; if conditions allowed, I chose to drive in the left lane instead of the right lane. You have to know that driving in the right lane introduced additional risks, as most of the cars were driving in the left lane and the snow there was less deep than in the right lane. I accepted that risk, knowing that the chance of an accident is smaller when you keep paying attention and keep your speed under control, and also knowing that if something happened, help would be available fast enough for me still to reach my goal at work.

Almost at work, I came into an area where the roads had not been cleared yet. I suddenly realized that I had already crossed the point of no return. I started listening more carefully to the car, focused more on the road and the conditions, and lowered my speed to almost 10 km/h, knowing that this would reduce my average speed. I also paid more attention to the circumstances, as the weather forecast for the following days was similar and I would have to reproduce the steps I had taken to reach my goal.

Finally I was sitting behind my desk and realized I had performed a test process during those exciting hours. In this case the process did not cover all circumstances, as I only used one road to get to work; I didn't monitor the outcome of the other roads I could have taken. What I did was:
1. Identifying the goal: get from A to B;
2. Identifying the need: release to production;
3. Identifying the risks and defining the first go/no-go moment: continue based on intended usage;
4. Monitoring the chosen route and acting on changing circumstances: am I still acting within intended usage?;
5. Defining another go/no-go moment: am I able to reach my goal under the changed conditions?;
6. Crossing the point of no return and remembering the actions taken, to make the defined route reproducible.

The final outcome was a proven, defined route that allowed me to use my environment (the roads) to achieve my goal: getting from A to B. I think this can also be done in software testing. You don't have to cover everything; you may not even want to know exactly what coverage you have. If you are able to use the system according to its intended usage, and you accept the risk that you didn't test everything, you only have to make sure that users use the system according to that intended usage. Tell them what they are allowed to do and what they shouldn't use. And in case users don't listen, make sure there is a quick-reaction force available to recover from the damage. Is this a nice way of testing? It depends on the circumstances you are in. At the very least, a goal-driven testing approach might work for you.