Sunday, March 30, 2008

Tetris as a Test Management tool

I just had a crazy idea. What if we used the famous game Tetris as a test management tool instead of other planning tools like Microsoft Project?

Wikipedia has some information about this game: Tetris

On that page there are some keywords which have parallels in testing:
Gameplay: "The object of the game is to manipulate these tetrominoes, by moving each one sideways and rotating it by 90 degree units, with the aim of creating a horizontal line of blocks without gaps."

In testing we are also trying to plan every activity without leaving gaps, so testers are not sitting idle.

Variations: "It is difficult to place a standard on the game, as newer releases frequently progress it either to make the game better or to keep players interested."

As in testing: there are several standards, schools, methods and approaches.

Tetris variants: "A number of Tetris variants exist. Some feature alternate rules and pieces, and others have completely different gameplay."

As with the variants: some methods take similar approaches, while others approach the test process completely differently.

Is it possible to play forever?: "The conclusion reached was that a player is inevitably doomed to lose"

We are not allowed to test forever. There is always some kind of project management that speeds up time and makes us stop.

How can we use Tetris as a management tool?

  1. Pick the version of Tetris which suits your needs;
  2. Identify the types of blocks in your Tetris game;
  3. Name those types in testing terms, like: Plan & Control, Preparation, Specification, Execution, Regression testing, and so on;
  4. Define your rules, for example: every complete line is a successfully planned iteration; or, as another variant: every level is one iteration;
  5. Take screenshots after a predefined number of blocks; this can be your test process planning (a small sketch of this mapping follows below).
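To make the idea a bit more concrete, here is a playful, minimal sketch in Python. The mapping of block types to test phases and all names are my own hypothetical choices, not part of any Tetris version or test method:

```python
# Hypothetical mapping of tetromino types to test activities (step 3).
TEST_PHASES = {
    "I": "Plan & Control",
    "O": "Preparation",
    "T": "Specification",
    "L": "Execution",
    "S": "Regression testing",
}

completed_lines = 0  # rule from step 4: one full line = one planned iteration

def place_block(block_type: str) -> None:
    """Placing a block stands for planning one test activity."""
    print(f"Planned activity: {TEST_PHASES[block_type]}")

def complete_line() -> int:
    """A completed line without gaps counts as one successful iteration."""
    global completed_lines
    completed_lines += 1
    return completed_lines

place_block("I")
place_block("L")
print(f"Iterations planned so far: {complete_line()}")
```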

I know there are a lot of reasons not to use it. But what could be the reasons to use it? And what are the risks if we do?

The rhythm of software testing

Mostly, the moment when testing should stop is determined by whether the exit criteria are met or the deadline is reached.

And sometimes you come into a situation where you want to advise continuing testing, but there is too little information to support it. It is just a feeling of discontent, and you don't have the time to prove it. Perhaps you have the idea that it doesn't sound right to continue towards production. That sound can be explained by the rhythm you have been working in over the last days.

If the situation is such that new, complex functionality is delivered, you only find minor issues, and almost all test cases are executed, you tend to say that the acceptance criteria are met. At least project management is happy, as they only see that those criteria are met, and they trust you did the right thing because you did your job.

But did it sound right? Imagine that a testing process is like a song. You have several verses and refrains. The verses are the delivery of new functionality, and the refrains compare to regression testing. What happens if, in the last iteration, new functionality is delivered that was initially planned for an earlier delivery? You have less time to play the song in the order you intended. If you look closely, you might notice that you are playing a verse and a refrain at the same time.

Just pick the song you love best and figure out how it would sound if you sang a verse and a refrain at the same time. I don't think it would make you happy.
You would even get distracted and miss some of your own notes. And perhaps that was happening during testing as well: you didn't find any serious issues.

Don't we say that making mistakes is human nature? Don't we sometimes distrust the developer to give us bug-free code?

Perhaps, although the number of serious issues didn't rise, we can convince project management that, based on the rhythm of the testing process, we should get some time to investigate what the impact of this sound is on the environment. Let us get some time to make it sound right, and to check whether we did the right things. Check whether we really tested as we intended.

In certain situations it might become urgent to review our test process and not rely only on measurable figures like the number of open issues and executed test cases. You have to check whether the advice you tend to give is reliable.

Friday, March 28, 2008

Magazine: Testing Experience

Yesterday I ran into the announcement of a new magazine for testing professionals. As they claim: "Testing Experience is the challenge of having a high quality magazine for professional testers made by and issued for people involved in testing. This magazine is free of charge and finances itself with adverts."

For more information see: Testing Experience

Sunday, March 23, 2008

Open System Thinking and Software Testing (6)

This is a continuation of the series in the category Open System Thinking and Software Testing. For the previous post, check out: Open System Thinking and Software Testing (5)

For defining the items and investigating the relations between those items, I'm still working on the micro level: the test project. (See Open System Thinking and Software Testing (1))

I ended the previous post with a possible task for the test manager, test coordinator, or perhaps the Scrum master: to perform this exercise. Re-reading that post, I can imagine that you start with the exercise and then stop doing it. I think this should be an ongoing process. It should be a new task for them to monitor, on a daily basis, what is happening in the project related to testing.

To start a process like this we need, besides a model, also some tools and a phase description. This should be embedded in the test process.

Assuming that a test process has the following phases (borrowed from TMap®):

  1. Plan & Control phase
  2. Setting up and maintaining infrastructure phase
  3. Preparation phase
  4. Specification phase
  5. Execution phase
  6. Completion phase
Assume this method consists of the following steps:
  1. Identify, per category, the values related to the test project
  2. Fill in the matrix
  3. Define, per value, the relations to other values
  4. Define the impact of those relations on each other
  5. Define the possible risk or dependency
I think you can then use the following tools:

You start with a quick scan during the Plan & Control phase. The focus lies mainly on the categories Input, Output, Environment, and Behavior and Processes. When the quick scan is performed, you do an assessment to fill in the remaining categories: Goals, Technology, Culture and Structure. The results are embedded in the test plan as risks, boundary conditions and assumptions. Also start creating the earlier mentioned Risk-Dependency-Backlog.

Keep in mind that it should relate to the test project itself. For the process (meso level) and quality control (macro level) a similar process has to be defined. I think it is wise not to combine these levels, as the overview will be hard to gain and manage.

When starting the next phases in the test process, a similar exercise should be performed on a daily basis. Here you can use team meetings, walking around, and stand-up meetings. The results should be translated into the backlog by performing steps 2 through 5. If changes are identified, you have to share them with your team; this can be done in the stand-up meeting, in daily/weekly progress reports, or by email.
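As an illustration of what such a Risk-Dependency-Backlog could look like when steps 1 through 5 are applied, here is a minimal sketch. The structure and all field names are hypothetical choices of mine, not prescribed by TMap or any other method:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogEntry:
    category: str                      # step 1: e.g. "Input", "Culture"
    value: str                         # step 2: the value put in the matrix
    related_values: list[str] = field(default_factory=list)  # step 3
    impact: str = ""                   # step 4: impact of those relations
    risk_or_dependency: str = ""       # step 5

# One hypothetical entry, updated daily after stand-ups or team meetings.
backlog = [
    BacklogEntry(
        category="Culture",
        value="Less experienced testers",
        related_values=["Formal test method"],
        impact="Method tasks may be performed differently than intended",
        risk_or_dependency="Risk: weakened usage of the formal test method",
    )
]
```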

It is important that you approach the project as an open system, where all kinds of things are happening, and that you are able to identify those things and judge their impact on the process. And most important: share it with your team. It can help you make them understand why you made certain decisions.

The next article in this series can be found at: Open System Thinking and Software Testing (7)

Start managing your workarounds!

In most projects the delivered software doesn't always meet the requirements that make it fit into the business processes. Often, due to the available time, the cost of implementing the solution, or the risk of implementing it, the "new" requirement is "created" by a workaround.

Wikipedia says the following about a workaround: "workaround is a bypass of a recognized problem in a system. A workaround is typically a temporary fix that implies that a genuine solution to the problem is needed. Frequently workarounds are as creative as true solutions, involving outside the box thinking in their creation"

I think the keyword in this definition is temporary.

Due to the temporary nature of workarounds, they are often written down in separate documents or as addenda to manuals. They are just not embedded in the software, the functional designs, and the business process documentation.

There are processes for how to handle issues, change requests, manuals, and courses, and for how to define and manage requirements.

Often a workaround stays in the mind of the project or its creator. And after a period, the temporary solution becomes the only solution. The business has forgotten that the workaround exists and uses it as a standard procedure in their business process. The initial process has become less optimal. The main risk here is that, when implementing new systems or functionality, they translate the workaround into those requirements.

I think the pitfall here is the synergy effect of workarounds. If a workaround is not identified as such in the current process, it becomes a requirement for the new functionality. If this cannot be embedded correctly, a workaround for that workaround will be created. The result is that the match between the system and the actual business process becomes less pure.

Another situation is the effect of a workaround on the data. Sometimes, when requirements are not built as intended, a workaround is defined. The chance exists that data is not stored in the database as intended. This might result in only partially suitable data. When new functionality is added, it might result in a new data model and perhaps in transforming the existing data. But is the data model now based on the workaround, or is it defined based on the desired structure?

My suggestion is to maintain the relationships of workarounds to requirements, code, functionality, issues, manuals and training materials.

Identification of workarounds
If we want to maintain the workarounds, we need to understand the process around them. I think the following phases can be identified:

Workarounds are identified during:

  • Business Process Definition phase
  • Requirement phase
  • Application selection phase
  • Development phase
  • Testing phase
  • Implementation phase
  • Production phase
Business Process Definition phase
During the definition phase of business processes, it is important to be able to identify where workarounds are already embedded in the current process activities. Before starting to define requirements, it might be important to first think about how the initial process can be improved, by measuring and defining current and new activities. This might lead to a situation where workarounds have to be removed. This should result in new requirements, to avoid implementation of the existing workarounds.

Requirement phase
During the requirement phase, decisions can be made to stop investigating new requirements in a certain area. This might result in some workarounds as well.

Application selection phase
Systems are not always built from scratch; often existing applications are bought. One of the steps of the selection phase is matching the requirements against those systems. And often a system doesn't fit those requirements. The mismatches are then translated into new functionality in those systems, or workarounds are accepted.

Development phase
Sometimes during development we notice that functionality intended to be built is more complex to create than initially thought. The cost and time of building certain functionality become a risk for the project. Sometimes decisions are made in this phase to drop the requirement and accept a workaround.

Testing phase
During the testing phase, issues are found which will not be solved immediately. Workarounds are defined for those situations.

Implementation phase
In the implementation phase, manuals are created and training materials are developed. Here the actual workaround is made final and visible.

Production Phase
When users start working with the application, they initially use it according to the manuals and training materials. After a while they find their own way in the system and define their own "shortcuts". This way of working is how they work around the intended functionality. Sometimes they also develop their own workarounds for workarounds.

Static vs Dynamic workarounds
In my opinion there are static workarounds and dynamic workarounds. The static workarounds are primarily defined during the first phases and are a product of the project. The dynamic workarounds are defined in the production phase; if they are written down, they become a product of the business.

Registration of workarounds
In all phases there are ways of registering workarounds, but as far as I know there is no defined notation or location for doing so.
Perhaps it is done this way:

Business Process Definition phase: a separate paragraph in the process description;
Requirement phase: marking a requirement as in scope or out of scope;
Application selection phase: marking standard functionality as customizable;
Development phase: embedding comments in code and in functional and technical designs;
Testing phase: in the issue tracking tool;
Implementation phase: in manuals and course materials;
Production phase: personal emails, memos, additional workaround documents.

Combining workaround registrations
The main challenge would be combining the workaround registrations. I think it is necessary to get an overview, so that when a change is involved somewhere, all teams know what to do. And, more importantly, know that they have some additional work to do. To combine workaround registrations you need some formal registration of them in their separate documents. The basis could be a unique identifier and a short description, registered in one document. You might call it the workaround backlog.

Managing Workarounds
Having one document where the relations to workarounds are recorded is not enough. The life cycle of each workaround should also be defined. As workarounds are temporary solutions, the end date should be defined and monitored. If end dates are approaching, this can help in defining the scope of the next release.
I think you need one owner for this backlog. He becomes responsible for at least providing information about (a minimal sketch follows the list):
  • Where workarounds are defined;
  • If workarounds are mentioned in all necessary documents;
  • What they are;
  • How they can be identified;
  • What the planned end time is;
  • Impact on processes;
  • Impact on data;
  • Impact on functionality;
  • ...
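To make this a bit more tangible, here is a minimal sketch of what an entry in such a workaround backlog could look like. All field names and the helper function are hypothetical illustrations, not an existing tool:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Workaround:
    identifier: str                         # unique id used across all documents
    description: str                        # what the workaround is
    registered_in: list[str] = field(default_factory=list)  # where it is defined
    how_to_identify: str = ""               # how it can be recognized
    planned_end_date: Optional[date] = None  # temporary, so this is monitored
    impact_on_processes: str = ""
    impact_on_data: str = ""
    impact_on_functionality: str = ""

def expiring(backlog: list[Workaround], deadline: date) -> list[Workaround]:
    """Workarounds whose end date approaches: candidate scope for the next release."""
    return [w for w in backlog if w.planned_end_date and w.planned_end_date <= deadline]
```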
Testing workarounds
As workarounds can have impact on processes, functionality and data, they have to be tested. I think they can be tested in the following stages:
Unit test: check that newly built functionality does not interfere with the defined workarounds;
System (integration) test: check and measure how workarounds interfere with standard and new functionality, and their impact on data;
User acceptance test: check that the workaround as mentioned in the manual still fits the business processes, and check that the relationships between workarounds are logged correctly.

Conclusion
Workarounds are of a temporary kind, yet the chance that one becomes a final process is quite large. We should be aware of the existence of workarounds and monitor them during all phases, to avoid a negative synergy effect of workarounds on business processes and systems. To manage them, an owner should be assigned, and they also have to be tested during the development phases.

Of course there are more ways of registering, managing and testing workarounds. Still, I think there is something to gain by approaching workarounds more formally and adopting them in the product life cycle.

Sunday, March 16, 2008

Flying in a hot air balloon

There are times when I'm talking about my profession and the other person doesn't understand why software testing is so exciting. I think they don't understand me because, due to my enthusiasm, I go too deep into the details. This made me wonder what I could compare software testing to.

So I try to go up in height instead of down into depth.

I think software testing can be explained as flying in a hot air balloon. The project you are in is similar to the balloon. Balloons also come in different sizes and shapes.

Other characteristics of flying in a balloon are the equipment you have. Is the basket large enough? Is it an open or closed basket? Are there enough sandbags on board? Is there an on-board heater? Do we need navigation tools?

Besides equipment there are other factors which influence the flight. Is there someone on board with experience flying a balloon? Is it planned to be a long journey or a short trip? Are you flying together with other balloons, or are you alone in the sky? Is there enough wind? Is it cloudy, or is there a clear sight? Did you get on board when the balloon was making an intermediate landing? Did you bring food and drinks when you started the trip?

In software testing you can have similar questions.
Imagine that:

  • sky = organization
  • balloon = project
  • balloonist = project manager
  • basket = project size
  • heater and sandbags = tools to adjust direction
  • flight length = project schedule
  • single vs combined flight = project environment
  • sight = project goals
  • wind = resources like time and budget
  • food and drinks = personal skills and experience

Perhaps testing is like flying a hot air balloon: it can be exciting and turbulent, you do all you can to reach your goal, and every time it is different.

In testing you are part of a small team whose goal is to fly from one location to a location 100 kilometers to the north. The sky is blue and the wind is also blowing north. The basket is large enough for us. The balloonist is experienced in flying under these conditions. And we fly together with other balloons.

During the flight the wind fades, and some balloons lose direction. Our balloon slowly goes down. And as we know, more bugs live on the ground or near it. First we turn on the heater. Then we throw out some sandbags. And we continue flying in the right direction. Then the sight becomes unclear and our navigation tools stop working.

In testing you also define a plan for where you want to go and how you want to get there. During the project you have to watch how the environment changes. To do this you measure the system and the project environment. If bugs are found, we suggest countermeasures. And sometimes, to arrive on time at the location you intended, you drop some functionality. And when time is up, you give advice on whether the system is good enough to meet the criteria we defined as our goal.

Saturday, March 15, 2008

What does Einstein's gravity theory have to do with Software Testing?

Last week I saw a program on TV in which Albert Einstein's theory of gravity was explained. It triggered the following idea, combining gravity with software testing, knowing that there are still some black holes in my thoughts.

In that documentary they tried to prove Albert Einstein's theory of gravity.
Wikipedia mentions the following about gravity: "Gravitation is a natural phenomenon by which all objects with mass attract each other, and is one of the fundamental forces of physics. In everyday life, gravitation is most commonly thought of as the agency that gives objects weight"

What interested me is that they proved the existence of objects in space based on how light was bent, so that objects seemed to be in another location than visually observed. This is more or less explained on Wikipedia under Albert Einstein's theory of general relativity.

One of the examples they showed in this program to demonstrate gravity and its impact used a field on which some mass was located, while balls were dropped onto that field. The larger the mass, the greater the bending.

A similar picture can be found at Black Holes



That behavior gave me the idea of gravity in software testing. For these thoughts I make the following assumptions:

  1. The System, Object or Function Under Test is the field
  2. The mass is based on the potential issues we find
  3. The ball is the test we execute
  4. The target is the expected result


If we throw a ball onto the field and there is some mass close enough to deflect the path of our ball, it might not reach the target we predicted. This is similar to the example I saw on TV, and to the way gravity is explained using light.


In testing terms this could be translated to: if we perform a test on a system and the expected result differs from the actual result, then we claim that there is an issue.

But what would we call it when we see behavior we should not see? As in the figure above: an object located behind the sun ought not to be visible to us, yet due to the gravity of the sun we do see it, as it seems to be in a "straight line" of our view.

In software you sometimes see behavior which should not be visible to the user, like seeing functions that will be disabled by authorizations, because the testing is performed with the so-called "grant all" user profile. It is in our nature, when we find issues, to continue looking in that area. Finding more issues in that area increases the "mass". Based on the "mass" we define risks, and often development effort is demanded for these issues.

The question would then be: "Was it necessary to perform this additional development effort?" In general I would say yes, as the quality of the system was improved. But test projects are always under a certain time pressure. This might lead to a situation where functions which will currently not be used are improved. That time and money could also be used to transport the system to production earlier, or to build new functionality.

Perhaps being aware of the gravity of issues within the system can help us define the proper risks and make the right decisions. Which makes me wonder what happens if the gravity of one issue cancels out the gravity of another. The system seems to behave correctly, as we don't see objects we did not intend to see.

This makes me think that there is perhaps also gravity between functions/objects within the system.
If we want to be aware of the gravity of issues, we need to be able to measure it. To measure the gravity of issues, we need to identify the gravity of the functions/objects within the system.

I think one approach can be to draw up a system landscape based on functions, and to define the impact of, for instance, authorizations on the usage of those functions. If we find an issue, we pinpoint it in the system landscape. Then we check whether that issue would even be visible with the given authorization profile. The next step could be identifying the influence of other functions on the function where the issue was identified. If issues are also found in those functions, they could have a certain impact on the behavior which resulted in the issue. To measure this, we ought to be able to test the function stand-alone.
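As a hedged sketch of what such a gravity analysis could look like, assume we model the landscape as a simple graph and the "mass" of a function as its open issue count. The landscape, the numbers, and the function names are all invented for illustration:

```python
# Hypothetical system landscape: function -> functions that influence it.
landscape = {
    "invoicing": ["authorization", "pricing"],
    "pricing": ["currency conversion"],
}

# Hypothetical "mass": number of open issues found per function.
issue_mass = {"authorization": 3, "pricing": 1, "currency conversion": 0}

def gravity(function: str) -> int:
    """The pull on a function: the total issue mass of its neighbours."""
    return sum(issue_mass.get(f, 0) for f in landscape.get(function, []))

# A high value suggests an issue seen here may be caused elsewhere,
# so the function should first be tested stand-alone.
print(gravity("invoicing"))  # -> 4
```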

To me this still seems like the standard way of dealing with issues: checking whether the function itself behaves as shown in the total picture. If it doesn't reproduce the result, the issue would be rejected. Some would call investigating such an issue a waste of time, and would even go further and say that the test was not legitimate.

Before issues are solved, an impact analysis is performed to determine whether an issue needs to be solved immediately. Perhaps, before we start an impact analysis, we should perform some kind of gravity analysis. This can be a new activity during testing. Based on the gravity, we might also adapt our test strategy and test schedule.

Saturday, March 8, 2008

If Software Testing was not a profession, what would we call it?

Imagine that there are people:
  • who claim that software testing is not a profession;
  • who think software testing is just an activity within another profession;
  • who think that anyone can test software.

If software testing was not a profession, what would we call it then?
Is it an activity? At least Wikipedia leads to a list, Activity, which gives some definitions; it is a page of the kind they call: "This disambiguation page lists articles associated with the same title. If an internal link led you here, you may wish to change the link to point directly to the intended article."

One of the entries is Task. So, is it a task? Wikipedia defines a task as "part of a set of actions which accomplish a job, problem or assignment."

Of course, using this definition software testing can be part of another profession. For example: when you are a developer, testing should be one of your tasks.

I think that when a set of actions becomes too complex, it can be defined as a specialism. If a specialism results in an isolated set of actions, it becomes a profession, as the initial tasks are not performed anymore. The focus has shifted from programming to testing.

According to Wikipedia Profession can be defined as: "A profession is an occupation, vocation or career where specialized knowledge of a subject, field, or science is applied"

I think software testing fits this definition, as you can specialize yourself in testing, and also in different areas of software testing. Which makes me wonder: why is development taught extensively at schools, while software testing is just a subtopic of development? Do others see my profession differently than I do? Therefore I wonder: what else is software testing? Or should we deal with this profession as we deal with other professions? I think we should.

Nowadays you see Agile and Scrum being used as development methods. They are based on working in teams, with team responsibilities. This gives the tester the opportunity to program as well. Developing becomes a task within the profession of software testing and is not a profession anymore. Testers have to learn how to program during their education.

Now you see that the circle is round. A set of actions can be called testing. Testing can be seen as a task. A specialized set of tasks can result in a specialism, which can be identified as a profession. The content of a profession can be extended with new activities. Extending the content of a profession might lead to a new profession, or fit into another profession.

Perhaps the answer to the initial question, "If Software Testing was not a profession, what would we call it?", depends on your starting point and your goal. If you start as a professional tester, you can call it a profession. If you start as a developer, it is a task. Observe how your profession is growing. Your profession might change over the years.

Friday, March 7, 2008

The border of the mind of testing

If you are a frequent reader of my blog, you already know that a lot of things slip through my mind and make me relate them to testing, and that I write those thoughts down on this blog.

In this case it is not just a thought, but rather a question: what is more important, the mental state of the team or following the rules you defined to measure quality?

As in most projects, a procedure is defined for how to handle issues found during testing. If requirements are written down unambiguously, if test cases are defined according to those requirements, and if issues are logged according to the defined rules, a process like issue registration can go quite smoothly. However, this is not always the case. Issues are rejected by developers. Testers feel they are misunderstood. An issue tracking tool is used as a communication tool. And people get frustrated, which leads to unproductivity.

As issue tracking is also used to measure the quality of the system under test: are you allowed to disregard the defined procedure and decide that the tester and developer should first talk face to face, to reach a mutual understanding of the issue and to decide together which way to go? Are you accepting the risk that in this case the developer convinces the tester that it is not an issue, and it is therefore not logged? By allowing this to happen, what is then the value of a test case, and what is the value of the figures you get out of an issue tracking tool?

Is this a border we are allowed to cross? And what other borders are there?

As said earlier, this post is not one for writing down my thoughts/ideas. It is more about raising some questions, hoping you can help me with them.

Sunday, March 2, 2008

What makes you a better tester?

Just some thoughts that slipped through my mind. My initial thought was: "What happens if we don't test?" This made me think about the question: "Why am I a software tester?" Which led me to the next question: "Why am I good at testing software?" And in general: "What makes you a better tester?"

The last question is the kind we keep asking ourselves. At least I do. And that question could be similar to the questions that parents and future parents start asking themselves: what makes you a better parent?

As a parent I try to do good and to be good. But when is it enough? And how can I improve my parenthood? Of course there are a lot of books written on this topic. So there are for software testing. Reading those books might lead me in the direction of becoming a better parent. Still, there are no guarantees.

This reminds me of two words: nature and nurture. According to Wikipedia (Nature versus nurture), nature is an "individual's innate qualities" and nurture is "personal experiences". Some behavior of parenthood is nature: you do it without thinking, and the results appear within the acceptable norms of the community. Other things you do, or leave, because of your experience.

I think it is hard to answer the question of how to become a better parent. Some of it you have in you; other parts are based on experience and on the ability to adapt to the situation, by observing and listening, based on the effect of the boundaries and values you defined.

If this is true, then answering the question "Why am I good at parenthood?" is a bit easier. Though the answer cannot be given yet, as the future will tell, based on experience. How you deal with those experiences is perhaps more important. I think I am able to adapt my boundaries and values to the situation, judging by the behavior I notice when observing and listening to my children and my wife, and by how the environment reacts to the behavior of my children and myself. I adapt because I care.

Answering the question "Why am I a parent?" could be done by simply saying: I chose it. I think it is also partly nature.

That leaves us the question "What if we don't care?". I think that if we don't care, the quality of life becomes less, as an individual and also as a community. Assuming that careless children will spread their behavior like a butterfly effect over the communities, this might lead to unacceptable situations.

You see that this question started simply with "What if we don't test?", but by now it has a different context. If we don't test, quality might lead to unacceptable situations. This implies that some situations are also acceptable.

Perhaps the question "What makes you a better tester?" can be answered with: "It depends on your nature and nurture: how you observe and listen, your ability to adapt, and how much you care."

Open System Thinking and Software Testing (5)

This is a continuation of the series in the category Open System Thinking and Software Testing. For the previous post, check out: Open System Thinking and Software Testing (4)

For defining the items and investigating the relations between those items, I'm still working on the micro level: the test project. (See Open System Thinking and Software Testing (1))

Once items and relationships are defined, the next step would be defining how those items impact the test project, and seeing whether they empower or weaken other items. In the figure below I took some items to work with as an example.



  1. Using jargon supports the usage of a formal test method, by communicating in the same language.
  2. Less experienced testers weaken the usage of a formal test method, as communication might contain noise when test-related terms are used. Also, tasks may be performed differently than the method intends.
  3. Because the PM is located at another location, it is harder for them to monitor whether everything in the process is done as intended.


Here you see that items 2 and 3 have a relationship. Because of the lesser knowledge, it is important for the PM to monitor the handling of test-related tasks. If this is not done, the implementation of a formal test method might be weakened. In a test plan this can be mentioned as a risk.

As item 1 supports the usage of the test method, it might help to control the impact of items 2 and 3. In this case it can be translated into training the less experienced testers up front in the fundamentals of testing. This can be called a boundary condition within the test plan.

As a dependency, the location of the PM can be mentioned. They have to be made responsible for communication and for monitoring the usage of the test method.

Normally in projects I saw that risks, boundary conditions and dependencies are defined based on common sense. This approach might help to visualize that common-sense process.
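Here is a minimal sketch of how the three items above could be written down as empowering or weakening relations; the notation, the weights and the names are my own invention, purely to illustrate the idea:

```python
# Each relation: (source item, target item, effect), where +1 empowers
# the target and -1 weakens it. Values are illustrative only.
relations = [
    ("Jargon", "Formal test method", +1),                    # item 1
    ("Less experienced testers", "Formal test method", -1),  # item 2
    ("PM at another location", "Process monitoring", -1),    # item 3
]

def net_effect(target: str) -> int:
    """Rough indication of whether an item is empowered or weakened overall."""
    return sum(effect for _, t, effect in relations if t == target)

# -> 0: the support of item 1 balances the risk of item 2,
# which is exactly the reasoning behind the boundary condition above.
print(net_effect("Formal test method"))
```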

I can imagine that in some test processes the test manager or test coordinator performs an exercise like this. Perhaps in Scrum this exercise can be done by the Scrum master. He can use this and call it his Risk-Dependency-Backlog.

The next article in this series can be found at: Open System Thinking and Software Testing (6)

Saturday, March 1, 2008

Does the testing school fit the organization?

Can approaches support organizations?
Here is another post related to testing schools. In my previous posts I started to explore whether I belong to a school: Schools of testing, can you decide?

And I continued thinking about whether there is some dependency on organizations: Testing Schools and Organizational Schools

As Paul Gerrard shows, there is a lively discussion going on about this topic: Clients, Contexts and Schools

Today, on this rainy and windy day, I continued thinking about it. Though, I still have the feeling that the quest for answers is not over yet. (Probably there are no good answers, only solutions that might work.)

Over the years I saw different people using the methods/approaches they knew best, sometimes forcing organizations to adapt their processes so that the method might work. I also saw people adapting their method/approach to the processes of the organization.

Both approaches might lead to the solution the organization was waiting for. And both approaches create new risks for the organization. Some people say that risks can be accepted based on the chance of failure in relation to the cost of that failure. Fewer people ask why those risks should be accepted. To answer that question, I think it is important to know why organizations initially choose a certain approach.

I could try to search for answers on why to choose a certain school, and try to start discussions about it. Instead, I try to think outside the box and find out how both ways can be good, how schools can help, and how solutions can be provided to organizations.

How would my "box" look?
To create a picture of the "box" you might ask the following questions:

  1. Does the organization have goals to improve the testing process and increase the quality of the software?
  2. Does the organization have the means to get to the sources which support these goals?
  3. What is the life cycle of these goals?
  4. Can the tester, based on his/her skills and experience, commit to these goals?
  5. Is the tester able to adapt their approach/method to these goals on a continuous basis?
  6. Are the tester and the organization able to communicate on an equal basis?
  7. What is the life cycle of the tester in the project?

Imagine how that box would look if the answers were like this.
Example:

  1. The organization is not able to change other processes, like development, on short notice. They also use a test method whose entry criteria dictate that process.
  2. The organization has the means, like time and money, though the market does not provide the right people in time.
  3. Improving the test process is a long-term goal, extending beyond the project life cycle. Improving the development process for documentation is a short-term goal, but should extend over other projects. Delivering high-quality software is explicitly needed for this project.
  4. The tester they found is able to meet these goals with restrictions. This means the tester doesn't have experience with defining strategies based on the several schools, though he/she is well skilled in one test method, like TMap.
  5. The tester is not able to change his approach, as he is well trained in one method.
  6. Based on the organizational hierarchy, the tester didn't get full commitment.
  7. As the market for experienced testers is small, the tester will stay with the project till the end.

Now close your eyes and investigate the feeling these answers give you. Then open your eyes and try to answer the same questions under the best circumstances. Close your eyes again, and you might notice a different feeling.


Creating borders
Feelings can lead you in a good direction. Though, it is hard to translate them into risks. To identify risks you need borders. I always try to find the borders of a method. This helps me see when a method can be used successfully and when a potential risk might arise. Therefore it is good to have those testing schools defined. They help to define the borders of your picture, and give you information when you are crossing a border. If you cross the border, this is a sign that a school-related approach doesn't fit the organization, and reaching the goals might come into danger.

As organizations are dynamic, testers should be too. I think there is an organization for every school; only, a school might not fit the organization. I don't think it makes you a lesser tester if you belong to one school, or a better tester if you know about all the schools. But having knowledge about them might help you give the organization the usable solution they need.

Conclusion
A tester should be aware of their skills and be honest towards the organization about them. It is not wrong to belong to one school. It might help to know about the "Schools of Testing" and their approaches and visions. Be aware that good testing is not a purpose in itself; a good solution should be. And you should help the organization decide that it is, and was, a good solution.