Somehow things are not going that well in the global economy. Some people blame the financial institutions. Some groups point to shareholders who only want to make a profit without accepting the risks. I can imagine there is also a minority who think this is the result of globalization. And I'm afraid there are also people who keep asking questions like: Is something wrong? What is happening? What are you afraid of? Did I miss something?
Of course I cannot deny that something is happening. It just happens all the time, everywhere. It also happens in our projects.
We too have stakeholders who ask for profit and don't accept risks. We too have different groups playing different tunes. We too rely on our issue registration tool as if it were our personal Wall Street. And if something goes wrong, we too spend more time finding the person to blame instead of working together towards a solution.
At the global level, we see financial institutions combining forces to stop the trend from getting worse. Will this work out? I'm not sure. At least some decisions are being made.
And I like it when decisions are made. Only then can you judge whether they were valid or invalid. In both cases you can continue working with those decisions. We have to keep in mind that behind every decision a risk is hiding. We should either try to control that risk or accept it.
Still, there is one risk I think we should not accept: the risk of strong voices setting the tone of how a situation feels.
I might be wrong when I say there are people claiming it can only go downhill from this point on. If those voices are too loud, it will go wrong. Or what about the voices claiming it can only get better? If we ignore the signals, it might get even worse than it already is.
The same can happen in projects. Things can go well, or they can go the other way around. If there is a euphoric tone, I think we need people who question that feeling. They should not be ignored and blamed for their dissenting sound; they should be heard. Otherwise our goal should be a recession in our test process, to clear the road for improvements as the signals become more obvious.
On the other side, negative signals should be weighed against the benefits of a project.
I can imagine that if we keep relying on metrics which support those feelings, and ignore judgments based on experience, we might be ignoring risks. Ignoring these risks keeps you from accepting or declining them.
Initially this sounds good, as a decision is made, and making decisions moves the project somewhere. But if this is how it is done, we should be aware that we are accepting the unknown unknowns. As long as we are aware of this, we can adapt the context of our thinking about the project, and improve next time by monitoring which unknowns become known and what their impact is.
I suggest we beware of noise in projects, avoid a recession, make decisions even if we don't know which unknown risks there are, stop blaming people for not doing their job right, and use that energy to move the project in a controlled direction.
Sunday, November 30, 2008
Recession in projects
Posted by Jeroen Rosink at 8:09 AM 0 comments
Labels: Metaphor
Saturday, November 29, 2008
Open System Thinking and Software Testing (8)
This is a continuation of the postings in the category Open System Thinking and Software Testing. For the previous post, see: Open System Thinking and Software Testing (7).
For defining the items and investigating their relations to each other, I'm still working on the Micro Level: Test Project. (See Open System Thinking and Software Testing (1).)
As mentioned in the previous posting the same steps can be followed.
1. Define the general meanings of the categories on meso level;
2. Identify the items per category: Goals, Technology, Culture and Structure;
3. Fill in the quadrants;
4. Define how the items are weakening or supporting each other;
5. Defining sentences that describe how this empowering/weakening is done;
6. Defining possible solutions for how to monitor, or defining new improvement suggestions.
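Steps 2 to 5 can be sketched as a small data structure. The following Python sketch is purely illustrative: the item names and weights are invented examples of mine, not part of the Open System Thinking model itself.

```python
# Illustrative sketch of steps 2-5: items per category, plus weighted
# relations describing how items support (+) or weaken (-) each other.
# All names and weights are made-up examples.

categories = {
    "Goals": ["deliver on time", "meet quality targets"],
    "Technology": ["test automation tool", "issue registration tool"],
    "Culture": ["blame culture", "open communication"],
    "Structure": ["dedicated test team", "formal reporting lines"],
}

# (source item, target item, weight): positive = supporting, negative = weakening
relations = [
    ("open communication", "meet quality targets", +2),
    ("blame culture", "open communication", -3),
    ("test automation tool", "deliver on time", +1),
]

def describe(relations):
    """Step 5: turn each weighted relation into a plain sentence."""
    sentences = []
    for source, target, weight in relations:
        verb = "supports" if weight > 0 else "weakens"
        sentences.append(f"'{source}' {verb} '{target}' (strength {abs(weight)})")
    return sentences

for line in describe(relations):
    print(line)
```

The generated sentences are then the input for step 6: each "weakens" line is a candidate improvement suggestion, each "supports" line something to monitor.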
To define the focus of the general meanings of the categories, you have to consider the basis of an organization: Vision, Mission, Strategy and Objectives.
Based on the vision of one or more persons, a mission is defined. That mission is the goal of the organization. To comply with this mission, a strategy is developed. To meet that strategy, objectives are defined. If one of these objectives is not yet met, it is often mentioned in a business case, which becomes the basis of a project.
I think we need to make a decision here. To use this model, you can take two approaches.
1. Define the basic context (general meanings) and, after performing the defined steps, create the heuristics you want to act on;
2. Start by defining heuristics and use them as the basis to perform the steps.
For more information related to heuristics, I refer to the documentation and weblogs of James Bach, Michael Bolton, Cem Kaner and Bret Pettichord.
Documentation:
Bret Pettichord: Schools of Software Testing
The Seven Basic Principles of the Context-Driven School
James Bach, Rapid Software Testing
James Bach, Rapid Software Testing Appendices
Posted by Jeroen Rosink at 10:08 AM 0 comments
Labels: Ideas, Open System Thinking and Testing
What questions to ask when starting testing?
This morning I ran into A New Testing Challenge related to software testing on Matt Heusser's weblog, Creative Chaos.
In his post he challenges testers to solve a testing problem based on types of products and some existing laws. Of course this is a nice challenge to work on on a Saturday morning when the kids are asleep. But one of the best parts of the posting is the comment Michael Bolton gave: instead of solving the problem, he asked whether he was allowed to ask some questions.
To me, those questions can be used every time before starting testing.
The questions Michael Bolton asked are:
0) Is it okay if I ask you some questions? Assuming Yes,
1) Do you want a quick, deep, or a practical answer to the question, "How would you test this?"
2) Has anyone else tested this?
3) What's my timeframe?
4) Is it Sunday? When will it next be Sunday?
5) What are, in your estimation, the most serious risks?
6) What resources are available to me?
7) Who else can I talk to about this? Clerks? Customers?
8) If I see violations of laws other than the ones you've set out, are you interested?
9) What are my references for correct prices, categories, sale items, and so on?
10) What do you want my report to look like?
What I liked about these questions is how they reflect the test process. With 10 questions, Michael Bolton defines the boundaries: how much effort and depth should be taken into consideration, whether there is already experience with the product, what the customer values in terms of avoiding certain risks, and how the customer expects the tester to act.
In my opinion, one of the strengths of these questions is that every answer from the customer can trigger you to ask another question.
I can imagine that, based on these 10 questions and some supporting questions, you are able to get enough information within an hour.
Posted by Jeroen Rosink at 8:54 AM 2 comments
Labels: Testing in General
Monday, November 17, 2008
Roles in Software Testing
Perhaps I'm the only one wondering why we make testing so complex. Whenever a new approach is defined, new roles are introduced. When new books are written, new roles are explained. Even when people describe themselves on a networking site like LinkedIn, they express themselves in terms of different roles.
When I started in the software testing business I was aware of the following roles:
- Test Analyst
- Test coordinator
- Test manager
- Test tool specialist
Over the years I also noticed that TestFrame had roles like:
- Navigator
- Analyst
Other roles I was aware of are:
- Test consultant
- Test Professional
- Test tool consultant
- Test tool developer
- Test team leader
In TMap Next they introduced:
- Test project administrator
- Test Infrastructure coordinator
In a presentation about Model Based Testing I recently attended, they introduced the role:
- Test constructor
And there are roles all over the place like:
- Agile Tester
- Security Tester
- Usability Tester
- Test Oracle
- Software Tester
- Requirement Tester
- Test Architect
All these roles leave me a bit confused. What should I best call myself?
It sounds a bit silly when I introduce myself as:
Jeroen Rosink: Test coordinator, agile tester, test architect, test consultant, test analyst, Software tester, Test Idiot.
I think other people would have lost me by then. Perhaps we should go back to basics.
1. Tell people what you are: e.g. Software tester
If there is room for further clarification you can continue explaining:
2. What your specialties are: e.g. Coordination in Agile project
3. What your knowledge is about: e.g. Test strategies, Test techniques, Functional testing
4. Skills: e.g. TMap, ISTQB, Context Driven Testing, Embedded environments, Web based applications, and so on.
Only now I wonder: is going back to these basics sufficient? Does it tell enough? Does it bring us what we need, or better, what the customer needs?
If we continue along this way, we are on the edge of introducing even more certifications, as everyone wants to be recognized for their specialism. At least it might simplify the way you introduce yourself. It could be: Jeroen Rosink: 10 out of 21 certifications (assuming there are already that many certifications related to software testing).
But does this number tell you anything? Imagine you have to explain what you do for a living to an outsider. I always tell people proudly that I'm a software tester and I test software. Although people don't fully understand that, I'm sure they can picture it. At least it is better understood than Test Analyst or Test constructor.
So perhaps you, reader, can explain to me why we insist on making things so difficult? Or do you have other examples of roles in our field of expertise?
Posted by Jeroen Rosink at 9:05 PM 0 comments
Labels: Process, Testing in General
Thursday, November 13, 2008
Yesterday, I met some of my heroes
On November 12th 2008 I attended the EuroStar 2008 conference.
Of course I went to visit some presentations and gather some information at the stands. Also important was seeing current and former colleagues.
As some, or perhaps most, of you know, the Netherlands is a small country. When it is rainy, as it usually is at this time of year, traffic jams are things you can count on. I counted on them too; only I should have counted on more than half an hour.
I had calculated half an hour of additional time for traffic jams, 5 minutes for registration, 15 minutes for a coffee break including showing my face at the stand of my employer, Squerist, and then straight on to the first keynote.
Only I needed all this calculated time just to arrive at the location. And on my quest for some coffee, I came into the room where the keynote had already started.
The keynote, given by Randall Rice, was about Trends That May Shape Software Testing, which gave me some ideas to think about regarding how the future may relate to SOA, energy, etc.
Almost at the end (I skipped the outsourcing part), my need for coffee beat my interest in outsourcing. Only the coffee bar was closed, so I had to get some from the restaurant. (I had expected coffee to be free the whole day; how wrong I was.)
Drinking my coffee within 3 minutes, I walked to the stand of Squerist. On the way I saw a familiar face: Egbert Bouman, author of a book on software testing, SmarTest. I like this book because it presents another approach to software testing, in which the business takes a central place, and it compares that approach with TMap, ISTQB, TestFrame and TestGoal without saying they are right or wrong; he simply explains how they do it. He was the first hero I met that day. Why is he one of my heroes? He has something to say and is willing to share it with others, he listens to you, and he still recognizes you after you have talked to him.
Arriving at the stand, I saw some of my colleagues. When they read this, they will know that I count them among my personal heroes, as they give me the chance to become what I want to be :) (I'm thinking this is getting a bit soppy.) So I ran to the presentation about model based testing by Elise Greveraars: Tester Needed? No Thanks, We Use MBT!
It was quite a good presentation, only I now have some doubts and questions about Model Based Testing. Why do we need yet another new role for a tester? Why is Model Based Testing an answer if we can't rely on the models from which the code is generated, given that if they contain errors we reproduce those errors in our test cases? Why do we always need tools? Why not start by thinking first and keeping it simple, so everyone can understand it?
After this presentation there was a short break. I went again to the dungeons of the building. Yes, literally to the dungeons, as that was where the exhibitor section was. And there can be fun in the dungeons: there were some former colleagues I had worked with and some new people to talk to. Even a long-lost former colleague who had moved to Finland to seek his fortune. (Rolf, it was good to see you again!)
I had to hurry to be on time for a workshop given by Michael Bolton: Heuristics: Solving Problems Rapidly.
He is good!!!! As he writes on his blog about the "Heuristics Art Show, EuroSTAR 2008", it was a show, and I'm glad I didn't miss it. Michael has the gift of making the crowd feel happy through his examples and his enthusiasm, and the gift of turning that happy feeling into interaction and understanding. At the very least, he provided me with some guiding directions to think, read and discuss more about heuristics. Michael: thank you for this show. At that point you were already well on your way to becoming one of my heroes.
I had intended to meet him during lunch. Somehow I wasn't the only one in the building. So at least I didn't skip lunch or eat my sandwich alone :)
During lunch I spoke briefly with him, only he wanted to attend another presentation, with the promise that we would meet each other later on. I forgot that I had planned to go to the presentation by Graham Freeburn: Make Your Testing Smarter - Know Your Context! And indeed, I forgot to go.
So I spent my time meeting other people, among them someone I had also met earlier this year: Derk-Jan de Grood. He is the author of a book called TestGoal. This is another great book, which is also available in English. It does not offer a new method or technique; it provides guidelines for how testers can contribute in their job, and a framework to structure that contribution. (I couldn't resist looking at the demo and will mail you about my findings.)
After a while Michael came down the stairs, entering the dungeon together with Graham Freeburn and another person whose name I have forgotten. Michael, and Graham too, took the time to explain the meaning of certification, why testing the tester is more useful, and how to ask the right questions, giving us a context to think in.
Graham, thanks for that hour (or was it more?). And to you, Mr. Michael Bolton, thanks for taking the time to gently push me in a direction to think more, and further, about software testing and its context. I recommend others to follow his blog as well.
Almost at the end of the day, with no point in driving home yet as there would certainly be traffic jams all over the country, I attended the book presentation on Agile testing by Anko Tijman and Eric Jimmink: Test2.0. Besides the presentation there was also food: plain good "snert" (Dutch pea soup) with bread.
Posted by Jeroen Rosink at 10:06 PM 0 comments
Labels: Conferences
Monday, November 10, 2008
"Schools of testing" are evolving
The first time I read about "schools of testing" I already posted some comments about it:
Wednesday, February 20th 2008: Schools of testing, can you decide?
Sunday, February 24th 2008: Testing Schools and Organizational Schools
To understand this topic you might view the presentation of Bret Pettichord: Schools of Software Testing
Currently the discussion is still going on about this topic on several weblogs:
Cem Kaner:
Friday, December 22nd, 2006: Schools of software testing
Paul Gerrard:
Thursday, February 21st 2008: School's Out!
Monday, February 25th 2008: Clients, Contexts and Schools
Tuesday, November 4th 2008: Schools of Testing - Go Away
Thursday November 6th 2008: Labels, Stereotypes and Schools
Friday, November 7th 2008: I'm Not Ready for School... Yet
James Bach:
Wednesday, February 20th 2008: The Gerrard School of Testing
Wednesday, November 5th 2008: Schools of Testing… Here to Stay.
Michael Bolton:
Thursday, November 06th, 2008: Schools can go away... when we all think alike
Monday, November 10th 2008
In very general terms, these articles are about the need for schools of testing. Do they exist? Do we need them? Or can we solve it using the right heuristics and axioms?
In my posting on organizational schools, I already mentioned some of the history of such schools. Those schools also evolved over time: we kept learning, adapted to the current situation, and found new approaches. This didn't mean that the older approach was wrong or useless; I think there was simply another situation which needed another solution.
In history we see more of these examples. You can take examples from psychology (History of psychology), sociology (History of sociology) and also politics (List of forms of government).
So my thought is that we cannot avoid "schools of testing". It is in human nature to form groups.
From Wikipedia:
Group:
In sociology, a group can be defined as two or more humans that interact with one another, accept expectations and obligations as members of the group, and share a common identity
The main reasons for the creation of groups are the need for interaction and the sharing of a common identity.
I think we as testers have done nothing else over the last decades. We also want to interact with other people; not only with people from our own group, but also with people from other groups, like developers, managers, users, etc.
As those other groups also have their own behavior, testers adapted their approach towards those groups based on their needs.
In some organizations a quality-oriented approach is more appropriate than an Agile approach.
To support these approaches we have methods like TMap, TestFrame, ISTQB and SCRUM.
This raises the following question for me: are we capable of supporting an organization properly when we are experts in one of these methods and the need is for another "school approach"?
This made me think about where to position the Context Driven approach. As far as I understand it, it looks at the context which best suits the customer within defined boundaries.
As it might be impossible to be an expert in all methods, thinking within the right context might be an answer. The organization might have chosen a method. Because this method is carried by several people (read: a group), they all want to make it a success. Using this as your context, you would be able to be successful even though you are not an expert in Agile, or TMap, or whatever method was chosen. You are able to adapt yourself to the needs of the organization. Perhaps I'm wrong here.
To get this picture clearer for myself, I hope to meet Michael Bolton at EuroStar next Wednesday; he might explain it to me.
Posted by Jeroen Rosink at 8:28 PM 0 comments
Labels: Testing Schools
Tuesday, November 4, 2008
SCRUM: definition "DONE" and the influence of a tester
What I hear and read about the impact of SCRUM on testing ranges from success stories to fear of using SCRUM. When using SCRUM, a tester might have to change his perception of testing. In these kinds of projects, requirements are not that solid and documentation is not that final. How are we able to test? How can we, as testers, influence a development process like this?
There are some roads we can take; here are some of my ideas:
I think the power lies in the concept that the team is responsible for delivering a potentially shippable product. The first thing that comes to mind is: make sure you are on that team. Become one of the responsible persons. Don't be the outsider test coordinator who sits down and waits until everything is delivered.
When you are on the team, you can discuss one of the concepts of SCRUM: the definition of "DONE". If it is important that documentation is available, and up to certain standards, you have to bring this up for discussion and see whether spending time on good documentation fits the goals of the organization. You can do the same with requirements.
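To make the discussion concrete, a definition of "DONE" can be written down as an explicit checklist the whole team agrees on. This is a minimal sketch only; the criteria shown are invented examples, as each team negotiates its own list.

```python
# Hypothetical definition of "DONE" for a sprint item. The criteria are
# examples only: each team negotiates its own list with the organization.
definition_of_done = {
    "code reviewed": True,
    "unit tests pass": True,
    "documentation updated to agreed standard": False,
    "acceptance criteria demonstrated": True,
}

def is_done(checklist):
    """An item is DONE only when every agreed criterion is met."""
    return all(checklist.values())

def open_points(checklist):
    """List the criteria still blocking DONE: input for the team discussion."""
    return [item for item, met in checklist.items() if not met]

print(is_done(definition_of_done))   # documentation is still open here
print(open_points(definition_of_done))
```

The value of such a sketch is not the code but the conversation: every criterion in the dictionary is something the team, including the tester, had to agree on explicitly.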
I'm sure there are more opportunities to help the team, the organization and yourself, but I suggest starting with these two basic steps first.
Posted by Jeroen Rosink at 6:02 AM 1 comments
Labels: Agile, SCRUM, Test Methods
Monday, November 3, 2008
How to deal with "Ecoliability"
Do you like the idea?
I recently started to write about "Ecoliability" as a new quality attribute. Although I'm certainly not in a position to dictate what is right or wrong, at least I can use my weblog to share my thoughts. Perhaps there are others around the world who also like the idea of this new quality attribute.
Why the need for this attribute?
As in previous postings, I already referred to the trend of focusing on data storage to save energy, and to special forums explaining the benefits of virtual machines. See:
04-07-2008: The green part of development and software testing: "Ecologiability"
11-02-2008: Introduction of the quality attribute: "Ecoliability"
Currently, environmental awareness is already getting much attention. Discussions are held about how to store data or how to use machines efficiently. Why not also make it explicitly important, and measure the outcome by assigning those figures to an attribute of their own?
Why not use an existing Quality Attribute?
In my opinion you could capture the urge for this green aspect under several existing attributes, like:
Efficiency: time behavior and resource utilization, e.g. how much CPU do you need?
Portability: replaceability, e.g. how easily can an old machine be replaced?
But is this enough? If we capture the sense of ECO under those attributes, it just becomes part of the whole picture. If we make a single attribute for it, we can measure it as part of the requirements. It might help in making decisions and also make it visible that an organization cares. As an example, an organization can demand that a new application reduce data storage by 10%.
What to measure?
I think there are enough objects we can measure; only we have to measure with sense. And as technology is evolving all the time, and ecological common sense with it, I suggest making this attribute the first dynamic quality attribute. It should, however, be dynamic within borders; otherwise we get a situation like we now have with the name Agile development: as long as it fits our picture, we call it Agile.
As said before, objects to measure can be:
- batch jobs are scheduled based on a need for information, not just because we like them to run
- data is stored because we need it now, not for some possible unknown future
- database tables are defined based on need, not on ease of use or as a workaround
- environments can be mirrored in a virtual environment
I can imagine that after making this explicit in the requirements, they would look like:
- Data storage is reduced by 10%
- CPU usage of machine "XYZ" is reduced by 15%
- The environment should consist of a maximum of x machines
- Backups are created for business-critical data only
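Requirements phrased this way become checkable. As a minimal sketch, assuming invented baseline and measured figures, a test could compare measurements against such green targets:

```python
# Hypothetical "Ecoliability" check: compare measured figures against
# green targets expressed as percentage reductions from a baseline.
# All numbers below are invented for illustration.

baseline = {"data_storage_gb": 500.0, "cpu_usage_pct": 60.0}
measured = {"data_storage_gb": 440.0, "cpu_usage_pct": 52.0}
targets  = {"data_storage_gb": 10.0,  "cpu_usage_pct": 15.0}  # required % reduction

def reduction(metric):
    """Percentage reduction of a metric relative to its baseline."""
    return (baseline[metric] - measured[metric]) / baseline[metric] * 100.0

def verdict():
    """Pass/fail per requirement, as a dict of metric -> bool."""
    return {m: reduction(m) >= required for m, required in targets.items()}

for metric, passed in verdict().items():
    print(f"{metric}: {reduction(metric):.1f}% reduction -> {'PASS' if passed else 'FAIL'}")
```

With these invented figures, storage shrank 12% (target 10%, pass) while CPU dropped only about 13% (target 15%, fail), which is exactly the kind of explicit, per-requirement visibility the attribute is meant to give.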
What are possible dynamic borders?
Perhaps the borders can be related to:
- Infrastructure: e.g. which machines are we using and how are the connections defined?
- Data storage: e.g. what is the lifecycle of data? What is the level of redundancy?
- Functionality: e.g. how does the functionality support the infrastructure and data storage borders?
When to measure?
I think it can be measured in all stages, from unit testing to user acceptance testing.
The question is: when do we start measuring?
I dare you reader to think with me how this could work.
Posted by Jeroen Rosink at 9:12 PM 0 comments
Labels: Ecoliability, Ideas, Testing in General
Sunday, November 2, 2008
Introduction of the quality attribute: Ecoliability
In a previous post, The green part of development and software testing: "Ecologiability" (7 April 2008), I already spoke about introducing a new quality attribute called "Ecologiability".
In this post I want to change the term: as a non-native English speaker, I have difficulty pronouncing that word. Instead, I would introduce "Ecoliability".
I still believe that in software testing we should consider this an area to focus on, as more and more articles are being written about green computing: servers that use less energy, choices to use virtual environments, etc.
http://www.greenercomputing.com/current
There are even discussions about introducing a badge for power-efficient servers:
Energy Star for green servers to come this year
If there is discussion about the green aspect of applications, then it should also be measured. And if it can be measured, and it is, why not make a quality attribute for it?
You can use this attribute for testing infrastructure, and perhaps in the near future applications will also be designed to use fewer resources.
Therefore I suggest: if we want to be greener, name it and make it measurable. One approach is to define this quality attribute: "Ecoliability".
I can imagine there are arguments for measuring this under other quality attributes, like Efficiency. But measured there, it just becomes part of an attribute. I think our environment is too important to be just a part of something. Let us make it explicit!
Posted by Jeroen Rosink at 8:29 AM 0 comments
Labels: Ecoliability, Ideas, Testing in General
Saturday, November 1, 2008
Destructive vs Non-destructive testing
Do you have a destructive mind?
Now and then I have discussions about the purpose of software testing, and mostly I get the feedback that testing is all about finding defects. Not just finding defects in a controlled way, but finding defects in every corner: without a found issue, you didn't test well enough.
It becomes a game of having a mind that enables you to find those defects which "destruct" the application and/or process. The conclusion of this way of testing is: if a defect is found, you cannot use the application.
Proof that it works
Another way of testing is to prove that the functionality works. In this approach you are not focused on finding defects; the focus lies on proving that, in that part of the functionality, used under those conditions, no defects are found. The conclusion of this approach: if no defects are found, you can use the application under those conditions.
Which one is wrong?
The definition of testing according to the ISTQB glossary is:
Testing: The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
Based on this definition there is room for both:
1. demonstrate fit for purpose: prove it works
2. detect defects: find those defects
In my opinion, demonstrating fitness for purpose can only be done by non-destructive testing; with a destructive approach you are more likely proving it doesn't fit. Detecting defects can be done by both approaches.
Nondestructive testing vs destructive testing
Wikipedia also mentions these definitions, although not really focused on software testing:
Nondestructive testing
Destructive testing
In the latter explanation, conditions are given for when to use a destructive testing approach:
Destructive testing is most suitable, and economic, for objects which will be mass produced, as the cost of destroying a small number of specimens is negligible. It is usually not economic to do destructive testing where only one or very few items are to be produced (for example, in the case of a building).
Is it economical to destruct your application?
It depends.
Based on the definition above, you have to decide whether there are many pieces or just a few, and whether destruction is economically justified.
Example: you are buying a car with the purpose of driving from A to B. Are you a driver who tries to drive 120 km/hour using only first gear? You already know this will stress the engine. But is the car built to be driven that way? I don't think so. You try to use the car as you intend to use it and check whether it behaves as expected.
Will a destructive approach help you buy the car? At least after such an approach, no one will buy the car at that price.
Example: if you want to buy a tent, one of its purposes is to give shelter under certain conditions. You don't bring a bucket of water with you to prove it is water resistant; you trust that it is, as this is one of the main functions of a tent. What you can do with the means at hand is pull at the frame to simulate a "storm". If it stands, you will be happy; if not, you continue searching for another tent.
Will a non-destructive approach help you here? Yes, as you can check for space, color and usability.
Will a destructive approach help you here? Yes, only you are applying controlled force based on the expectations you have of that tent.
In software development we often don't have multiple pieces, though we are able to create more pieces or to repair the damage. Only this costs extra money.
Should we avoid a destructive approach to testing?
As destruction costs money, we should decide up front whether we have the budget to recover the damage. And that means not only recovering during development (the project), but also recovering while we are already in production.
When used well and wisely, a non-destructive approach already provides enough information that the chance of errors in production is limited, as long as the system is used properly or as defined. In that case a destructive approach is not really necessary. However, in test processes time is normally limited, so the chance that everything is tested as defined is small. This might be the main reason to use a destructive approach as well.
If destructive testing is done with a certain amount of force, we should apply it in a controlled way, or not at all.
Approaches
As the goal and perception of testing differ, you should explicitly mention your approach in the test plan. Based on this approach you can define your strategy. Here are some approaches:
1. testing using a non-destructive approach;
2. testing using a destructive approach;
3. testing mainly focused on non-destructive testing, with a parallel destructive test focus;
4. testing using a non-destructive approach and, when a certain coverage is not reached, switching to a destructive approach to prove that, despite the positive results so far, the system is not yet mature;
5. testing using a destructive approach and, when a certain maturity of the system is reached, switching to a non-destructive approach.
Based on the chosen approach, you can decide which techniques fit best and which metrics to steer on.
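Approaches 4 and 5 boil down to a simple decision rule: monitor a metric (test coverage, or system maturity) and switch approach at a threshold. A minimal sketch in Python, with invented threshold values:

```python
# Sketch of approaches 4 and 5: choose the test approach at a checkpoint
# based on a measured metric. The thresholds (70% coverage, 80% maturity)
# are invented examples; a real project would agree them in the test plan.

def approach_4(coverage_pct):
    """Start non-destructive; while coverage falls short of the target,
    switch to destructive testing to show the system is not yet mature."""
    return "non-destructive" if coverage_pct >= 70.0 else "destructive"

def approach_5(maturity_pct):
    """Start destructive; once the system is mature enough, switch to
    non-destructive testing to demonstrate fitness for purpose."""
    return "non-destructive" if maturity_pct >= 80.0 else "destructive"

print(approach_4(65.0))  # coverage still short of the target
print(approach_5(85.0))  # system considered mature enough
```

Writing the rule down like this forces the two decisions the post asks for: which metric you steer on, and at which value you switch.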
Which one to use?
There are arguments for using each approach. I think it is important to inform the organization which approach you want to use, based on arguments, not because you happen to like that approach.
Posted by Jeroen Rosink at 8:26 AM 0 comments
Labels: Metaphor, Testing in General