Wednesday, April 30, 2008

Software testing compared with making music

Have you ever been in a situation where you saw a musical instrument and felt the urge to touch it? Did you make some sounds, even though you are not able to play that instrument? Did you hope nobody saw or heard you? Still, you kept playing around a bit. Something like approaching a piano and touching some keys? Or picking up a guitar and strumming some strings?

I have to say I did, and sometimes I still do. It was fun, but I wanted to really learn and understand the instrument. Therefore I recently started to play the electric bass.

Instead of starting with learning notes, I started learning to play the bass. The initial lesson was about handling the bass and getting some basic sounds out of it. The teacher introduced me to the world of tablature. Wikipedia mentions the following about tablature: "Tablature (or Tabulature) is a form of musical notation, which tells players where to place their fingers on a particular instrument rather than which pitches to play."



This notation method helped me to make some sounds on my own, as it taught me where to place my fingers and make some "music". It just did not help me with the speed of playing. (Most of the numbers still go too fast for me. Or should I say: I practice too little to keep up with the speed of the music.)

After a few lessons the teacher taught me about other notation methods, like naming the notes of the musical scale. He came up with terms like fifth, octave, etc. Initially it seemed like higher mathematics to me. But it is fun, as a new world of listening to and understanding music opened itself to me. Even the simple children's tunes were fun to play, as there is more behind them.

I think software testing is quite similar. If you have no experience in the world of testing and you are asked to test a piece of software, you also sit behind the PC and touch some keys, hoping no one will see you.

If you choose to learn more about software testing you will see that there is much more behind this profession. There are different ways of defining your test cases, and you can compare testing techniques to the different notation methods for writing music. You also don't start with the difficult parts, and your speed of defining test cases will increase after a lot of practice. In certain circumstances, defining your cases is more like higher mathematics. The challenge in testing, as in music, is that others must also understand what you are doing. Sometimes you have to use a formal notation and sometimes you just write down the basics and explain what you have done.

Sometimes you play solo and sometimes you play as part of a band. You can be in the lead, or you can support the band with your instrument and skills.

Can you compare software testing with making music? I think so. I mentioned some similarities and of course there are many more. There is more than one way to perform your testing, as long as you understand what you are doing and are able to explain it to others. It has to sound right, and it should be fun. Both are true for me.

Friday, April 25, 2008

Security Testing: an incomplete story

Security is a hot topic, at least when it comes to protecting the privacy of information about persons or organizations.
The first thing that came to my mind related to security testing was preventing that data or systems are accessed to obtain information which can be turned into money, or into other advantages.

On Wikipedia, security testing is defined as follows: "(The) Process to determine that an IS (Information System) protects data and maintains functionality as intended.
The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorisation, availability and non-repudiation."


The main words here are protecting data and maintaining functionality as intended. When I thought of security testing, it was mainly focused on PCs and servers. They should be protected from intruders from the outside world, to make sure that data is not stolen, altered or removed. Another thought I had was about leaving the functionality as is: preventing others from replacing existing code with their own to get easier access. This kind of access can be gained using so-called Trojan horses.

The definition on Wikipedia of a Trojan horse: "In the context of computing and software, a Trojan horse, or simply trojan, is a piece of software which appears to perform a certain action but in fact performs another such as a computer virus."

But "leave functionality as is" can also be read as: leave the functionality accessible and usable by the intended user. I think most security testing projects focus mostly on ensuring that others cannot alter the code or functionality. The other side is making sure that users are not prevented from using their systems.

Another way to get access to information systems is through known bugs: an exploit can be developed to make use of holes/bugs in the system.

Wikipedia defined Exploit as: "An exploit (from the same word in the French language, meaning "achievement", or "accomplishment") is a piece of software, a chunk of data, or sequence of commands that take advantage of a bug, glitch or vulnerability in order to cause unintended or unanticipated behavior to occur on computer software, hardware, or something electronic (usually computerized)."

During a search on the internet I came across an article which explains how exploits can be generated automatically. I have to mention that it is quite a technical article. Still, it gives some good information about the time pressure we might have on deploying patches, and therefore even more about the time pressure in the testing cycle.

The article I'm referring to is: Automatic Patch-Based Exploit Generation is Possible: Techniques and Implications by David Brumley, Pongsin Poosankam, Dawn Song, Jiang Zheng

In this article they explain that they were able to generate exploits using a tool. Those exploits can be generated within minutes, based on newly released patches. One of the interesting statements is that about 80% of users do NOT update their systems within 24 hours after an update is released. If an exploit can be generated within minutes, this leaves attackers a playing ground of almost 24 hours for 80% of the users connected to the internet.

In the same article they explain how their tool works. Looking at the chapters, it has some relation to testing:

  • Patch-Based Exploit Generation using Dynamic Analysis
  • Patch-Based Exploit Generation using Static Analysis
  • Patch-Based Exploit Generation using Combined Analysis
Perhaps such an approach and tool can also be used in security testing: trying to generate the possible exploits for your system and using that information for risk analysis.
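
To give a feeling for the underlying idea, here is my own minimal Python sketch. It is not the authors' actual technique (they analyse binaries with dynamic and static analysis); the functions and the buffer-size bug are hypothetical. The idea: treat the patched version of a function as a description of which inputs are dangerous, and search for inputs that the patch rejects while the old version accepts them. Each such input is a candidate exploit trigger and useful input for a risk analysis.

```python
import random
import string

BUFFER_SIZE = 16  # hypothetical fixed-size buffer in the old version

def old_copy(data):
    """Unpatched version: no length check (the bug an exploit targets)."""
    return data  # pretend this overflows a buffer when len(data) > BUFFER_SIZE

def patched_copy(data):
    """Patched version: contains the new check the patch introduced."""
    if len(data) > BUFFER_SIZE:
        raise ValueError("input too long")
    return data

def candidate_exploit_inputs(old, new, tries=10000):
    """Search for inputs the patch rejects but the old version accepts.

    Such inputs reveal the condition the patch guards against and are
    therefore candidate triggers for the original vulnerability.
    """
    found = []
    for _ in range(tries):
        data = "".join(random.choices(string.ascii_letters, k=random.randint(0, 64)))
        try:
            new(data)
        except ValueError:
            old(data)  # the old version accepts it without complaint
            found.append(data)
    return found

if __name__ == "__main__":
    hits = candidate_exploit_inputs(old_copy, patched_copy)
    print(f"{len(hits)} candidate exploit inputs found")
    if hits:
        print(f"example input length: {len(hits[0])}")
```

As far as I understand the article, the real tool derives such inputs directly from the differences between the patched and unpatched code instead of relying on random search, which is why it can work within minutes.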

That time to market for solutions is important was also mentioned by Michael Kranawetter, Microsoft (D), at SQC 2008 in Dusseldorf on April 17th, 2008. He presented the Security Development Lifecycle. (See also my posting: Software Quality Conference 2008, Dusseldorf.)

In his presentation he pointed out that the time-to-market pressure for new deployments will increase. Exploits are found and have to be fixed in a very short time. If exploits can be generated automatically within minutes, the time frame between a new deployment, a new exploit and a corresponding solution grows. For example: if an update is deployed at moment T1, the corresponding solution takes about 1 month to deploy, and finding an exploit used to take about 1 week, you had a risk period of about 3 weeks (1 month minus 1 week). If an exploit can now be found within minutes, the risk period is extended from 3 weeks to almost the full 4 weeks.

Some exploits don't allow the attacker to gain control over the system; instead, there are exploits that prevent users from using their system. Preventing users from using a system can be done by brute force, like the so-called denial-of-service attacks, where persons attempt to make a computer resource unavailable.

On Wikipedia they define Denial-of-service attack as: "A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users."

An exploit can also be used for this, without using brute force and, in that case, without a lot of other means, like systems infected by trojans.
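
The availability aspect mentioned in the Wikipedia definition above is something we can test for without an attacker's tools. As a minimal sketch (the service URL, the thresholds and the pass/fail criterion are assumptions for illustration), an automated check can measure whether a system stays usable for its intended users while it is handling many parallel requests:

```python
import concurrent.futures
import time
import urllib.request

SERVICE_URL = "http://localhost:8080/health"  # hypothetical endpoint

def fetch(timeout=2.0):
    """Return the response time for one request, or None on failure."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(SERVICE_URL, timeout=timeout):
            return time.monotonic() - start
    except Exception:
        return None

def availability_under_load(workers=50, requests=500):
    """Fire concurrent requests and report how many succeed in time."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda _: fetch(), range(requests)))
    ok = [r for r in results if r is not None]
    return len(ok) / len(results), max(ok) if ok else None

if __name__ == "__main__":
    success_rate, worst = availability_under_load()
    # A simple pass/fail criterion for the availability requirement.
    assert success_rate >= 0.99, f"only {success_rate:.0%} of requests succeeded"
    print(f"success rate {success_rate:.0%}, slowest response {worst:.2f}s")
```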

Perhaps it is our own fault that when we hear the word computer, we think of the PCs or notebooks we are working with. I think that in the definition above, a computer can also be any system which contains a CPU. This means that any product which contains a CPU can be the target of a DoS attack or an exploit.

Currently we focus our security testing more or less on our information systems. In my opinion this is too narrow. CPUs are also embedded in all kinds of things; I recently heard that about 99.8% of the CPUs in the world are used outside PCs and servers. If those CPUs are protecting or helping us in the real world, we should protect them against intruders as well.
Security is very important here. But sometimes the risk for persons is perceived as very low, and therefore security is somewhat neglected. In the Netherlands a public transport card was recently introduced, and it was proven that its security had some holes. The direct damage for people was not that high, but the trust in the project was severely damaged.

If security testing does not get the same attention as functionality testing, the project might fail just after it goes into use. Exploits can nowadays be generated very quickly.

This will have some impact on the testing process. Michael Kranawetter suggested reserving about 20% of the testing time in a project for security testing. If this figure is valid, it means that either the testing time will be extended, other functionality will be tested less, or the test team will have to be extended.

Another question is the moment at which it is valid to start security testing. Initially I would say: start at the beginning, at the same time normal testing starts. But while the system is still in development, the developers may not yet have paid attention to security issues. On the other hand, "wrong" coding can be discovered much earlier, and the impact of repairs can be minimized at an early stage.

A solution for this is also mentioned in Scrum projects: continuous integration. To make this successful you have to automate your testing.
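
To make that concrete, here is a minimal sketch of what an automated security regression test in a continuous integration build could look like. The `search_users` function and its contract are hypothetical, invented for illustration; the point is that known attack inputs are re-checked on every build, so a security fix cannot silently regress.

```python
import unittest

def search_users(query):
    """Hypothetical function under test: builds a user query safely.

    A real implementation would use parameterized queries; here we
    only model the contract that dangerous input must be rejected.
    """
    forbidden = ["'", ";", "--"]
    if any(token in query for token in forbidden):
        raise ValueError("rejected suspicious input")
    return f"SELECT * FROM users WHERE name = '{query}'"

class SecurityRegressionTests(unittest.TestCase):
    """Runs in every CI build alongside the functional tests."""

    ATTACK_INPUTS = [
        "' OR '1'='1",                    # classic SQL injection probe
        "admin'; DROP TABLE users; --",   # stacked-query attempt
    ]

    def test_injection_inputs_are_rejected(self):
        for attack in self.ATTACK_INPUTS:
            with self.assertRaises(ValueError):
                search_users(attack)

    def test_normal_input_still_works(self):
        # Security checks must not break the intended functionality.
        self.assertIn("alice", search_users("alice"))

if __name__ == "__main__":
    unittest.main()
```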

If security testing gets a larger part in a project and the development process is adapted to it, you need testers with different skills. In short, a test team might consist of the following people:

  • Test experts: skilled in different test methods, and in the informal and formal testing techniques which fit the chosen development method (development method knowledge)
  • Tool experts: skilled in using functional tools as well as technical tools, including programming languages
  • Security experts: testers with skills and knowledge of the latest tools and security methods.
In my opinion security testing has quite an impact on projects and testing. The test strategy will change, the process will change, the skills of testers will change, and the tools will change. And all of this has to be done in a shorter time frame. And last but not least: project managers should be aware that projects might shift towards delivering correct and safe products instead of just reaching the deadline. In some cases you can go into production with workarounds for known functional issues, but those issues should not have an impact on security. If security fails, the project might fail as well, once it goes into production and articles are written in the papers.

Monday, April 21, 2008

Software Quality Conference 2008, Dusseldorf

On April 17th, 2008 I visited the Software Quality Conference in Dusseldorf. Based on past experience I knew that it would be an interesting day. The main topics I intended to attend were model-based testing and security testing.

Presentations I attended:
Security Development Lifecycle by Michael Kranawetter, Microsoft (D): An interesting point here was the awareness that with security testing you are never on time: at the moment you are deploying a fix, the next exploit is already waiting. A strong point of the presentation was defining a security development lifecycle. M. Kranawetter recommended planning about 20% of the time of a regular testing process for security testing. See the picture below for the model of the security lifecycle:

Based on this lifecycle, he mentioned that actions must be taken for process improvement.

He ended his presentation with a short movie (see: http://video.google.com/videoplay?docid=5627966010916286426). Somehow I forgot why he showed it to us, but it was very funny.

Software Security Metrics 101 – Why & How? by Dr. Markus Schumacher, Virtual Forge (D): Using examples, this presentation gave a basic overview of why security is important and will become even more important in the near future. He quoted the figure that only 0.2% of CPUs are placed in PCs/servers and the remaining 99.8% in everyday products like watches, toasters, cars and perhaps, in the near future, milk cans. He used this to express that our environment will change, as we will use more of the information on those CPUs, for instance through RFID. He opened my eyes to the fact that security testing in such embedded situations will take a much more important place in the near future. And since embedded development pushes towards cheaper production methods, security might become an issue there which needs our attention.

Model-based Testing Enhances Action-word Based Testing to Boost Test Automation by Emmanuel Verge (Fr): In this presentation the position of model-based testing was explained, and how their tools can support that approach. It gave a basic view of what a model-based testing process looks like, based on an example UML model using their tools. One of the strengths I see is that you start by defining your test model based on the requirements and derive test cases from those models. The tool they have to support this is called Leirios Test Designer. Using this tool, test cases can be designed based on the model, and you immediately get an overview of your coverage against the models and therefore against the requirements. Once the test cases are defined, you can use their tool called Leirios Test Publisher to create test scripts.

The approach related to model-based testing has the following phases (a small sketch illustrating phase 2 follows the list):

  1. Requirement Management
  2. Model-based testing
  3. Test Management
  4. Test Automation
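
To illustrate the idea behind phase 2, here is my own minimal Python sketch of deriving test cases from a model. It is not how the Leirios tools work internally, and the login model is hypothetical: the system under test is described as a small state machine, and test cases are derived by walking its transitions until each one is covered.

```python
# Hypothetical model of a login screen as a finite state machine:
# each transition is (current state, action, next state).
TRANSITIONS = [
    ("logged_out", "enter_valid_credentials", "logged_in"),
    ("logged_out", "enter_wrong_credentials", "locked_warning"),
    ("locked_warning", "enter_valid_credentials", "logged_in"),
    ("logged_in", "log_out", "logged_out"),
]

def derive_test_cases(transitions, start="logged_out", max_depth=4):
    """Derive test cases (action sequences) covering every transition.

    Walks the model breadth-first and keeps a path whenever it covers
    a transition that no earlier test case has exercised yet.
    """
    uncovered = set(range(len(transitions)))
    cases = []
    frontier = [(start, [], set())]  # (state, actions so far, transitions used)
    while uncovered and frontier:
        state, actions, used = frontier.pop(0)
        if len(actions) >= max_depth:
            continue
        for i, (src, action, dst) in enumerate(transitions):
            if src != state:
                continue
            new_used = used | {i}
            if i in uncovered:
                cases.append(actions + [action])
                uncovered.discard(i)
            frontier.append((dst, actions + [action], new_used))
    return cases

for case in derive_test_cases(TRANSITIONS):
    print(" -> ".join(case))
```

Coverage against the model is then simply the fraction of transitions that appear in at least one derived case, which is also how you can immediately see the impact on your test cases when the model, and thus a requirement, changes.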

A Maturity Model for Model-based Testing by Thomas Rossner, Imbus (D): One of the first statements T. Rossner made was: "Model-based testing is not UML." With this statement he tried to make us see model-based testing as a process in which models are used. During his presentation he explained in more detail how a maturity model fits in the Test Process Improvement (TPI) model. He explained which key areas are suitable for use in model-based testing and what, according to him, the maturity levels would look like. Since model-based testing also has its maturity levels, you should not expect its use to result immediately in faster and better results. I think the main thought he wanted to convey is that you can also improve your model-based testing process over time, based on a customization of the TPI model.

Some exhibitors I talked to:
  1. Leirios: After the presentation about model-based testing I visited their stand. One of the strengths of their tool is that if a requirement changes, you only have to alter your scripts in a limited number of locations. Based on their approach and tooling, you as a tester are able to tell management what the impact on testing will be when a requirement changes, expressed in the number of test scripts that will change. I think it is worthwhile to take a look at their approach and their tools.

  2. Microsoft: At this stand some detailed information was given about the usage of Visual Studio Team System 2008, and how it incorporates several development processes in one suite. I think one of the benefits of tools like this is that you can embed testing more tightly into development. They gave me some trial versions of Team Foundation Server and Team Suite. I hope to post some experiences with these tools on my blog soon.

  3. Metrixware: They provided information about a tool to monitor the infrastructure, not only during test processes but also while the system is in production. This area is quite new to me; perhaps it is a tool which can support infrastructure testing.

  4. SQS: I got a demonstration of SQS Professional. As I know this tool from my own experience, it was good to get confirmation again of its focus on automating the test process rather than automating test execution. I still think the first gains can be made in this area.

  5. Froglogic: They presented a tool called Squish. The strength of this "record/playback" tool is that it supports different platforms and different scripting languages, like Perl, Python, Java, Tk and some more. For an overview you might take a look at: Squish

  6. Wibas: I was intrigued by the Map of Change they presented (see picture below). I think with this map they intend to show people that testing is not a process on its own; it should be part of the process of improvement.


Some interesting links related to:

Security testing:
The Security Development Lifecycle
Security Developer Center
Michael Howard's Web Log
Security Development Lifecycle (SDL) Banned Function Calls

Model-based testing:
Wikipedia - Model based testing
model-based testing home page
Model-Based Testing in Practice
Model Based Test Generation Tools

Monday, April 7, 2008

The green part of development and software testing: "Ecologiability"

Over the last few months several resources have been paying attention to the ECO effect of our systems. They have already started designing infrastructure in such a way that the environment is burdened somewhat less.

Like:
IBM cools data centre with swimming pool
Sun to set up datacentre in coal mine

Another trend is to reduce hardware by using virtual development and test environments.

If this trend continues, is the quality attribute we now know as efficiency enough to measure the benefits of less data storage or of virtual environments? Perhaps we need a new one. Perhaps: Ecologiability?

Currently we see functional testing and infrastructure testing as two different specialist areas. I can imagine that this new quality attribute combines the best of both worlds.

Combining both worlds takes more than just another location for data storage. It can go further than that.

Wouldn't it also be good to make this work through the business requirements? Let the business think, while defining requirements, about whether all data really has to be stored and whether old data has to be kept.

Let the architects think about an optimal design of databases and infrastructure.

Let the developers search for solutions in function design that prevent the creation of redundant or excessive information.

Let organizations think about what information they actually need right now, and let go of thoughts like: "In the future we might eventually need this kind of information."

If we use this quality attribute we can define goals for CO2 reduction in terms like: our infrastructure should create a maximum of xxxxx CO2, and the next release should reduce CO2 creation by 5% per year.

Using terms like this will have an impact on the architecture, the environments, the data usage and also the functions that create those data in the first place. Therefore it will impact development and testing.

To act on this on a frequent basis, we should be dynamic in our development. Perhaps this points us towards development methods like Extreme Programming, agile development and Scrum, as this requirement can change or be introduced in every iteration.

As testers we should be able to measure it. We should have knowledge of both worlds: functional and infrastructure testing. We should also be able to adapt ourselves to those development methods, including their tools.

I ask you, reader: do we need such a new quality attribute? And how can we deal with it?

Sunday, April 6, 2008

Do we Test wrong?

Today I watched a video from Lee Copeland: Proving Our Worth: Quantifying the Value of Testing, August 10, 2006.

Here he speaks of "Note the similarities" (time in video: 8:59):

  • The Process of finding
  • The Process of evaluating
  • The Process of measuring
  • The Process of improving
Quote from Lee Copeland in that video: "For a quarter of a century now we as testers have focused on the wrong things. We have focused so much on the internal processes: How we do testing. How you put together a test plan. How do you design test cases. How do you execute those. How do you automate some of that stuff. That we generally ignored the purpose of testing. We have focused so much internally, we have focused so much inward."

After this point he brings up a quote from James Bach about the real purpose of testing. James Bach wrote somewhere: "The ultimate reason testers exist is to provide information that others on the project use to create things of value."

This gave me the feeling that I'm thinking in the right direction. I also think testing consists of several processes, and that we should not only focus internally on how we can do our job correctly. We should first focus on how the process fits in an organization and how the information from our processes supports the organization.

Sometimes you hear testers say that they write test scripts for the future. But think about whether that is really necessary. As James Bach mentions, we should provide information to others on the project so they can create things of value. The key word here is project. A characteristic of a project is that it has a defined start and end time. That rules out writing them for the future.

Perhaps, before we start defining our test process, we first have to investigate what kind of information the organization needs. If some of that information has to come out of the test process, then we have to define our process in such a way that we are able to provide that information.
The next step would be defining what level of quality that information should have. This might lead to some improvement suggestions towards the organization, so that we are actually able to provide that information at that level of detail.

If we know what kind and type of information we should deliver, we have to investigate how it is used in other processes. Before we can investigate this, we should be able to identify those other processes. Mainly this is based on the organizational structure, but also on which development method is used.
I think we first have to investigate how the testing processes fit in the organizational processes.

Based on this information we should be able to define our internal processes, like the four quoted from the video above.

There is still one process I want to add: The Process of adaptation.

This process is based on observing the environment and the processes which are using our information, and on defining the criteria for that information.

I think that by looking at the external relations of our processes instead of only inward, we shift the goal from correct test plans, test techniques, test cases and other information towards correct information. Done like this, all those attributes become tools instead of goals.

Saturday, April 5, 2008

Testing your software like building a house

Often, software testing is compared with building a house. This analogy is frequently used in explaining test methods. We tend to mention that a foundation is important to start with, and that requirements are needed up front, as it costs more time and money to adapt the house during the construction phase.

Mostly, a house fulfils the same basic needs all over the world: giving shelter, security and some privacy. If we are asked to draw a house in a few minutes, we all create similar houses with the same basic attributes: walls, a door, window(s) and a roof. So in theory we have the same perception of what a house is, and perhaps also of what it is used for.

Only when we take a look at the houses actually built all over the world do we immediately see differences. Below are some pictures of houses from all over the world:




These houses don't look the same in detail, though they share some rough similarities.

If we keep comparing the analogy of building a house with software testing, then we should admit that we also build different houses, no matter what method we use. The pictures above were taken at several places, huge distances apart (The Netherlands, Austria, the USA, some place in Africa, Kenya).

I think this poses a risk when offshoring software testing to other countries: the perception of software testing differs because the culture differs.

You will also see differences when you just cross your own borders. In The Netherlands they have different construction rules than in Belgium or Germany. Imagine that crossing borders is the same as just stepping out of your office and looking at your neighbours: they do things differently. So outsourcing software testing might result in different deliverables.

There is another "hidden" similarity between building a house and software development which might lead to differences in deliverables. In some countries they plan the whole construction phase in detail and build according to plan. This can be compared to a waterfall approach. In other countries they ship all the needed materials and start building, checking during construction whether they need more or different materials. Perhaps this is more like an agile approach.

If our basic goal of software testing is the same all over the world, "deliver acceptable quality within time and budget", we should also define the sub-goals and keep monitoring them. To do this we have to be aware of the impact of culture and of internal and external rules on software testing, and define how to communicate with each other. We might create some kind of map/dashboard which gives an overview of our environment and provides us with information to identify risks and conditions. For this, perhaps my idea about Open System Thinking and Software Testing can provide a guideline.

For structuring communication, Paul Gerrard tried to give some direction by defining Test Axioms - second attempt.

Still, I think we have another challenge left. What I see in a lot of articles, books and projects is that testers keep sticking to one method or one level of communication. If they speak about testing, they dive into one development method and how testing should be embedded in it. Others keep diving into test techniques, and some keep promoting their own test method/approach. And they try to change the world around software testing by imposing conditions on others: requirements should be written in this language, development should be ready at this time, documentation should be delivered in this form containing this and that information, and so on. So at least their own process is under control. In this situation, setting up a solid test process becomes a goal in itself instead of the delivery of software.

I think we should use our knowledge and adapt it to the situation we are in, not force the situation to change just so we can do our work correctly. We need a wider way of thinking and working. Perhaps this is a task for test managers and test coordinators.

Of course, as testers we should give some direction and explain what we expect in order to perform our task. And we should also be able to learn from previous results and adapt to new situations. But improving the test process should not become a more important goal than delivering software. The way the process is defined should also fit the organization's policy, and we should not try to change that policy; first we should try to adapt our way of testing to it.

Perhaps we should have a bit of influence on how the house is built, but we should not be leading. If we are able to construct a house with the same rough drawing, no matter where we are or who we are, we might also be able to approach software testing as a living organism. Software testing methods can then be used to define the outside colour of the house and the inside architecture.