Saturday, March 15, 2008

What does Einstein's gravity theory have to do with Software Testing?

Last week I saw a program on TV in which Albert Einstein's theory of gravity was explained. It triggered the following idea, combining gravity with software testing, knowing that there are still some black holes in my thoughts.

In that documentary they tried to prove Albert Einstein's theory of gravity.
Wikipedia says the following about gravity: "Gravitation is a natural phenomenon by which all objects with mass attract each other, and is one of the fundamental forces of physics. In everyday life, gravitation is most commonly thought of as the agency that gives objects weight"

What interested me was that they proved the existence of objects in space based on how light was bent, so that objects seem to be in a different location than where we observe them visually. This is more or less explained on Wikipedia under Albert Einstein's theory of general relativity.

One of the examples they showed in this program to demonstrate gravity and its impact used a field on which some mass was placed; balls were then rolled across that field. The larger the mass, the greater the bending of the balls' paths.

I found a similar picture illustrating black holes:



That behavior got me thinking about gravity in software testing. For these thoughts I make the following assumptions:

  1. The System, Object or Function Under Test is the field
  2. The mass is based on the potential issues we find
  3. The ball is the test we execute
  4. The target is the expected result


If we roll a ball across the field and there is some mass close enough to deflect the path of our ball, it might not reach the target we predicted. This is similar to the example I saw on TV, and it is also what the gravity example with light explains.


In testing terms this could be translated to: if we perform a test on a system and the expected result differs from the actual result, then we claim that there is an issue.

But what would we call it when we see behavior we should not see? As in the figure above: an object located behind the sun ought not to be visible to us. Yet due to the gravitational pull of the sun we do see it, because it appears to lie in a "straight line" from our point of view.

In software you sometimes see behavior which should not be visible to the user, such as functions that will be disabled by authorizations while testing is performed with the so-called "grant all" user profile. It is in our nature that when we find issues, we continue looking in that area. Finding more issues in that area increases the "mass". Based on that "mass" we define risks, and development effort is often demanded for these issues.

Another question would be: "Was this additional development effort necessary?" In general I would say yes, as the quality of the system was improved. But test projects are always under a certain time pressure. This might lead to a situation where functions which are currently not even used are improved. That time and money could also be spent on bringing the system to production earlier or on building new functionality.

Perhaps being aware of the gravity of issues on the system can help us define the proper risks and make the right decisions. Which makes me wonder: what happens if the gravity of one issue cancels out the gravity of another issue? The system then seems to behave correctly, because we don't see objects we do not intend to see.

This makes me think that there is perhaps also gravity of functions/objects within the system.
If we want to be aware of the gravity of issues, we need to be able to measure it. To measure the gravity of issues we first need to identify the gravity of the functions/objects within the system.

I think an approach could be to draw up a system landscape based on functions, and to also define the impact of, for instance, authorizations on the usage of those functions. If we find an issue, we pinpoint it in the system landscape and then check whether that issue would also be visible under the authorization profile in question. The next step could be identifying the influence of other functions on the function in which the issue was identified. If issues are also found in those functions, they could have a certain impact on the behavior which resulted in the issue. To measure this, we ought to be able to test the function stand-alone.
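To make this a bit more concrete, here is a minimal sketch of such a landscape in Python. All names (SystemLandscape, Function, Issue, the example functions and profiles) are my own illustrative assumptions, not an existing tool or method:

```python
# A minimal sketch of the idea above: model the system landscape as a graph of
# functions, note which authorization profiles can reach each function, pin an
# issue to a function, and check (a) whether the issue is visible under a given
# profile and (b) which neighbouring functions could be "bending" its behavior.

from dataclasses import dataclass, field


@dataclass
class Function:
    name: str
    profiles: set[str]                                   # authorization profiles that may use it
    depends_on: set[str] = field(default_factory=set)    # other functions it relies on


@dataclass
class Issue:
    identifier: str
    function: str                                        # function in which the issue was pinpointed


class SystemLandscape:
    def __init__(self, functions: list[Function]):
        self.functions = {f.name: f for f in functions}

    def visible_under(self, issue: Issue, profile: str) -> bool:
        """Would this issue be reachable at all with the given profile?"""
        return profile in self.functions[issue.function].profiles

    def influencing_functions(self, issue: Issue) -> set[str]:
        """Functions the affected function depends on; issues there may be the
        real source of the deviation and should be tested stand-alone."""
        return self.functions[issue.function].depends_on


# Example usage with made-up functions and profiles
landscape = SystemLandscape([
    Function("create_order", {"grant_all", "sales"}, {"price_calculation"}),
    Function("price_calculation", {"grant_all"}),
])

issue = Issue("ISS-42", "price_calculation")
print(landscape.visible_under(issue, "sales"))                            # False: hidden for real users
print(landscape.influencing_functions(Issue("ISS-43", "create_order")))   # {'price_calculation'}
```

The point of such a model would only be to make the "grant all user" blind spot explicit: an issue that is visible during testing but unreachable under any production profile probably deserves a different risk rating.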

To me this still seems like the standard way of dealing with issues: checking whether the function by itself behaves as shown in the total picture. If it does not reproduce the observed result, this would lead to rejection of the issue. Some would call investigating this issue a waste of time, and would even go further and say that the test was not legitimate.

Before issues are solved, an impact analysis is performed to determine whether an issue needs to be solved immediately. Perhaps, before we start an impact analysis, we could perform some kind of gravity analysis. This could be a new activity during testing. Based on the gravity we might also adapt our test strategy and test schedule.
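As a thought experiment, such a gravity analysis could score an issue by how much "mass" (already known issues) sits in the functions it depends on, with closer functions weighing more. The landscape data, the example numbers and the 1/(distance + 1) weighting below are all my own assumptions, not an established metric:

```python
# A minimal sketch of a "gravity analysis" step over a function-dependency
# landscape. The gravity score of a function is the sum of the issue "mass"
# of the functions it (transitively) depends on, weighted by their distance
# in the dependency graph. The score could be used to re-prioritize the test
# schedule before a formal impact analysis is started.

from collections import deque

# function -> functions it depends on (hypothetical landscape)
depends_on = {
    "create_order": {"price_calculation", "customer_lookup"},
    "price_calculation": {"tax_rules"},
    "customer_lookup": set(),
    "tax_rules": set(),
}

# number of open issues already found per function (the "mass")
issue_mass = {"price_calculation": 3, "tax_rules": 1}


def gravity_score(function: str) -> float:
    """Breadth-first walk over the dependencies; closer masses weigh more."""
    score, seen, queue = 0.0, {function}, deque([(function, 0)])
    while queue:
        current, distance = queue.popleft()
        score += issue_mass.get(current, 0) / (distance + 1)
        for dep in depends_on.get(current, set()):
            if dep not in seen:
                seen.add(dep)
                queue.append((dep, distance + 1))
    return score


print(gravity_score("create_order"))       # about 1.83: nearby issues pull on this function
print(gravity_score("customer_lookup"))    # 0.0: no gravity acting here
```

Whether such a number is meaningful is exactly the open question of this post; the sketch only shows that the "gravity" of surrounding issues could be made measurable at all.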
