Friday, January 27, 2006

Risk-Based Testing

Definition of risk-based testing

The technique of prioritising the development and execution of tests according to the impact and likelihood of failure of the functionality or aspect under test.

Commentary

Testing using risk prioritisation: should there be any other approach? In time, risk-based testing will simply be a synonym for testing, and there will no longer be any need to emphasise that risk is the basis for testing.
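Mechanically, the definition above boils down to something very simple. Here is a minimal Python sketch, where the feature names and the 1-5 impact and likelihood scores are invented purely for illustration:

```python
# A minimal sketch of risk-based test prioritisation, assuming the
# simple model: risk = impact of failure * likelihood of failure.
# The features and 1-5 scores below are invented for illustration.

tests = [
    ("payment processing", 5, 3),  # (name, impact, likelihood)
    ("report formatting", 2, 4),
    ("help screen text", 1, 2),
]

# Develop and execute the highest-risk tests first.
for name, impact, likelihood in sorted(
        tests, key=lambda t: t[1] * t[2], reverse=True):
    print(f"{name}: risk = {impact * likelihood}")
```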

In the meantime, there is clearly confusion about risk-based testing. I entered "risk based testing" in Google and found an amazing array of contradictory information on the topic. Some entries are clearly written and well thought through; an example is this paper on heuristics and risk by James Bach. Others are not. An example of this is here.

Now, the sad thing about the second paper is that it was the second entry I found when I searched, so it must be frequently found by those looking for information on risk-based testing. It is also nice to look at, with well-drawn diagrams, so it seems highly plausible as a description of a sound software testing method. But if you start to look at the content, it contains some of the strangest advice I've ever seen on this topic. For example:

"Since the risk increases with the frequency of use, you should look at the most used feature by a user to identify the riskiest ones"

Do you believe this statement? I frequently play cards on my system, but if the card programme stopped working properly it would hardly matter: it would not stop me writing a performance testing report or processing test management information. It's obvious to anyone who thinks about it that the impact of failure is a far more important indicator of risk than frequency of use.
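Putting hypothetical numbers on that card-game example makes the point clear. Ranking by frequency of use alone would put the card game first; factoring in impact of failure reverses the order (all scores below are invented):

```python
# Hypothetical 1-5 scores: the card game is used constantly but its
# failure barely matters; the reporting feature is used rarely but
# its failure is costly.
features = {
    "card game": {"impact": 1, "frequency": 5},
    "test management reporting": {"impact": 5, "frequency": 2},
}

for name, f in features.items():
    print(f"{name}: frequency = {f['frequency']}, "
          f"risk = {f['impact'] * f['frequency']}")

# Output:
# card game: frequency = 5, risk = 5
# test management reporting: frequency = 2, risk = 10
```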

This is not an isolated instance, though. If you read the paper you'll find plenty more gems of this quality. Perhaps it is deliberate: an ironic sting, in that anyone cavalier enough to follow the testing approach described has evidently not assessed the risk of finding insane instructions on the Internet alongside sound advice. I wonder if caveat emptor applies to free advice too...

Friday, January 20, 2006

Software testing papers - open season

No sooner have we posted one call for white papers and presentations on software testing than another one appears. If they run true to form, a third will be along any moment now. In the meantime, enjoy this one for the 14th EuroSTAR conference on Software Testing Analysis & Review.

The theme this year is something or other about dreaming. In breaks from these relaxing trances (hopefully not induced by the methods Samuel Taylor Coleridge used, which are far too illegal to consider seriously), give some thought to:

Software testing processes:
"how do we organise software testing and how do we process software? How do we measure quality? And this year we add the question, how do we ensure that our processes help our people to be a better team? How do we decide the go or no go of software together? How do we communicate?"

Software testing tools:
"how do we deploy and apply software testing tools as means to increase our productivity in making software? And again this year, we will look for the testing tools and functionality that we use to facilitate team work. Clear metrics, sound reporting, shared repositories will be looked at from a people-supporting point of view."

Software testers:
"how do we organise and manage people, insuring that they become passionate about making software? This year we investigate what it takes to turn an involved co-worker into a concerned or even passionate software engineering professional."

Read the full web version of the Call for Papers and then contact Mary Duggan
(Programme Administrator, email: mary@qualtechconferences.com).

Monday, January 16, 2006

Software testing conference - late call for papers

I've received a note from the International Conference on Practical Software Quality and Testing in Las Vegas saying they have extended the deadline for paper submissions.

The topics they are interested in are:

Testing technology, process and automation
Agile and eXtreme approaches and testing
Testing web, Internet, e-commerce applications
Security testing
End-to-end testing
Performance, load, stress testing
Static testing: reviews, inspections
Integration, systems and regression testing
Use cases and testing
Successful tool usage
Risk-based testing
Testable requirements

Related topics:
Managing the test function and processes
Test effort estimation
Risk management and mitigation
Developing, managing a test team
Developing, implementing standards
Requirements management, modeling
Defect tracking and studies
Certified Software Test Professional (CSTP)
Certified Test Manager (CTM)

If you're interested in submitting a paper, the conference organiser is Dr Magdy Hanna (Mhanna@psqtconference.com).

Friday, January 13, 2006

Software Testing FAQ - No. 9

Is a software tester more than a tourist?

This question appears to have come from somebody connected with Wiley Computer Publishing. You've got to admit, it is an interesting question. The converse (is a tourist more than a software tester?) is surreal. But surreal and interesting aren't synonymous anymore. Some question if they ever were.

Now, strangely, I was reading Lessons Learned in Software Testing, which has an eerily appropriate section:

"Lots of things you can do with a product that aren't tests can help you learn more about it. You can tour the product, see what it's made of and how it works. This is invaluable but it's not quite testing. The difference between a tester and a tourist is that a tester's efforts are devoted to evaluating the product, not merely witnessing it. Although it's not necessary to predict in advance how the software should behave, an activity that puts the product through its paces doesn't become a test unless and until you apply some principle or process that will identify some kind of problem if one exists."

Pretty conclusive really. The answer is a resounding yes.


Friday, January 06, 2006

IBM Rational testing tool advance

Computer Business Review ran this interesting story on the recovery of IBM Rational:

"Having lagged behind all the other IBM software brands with nominal or negative growth, by Q3 last year, the Rational brand tied Lotus with 12% growth, exceeded only by WebSphere. After years in the slow lane, the natural question is whether Q3 was a fluke or a sign of rebirth.

Over the past couple of years, Rational has modernized its client-side developer tools under the Eclipse framework. It came after some false starts, such as XDE, which was supposed to be the successor to the Rational Rose modeling tool, and Rational Rapid Developer.

Last year saw the release of testing tools that linked to Tivoli for closing the loop on performance testing with actual production data, the first upgrades to Rational Portfolio Manager (RPM), and the long-awaited open source updating of Rational's process tools, which will also cover alternatives to Rational's Unified Process (RUP).

And last year saw the changing of the guard. Danny Sabbah, whose background included chief technology architect for WebSphere, succeeded Rational co-founder Mike Devlin. To many observers, the transition couldn't have happened soon enough because, until recently, many felt that Rational's best days were behind it. Way back when, Rational invented the concept of a unified software development lifecycle. It spanned use case modeling and requirements management to architecture, design, development, and testing."

For the full article, visit here.