Software application testing

8:00pm EDT August 26, 2007

Software companies are continuing their move toward decoupling, building sets of components or services that work with other packages in a much more complex integration model.

“Testing is taking on an increasingly important role,” says Bill Russell, executive vice president, Allegient. “But a lot of companies minimized their testing competency while they were implementing the big, monolithic software packages.”

Smart Business spoke with Russell about how a dynamic testing strategy, the right team and the latest tools can lead to quality assurance and a seamless transition for end users.

When should application testing strategies first be considered?

The axiom around testing is the earlier you catch the defect, the less expensive it is to fix. Testing strategies should be considered as early as possible, and certainly when the requirements are being elaborated and as the design of the overall system begins to take shape. For more complex implementations, particularly those with a number of interfaces or conversions from an old system, additional testing is required and, therefore, you’ll need to start developing your test strategy even earlier.

What are the key components of an application testing strategy?

A test strategy is a set of very high-level, broad guidelines that defines how you’re going to conduct the testing. From there, the effort moves through distinct phases: the strategy itself, test plans, test scenarios or test cases, test scripting, test execution, and test results management.

Test plans lay out the approach to take for specific components of the application, like the functions, integration, interfaces and conversion of old data. Each plan delineates test scenarios or test cases that represent the functionality, or system assurance, you’re trying to validate. Specific test scripts define what the tester will actually do, step by step, to validate that the software does what it’s supposed to do. Any defects discovered are handled through a test results management plan.
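
To make this concrete, here is a minimal sketch of what one automated test script might look like, written in Python with pytest. The order-total function, its rules and the test case IDs are invented for illustration only:

```python
import pytest

# Hypothetical function under test, defined inline so the script runs as-is.
def order_total(quantity: int, unit_price: float, tax_rate: float) -> float:
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return round(quantity * unit_price * (1 + tax_rate), 2)

def test_order_total_includes_tax():
    """TC-101: a standard order computes the correct total."""
    assert order_total(quantity=2, unit_price=10.00, tax_rate=0.07) == pytest.approx(21.40)

def test_zero_quantity_is_rejected():
    """TC-102: invalid input is rejected rather than silently accepted."""
    with pytest.raises(ValueError):
        order_total(quantity=0, unit_price=10.00, tax_rate=0.07)
```

Each test is one scripted, repeatable step-by-step check; running `pytest` executes them all and reports any defect as a failure.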

You can’t test everything. A test strategy and test plans are largely about identifying which parts of the application or system are critical and must be tested. If there are 100 different possible paths through a software package, with a far greater number of permutations of resulting data, you can’t test all 100. So are you going to test 50? And if so, which 50 are you going to test? What is your acceptable risk?
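
One common way to make that call concrete is risk-based prioritization: score each candidate test by likelihood of failure times business impact, then test from the top down until the budget runs out. A minimal sketch, with invented test cases and scores:

```python
# Rank candidate test cases by risk (likelihood x impact) and keep only as
# many as the testing budget allows; the rest become documented, accepted risk.
candidates = [
    {"name": "order entry happy path",   "likelihood": 0.2, "impact": 9},
    {"name": "legacy data conversion",   "likelihood": 0.7, "impact": 8},
    {"name": "rarely used admin report", "likelihood": 0.3, "impact": 2},
    {"name": "payment interface",        "likelihood": 0.5, "impact": 10},
]

budget = 3  # how many test cases we can afford to script and execute

ranked = sorted(candidates, key=lambda c: c["likelihood"] * c["impact"], reverse=True)
selected, deferred = ranked[:budget], ranked[budget:]

for case in selected:
    print(f"TEST: {case['name']} (risk {case['likelihood'] * case['impact']:.1f})")
for case in deferred:
    print(f"ACCEPTED RISK: {case['name']}")
```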

Aren’t packages pretested by the manufacturer?

Yes, to some degree. They’re tested straight through the optimal path with everything working great, what we refer to as the happy path. But they’re not testing any client-specific configurations, interfaces or data sets. That’s why planned testing is still required and very important.

What areas of testing cause companies the most pain?

Any kind of testing that’s not well planned. Without a documented plan that defines what you are testing, the effort can’t be weighed against the amount of risk you’re willing to accept. It becomes ‘guerrilla testing,’ where the enterprise doesn’t really know what has been tested and what has not.

Many companies also run into trouble when they don’t test for performance. They often test only the graphical user interface and ask, ‘Is it working the way we think it should?’ You’ve got to do some performance testing to ensure the software will scale. Can it handle 1,000 operations at the same time and continue to run? That’s how you identify your break points.
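
As a rough illustration of the idea, the sketch below pushes 1,000 calls through 100 concurrent workers against a stand-in operation and reports throughput and failures. A real performance test would use a dedicated load tool and a production-like environment; everything here is simplified for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def operation(i: int) -> bool:
    """Stand-in for one business function (e.g., an order lookup)."""
    time.sleep(0.01)  # simulate a small amount of work or I/O
    return True

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(operation, range(1000)))
elapsed = time.perf_counter() - start

failures = results.count(False)
print(f"1,000 calls in {elapsed:.2f}s ({1000 / elapsed:.0f}/s), {failures} failures")
```

Raising the worker count or call volume until latency climbs or failures appear is one simple way to find those break points.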

If testing exposes problems, how are they mitigated?

This is called test results management. Once the tester documents a defect, it is sent back to the software team for triage. They determine in which part of the code the defect occurred and assign it to a configuration software engineer, programmer or developer for repair.

The corrected code is then reintroduced, and at this stage it becomes critical to run regression testing to make sure the fix didn’t break something else. This can be onerous, so automated testing frameworks can be applied to run some of those scripts. These tools also help keep track of defects and trace them back to the requirements they didn’t meet.
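
As a small sketch of that traceability idea, each automated regression test below is mapped to the requirement it verifies, so a failure after a fix points straight back to the requirement that is no longer met. The function, tests and requirement IDs are all invented:

```python
# Function under test; imagine this line was just "fixed" by a developer.
def apply_discount(total: float, rate: float) -> float:
    return round(total * (1 - rate), 2)

# Traceability: which requirement does each regression test verify?
TRACE = {
    "test_discount_reduces_total": "REQ-4.2",
    "test_zero_rate_changes_nothing": "REQ-4.3",
}

def test_discount_reduces_total():
    assert apply_discount(100.0, 0.10) == 90.00

def test_zero_rate_changes_nothing():
    assert apply_discount(100.0, 0.0) == 100.00
```

Rerunning the whole suite with `pytest` after every fix is the regression step; if `test_zero_rate_changes_nothing` now fails, triage knows immediately that REQ-4.3 has been broken.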

How can software be tested with the least inconvenience for end users?

Ideally, you should establish an independent test environment that is separate from your development and production environments. Next, employ testing tools to help organize and manage the testing process. A professional test team should be assembled to perform the functional, regression, integration and performance testing. In the final step, user acceptance testing, eventual users can be tapped to catch anything the test team didn’t find and flag anything that doesn’t look right.
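
As one small sketch of keeping those environments independent, a single switch can select which systems a build talks to, so a test run can never touch production. The environment names, URLs and flag here are invented:

```python
import os

ENVIRONMENTS = {
    "dev":  {"db_url": "postgres://dev-db/app",  "allow_destructive_tests": True},
    "test": {"db_url": "postgres://test-db/app", "allow_destructive_tests": True},
    "prod": {"db_url": "postgres://prod-db/app", "allow_destructive_tests": False},
}

env = os.environ.get("APP_ENV", "test")  # default to the independent test environment
config = ENVIRONMENTS[env]

if not config["allow_destructive_tests"]:
    raise SystemExit("Refusing to run the test suite against production.")
print(f"Running tests against {config['db_url']}")
```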

BILL RUSSELL is executive vice president of Allegient. Reach him at (317) 564-5701 or brussell@allegient.com.