Visual Basic

Performance Testing

Performance testing is somewhat less rigid in its documentation requirements than the other types of testing. It is concerned with the responsiveness of the system, which in turn depends on the efficiency of both the underlying code and the environment in which the system is running. For example, a database system might work fine with a single tester connected, but how does it perform when 20 users are connected? For many systems, performance is just a matter of not keeping the user waiting too long, but in other cases it can be more crucial. For example, if you are developing a real-time data processing system that must constantly handle a stream of incoming data, the expected level of performance should be stated in the design specification.

Performance is partly up to the efficiency of the network subsystem within Windows, but it is also up to you. For example, if you are accessing a database table, what kind of locks have you placed on it? The only way to find out how the system will behave is through volume testing. But performance is also a matter of perception. How many times have you started a Windows operation and then stared at the hourglass for so long that you assumed the application had crashed, only to find two minutes later that you have control again? The Windows Interface Guidelines for Software Design (Microsoft Press, 1995) offers very good advice on how to reassure the user that things are still running fine (by using progress bars, for instance).
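As a sketch of that progress-feedback advice, the loop below assumes a form with a ProgressBar control named prgStatus (from the Microsoft Windows Common Controls) and a hypothetical ProcessRecord routine standing in for the real per-record work:

```vb
Private Sub ProcessAllRecords(ByVal lngTotal As Long)
    Dim lngIndex As Long

    ' Set the range of the progress bar to match the workload.
    prgStatus.Min = 0
    prgStatus.Max = lngTotal

    For lngIndex = 1 To lngTotal
        ProcessRecord lngIndex      ' Hypothetical per-record work
        prgStatus.Value = lngIndex  ' Keep the user informed
        DoEvents                    ' Let Windows repaint the form
    Next lngIndex
End Sub
```

The DoEvents call yields to Windows so the form repaints and the user can see that the operation is still alive; without it, a long loop leaves the window frozen and the user reaching for Ctrl+Alt+Del.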

Profiling your code is an obvious step to take when performance is an issue, particularly for processor-intensive operations. Profiling can point out where the most time is being consumed in a piece of code, which in turn will show you the most crucial piece of code to try to optimize.
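If you do not have a profiler to hand, even a crude timing harness can help locate the hot spots. This sketch uses the Win32 GetTickCount API, declared with VB's Declare syntax, around a hypothetical ExpensiveOperation routine:

```vb
' Win32 API: milliseconds elapsed since Windows started
Private Declare Function GetTickCount Lib "kernel32" () As Long

Private Sub TimeOperation()
    Dim lngStart As Long

    lngStart = GetTickCount
    ExpensiveOperation              ' Hypothetical code under test
    Debug.Print "Elapsed: " & (GetTickCount - lngStart) & " ms"
End Sub
```

Timing each candidate section in turn tells you where the time is actually going, which is often not where you would guess, and therefore which piece of code is worth the effort of optimizing.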

Preparing a Suitable Test Environment

If you are testing a system for a corporate environment, it's a good idea to have a dedicated suite of test machines. That way, machines are available for end users to try out the new system without inconveniencing you, and being away from their own work environment helps them focus on the task at hand. More important, it means that you are not running the software on a machine that might contain other versions of the system you are developing (or at least some of its components).

The nature, size, and variety of the test environment will inevitably depend on the size of your organization. A large corporation will conceivably have dedicated test rooms containing a dozen or so test machines. This setup will not only offer the scope to test the software under different conditions but will also allow for a degree of volume testing (several different users using the same network resources at the same time, particularly if you have developed a product that accesses a shared database). If you work for a small software house or you are an independent developer, chances are you will not be able to provide yourself with many testing resources.

Most software these days is aimed at one of two markets: a corporate environment of some form, or commercial sale. Corporate environments can normally provide test environments, and if you work for a small software house or you are an independent developer, you will probably be expected to perform system testing on site anyway. (Obviously, your user-acceptance tests must be on site.) If, however, no test facilities are mentioned during your early contract negotiations or project planning, it is worth asking what sort your employer or client can provide. It's better to arrange this at the outset than to muddle your way through a limited test.