Visual Basic

Decide Where to Put Test Code

This might seem like a strange heading, but what we need to consider is whether the nature of the development warrants a dedicated test harness program or whether a bolt-on module to the application itself would be suitable. Let's examine this further.

A major component, such as a remote ActiveX server, has clearly defined public interfaces. We want to test that each of these interfaces works correctly and returns the expected results, and we also need to be sure that the appropriate error conditions are raised. Under these circumstances, the most suitable approach is to write a dedicated test harness application that connects to the remote server and systematically exercises each interface. However, suppose a small, entirely self-contained GUI-based application is being created, with no other components being developed and used at the same time for the same deliverable. In this case, it might be more appropriate to write the test code as part of the application itself but make the test interfaces (for example, a menu item) visible only when a specific build flag is declared.
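As a minimal sketch of the second approach, the fragment below assumes a hypothetical conditional compilation constant named TEST_BUILD (declared in the Project Properties dialog or in the conditional compilation arguments) and a hypothetical menu item named mnuRunTests. The test menu is visible, and its handler compiled, only in builds where the flag is defined.

' Show the test menu only when the TEST_BUILD flag is defined.
Private Sub Form_Load()
    #If TEST_BUILD Then
        mnuRunTests.Visible = True
    #Else
        mnuRunTests.Visible = False
    #End If
End Sub

#If TEST_BUILD Then
Private Sub mnuRunTests_Click()
    ' Hand off to the application's (hypothetical) test module.
    modSelfTest.RunAllTests
End Sub
#End If

Because the handler sits inside the conditional block, none of the test code is compiled into the release build; the only cost to the shipping application is a hidden menu item.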

Ensure Source Code Coverage During Testing

A series of test scripts should, of course, exercise every single line of code in your application. Every If statement or Select Case statement you add multiplies the number of possible execution paths. This is another reason why it's so important to write test code at the same time as the "real" code: it's the only way you'll be able to keep up with every new execution path.
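To illustrate how quickly the paths multiply, consider the hypothetical routine below: the If contributes two branches and the Select Case three more, so even this trivial procedure already has six distinct execution paths to cover.

' Hypothetical example: 2 branches x 3 branches = 6 execution paths.
Private Sub ApplyDiscount(ByVal blnMember As Boolean, ByVal strGrade As String)
    If blnMember Then
        ' Branch 1: member pricing
    Else
        ' Branch 2: standard pricing
    End If

    Select Case strGrade
        Case "A"
            ' Branch 3: top grade
        Case "B"
            ' Branch 4: middle grade
        Case Else
            ' Branch 5: everything else
    End Select
End Sub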

The Visual Basic Code Profiler (VBCP) add-in that shipped with Visual Basic 5 can report the number of times each line of code was executed during a run. Running VBCP while you test lets you see which lines were executed zero times, so you can quickly identify the execution paths that have no coverage at all.

Understand the Test Data

This is an obvious point, but I mention it for completeness. If you are responsible for testing a system, it is vital that you understand the nature and meaning of whatever test data you are feeding into it.

This is one area in which I have noticed that extra effort is required to coax the users into providing the necessary information. They are normally busy people, and once they know that their urgently needed new system is actually being developed, their priorities tend to revert to their everyday duties. Therefore, when you ask for suitable test data, it should be supplied in documented form: a clearly defined set of inputs to be fed into the system, together with the set of results those inputs are expected to produce. The data should be sufficient to cover the various stages of testing (unit, integration, and system) for which the development team is responsible. You can bet that when the users start user acceptance testing, they will have a set of test data ready for themselves, so why shouldn't they have a set ready for you? Coax them, cajole them, threaten them, raise it in meetings, and get it documented, but make sure you get that data.

I realize that if you are also the developer (or one of them), you might know enough about the system to create your own test data on the users' behalf, but the testing process should not make allowances for any assumptions. Testing is a checking process; it is there to verify that you have understood the finer points of the design document. If you provide your own test data, the validity of the test might be compromised.
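One way to put the documented test pack straight to work is to hold each input and its expected result in a simple file and have a routine compare actual output against the expectation. The sketch below assumes a comma-separated file of "input,expected result" pairs and a hypothetical CalculatePremium function standing in for the routine under test.

Public Sub RunTestPack(ByVal strFileName As String)
    ' Read the users' documented test pack and report any mismatch
    ' between the expected result and what the code actually returns.
    Dim intFile As Integer
    Dim strLine As String
    Dim intComma As Integer
    Dim curInput As Currency
    Dim curExpected As Currency
    Dim curActual As Currency

    intFile = FreeFile
    Open strFileName For Input As #intFile
    Do While Not EOF(intFile)
        Line Input #intFile, strLine
        intComma = InStr(strLine, ",")
        curInput = CCur(Left$(strLine, intComma - 1))
        curExpected = CCur(Mid$(strLine, intComma + 1))
        curActual = CalculatePremium(curInput)
        If curActual <> curExpected Then
            Debug.Print "FAILED: input " & curInput & _
                ", expected " & curExpected & ", got " & curActual
        End If
    Loop
    Close #intFile
End Sub

Because the expected results come from the users' own pack rather than from the developer's head, the routine checks the system against the users' understanding of the design, which is precisely the point of the exercise.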