Create Regular Builds

I have always been a fan of regular system builds. They force the development team to keep things tidy. If everybody knows that whatever they are doing is going to have to cooperate with other parts of the system every Friday (for example), it is less likely that horrendously buggy bits of half-completed code will be left in limbo. Code left in this state will sometimes not be returned to for several weeks, by which time the original developer will have forgotten which problems were outstanding and will have to rediscover them from scratch.

If I'm writing a set of remote ActiveX servers, I will generally try to have a new build ready each Monday morning for the other developers to use. If I'm working on a large GUI-based system, I will probably aim for a build every other week. It's hard to be precise, however, because every project has its own influencing factors and, of course, you need enough staff to do this. If you are part of a development team, I suggest you discuss this among yourselves at the beginning of the coding cycle so that you can reach a group consensus on the best policy for your particular project. It is likely that there will be some slippage, and you might well decide to skip the occasional build while certain issues are resolved, but overall, creating regular builds allows everybody to get a feel for how the project is shaping up.

If you consider yourself to be a professional developer or part of a development team, you should be using a source code control system (for example, Microsoft Visual SourceSafe). I recommend that you only check in code that will not break a build. This helps maintain the overall quality of the project by keeping an up-to-date, healthy version of the system available at all times.

Write Test Scripts at the Same Time You Code

Having stepped through your code, you need to create a more formal test program that will confirm that the code does indeed behave as intended. Using a test script allows the same test to be run again in the future, perhaps after changes have been made.
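
As a minimal sketch of what such a script might look like, the routine below exercises a single function with a known input and also checks that an expected error condition is raised. The routine name CalcRepayment, its arguments, and the expected figure are hypothetical and stand in for whatever you are testing.

' A minimal, re-runnable test script. CalcRepayment, its arguments, and the
' expected figure are hypothetical; substitute your own routine and a
' known-good answer worked out independently.
Public Sub TestCalcRepayment()
    Const curEXPECTED As Currency = 584.59    ' assumed known-good repayment
    Dim curResult As Currency

    ' Exercise the normal path of execution with a known input.
    curResult = CalcRepayment(100000, 0.05, 300)    ' principal, annual rate, months
    If curResult = curEXPECTED Then
        Debug.Print "PASSED: CalcRepayment normal case"
    Else
        Debug.Print "FAILED: CalcRepayment returned " & curResult
    End If

    ' Confirm that an expected error condition really is raised.
    On Error Resume Next
    curResult = CalcRepayment(100000, 0.05, 0)      ' zero-length term
    If Err.Number = 0 Then
        Debug.Print "FAILED: zero-length term did not raise an error"
    Else
        Debug.Print "PASSED: zero-length term raised error " & Err.Number
    End If
    On Error GoTo 0
End Sub

Because the whole check lives in a single routine, rerunning the test after a change is simply a matter of calling it again, for example from the Immediate window.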

The amount of test code that you write is really a matter of judgment, but what you're trying to prove is that a path of execution works correctly and that any error conditions you would expect to be raised are indeed raised. For critical pieces of code (the mortgage calculation algorithm for a bank, for example), it might be worthwhile to have the code written a second time, preferably by a different developer, and then compare the results of the two versions. Of course, if there is a discrepancy, there is a 50 percent chance that it lies in the test version of the algorithm rather than in the "real" version, but this approach does provide a major confidence test.
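
To sketch the double-implementation idea, the harness below simply runs the "real" routine and an independently written check version over a spread of inputs and reports any disagreement. Again, CalcRepayment and CalcRepaymentCheck are hypothetical names standing in for whatever algorithm you are verifying.

' Sketch of the double-implementation check. CalcRepayment is the "real"
' routine; CalcRepaymentCheck is the independently written version used
' only for testing. Both names are hypothetical.
Public Sub CompareRepaymentImplementations()
    Dim curPrincipal As Currency
    Dim curReal As Currency
    Dim curCheck As Currency

    ' Run both versions over a range of inputs and flag any disagreement.
    For curPrincipal = 50000 To 250000 Step 10000
        curReal = CalcRepayment(curPrincipal, 0.05, 300)
        curCheck = CalcRepaymentCheck(curPrincipal, 0.05, 300)
        If curReal <> curCheck Then
            Debug.Print "DISCREPANCY at " & curPrincipal & ": " & _
                        curReal & " vs. " & curCheck
        End If
    Next curPrincipal
End Sub

A discrepancy does not tell you which version is wrong (that is the 50 percent chance mentioned above), but it does tell you exactly where to start looking.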

I know of a company that was so concerned about getting one algorithm right that it assigned three different developers to code the same routine. As it happened, each piece of code produced a slightly different answer. This was beneficial because it made the analyst behind the specification realize that he had not nailed it down tightly enough. This is a good example of the prototype/test scenario.