The Value of Software Testing
Software development projects come in all shapes and sizes. In terms of business applications, a standalone telephone directory system for a medium-sized company probably sits at the "relatively simple" end of the spectrum. With an underlying Microsoft Access database and a few data-bound controls, it doesn't take much effort to put together, and because of its extensive use of pretested components it probably wouldn't be too challenging to test, either. At the other end of the spectrum might be a major banking application that processes all branch-level account activity through a set of remote ActiveX servers working against a local Microsoft SQL Server database and then synchronizes each local set of data with the central bank's DB2 database. A system of this size is a much larger development effort, requiring more thorough and comprehensive planning and design, and producing a correspondingly greater amount of source code. A failure in that code could cost the bank money and, in turn, credibility in the business community. It is therefore imperative that the system work as expected.
Another example, perhaps closer to home for many of us, is the forthcoming Microsoft Windows NT 5. Back when Windows NT existed as version 3.51, it had about 5 million lines of code. The last released version of Windows NT 4 Server was the Microsoft Enterprise Edition, which included Microsoft Internet Information Server, Microsoft Message Queue Server, and Microsoft Transaction Server (all now considered part of the base platform) and contained 16 million lines of code in all. At the time of this writing (April 1998), it is estimated that the final gold release of Windows NT 5 will have about 31 million lines of code. This new code also includes some fairly radical changes to the existing technology, most notably the inclusion of Active Directory services. With so much code, the testing effort will be massive: the 400 or so programmers are joined by roughly 400 testers, a ratio of one tester for every developer. The overall team also includes around 100 program managers and an additional 250 people working on internationalization.
Although software development has formal guidelines that lay down techniques for drawing up logic tables, state tables, flow charts, and the like, the commercial pressures placed on a development team often mean that a system must be ready by a certain date no matter what. Some people might choose to argue with this observation, but it happens nevertheless. One of the biggest headaches for a software developer is time, or rather, the lack of it. When your project lead sits you down and asks you to estimate how long it will take to code an application from a design specification, it is difficult to know beforehand what problems will arise during the project. You also face a dilemma between giving yourself enough time to code it and not wanting to look bad because you think the project might take longer than the project lead expects. The logical part of your mind cuts in and tells you not to worry because it all looks straightforward. However, as the development cycle proceeds and the usual crop of interruptions comes and goes, you find that you are running short of time. The pressure is on to produce visible deliverables, so quality tends to be overlooked in the race to hit the deadline. Code is written, executed once to check that it runs, and you're on to the next bit. Eventually the development phase nears its end. You're a bit late, but that's because of (insert one of any number of reasons here), and you have two weeks of testing to do before the users get the system. The first week of the two-week test period is taken up with fixing a few obvious bugs and meeting with both the users and technical support over the implementation plan. At the beginning of the second week, you start to write your test plan and realize that there just isn't time to do it justice.
The users, however, raise the alarm that there are bugs, and the implementation date is pushed back by a month while the problems are sorted out. When the system finally goes live, you are transferred to another project but quietly spend the next fortnight trying to finish the documentation before anyone notices.
I dread to think how many systems have been developed under these conditions. I'm not saying that all development projects are like this, and the problems are slightly different when a team of developers is involved rather than an individual, but it's an easy trap to fall into. The scenario I've described indicates several problems, most notably poor project management. Even more detrimental to the quality of the final deliverable, however, is the lack of coordinated testing. The reason I tie testing so strongly to the project management function is that a developer who is behind schedule will often ditch the testing to get more code written. This is human nature, and discipline is required (preferably from the developer) to follow the project plan properly rather than give in to deadline urgency. It is very important to report any slippage that occurs rather than cover it up and try to fit in the extra work. The discipline I'm referring to involves writing a proper test plan beforehand and then striving to write code that can be easily tested. The project management process should ensure that the creation of the test suite proceeds alongside the actual development. I've seen projects fall woefully behind schedule and end up very buggy because of poor project managers working with relatively good developers. If a team includes one poor developer, the rest of the team will probably make up for it. If the project manager is poor at performing the job, however, the effect on the project can be disastrous, often because of the resulting low morale.
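The habit of writing tests alongside the code can be sketched in miniature. The following example, a rough sketch in Python rather than anything from the text, imagines a single routine from the telephone directory system mentioned earlier; the function and test names are illustrative assumptions, not taken from any real system:

```python
# A minimal sketch of "test alongside the code": a small routine paired
# with a check of both its happy path and its failure path. All names
# here are hypothetical, invented for illustration.

def format_directory_entry(name, extension):
    """Return a one-line directory entry such as 'Smith, J. x1234'."""
    if not name or not str(extension).strip():
        raise ValueError("name and extension are both required")
    return f"{name} x{extension}"

def test_format_directory_entry():
    # Happy path: a well-formed entry comes back formatted.
    assert format_directory_entry("Smith, J.", 1234) == "Smith, J. x1234"
    # Failure path: missing data should be rejected, not silently formatted.
    try:
        format_directory_entry("", 1234)
    except ValueError:
        pass
    else:
        raise AssertionError("empty name should raise ValueError")

test_format_directory_entry()
```

The point is not the particular assertions but the sequencing: the test exists before the deadline squeeze arrives, so "run it once to check that it runs" becomes a repeatable check rather than a one-off glance.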
Software projects often run into trouble, more so when they are team developments. Industry averages suggest that about four in every five software development projects overrun their planned time scales and budgets, and fewer than half actually deliver everything they are supposed to. In the fight to deliver a bug-free system as quickly as possible, project managers often end up negotiating with the end users for a reduction in functionality so that the developers can concentrate on the key parts of the system. The remaining functionality is often promised for delivery in the next version.