Partner with Another Developer
One good approach to testing is to partner with another developer with the understanding that you will test each other's code. Then, as you type, you will be asking yourself, "How would I test this if I were looking at this code for the first time? Would I understand it, and would I have all the information I need?"
Some questions are inevitable, but I have found that knowing from the outset that somebody else is going to perform a unit-level test on your code, without the same assumptions or shortcuts that you have made, is excellent news! How many times have you spent ages looking through your own code to track down a bug, only to spot it as soon as you start to walk through it with another developer? This is because we often read what we think we have written rather than what we actually have written. It is only in the process of single-stepping through the code for the benefit of another person that our brains finally raise those page faults and read the information from the screen rather than using the cached copy in our heads.
If you're looking at somebody else's code, you don't have a cached copy in the first place, so you'll be reading what is actually there. One further benefit of this approach is that it will prompt you to comment your code more conscientiously, which is, of course, highly desirable.
Test as You Go
Testing as you go has been written about elsewhere, but it is something that I agree with so strongly that I'm repeating it here. As you produce new code, you should put yourself in a position where you can be as certain as possible of its behavior before you write more code that relies on it. Most developers know from experience that the basic architecture needs to be in place and stable before they add new code. For example, when writing a remote ActiveX server that is responsible for handling the flow of data to and from SQL Server, you will need a certain amount of code to support the actual functionality of the server. The server will need some form of centralized error handler and perhaps some common code to handle database connections and disconnections. If these elements are merely coded, and development then continues on the actual data interfaces before the common routines have been tested, far more things can go wrong the first time you try to run the code. It's common sense, I know, but I've seen this sort of thing happen time and again.
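The same discipline applies in any language. The sketch below, in Python rather than the Visual Basic of the original example, shows the idea under stated assumptions: `report_error`, `open_connection`, and `close_connection` are hypothetical names for the shared plumbing, and sqlite3 stands in for SQL Server. The point is the driver at the bottom, which exercises the common routines on their own before any data-interface code is built on top of them.

```python
import sqlite3

# Hypothetical centralized error handler: every routine reports failures
# here, so the message format lives in one place.
def report_error(routine, exc):
    return f"Error in {routine}: {exc}"

# Common connect/disconnect helpers (sqlite3 stands in for SQL Server).
def open_connection(path=":memory:"):
    return sqlite3.connect(path)

def close_connection(conn):
    conn.close()

# Exercise the shared plumbing in isolation, before writing the data
# interfaces that will depend on it.
if __name__ == "__main__":
    conn = open_connection()
    conn.execute("CREATE TABLE t (id INTEGER)")
    close_connection(conn)
    try:
        conn.execute("SELECT 1")  # using a closed connection must fail cleanly
    except sqlite3.ProgrammingError as exc:
        print(report_error("smoke_test", exc))
```

A ten-line driver like this is cheap to write, and it means that when the first real data interface misbehaves, the connection handling and error reporting are already above suspicion.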
The first and most obvious way to test a new piece of code is to run it. By that, I don't mean just calling it to see whether the screen draws itself properly or whether the expected value is returned. I mean single-stepping through the code line by line. If this seems too daunting a task, you've already written more code than you should have without testing it. The benefit of this sort of approach is that you can see, while it's still fresh in your mind, whether the code is actually doing what you think it's doing.
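In a modern interactive debugger the same habit is easy to adopt. As a sketch (again in Python; `discount` is a hypothetical new routine, not from the original text), a single commented-out `pdb.set_trace()` call marks where you would drop into the debugger and step through each line with `n`, inspecting variables with `p` while the code is still fresh in your mind:

```python
import pdb

def discount(price, rate):
    # Uncomment the next line to single-step through this routine the
    # first time it runs: `n` steps to the next line, `p reduced` prints
    # the intermediate value.
    # pdb.set_trace()
    reduced = price * (1 - rate)
    return round(reduced, 2)

print(discount(100.0, 0.25))
```

Stepping through a routine this small takes under a minute, which is exactly the point: if stepping through your untested code would take an afternoon, you have written too much of it.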
Sometimes you will need to code routines that perform actions that will be difficult or impossible to reverse. When such routines fail, they might leave your application in an unstable state. An example might be a complicated file moving/renaming sequence. Your ability to test such code will be limited if you know that it might fail for unavoidable reasons. If you can predict that a sequence of operations might fail and you can't provide an undo facility, it helps to give the user a trace facility. The idea is that each action that is performed is written to a log window (for example, a text box with the MultiLine property set to True). If the operation fails, the user has a verbose listing of everything that has occurred up to that point and can therefore take remedial action.
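The trace idea can be sketched in a few lines. In this Python version (a minimal sketch, not the original's Visual Basic), a plain list stands in for the multiline log window, and `staged_rename` is a hypothetical step in a file-moving sequence. Each action is logged before it runs, so a failure partway through still leaves a record of exactly how far the sequence got:

```python
import os
import shutil
import tempfile

trace = []  # stands in for the multiline log window in the UI

def log(message):
    trace.append(message)

def staged_rename(src, dst):
    # Log the step *before* attempting it, so a failure still shows
    # which action was in progress when things went wrong.
    log(f"moving {src} -> {dst}")
    shutil.move(src, dst)
    log("move complete")

if __name__ == "__main__":
    workdir = tempfile.mkdtemp()
    src = os.path.join(workdir, "report.old")
    open(src, "w").close()
    try:
        staged_rename(src, os.path.join(workdir, "report.new"))
        staged_rename("no-such-file", os.path.join(workdir, "x"))  # deliberate failure
    except OSError as exc:
        log(f"FAILED: {exc}")
    print("\n".join(trace))  # the verbose listing the user can act on
```

When the second move fails, the printed trace shows the first move completed and names the step that was in progress, which is precisely the information the user needs to take remedial action by hand.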