There's been a lot of hype recently about Test-Driven Development (TDD). I really like it for the most part. It forces the programmer to think about how the code will be used, which often results in an easier-to-use API. It's also a good way to explore the edge cases and make sure each one is covered by a unit test.
I've found, though, that sometimes having to write the test before the code is a little too restrictive. Sometimes you have no idea what the resulting code will look like or how exactly it will achieve your goal. In that case, I'm an advocate of what I like to call exploratory coding: code first until you have an idea of what your code needs to do, perhaps with some ad hoc testing along the way. Then, once you have the functionality nailed down, cover it with tests. Of course, this only works if you're careful to keep your code modular and testable. If not, you may need to refactor first, or rewrite the code altogether following TDD.
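To make the "cover it with tests afterwards" step concrete, here's a minimal sketch. The `slugify` function is hypothetical, just an illustration: imagine it emerged from exploratory coding, and now gets pinned down with a unit test that captures the edge cases discovered along the way.

```python
import unittest

def slugify(title):
    """Turn a post title into a URL slug (hypothetical example)."""
    # Lowercase letters and digits survive; everything else becomes a space,
    # then runs of spaces collapse into single hyphens.
    words = "".join(c.lower() if c.isalnum() else " " for c in title).split()
    return "-".join(words)

class SlugifyTest(unittest.TestCase):
    # Written after the exploratory phase, locking in the behavior we settled on.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_collapses(self):
        self.assertEqual(slugify("TDD: Yes or No?"), "tdd-yes-or-no")

    def test_empty_title(self):
        self.assertEqual(slugify(""), "")
```

Run it with `python -m unittest`. The point isn't the slug logic itself; it's that the tests document the behavior you converged on, so the exploratory code doesn't stay untested.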
We often joke at work that the proper way to do TDD is to write the code first and comment it out, then get your failing test and fix it by uncommenting the code. This just goes to show how counter-intuitive test-first can be at times.
I often hear people argue that if you write the code first, there's a chance you'll get lazy and won't get around to testing it afterwards. I disagree. This just requires a little discipline, which I would argue is essential to being a good developer in the first place.
Essentially, the principle behind TDD is the importance of modular, well-tested code. Indeed, I would say that the rise of unit testing is one of the most significant developments in improving overall code quality in recent years.