The name Test Driven Development is deceptive. For me, it creates an image of software that is tested and ideally "working" before the product is even usable. What is really happening is that the programmer is creating a series of design cues that act as reminders to set this variable value here or exit a loop there. I think of TDD as a tool similar to outlining a new article. I start with a broad idea of the piece that needs to be written (for a developer, that is the feature), then work down to the details.
Rather than starting a new feature with a line of code that will eventually make it into production, TDD starts with a test or two. Those tests act as a constant reminder of what the programmer is building, and also as a sort of stoplight that tells them when to move on to the next little bit of code.
The "process" of a test-driven developer is known as Red Bar -> Green Bar -> Refactor:
- Red bar - Create a test, followed by just enough code to make the program compile but not enough to make the test pass. For example: call a function that doesn't exist yet, then create the function but have it do nothing.
- Green bar - Create just enough code to make the test pass.
- Refactor - Improve the design of the code you just implemented. Re-run unit tests; they should pass.
Test-infected developers run this cycle all day, at a pace anywhere from a few new tests a minute to a few minutes per new test.
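The cycle can be sketched with Python's built-in `unittest` module. Here `fizzbuzz` is a made-up example function, and the two definitions show the red-bar stub and the green-bar version side by side:

```python
import unittest

# Red bar: write the test first, plus a stub that lets the file run
# but cannot possibly pass.
def fizzbuzz(n):
    raise NotImplementedError("stub")

class TestFizzbuzz(unittest.TestCase):
    def test_multiple_of_three_says_fizz(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_other_numbers_pass_through(self):
        self.assertEqual(fizzbuzz(4), "4")

# Green bar: replace the stub with just enough code to make both tests pass.
def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    return str(n)

# Refactor would come next: clean up the code, then re-run the tests.
if __name__ == "__main__":
    unittest.main(exit=False)
```

Keeping both definitions in one file is only for illustration; in practice the stub is simply edited in place once the test goes red.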
Fending Off Risk
Each code change creates risk: risk from scenarios directly related to that change, like users entering values larger than a data type allows, and more indirect risk, like the programmer misunderstanding an old scenario, misreading the code, and changing an old behavior. TDD doesn't prevent risk, but re-running the old unit tests does re-check that the old expectations still hold.
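As a sketch of that second kind of risk, here is a hypothetical old test that pins down existing range-checking behavior; a later "harmless" edit to the function would turn it red. The `parse_quantity` name and the 16-bit limit are invented for this example:

```python
MAX_QUANTITY = 32767  # assumed 16-bit limit, invented for this example

def parse_quantity(text):
    """Parse a user-entered quantity, rejecting out-of-range values."""
    value = int(text)
    if value < 0 or value > MAX_QUANTITY:
        raise ValueError(f"quantity out of range: {value}")
    return value

# Written when the feature first shipped. Re-run on every later change,
# it re-checks that the old expectation still holds.
def test_rejects_values_over_the_limit():
    try:
        parse_quantity("32768")
    except ValueError:
        return  # still the expected behavior
    raise AssertionError("expected ValueError for an oversized quantity")

test_rejects_values_over_the_limit()
```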
One way to do this is by running all the tests all of the time. While developing, running your own tests against the production code, you'll not only see your new tests go from red to green, but you'll also see new failures that get introduced in other, unanticipated parts of your code base.
TDD also works very well with pair programming, specifically pairing a programmer with a tester. With this setup, the tester can stub out tests and continually ask questions along the lines of "What about this?" and "What happens if the user does that?" while the programmer works on making those tests green.
Since you will be writing and running checks before production code exists, some problems can be fixed before your code hits CI.
Another hidden benefit of TDD is a reduced batch size, the amount of work a programmer does in each "step." Traditionally, a developer would work on a feature until it was done, which means the earliest anyone else might see the new code was when the feature was finished. The batch size is however big the feature is, and that probably varies a lot. TDD encourages the programmer to make the average batch size a little smaller and a little more consistent.
The normal batch size for a person using TDD is whatever it takes to get a few tests passing. Each time a few tests pass, potentially shippable code can be checked in and sent off to the CI system.
Common Types of Resistance
The two most common complaints about TDD I see are related to speed and the amount of extra code needed to make things work.
Starting TDD often feels incredibly slow. Rather than immediately writing code that can be run, the programmer has to sit and think for a bit, then write a couple of tests. That takes time, especially in the beginning while people are getting used to the new way of working. And sometimes the old way only seemed faster because tests were never being written in the first place.
That slowness is largely an illusion. Writing tests up front usually means better code quality and less need for rework and churn later. Have you ever gotten a test build that falls right over as soon as you log in? That happens much less with TDD.
The other complaint I see concerns all the code required to make TDD work, namely mock objects. Since the thing you are testing doesn't exist yet, you have to rely on mock objects to simulate the real ones. This does add overhead to the project; the mocks have to be created and maintained as the project changes over time. On the other hand, mocking also reduces coupling between components and can be a valuable design tool.
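A minimal sketch with Python's `unittest.mock`: the payment gateway the checkout code will eventually call doesn't exist yet, so a `Mock` stands in for it. The `charge` call and its return shape are assumptions made up for this example:

```python
from unittest.mock import Mock

def checkout(cart_total, gateway):
    """Charge the cart total and report whether the charge succeeded."""
    receipt = gateway.charge(amount=cart_total)
    return receipt["status"] == "ok"

# The real gateway doesn't exist yet; a Mock simulates it.
gateway = Mock()
gateway.charge.return_value = {"status": "ok"}

assert checkout(42.50, gateway) is True
# The mock also lets the test verify how it was called.
gateway.charge.assert_called_once_with(amount=42.50)
```

Because the test only talks to the mock, it doubles as a design exercise: it forces a decision about what the gateway's interface should look like before any gateway code exists.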
Starting Test Driven Development is a learning experience and can take a little time. Much like planting a tree, the best time to start is today.