A recent question from a coworker elicited ideas I'd like to share about one of my favorite topics: Test Driven Development.
The thing that blocks me the most is striking the right balance between writing too many tests such that the software isn’t as flexible and not enough such that the software isn’t well tested.
In practice, I’m constantly asking myself, “How much to test?” The obvious answer is everything, I guess, but it’s difficult to define what “everything” is.
It’s great that these questions are coming up for you, because it shows that you’re engaging with the practice at a higher level.
If you think your tests are making the software less flexible, then you’re writing the wrong tests. The tests should represent what you want the software to do, not what it already does. This is a practice, and it takes time to become skillful at it.
It's excellent that you're asking yourself "How much to test?" You should be asking that question constantly. There is no single right answer. “Test everything” is incorrect because it fails to take into account the cost of writing tests. There’s a good heuristic I recommend: 100% coverage, meaning every branching code path is exercised by a test. My code editor gives me real-time feedback telling me whether each line is covered, and some packages I've created recently use an automatic tool to measure coverage and won’t pass npm test unless the tool reports 100%.
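For example, here is a hypothetical package.json sketch using the nyc coverage tool (the mocha test runner is an assumption; the same idea works with other tools). With --check-coverage, npm test exits with a failure unless every coverage metric reaches 100%:

```json
{
  "scripts": {
    "test": "nyc --check-coverage --branches 100 --functions 100 --lines 100 --statements 100 mocha"
  }
}
```

Wiring the threshold into the test script itself means nobody has to remember to check a report; an uncovered branch fails the build the same way a failing test does.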
It may take some time, but after you’ve been practicing 100% coverage for a while you’ll reach a beautiful zen moment: you’ll realize that code which isn’t covered by a test has no purpose in life. Why is it there? How is it justified? What value, exactly, does it create? There will be times when you’ll discover uncovered code, and your response will be either (a) to write a test to cover it, or (b) to erase it, as though you’re a sculptor whose chisel has found stone to remove to make your creation more beautiful.