Hacker News

The problem is that many testing advocates are 100% testing diehards. They believe that anything under 100% test coverage across several disciplines (unit, functional, integration, etc.) is just as bad as no testing at all.

There is, of course, a middle way. Implement as much testing as makes sense for you. Think Pareto: just the right 20% of testing effort could give you 80% of the gains.

So implement integration tests that exercise your app very broadly; as soon as one fails, you know you can start looking deeper. No need to always start with line-by-line unit tests :)
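A minimal sketch of that idea, with a made-up `signup` flow standing in for "the app" (all names here are hypothetical):

```python
# Hypothetical app code: one entry point wiring validation and persistence together.
def signup(db, email):
    if "@" not in email:               # validation layer
        raise ValueError("invalid email")
    db[email] = {"welcomed": True}     # persistence layer
    return db[email]

# One broad integration test: it exercises the whole flow at once.
# If anything under signup() breaks, this fails first, and *then* you dig deeper.
def test_signup_end_to_end():
    db = {}
    assert signup(db, "a@example.com")["welcomed"]
    assert "a@example.com" in db

def test_signup_rejects_bad_email():
    try:
        signup({}, "not-an-email")
        assert False, "expected ValueError"
    except ValueError:
        pass

test_signup_end_to_end()
test_signup_rejects_bad_email()
```

The point is that one coarse test covers several layers at once; you only drop down to fine-grained unit tests when it fails.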



You're right. And the problem with that is that the only opportunity for 100% coverage comes when you can start a project that way, with the buy-in of everyone on it. That is only likely to happen when everyone is familiar with TDD and comfortable with it, which creates a substantial hurdle for a group adopting it for the first time.

I prefer the approach we used here (at a non-startup). We simply decided that we were going to use automated tests as much as possible on a new project. Everyone was encouraged to use them. Our coverage wasn't 100%. However, when you are committed to creating an automated test case to reproduce any bug regardless of how the bug was originally found, your coverage expands. Usually you can write variants of the same test to cover related areas.
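A sketch of that bug-first workflow (the function, the bug, and the ticket number are all invented for illustration):

```python
import re
import unicodedata

# Hypothetical: a bug report says slugify("Héllo  World") produced "hllo--world".
# Step 1: capture the report as a failing test. Step 2: fix the code (below).
# Step 3: add cheap variants of the same test to cover related areas.
def slugify(title):
    # Fold accents to ASCII, then collapse runs of non-alphanumerics to one "-".
    ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]+", "-", ascii_title.lower()).strip("-")

def test_bug_1234_accents_and_double_spaces():
    # The test that reproduces the original report, kept forever as a regression test.
    assert slugify("Héllo  World") == "hello-world"

def test_variants_of_the_same_bug():
    # Variants written at the same time, expanding coverage of nearby cases.
    assert slugify("  Leading/Trailing  ") == "leading-trailing"
    assert slugify("Ünïcode!") == "unicode"

test_bug_1234_accents_and_double_spaces()
test_variants_of_the_same_bug()
```

Each fixed bug leaves behind a regression test plus a few variants, which is how coverage grows without a 100%-coverage mandate.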

I can't speak for every member of the group, but my total testing time went down slightly and my test coverage improved enormously. Full TDD is not necessary in every instance. Just because you can't immediately make the jump to full TDD doesn't mean that automated unit test tools won't make your code better, and your testing easier.


I agree. It seems to me that, like many things in life, the answer lies somewhere between the extremes.



