Possible Duplicate:
What is a reasonable code coverage % for unit tests (and why)?
I am in the middle of putting together some guidelines around unit test code coverage, and I want to specify a number that really makes sense. It’s easy to repeat the 100% mantra that I see all over the internet without considering the cost-benefit analysis and when diminishing returns actually set in.
I’d like to hear from people who have actually reported code coverage on real-life, medium/large-sized projects. What percentages were you seeing? How much is too much? I really want a balance (in figures) that will help developers produce high-quality code. Is 65% coverage too low to expect? Is 80% too high?
When you mix code coverage with cyclomatic complexity, you can use the CRAP metric.
From artima.com:
Method’s Cyclomatic Complexity    % of coverage required to be below CRAPpy threshold
------------------------------    ---------------------------------------------------
0 – 5                             0%
10                                42%
15                                57%
20                                71%
25                                80%
30                                100%
31+                               No amount of testing will keep methods this complex out of CRAP territory.

No amount of code coverage is going to guarantee "high quality code" by itself.
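The formula behind that table, per the artima.com article, is CRAP(m) = comp(m)² × (1 − cov(m))³ + comp(m), where cov(m) is the method’s coverage as a fraction and a score above 30 lands in CRAP territory. A minimal sketch (the function names are mine):

```python
def crap_score(complexity: int, coverage: float) -> float:
    """CRAP metric: comp^2 * (1 - cov)^3 + comp.
    `coverage` is a fraction in [0, 1], not a percentage."""
    return complexity ** 2 * (1 - coverage) ** 3 + complexity

def is_crappy(complexity: int, coverage: float, threshold: float = 30.0) -> bool:
    """True when the method is over the CRAPpy threshold."""
    return crap_score(complexity, coverage) > threshold
```

Plugging in the table’s rows checks out: a method with complexity 25 needs 80% coverage to score exactly 30 (625 × 0.2³ + 25 = 30), and at complexity 31+ even 100% coverage leaves a score of 31, which is why no amount of testing rescues those methods.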
From the comments…
It’s definitely not too lax to give simple methods a pass on coverage. What you will likely find when applying this to existing code is that coverage rises as you refactor the ugly methods (coverage should rise; otherwise you’re refactoring dangerously).
The complexity 0–5 methods are essentially low-hanging fruit, and the ROI of testing them isn’t all that great. That said, those methods are wonderful for learning TDD because they’re often very easy to test.