This is about .NET libraries (DLLs).
What are the options for measuring code covered by unit test cases? Is measuring code coverage actually worth the effort? I suspect it might be too easy to cover 70% of the code and almost impossible to go beyond 90%.
[EDIT] Another interesting question (put up by "E Rolnicki") is: What is considered a reasonable coverage %?
-
NCover will help show you coverage. Coverage is incredibly useful, but unfortunately it can be gamed. If you have bad developers covering code just to get the percentage up, then yes, it will ultimately be useless and will hide uncovered areas. Once you fire those people you can fix it and get back to useful information. Setting coverage goals that are unattainable is a sure-fire way to get bad coverage.
E Rolnicki: What is considered a reasonable coverage %?
Nick Veys: Whatever you feel comfortable with, really. 70% should be doable, but some areas you'll want at 100% and others you won't care about much at all. Knowledge of your codebase is what you're going for ultimately, so after a few runs of the coverage tool you get a feel for what you want out of it.
-
I haven't used it personally, but one of my co-workers swears by nCover (http://www.ncover.com/).
As far as coverage goes, in Ruby at least, 70% is easy, 90% is doable, and 100% is seldom a possibility.
-
NCover (both the commercial one and the open source one with the same name) and the code coverage tool in Visual Studio are pretty much your main tools in the MS world.
Code coverage is a reverse metric. It doesn't really show you which code is adequately tested. As Nick mentioned, you can have tests that cover code but don't really test much. Instead, code coverage tells you which areas of your code have absolutely no tests. From there, you can decide whether it makes sense to write tests for that code.
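As a hypothetical illustration of "tests that cover but don't really test much" (NUnit-style, with an invented `OrderCalculator` class), the first test below executes every line of the method and so counts toward coverage, yet asserts nothing:

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderCalculatorTests
{
    // This "test" runs every line of CalculateTotal, so a coverage tool
    // reports those lines as covered -- but with no assertion, it would
    // pass even if the calculation were completely wrong.
    [Test]
    public void CalculateTotal_Executes()
    {
        var calculator = new OrderCalculator();
        calculator.CalculateTotal(new[] { 10m, 20m });
    }

    // A real test pins down the behaviour as well as the coverage.
    [Test]
    public void CalculateTotal_SumsLineItems()
    {
        var calculator = new OrderCalculator();
        Assert.AreEqual(30m, calculator.CalculateTotal(new[] { 10m, 20m }));
    }
}
```

Both tests produce identical coverage numbers; only the second one tells you anything when it passes.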
In general, I think you should measure code coverage, since it doesn't take much effort to set up and it at least gives you more information about your code than you had before.
I agree that getting that last fraction of code is probably the toughest and there may be a point where the ROI on it just doesn't make sense.
wowest: Exactly what I wanted to say.
-
If you are doing Test-Driven Development, your code should hit at least 70% without trying. For some areas you just can't have test coverage, or it is pointless to; that's where NoCoverage attributes for NCover come in handy (you can mark classes to be excluded from code coverage).
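As a sketch of how that exclusion works with the open-source NCover console (the attribute name here is invented; NCover only cares that you pass the same name on the command line via its exclude-attribute option):

```csharp
using System;

// A do-nothing marker attribute; the name is entirely up to you.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class CoverageExcludeAttribute : Attribute { }

// Generated or untestable code can then be marked for exclusion:
[CoverageExclude]
public class DesignerGeneratedForm
{
    // ...
}
```

You would then tell NCover to skip anything carrying that attribute, e.g. with something like `NCover.Console.exe //ea MyProject.CoverageExcludeAttribute ...` (check your NCover version's documentation for the exact switch).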
Code coverage shouldn't be adhered to religiously; it should simply be a useful way of hinting at areas you have missed with your testing. It should be your friend, not a tyrant!
-
There are two things to take into account when looking at Code Coverage.
- Code coverage is a battle of diminishing returns: beyond a certain point each additional percentage yields less value. Some code (like core libraries) should be 100% covered whereas UI code/interaction can be very difficult to cover.
- Code coverage is a metric that can be deceiving: 100% code coverage does not equate to a fully tested application.
Take for example this snippet:
    if (a == 2) { do_A_Stuff(); }
    if (b == 3) { do_B_Stuff(); }
Run a test where a = 2 and a second test where b = 3: that's 100% code coverage. But what happens with a test where a = 2 and b = 3? These interactions are "variable relationships", and missing them can lead to overconfidence in coverage metrics.
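A sketch of what those tests might look like (NUnit-style; the `Run` helper standing in for the two-`if` snippet is hypothetical):

```csharp
using NUnit.Framework;

[TestFixture]
public class VariableRelationshipTests
{
    // These two tests together touch every line of the snippet above,
    // so a line-coverage tool reports 100%...
    [Test] public void A_Is_Two()   { Run(a: 2, b: 0); }
    [Test] public void B_Is_Three() { Run(a: 0, b: 3); }

    // ...but the run in which BOTH branches fire (and do_A_Stuff and
    // do_B_Stuff interact) is only exercised by this third test, which
    // the coverage number alone would never tell you was missing.
    [Test] public void A_Is_Two_And_B_Is_Three() { Run(a: 2, b: 3); }

    // Hypothetical wrapper that drives the if/if snippet under test.
    private void Run(int a, int b) { /* ... */ }
}
```

Line coverage is identical whether or not the third test exists; only path-level thinking reveals the gap.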
-
Visual Studio Team System Developer Edition includes code coverage. This can be included when running unit tests.
Open up the .testrunconfig file and select which assemblies you want coverage data for.
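For reference, the relevant fragment of a .testrunconfig file looks roughly like this (taken from memory of the VS 2005/2008 Team System format; element and attribute names may differ between versions, and the file paths are placeholders):

```xml
<TestRunConfiguration name="Local Test Run"
    xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2006">
  <!-- Enable coverage and list the assemblies to instrument. -->
  <CodeCoverage enabled="true">
    <Regular>
      <CodeCoverageItem BinaryFile="MyLib\bin\Debug\MyLib.dll"
                        PdbFile="MyLib\bin\Debug\MyLib.pdb" />
    </Regular>
  </CodeCoverage>
</TestRunConfiguration>
```

In practice you rarely edit this XML by hand; the Visual Studio test-run configuration dialog writes it for you when you tick the assemblies under "Code Coverage".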