We have done our best to prepare an unbiased, feature-based comparison of the various code coverage tools available on the market, in order to help with the evaluation process. The information gathered here is based on the tools' official documentation as well as on the documentation of the tools' integrations. If you find any information presented here inaccurate or outdated, or if you think a new tool or feature should be added to the comparison, please do not hesitate to raise an issue - we will review it and update the page accordingly.
|Class files||off-line instrumentation||off-line and on-the-fly instrumentation||off-line and on-the-fly instrumentation|
|JMX / sonar||via extensions||lists tests per file|
|Source code metrics|
|Available metrics||20+ metrics, also custom ones||cyclomatic complexity||cyclomatic complexity|
|Data management and report filtering|
|Merging of coverage databases||clover:merge||via <jacoco:merge>|
|Historical reporting||more details||via sonar||via sonar|
|Selecting scope of code coverage||file patterns, class patterns, method patterns (entire signature), code block type, statement's regular expression, code complexity, CLOVER:OFF/ON code comments|
|class patterns||file patterns||package patterns|
|Cross-report linking||via <structure> element|
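As a sketch of the merging row above: JaCoCo's Ant tasks can combine several execution data files into a single database with `<jacoco:merge>`. The file names and paths below are illustrative assumptions, not taken from the source.

```xml
<project name="merge-coverage" xmlns:jacoco="antlib:org.jacoco.ant">
  <!-- Make the JaCoCo Ant tasks available; the jar location is an assumption -->
  <taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml"
           classpath="lib/jacocoant.jar"/>

  <target name="merge">
    <!-- Combine all execution data files into one coverage database -->
    <jacoco:merge destfile="target/merged.exec">
      <fileset dir="target" includes="*.exec"/>
    </jacoco:merge>
  </target>
</project>
```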
Clover recognizes Groovy-specific language constructs, such as the "?." and "?:" operators, properties, and methods with default arguments. It also handles Grails-specific AST transformations, such as controller classes. Thanks to this, you can see more accurate data in reports (compared with pure byte-code instrumentation tools).
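One of the scope-selection mechanisms listed in the table above is the CLOVER:OFF/ON code comment pair, which excludes a block of source code from instrumentation. A minimal sketch (the class and its methods are hypothetical, chosen only to illustrate the placement of the comments):

```java
// Sketch: excluding boilerplate from Clover instrumentation with
// CLOVER:OFF / CLOVER:ON comments. The Customer class is illustrative.
public class Customer {
    private final String name;

    public Customer(String name) {
        this.name = name;
    }

    ///CLOVER:OFF
    // Generated boilerplate we don't want counted against coverage
    @Override
    public String toString() {
        return "Customer[" + name + "]";
    }
    ///CLOVER:ON

    public String getName() {
        return name;
    }

    public static void main(String[] args) {
        System.out.println(new Customer("Ada").getName());
    }
}
```

The code compiles and runs normally with or without Clover; the comments only affect which statements the instrumenter counts.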
|Other||instrumentation is class-based, so theoretically any JVM language is supported, but it may lack good reporting (especially for language-specific constructs); it may also have problems with synthetic methods, etc.|
1.3-1.8 (for "-source" level setting)
|Supported test frameworks|
It's possible to define custom patterns for tests, based on file names, class signatures, method signatures (including annotations and javadoc tags). More details.
It's possible to define test boundaries by special instructions put in code comments
|Build tools integrations|
CI server integrations
last release - 2013
few releases / year
last release - 2011
few releases / year
Atlassian Support, 24h response
(for customers with active maintenance subscription)
open source community
|open source community||open source community||open source community|
|Advantages||Clover has great and highly configurable HTML reports (showing not only code coverage but also top risks, etc.), per-test code coverage and test optimization, distributed per-test coverage, and many tool integrations; it is being actively developed and supported.||Easy to use thanks to off-line byte code instrumentation. You can measure coverage without having the source code. It has a very nice and easy-to-navigate HTML report.||Very easy to integrate thanks to on-the-fly byte code instrumentation. You can measure coverage without having the source code. It has a nice HTML report (example).||It has the most detailed code coverage metric (MC/DC), which may be useful for critical systems (medical, aeronautical, etc.). The Eclipse plug-in also comes with a handy Boolean Expression Analyzer view and a Test Correlation matrix. It also has an interesting feature to start/stop test cases via JMX, which can be useful for manual testing.||PIT is a tool for mutation coverage: it not only measures line coverage of your code but also performs mutations in the application logic in order to check how well written your tests are.|
|Disadvantages||Because Clover is based on source code instrumentation, integration requires a build - it is necessary to recompile the code with Clover. Most of Clover's integrations offer automatic integration, but in some cases you may need to add the Clover JAR to a class path or set some Clover options.||Classes must be compiled with the debug option.||Classes must be compiled with the debug option.||The last release was 3 years ago. The generated HTML report is quite fragmented - source code is shown separately for every method.|
1. Statement and line metrics are roughly similar in granularity (i.e. code typically has about one statement per line). However, statement coverage has a huge advantage over line coverage when the language encourages many short statements on a single line (a good example is a Java 8 stream with several map() and filter() calls) - it is more precise, as it can detect partially covered lines.
4. While it is possible to instrument test classes and run test frameworks with Cobertura and JaCoCo, there is no built-in, dedicated support for these frameworks, which means that the standard HTML report shows neither test results nor per-test coverage. There is also no per-test data in the Eclipse IDE (EclEmma, which is based on JaCoCo).
5. Sonar can generate separate coverage data sets for every test case and show them in a combined report. See http://deors.wordpress.com/2014/07/04/individual-test-coverage-sonarqube-jacoco/
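The statement-vs-line distinction from note 1 can be illustrated with a small, hypothetical example (the class and method names are mine, not from the source). Several statements - the lambda bodies - share one physical line, so line coverage can report the line as covered even when some of those statements never ran:

```java
import java.util.List;
import java.util.stream.Collectors;

public class CoverageGranularity {
    // Several statements (the filter and map lambda bodies) share one line.
    // Line coverage marks the whole line covered as soon as the pipeline runs;
    // statement coverage can additionally report that the map() lambda was
    // never executed when the filter rejected every element.
    static List<String> shout(List<String> words) {
        return words.stream().filter(w -> w.length() > 3).map(String::toUpperCase).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // With only short words the filter rejects everything, so the
        // map() lambda never runs - yet the line itself counts as covered.
        System.out.println(shout(List.of("a", "to")));
    }
}
```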
Q: Why is Emma not included in this comparison?
A: The original Emma is a dead tool; its last release was made in 2005. On the other hand, Eclipse Emma (EclEmma), which was originally based on Emma, has been based on JaCoCo since version 2.0 (released in 2011). Its feature set is therefore the same as JaCoCo's.