Getters, Setters and the Great Coverage Conspiracy

Posted in Java, Software Development by Dan on April 1st, 2008

A frequent topic of Java-related blogs is whether it is worthwhile to write unit tests for simple getters and setters. A posting I came across today proposes a reflection-based trick for eliminating much of the work involved in writing these tests. Maybe this is an improvement over other approaches, but what bothers me most is the motivation for wanting to test getters and setters in the first place.
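I won't reproduce the code from that posting here, but reflection-based tricks of this kind usually take a shape something like the sketch below, which round-trips each JavaBeans property through its setter and getter (the class and helper names are my own invention, and I'm assuming conventional JavaBeans-style properties):

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

/**
 * Minimal sketch of a reflection-based getter/setter test: for each
 * property that has both a getter and a setter, write a sample value
 * and check that reading it back returns the same value.
 */
public class BeanPropertyTester
{
    public static void testProperties(Object bean) throws Exception
    {
        BeanInfo info = Introspector.getBeanInfo(bean.getClass(), Object.class);
        for (PropertyDescriptor property : info.getPropertyDescriptors())
        {
            if (property.getReadMethod() != null && property.getWriteMethod() != null)
            {
                Object value = sampleValueFor(property.getPropertyType());
                property.getWriteMethod().invoke(bean, value);
                Object result = property.getReadMethod().invoke(bean);
                if (!value.equals(result))
                {
                    throw new AssertionError("Property " + property.getName() + " failed the round-trip.");
                }
            }
        }
    }

    // Hypothetical helper covering a few common property types.
    private static Object sampleValueFor(Class<?> type)
    {
        if (type == String.class) return "test";
        if (type == int.class || type == Integer.class) return 42;
        if (type == boolean.class || type == Boolean.class) return Boolean.TRUE;
        throw new IllegalArgumentException("No sample value for " + type);
    }
}
```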

It seems that many of those advocating unit testing simple getters and setters are driven by a desire to improve their coverage scores, with the actual utility of the tests a secondary concern.

Firstly, I should state that I am absolutely in favour of measuring coverage for test suites. In fact, I think it’s pretty much essential. If you are writing automated tests but not measuring code coverage then you are just scratching around in the dark. What’s great about coverage reports, particularly those that show branch coverage as well as line coverage, is that you get to see exactly where your tests are neglecting certain scenarios. Coverage reports can also be useful in highlighting code that is not used and can be removed.
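To see what branch coverage adds, consider this contrived example (the class name and the 10% figure are mine). A test that only ever calls price(10.0, 100) executes every line of the method, so line coverage reports 100%, yet the false branch of the if statement is never exercised:

```java
public class Discounter
{
    /**
     * Applies a bulk discount to large orders.  A test using only
     * quantity >= 100 runs every line, but the implicit "no discount"
     * branch is never taken; only branch coverage reveals this.
     */
    public double price(double unitPrice, int quantity)
    {
        double total = unitPrice * quantity;
        if (quantity >= 100)
        {
            total *= 0.9; // 10% bulk discount
        }
        return total;
    }
}
```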

The problem with code coverage is that it only shows where your tests are weak. It does not prove that your tests are good, even if the coverage is 100%. So writing tests with the sole aim of improving the coverage score is merely an exercise in self-deception. It’s the tail wagging the dog.
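To make that concrete, here is a test for the hypothetical Discounter above that scores 100% line and branch coverage while verifying nothing at all:

```java
import org.junit.Test;

public class DiscounterTest
{
    /**
     * Executes every line and both branches of Discounter.price(),
     * so the coverage report is perfect, but there are no assertions.
     * This test would still pass if the arithmetic were completely wrong.
     */
    @Test
    public void testPrice()
    {
        Discounter discounter = new Discounter();
        discounter.price(10.0, 100); // discount branch
        discounter.price(10.0, 1);   // no-discount branch
        // Green bar, full coverage, zero verification.
    }
}
```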

If you need to add tests for all your getters and setters in order to achieve x% code coverage, where x is some mandated target, there are two questions you need to ask:

  1. Do you have too many getters and setters?
  2. Are you avoiding testing difficult code?

I could go on for pages about the first point. There are far too many getters and setters in most Java code. Too many developers think encapsulation is simply a case of making fields private and providing access to them with getters and setters. It would be better to aim for making fields private and not providing access to them with getters and setters. Favouring constructor-based dependency injection over setter-based DI is something else to consider (although that’s a whole other article in the making…).
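To sketch the difference (the class names are mine), compare a setter-injected dependency, which forces the field to be non-final and leaves the object incomplete until the setter is called, with a constructor-injected one, which needs no accessor methods at all:

```java
import javax.sql.DataSource;

// Setter-based injection: requires a public setter, the field cannot be
// final, and the object is in a half-constructed state until injection.
class SetterInjectedReportGenerator
{
    private DataSource dataSource;

    public void setDataSource(DataSource dataSource)
    {
        this.dataSource = dataSource;
    }
}

// Constructor-based injection: the dependency is mandatory and immutable,
// and no getters or setters are needed.
class ConstructorInjectedReportGenerator
{
    private final DataSource dataSource;

    public ConstructorInjectedReportGenerator(DataSource dataSource)
    {
        this.dataSource = dataSource;
    }
}
```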

How do you know if you have too many getters and setters? Well, your coverage reports are a good starting point. If the getters and setters are essential to your application, it will be just about impossible to avoid exercising them indirectly from other tests. If you have good coverage elsewhere but the getters and setters aren't touched, the chances are they aren't needed. Adding more tests is not the only way of improving your test coverage; another way is to remove code so that there is less to test.

The second question above is also important. If you require your team to achieve a rigid 75% test coverage target, you are almost guaranteeing that you will get tests for the 75% of the application that is easiest to test. Writing tests for getters and setters helps to fulfil the 75% requirement without anyone having to think about how to test the difficult parts of the system. Unfortunately, that other 25% is probably the code that really needs testing and refactoring.

For me it’s pretty clear. Don’t write unit tests for getters and setters. Better still, don’t write getters and setters (except where necessary). And don’t confuse test-driven development with coverage-driven development.