Speed in unit tests matters
It’s extremely frustrating to wait more than ten minutes to commit some new code just because a big, slow unit test suite has to finish first. It’s also frustrating when you’re actively chasing a known bug that’s been exposed by the unit tests and, having made a change that will hopefully fix it, have to sit and twiddle your thumbs while the tests re-run. Efficiency matters, even in unit tests.
I’ve spent a few workdays attacking the test suite for the module I’m working on with the proper tools: a profiler and KCacheGrind, a profiling-data visualiser. By figuring out where the suite spent most of its time and optimising the slow parts (largely by caching data that were being recomputed superfluously, caching prepared statements, and the like), I cut the expected running time of the company-wide unit tests by an estimated 10% and of my own module’s tests by approximately 80%: an improvement of nearly a factor of five, from 12:31 down to 2:40!
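The post doesn’t say which language or profiler was used, so here is a minimal sketch of that workflow in Python, assuming CPython’s built-in cProfile; the test-suite and helper names are purely illustrative, not from the post. The idea is the same: run the suite under the profiler, then inspect where the cumulative time goes.

```python
import cProfile
import io
import pstats

# Illustrative stand-in for work a slow test suite might do repeatedly;
# this function and its workload are hypothetical, not from the post.
def recompute_expensive_data(n):
    return sum(i * i for i in range(n))

def run_test_suite():
    # Each "test" superfluously recomputes the same data.
    for _ in range(50):
        recompute_expensive_data(10_000)

profiler = cProfile.Profile()
profiler.enable()
run_test_suite()
profiler.disable()

# Dump raw stats to disk so an external visualiser can pick them up...
profiler.dump_stats("tests.prof")

# ...or print the hottest functions by cumulative time directly.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

For the KCacheGrind step, the pyprof2calltree tool can convert a cProfile dump like `tests.prof` into the callgrind format that KCacheGrind reads, which gives the same kind of call-graph visualisation described above.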
Of course, the running time is going to creep back up as the test suite grows, coverage improves, and setup becomes more involved. That’s all the more reason to do this kind of optimisation, though, and it just means it may be worth doing again at some point in the future.
As a bonus, the majority of the performance improvements were to business code exercised by the unit tests rather than to code exclusive to the test framework, so application performance should improve as well. I should be cautious in my conclusions here, though: while there will be improvements, some of the code exercised very heavily by the unit tests is not run very frequently by users.
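The main optimisation mentioned above, caching data that were being recomputed superfluously, can be sketched in a few lines of Python using the standard library’s memoisation decorator (the language and the function name are assumptions for illustration; the post doesn’t name either):

```python
import functools

# Hypothetical example of the kind of superfluous recomputation the post
# describes: an expensive pure computation that many tests need. Caching it
# means only the first caller pays the cost.
@functools.lru_cache(maxsize=None)
def derived_table(n):
    return tuple(i * i for i in range(n))

first = derived_table(1000)   # computed
second = derived_table(1000)  # served from the cache
assert first is second        # same cached object, no recomputation

info = derived_table.cache_info()
```

Because the cache persists for the life of the process, every test after the first gets the data for free, which is exactly why this kind of change speeds up both the test suite and any production code paths that hit the same computation.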