I recently took a peek at the results for the Continuous Integration (CI) build of a project that runs in parallel to my current one. It had not been built successfully for about a month because approximately 100 of their 500 unit tests were failing. When I mentioned this to the project team, they replied that there had been a lot of database changes and that it was better to have the code compiling, even with a lot of failing tests.
Such a waste
I would always suggest correcting the failing unit tests before moving on to other things, but we don't live in a perfect world, so let's assume there are valid reasons for them accepting a large number of failing tests until they can find time in the project to put things right.
What happens if, after a refactoring, more tests start to fail? Not because of the database changes, but for some other reason. Who is going to notice this when there are already 100 or so tests failing?
This minority of failing tests is removing the (huge) value offered by the 400 or so working tests, and all the effort they put into setting up CI is being wasted.
If you've taken the decision to accept that a number of your tests are going to fail, temporarily remove them from your CI test suite. That way you'll still get some value from the tests that do work.
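As a minimal sketch of what that looks like in practice (using Python's unittest here purely for illustration; the test class, method names, and skip reason are all hypothetical): a known-failing test is marked as skipped with an explanation, so the rest of the suite keeps passing and any *new* failure is immediately visible.

```python
import unittest

class OrderTests(unittest.TestCase):  # hypothetical test class
    def test_total_is_computed(self):
        # A passing test: still provides value on every CI run.
        self.assertEqual(2 + 3, 5)

    @unittest.skip("broken by the database schema changes; tracked separately")
    def test_saves_to_database(self):
        # Known failure, temporarily excluded rather than left red.
        raise AssertionError("schema mismatch")

# Run the suite; the skipped test is reported but does not fail the build.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The key point is that a skip is visible and documented, unlike a sea of red that everyone has learned to ignore.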
25 Apr 2007
26 May 2007
This is such a great post, even just talking unit testing in general; forget the CI part. I've re-read it several times now and have passed it on to colleagues. It's one of those things that makes you ask: if you're just going to ignore broken tests, why do you have them in the first place?
27 May 2007
Thanks very much for your kind comment. I am glad you enjoyed the post!