Sunday, March 6, 2011

How to deal with long-running unit tests?

I've got about 100 unit tests with roughly 20% coverage. I'm trying to increase the coverage, and since this is a project under active development I keep adding new tests.

Currently, running my tests after every build is not feasible; they take about two minutes.

Test Includes:

  • File reads from the test folders (data-driven style, to simulate some HTTP stuff; see the sketch after this list)
  • Actual HTTP requests to a local web server (this is a huge pain to mock, so I won't)
  • Not all of them are unit tests; there are also some quite complicated multithreaded classes that need testing, and I test their overall behaviour. That can be considered functional testing, but it needs to run every time as well.
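
For reference, the data-driven file tests look roughly like the sketch below. This is a minimal sketch only, assuming NUnit 2.5+; the TestData folder layout and file pattern are made-up placeholders for whatever the real project uses:

    using System.Collections.Generic;
    using System.IO;
    using NUnit.Framework;

    [TestFixture]
    public class HttpResponseDataDrivenTests
    {
        // Hypothetical layout: one captured HTTP response per file.
        private static IEnumerable<string> TestFiles
        {
            get { return Directory.GetFiles(@"TestData\HttpResponses", "*.txt"); }
        }

        // NUnit's TestCaseSource turns every file into its own test case.
        [Test, TestCaseSource("TestFiles")]
        public void ParsesCapturedResponse(string path)
        {
            string raw = File.ReadAllText(path);

            // Placeholder assertions; the real test would feed `raw`
            // to the parsing logic under test.
            Assert.IsNotNull(raw);
            Assert.IsTrue(raw.Length > 0);
        }
    }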

Most of the functionality requires reading HTTP, doing TCP, etc. I can't change that, because it's the whole point of the project; if I change what these tests exercise, there is no point in testing at all.

Also, I don't think I have the fastest tools for running unit tests. My current setup uses VS Team System with Gallio and NUnit as the framework. I suspect VS TS + Gallio is a bit slower than other runners, too.

What would you recommend to fix this problem? I want to run the unit tests after every little change, but currently this interrupts my flow.

Further Clarification Edit:

The code is highly coupled! Unfortunately, changing that would be a huge refactoring effort, and there's a chicken-and-egg problem in it: I need unit tests to refactor such a big codebase safely, but I can't have more unit tests until I refactor it :)

The high coupling doesn't allow me to split the tests into smaller chunks. Also, I don't test private members; that's a personal choice which allows me to develop much faster while still gaining most of the benefit.

And I can confirm that the proper unit tests (with real isolation) are actually quite fast; I don't have a performance problem with them.

From Stack Overflow:
  • These don't sound like unit tests to me, but more like functional tests. That's fine; automating functional testing is good, but it's pretty common for functional tests to be slow. They're testing the whole system (or large pieces of it).

    Unit tests tend to be fast because they're testing one thing in isolation from everything else. If you can't test things in isolation from everything else, you should consider that a warning sign that your code is too tightly coupled.

    Can you tell which of your tests are unit tests (testing one thing only) and which are functional tests (testing two or more things at the same time)? Which ones are fast and which ones are slow?

    dr. evil : So I assume the correct move would be: refactor the code so there isn't this much coupling, then run the functional tests daily but run the unit tests after every build?
    dr. evil : Also, I've updated the question in response to your answer.
    Brad Wilson : You could even run all the functional tests on every checkin with Continuous Integration. When I'm writing unit tests, I tend to run the ones local to what I'm working on very frequently, then larger sets less frequently, and everything before I commit.
    dr. evil : I hadn't considered setting up a Continuous Integration solution, since this is a lone-wolf team; I'm the only developer on the project. But I think such a system might help me, or even simply a scheduled task that runs the tests and reports errors.
  • You could split your tests into two groups, one for short tests and one for long-running tests, and run the long-running tests less frequently while running the short tests after every change. Other than that, mocking the responses from the web server and the other requests your application makes would lead to a shorter test run; a sketch of a stub server follows below.
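
    One cheap way to fake the local web server without changing the production code is an in-process stub. This is only a sketch, assuming .NET's System.Net.HttpListener; the prefix URL and canned body are arbitrary examples:

        using System;
        using System.IO;
        using System.Net;
        using System.Threading;

        // Minimal in-process stub: answers every request with a canned body,
        // so HTTP-dependent tests no longer need the real local web server.
        public sealed class StubHttpServer : IDisposable
        {
            private readonly HttpListener listener = new HttpListener();
            private readonly string body;

            public StubHttpServer(string prefix, string body) // e.g. "http://localhost:8085/"
            {
                this.body = body;
                listener.Prefixes.Add(prefix);
                listener.Start();
                new Thread(Serve) { IsBackground = true }.Start();
            }

            private void Serve()
            {
                while (listener.IsListening)
                {
                    try
                    {
                        HttpListenerContext ctx = listener.GetContext();
                        using (var w = new StreamWriter(ctx.Response.OutputStream))
                            w.Write(body);
                    }
                    catch (HttpListenerException) { break; } // Stop() was called
                }
            }

            public void Dispose() { listener.Stop(); }
        }

    A fixture can start one of these in [TestFixtureSetUp], point the code under test at the stub's URL, and dispose it afterwards.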

  • First, those are not unit tests.

    There isn't much point in running functional tests like that after every small change; run them after every sizable change instead.

    Second, don't be afraid to mock the HTTP part of the application. If you really want to unit test the application, it's a must. If you're not willing to do that, you are going to waste a lot more time trying to test your actual logic while waiting for HTTP requests to come back and setting up the data.

    I would keep your integration-level tests, but strive to create real unit tests; this will solve your speed problem. Real unit tests have no DB interaction and no HTTP interaction (a sketch of one way to get there follows below).
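
    As a sketch of what that seam could look like (all class and method names here are hypothetical, not from the original project), route the HTTP access through a small interface and give the tests a canned fake:

        using System.Collections.Generic;
        using System.Net;
        using NUnit.Framework;

        // Hypothetical seam: all HTTP access in the app goes through this.
        public interface IHttpClient
        {
            string Get(string url);
        }

        // Production implementation; only integration tests exercise this.
        public class WebHttpClient : IHttpClient
        {
            public string Get(string url)
            {
                using (var client = new WebClient())
                    return client.DownloadString(url);
            }
        }

        // Test fake: instant canned responses, no network at all.
        public class FakeHttpClient : IHttpClient
        {
            public readonly Dictionary<string, string> Responses =
                new Dictionary<string, string>();
            public string Get(string url) { return Responses[url]; }
        }

        // Stand-in for logic that previously talked to HTTP directly.
        public class BannerGrabber
        {
            private readonly IHttpClient http;
            public BannerGrabber(IHttpClient http) { this.http = http; }
            public bool IsApache(string url) { return http.Get(url).Contains("Apache"); }
        }

        [TestFixture]
        public class BannerGrabberTests
        {
            [Test]
            public void DetectsApacheBanner()
            {
                var fake = new FakeHttpClient();
                fake.Responses["http://example.test/"] = "Server: Apache/2.2";
                Assert.IsTrue(new BannerGrabber(fake).IsApache("http://example.test/"));
            }
        }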

  • I always use a category for "LongTest". Those tests are executed every night rather than during the day, which cuts your waiting time by a lot. Try it: categorize your unit tests (a sketch follows below).

    dr. evil : I think I need to play with the VS TS panels. I normally use NUnit, but somehow after the latest .NET Framework updates the NUnit GUI doesn't work, so I'm stuck with the MS test suite and Gallio.
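
    Concretely, with NUnit the category is just an attribute (the test names below are invented for illustration):

        using NUnit.Framework;

        [TestFixture]
        public class CrawlerTests
        {
            // Fast test: no category, included in every run.
            [Test]
            public void QuickSanityCheck()
            {
                Assert.AreEqual(4, 2 + 2);
            }

            // Slow test: tagged so the quick run can skip it.
            [Test, Category("LongTest")]
            public void CrawlsWholeLocalSite()
            {
                // ... hits the real local web server, takes a while ...
            }
        }

    With the NUnit 2.x console runner, something like "nunit-console MyTests.dll /exclude:LongTest" should run only the fast group during the day, while the nightly job runs everything.
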
  • It sounds like you may need to manage expectations amongst the development team as well.

    I assume that people are doing several builds per day and are expected to run tests after each build. You might be well served by switching your testing schedule: run a build with tests during lunch, and then another overnight.

    I agree with Brad that these sound like functional tests. If you can pull the code apart, that would be great, but until then I'd switch to less frequent testing.

    dr. evil : Actually, that's what I'm currently doing: just running the unit tests about three times a day.
  • I would recommend a combined approach to your problem:

    • Frequently run a subset of the tests that are close to the code you are changing (for example, tests from the same package, module, or similar). Run tests that are farther removed from the code you are currently working on less frequently.
    • Split your suite into at least two parts: fast-running and slow-running tests. Run the fast-running tests more often.
    • Consider having some of the less-likely-to-fail tests executed only by an automated continuous integration server.
    • Learn techniques to improve the performance of your tests, most importantly replacing access to slow system resources with faster fakes: for example, use in-memory streams instead of files, and stub/mock the HTTP access (see the sketch after this answer).
    • Learn how to use low-risk dependency-breaking techniques, like those listed in the (very highly recommended) book "Working Effectively with Legacy Code". These allow you to make your code more testable without applying high-risk refactorings (often by temporarily making the actual design worse, for example by breaking encapsulation, until you can refactor to a better design with the safety net of tests).

    One of the most important things I learned from the book mentioned above: there is no magic, working with legacy code is pain, and always will be pain. All you can do is accept that fact, and do your best to slowly work your way out of the mess.
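
    As a small sketch of one such technique, "subclass and override" from that book (the class, method, and file names here are invented): make the slow resource access virtual, then substitute it in a test subclass until the design can be properly refactored:

        using System.IO;
        using System.Text;
        using NUnit.Framework;

        // Legacy class, tightly coupled to the file system.
        public class ReportImporter
        {
            public int CountLines()
            {
                using (var reader = new StreamReader(OpenSource()))
                {
                    int lines = 0;
                    while (reader.ReadLine() != null) lines++;
                    return lines;
                }
            }

            // The seam: virtual so a test can swap out the slow resource.
            protected virtual Stream OpenSource()
            {
                return File.OpenRead(@"C:\data\report.txt");
            }
        }

        // "Subclass and override": the test subclass uses an in-memory stream,
        // temporarily weakening encapsulation in exchange for testability.
        public class TestableReportImporter : ReportImporter
        {
            protected override Stream OpenSource()
            {
                return new MemoryStream(Encoding.UTF8.GetBytes("a\nb\nc"));
            }
        }

        [TestFixture]
        public class ReportImporterTests
        {
            [Test]
            public void CountsLinesWithoutTouchingDisk()
            {
                Assert.AreEqual(3, new TestableReportImporter().CountLines());
            }
        }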
