
Unit testing asynchronous code

Grand Central Dispatch and blocks have made it very easy to send blocks of code off to the background for execution. And because it has become so much easier, asynchronous code is now much more common.

All this is a blessing – except when it comes to unit testing. To test the result of an asynchronous task, you need to force it back to “synchronicity”, so to speak. Unfortunately, Xcode’s built-in testing framework SenTestingKit does not provide any help in this regard. All its test macros like STAssertEquals assume values to be returned synchronously, leaving it up to you to provide them from asynchronous tasks.

GHUnit, on the other hand, does have a mechanism to test asynchronous tasks. However, its setup is a little more complicated than simply adding the built-in SenTestingKit. Unless you need GHUnit for some of its other features, SenTestingKit is therefore usually the best way to get started with unit testing in Xcode.

To allow for asynchronous testing with SenTestingKit, I’ve added a category to SenTestCase that is based on GHUnit’s asynchronous test. It is available as part of a sample project on GitHub.

A test of an asynchronous task using the added method waitWithTimeout:forSuccessInBlock: looks like this:

- (void)test_completion {
  Downloader *dl = [[Downloader alloc] init];
  __block BOOL received = NO;
  [dl startDownloadWithCompletion:^{
    received = YES;
  }];
  [self waitWithTimeout:1.1 forSuccessInBlock:^BOOL{
    return received;
  }];
  STAssertTrue(received, nil);
}

This code tests the completion handler of an asynchronous call of a Downloader class (see the example project for details) by waiting for a block to return YES within a given time limit.
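The waiting itself can be implemented by polling the success block while spinning the run loop until it returns YES or the timeout expires. The following is only a rough sketch of that idea (my approximation, not the exact code from the sample project; the category name and the 0.05 s polling interval are arbitrary choices):

```objc
#import <Foundation/Foundation.h>
#import <SenTestingKit/SenTestingKit.h>

// Sketch of a run-loop-based wait; see the sample project on GitHub
// for the actual implementation.
@implementation SenTestCase (AsyncTestingSketch)

- (void)waitWithTimeout:(NSTimeInterval)timeout
      forSuccessInBlock:(BOOL (^)(void))block
{
  NSDate *deadline = [NSDate dateWithTimeIntervalSinceNow:timeout];
  while (!block() && [deadline timeIntervalSinceNow] > 0) {
    // Spin the run loop so that completion handlers scheduled on the
    // main queue get a chance to execute while we wait.
    [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                             beforeDate:[NSDate dateWithTimeIntervalSinceNow:0.05]];
  }
}

@end
```

Because the method returns as soon as the block reports success, a passing test does not have to sit out the full timeout; only a failing one does.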

What I prefer about this category over the one in GHUnit is that the test is fully contained within the body of the test method. There is no need to implement any other callback method and reference it from the test. This makes it much simpler to copy and paste tests for re-use, for example.

Reader Comments (2)

Hi, nice post. I have used a similar technique and find that a hard-coded timeout value can seem fine and allow all tests to pass on one machine, but when running the same tests on a build server, the tests get mangled and fail because of the hard-coded timeout, surely caused by the test environment’s efficiency or lack thereof. Are you aware of any techniques that do not use the timeout?

December 11, 2012 | Unregistered Commenterjarryd

Thanks, Jarryd. Yes, timeouts are tricky, but you need to have something in place to make the test suite finish within a reasonable amount of time. I’ve used configurable (i.e. machine- or environment-specific) timeout values in the past to avoid seeing lots of timeout failures due to different environments. A scaling factor in an environment-specific plist can be used to multiply the timeout values given in the tests, achieving this without hardcoding things.
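As a sketch of that idea (the plist name, key, and test-class name below are all made up for illustration), a small helper could read the scale factor from a plist bundled with the test target and multiply the base timeout by it:

```objc
#import <Foundation/Foundation.h>

// Hypothetical helper: reads a "TimeoutScale" number from a
// "TestConfig.plist" in the test bundle and scales the base timeout.
// If the plist or key is missing, the timeout is used unchanged.
static NSTimeInterval scaledTimeout(NSTimeInterval baseTimeout)
{
  NSBundle *bundle = [NSBundle bundleForClass:NSClassFromString(@"DownloaderTests")];
  NSString *path = [bundle pathForResource:@"TestConfig" ofType:@"plist"];
  NSDictionary *config = [NSDictionary dictionaryWithContentsOfFile:path];
  NSNumber *scale = [config objectForKey:@"TimeoutScale"];
  return baseTimeout * (scale ? [scale doubleValue] : 1.0);
}
```

A test would then call [self waitWithTimeout:scaledTimeout(1.1) forSuccessInBlock:...], and a slow build server could simply ship a plist with, say, TimeoutScale set to 3.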

You can always set a high timeout value but I find that there’s also information in tests starting to run into timeouts. Even without a big performance test suite you can tell that you’ve screwed up some algorithm when suddenly tests start running into timeouts on an otherwise unchanged system.

December 11, 2012 | Registered CommenterSven A. Schmidt