Swift & Nimble Testing

I’m a big fan of unit testing and when I discovered the BDD framework Nimble with Swift support a couple of months ago I was delighted. One of its advantages is that instead of using the XCTAssertEqual macros you can write:

expect(answer).to(equal(42))

which, thanks to Swift’s support for operator overloading, can be made even more expressive:

expect(answer) == 42

However, there was one area where I found the syntax a bit verbose – when comparing floating point numbers:

expect(answer).to(beCloseTo(42.0))

What this does is simply make the comparison fuzzy by allowing a difference of 0.0001 – the default delta as defined by the framework.
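
The fuzzy comparison behind beCloseTo is just an absolute-difference check. A minimal sketch in Python, purely for illustration (the function name is mine, not Nimble’s):

```python
# Compare within an absolute delta, defaulting to 0.0001 like Nimble does.
def be_close_to(actual, expected, delta=0.0001):
    """Return True if actual is within +/- delta of expected."""
    return abs(actual - expected) <= delta

print(be_close_to(42.00005, 42.0))         # True: within the default delta
print(be_close_to(42.1, 42.0))             # False: off by 0.1
print(be_close_to(42.1, 42.0, delta=0.5))  # True: with a wider delta
```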

It would be great if we could write it this way:

expect(answer) ≈ 42.0    // type Option-x for ≈ (U.S. keyboard)

And actually we can, thanks to Swift’s support for custom operators. Now understandably people worry about misuse of custom operators and I fully agree that you need to be very careful when and where you use them. But I feel that test code is a good place where a readability ‘optimisation’ like this one can be applied.

All unit tests do is compare expectations to actuals, and anything we can do to make this concise and readable – so that the actual values stand out over the boilerplate – is a win. Custom operators are a good tool for this, especially if they mirror universal symbols like the mathematical signs of (in)equality.

Justifications aside, how does this work?

Nimble allows you to define so-called custom matchers that extend the set of validations:

public func equal<T: Equatable>(expectedValue: T?) -> MatcherFunc<T> {
  return MatcherFunc { actualExpression, failureMessage in
    failureMessage.postfixMessage = "equal <\(expectedValue)>"
    return actualExpression.evaluate() == expectedValue
  }
}

The existing package already provides a beCloseTo matcher for floating point comparisons, and it is then straightforward to define an operator for it:

infix operator ≈ {}
public func ≈(lhs: Expectation<Double>, rhs: Double) {
    lhs.to(beCloseTo(rhs))
}

But what’s missing here is the case where you specify a delta different from the default:

expect(answer).to(beCloseTo(42.0, within: 1.0))

In other words if we want to specify the delta (and that’s probably quite common) we’re back to the more verbose version. Ideally we’d like to write this as:

expect(answer) ≈ 42.0 ± 1.0    // type Option-Shift-= for ± (U.S. keyboard)

Turns out we can, and the way this works is as follows. First we create a binary operator ± that converts the value to its left and the delta to its right into a tuple (expected: Double, delta: Double):

infix operator ± { precedence 170 }
public func ±(lhs: Double, rhs: Double) -> (expected: Double, delta: Double) {
    return (expected: lhs, delta: rhs)
}

Then we add an overload of ≈ which takes an Expectation<Double> and the tuple (expected: Double, delta: Double) as parameters:

public func ≈(lhs: Expectation<Double>, rhs: (expected: Double, delta: Double)) {
    lhs.to(beCloseTo(rhs.expected, within: rhs.delta))
}
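
For comparison, the same expected-with-delta idea can be expressed in Python, which lacks custom operators but allows overloading == on a wrapper type. This is my illustration (similar in spirit to pytest.approx), not part of Nimble:

```python
# A small wrapper whose == performs the fuzzy comparison, standing in
# for the (expected, delta) tuple from the Swift code above.
class Approx:
    def __init__(self, expected, delta=0.0001):
        self.expected = expected
        self.delta = delta

    def __eq__(self, actual):
        # Python falls back to this reflected comparison for `actual == Approx(...)`
        return abs(actual - self.expected) <= self.delta

print(42.4 == Approx(42.0, delta=1.0))  # True
print(43.5 == Approx(42.0, delta=1.0))  # False
```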

These changes have been kindly accepted and integrated by the Nimble team into the framework as of Jan 5 (commit e7bafdb).

Part of this change was also an extension for comparisons of arrays of numbers:

expect([0.0, 1.1, 2.2]) ≈ [0.0001, 1.1001, 2.2001]
expect([0.0, 1.1, 2.2]).to(beCloseTo([0.1, 1.2, 2.3], within: 0.1))

See the Nimble documentation for further examples.

SpriteKit and Swift – converting a project from Objective-C

In November 2013 I was looking for an excuse to play with SpriteKit and came up with the idea for a mini-game with a Christmas theme. This is what it looks like:

When Swift was announced at WWDC 2014 I was eager to give it a try and looked through my grab bag of side projects for a suitable candidate to migrate from Objective-C to Swift. ‘Shooter’ was a good candidate, because it’s not too big but also not too trivial.

The source code is available on GitHub and I’m planning to post about lessons learned when transitioning from Objective-C to Swift in a future update. However, the commit history already tells a pretty decent story of the transition. Commit ab6df2e merges the ‘swift’ branch into master and 8ed7782 is where the journey starts.

Swiftly build a Mac app

Curious about Swift, I went ahead and translated Matt Gallagher’s example from Objective-C to Swift. If you stick this in a Playground file it will launch a minimal Mac app. Or you can save it as a plain text file and chmod +x it for direct execution from the command line.

#! /usr/bin/swift -sdk /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk

import Cocoa

var app = NSApplication.sharedApplication()
app.setActivationPolicy(.Regular)

var menuBar = NSMenu()
var appMenuItem = NSMenuItem()
menuBar.addItem(appMenuItem)
app.mainMenu = menuBar

var appMenu = NSMenu()
var appName = NSProcessInfo.processInfo().processName
var quitTitle = "Quit \(appName)"
var quitMenuItem = NSMenuItem(
    title: quitTitle,
    action: Selector("terminate:"),
    keyEquivalent: "q"
)
appMenu.addItem(quitMenuItem)

appMenuItem.submenu = appMenu

var window = NSWindow(
    contentRect: CGRect(x: 0, y: 0, width: 200, height: 200),
    styleMask: NSTitledWindowMask,
    backing: NSBackingStoreType.Buffered,
    defer: false
)

window.cascadeTopLeftFromPoint(NSPoint(x: 20, y: 20))
window.title = appName
window.makeKeyAndOrderFront(nil)
app.activateIgnoringOtherApps(true)
app.run()

Location-aware passcode settings

Are you on the fence about what the passcode setting on your iOS device should be? Are you worried that anyone could access all your data if you left it somewhere, yet at the same time annoyed by having to type in your passcode all the time?

I know I am. I use my phone so frequently that a five minute passcode setting actually rarely kicks in. My iPad on the other hand I don’t pick up as often and therefore I would have to enter the passcode pretty much every time. Which is why I don’t use one.

This came back to bite me a few weeks ago when I left my iPad on the cross-trainer in the gym. When I noticed my iPad was missing I wished I had at least set a passcode to protect its contents. Thinking about why I hadn't, it occurred to me how I could have had the best of both worlds: no passcode while at home or at other places I consider “safe” and a passcode everywhere else.

iOS should simply allow you to define location specific passcode settings. You should be able to define a default which applies everywhere and then one or more regions that are exceptions. This would allow you to define a basic safe setting with a passcode and then other areas where you’re more liberal. You could even have passcode settings of different complexity depending on where you are. No passcode at your home, a complex one abroad, and a simple one everywhere else.

Below you’ll find an illustration of what the setup for this could look like.


Unit testing asynchronous code

Grand Central Dispatch and blocks have made it very easy to send blocks of code off to the background for execution, and because it is now so easy, asynchronous code has become much more common.

All this is a blessing – except when it comes to unit testing. To test the result of an asynchronous task, you need to force it back to “synchronicity”, so to speak. Unfortunately, Xcode’s built-in testing framework SenTestingKit does not provide any help in this regard. All its test macros like STAssertEquals assume values to be returned synchronously, leaving it up to you to provide them from asynchronous tasks.

GHUnit, on the other hand, does have a mechanism to test asynchronous tasks. However, its setup is a little more complicated than simply adding the built-in SenTestingKit. Unless you need GHUnit for some of its other features, STK is therefore usually the best way to get started with unit testing in Xcode.

To allow for asynchronous testing with SenTestingKit, I’ve added a category on SenTestCase that is based on GHUnit’s asynchronous test support. It is available as part of a sample project on GitHub.

A test of an asynchronous task using the added method waitWithTimeout:forSuccessInBlock: looks like this:

- (void)test_completion
{
  Downloader *dl = [[Downloader alloc] init];

  __block BOOL received = NO;
  [dl startDownloadWithCompletion:^{
    received = YES;
  }];

  [self waitWithTimeout:1.1 forSuccessInBlock:^BOOL{
    return received;
  }];
  STAssertTrue(received, nil);
}

This code tests the completion handler of an asynchronous call of a Downloader class (see the example project for details) by waiting for a block to return YES within a given time limit.

What I prefer about this category over the one in GHUnit is that the test is fully contained within the body of the test method. There is no need to implement any other callback method and reference it from this test. This makes copy and paste of tests for re-use much simpler, for example.
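
The core of such a helper is language-independent. Here is a sketch of the wait-until-success loop in Python (names are mine, not from the category):

```python
import time

# Poll a condition block until it returns True or the timeout expires,
# mirroring the idea of waitWithTimeout:forSuccessInBlock:.
def wait_with_timeout(timeout, success):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if success():
            return True
        time.sleep(0.01)  # brief pause between polls
    return success()      # one final check at the deadline

received = []
def completion():          # stands in for the async completion handler
    received.append(True)

completion()               # in a real test this would fire asynchronously
print(wait_with_timeout(1.0, lambda: bool(received)))  # True
```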

Safari: Keyboard shortcut for opening current page in Google Chrome

Currently shipping Macs come without Adobe Flash preinstalled and I’ve been running that same setup without Flash for quite a while now myself. More and more webpages work fine without Flash and only the occasional video requires it. When that is the case, I simply go to the ‘Develop’ menu (enable it in the ‘Advanced’ section of Safari’s preferences if you don’t have it) and select ‘Open Page With’ ➡ ‘Google Chrome.app (20.0.1132.21)’. Since Google Chrome ships with integrated Flash, this is a simple way to switch to a Flash-enabled browser.

Now, rather than having to choose Chrome from the menu it would be nice to be able to assign a keyboard shortcut for this menu item. This is actually quite simple: Open the keyboard preference pane in System Preferences, select ‘Application Shortcuts’ and add a shortcut for the ‘Google Chrome.app (20.0.1132.21)’ menu item to Safari. However, the problem here is that the menu item contains the version number of Chrome and since Chrome updates frequently (and in the background), you’ll find yourself with a broken shortcut very soon.

The fix for this is a little AppleScript, OpenURLInNewChromeWindow.app by Mike Hardy, which basically tells Google Chrome to open the URL via an AppleScript command. If you run this script once, it will register itself as an application that can handle URLs and will therefore also appear in the list of browsers under ‘Open Page With’. Opening a page with this script will open the current page in Chrome just like before, but the point is that the menu command will stay the same no matter what version of Chrome you have installed. Therefore you simply assign the shortcut to this ‘browser’ instead of the ever-changing Chrome one.

An added benefit (and actually the reason Mike Hardy wrote the script in the first place) is that the page opens in a new window and not in a new tab (which can be quite annoying when using virtual screens). See Mike’s blog post for more details on how to use his script in that context.

Peer to peer synching with TouchDB

Updated 2012-06-05: Incorporated Jens's suggestions and corrections.

TouchDB is a lean CouchDB-compatible database framework that can be embedded in iOS applications (or more generally, mobile or desktop applications, but this post is about iOS). Jens Alfke, its author, describes it this way: “If CouchDB is MySQL, then TouchDB is SQLite.” The project is available on GitHub.

TouchDB is CouchDB-compatible with respect to its replication API when initiated on the device against another ‘regular’ CouchDB. You can create push and pull replication tasks on TouchDB. However, out of the box, TouchDB does not offer an HTTP interface for other TouchDB (or CouchDB) instances to connect to. This means that initially, you are limited to a “star” topology with a regular CouchDB acting as a synchronization hub at its center and iOS devices with TouchDB connecting to it.

However, with a little extra work, it is quite easy to turn this into a peer to peer setup, thanks to the Listener framework Jens has included in TouchDB.

In order to get this to work, you first need to build the listener framework. To do so, clone the git repository, pull the submodules and build the “Listener iOS Framework” target as follows:

git clone https://github.com/couchbaselabs/TouchDB-iOS
cd TouchDB-iOS
git submodule init
git submodule update
xcodebuild -target "Listener iOS Framework"
open build/Release-ios-universal

The open command will open a Finder window with the framework, which you need to add to your existing project.

After you have done that, you need to start the listener. One place where you might want to do that could be application:didFinishLaunchingWithOptions:. Add the following code to start the listener:

CouchTouchDBServer *server = [CouchTouchDBServer sharedInstance];
[server tellTDServer:^(TDServer *tdServer) {
  NSLog(@"Starting listener");
  _listener = [[TDListener alloc] initWithTDServer:tdServer port:59840];
  [_listener start];
}];

NB: Make sure _listener is retained outside the block and lives on, otherwise your listener goes out of scope and stops listening immediately. And as you can tell from the unbalanced alloc message: these code snippets are assuming ARC.

This is basically all you need to do to connect to your TouchDB instance via HTTP. For example, you could use curl to query it for documents. However, peer to peer benefits from advertising and discovering your service via Bonjour and the rest of this article briefly describes how to achieve this.
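
For example, assuming the listener from above on port 59840, the query URL could be built like this (Python for illustration; “device.local” and “mydb” are placeholders for your device’s hostname and your app’s database name):

```python
# Build the URL for CouchDB's _all_docs endpoint on a TouchDB listener.
def all_docs_url(host, db, port=59840):
    return "http://%s:%d/%s/_all_docs" % (host, port, db)

print(all_docs_url("device.local", "mydb"))
# http://device.local:59840/mydb/_all_docs

# Fetching it (requires a reachable listener), e.g. with urllib:
#   import urllib.request, json
#   docs = json.load(urllib.request.urlopen(all_docs_url("device.local", "mydb")))
```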

First off, the advertising part. Add the following to a startup section of your application, for example right after creating the listener:

UIDevice *device = [UIDevice currentDevice];
self.netService = [[NSNetService alloc] initWithDomain:@"local" type:@"_myapp._tcp" name:device.name port:59840];
NSData *data = [NSNetService dataFromTXTRecordDictionary:[NSDictionary dictionaryWithObject:conf.localDbname forKey:@"path"]]; // conf.localDbname holds your app’s database name
[self.netService setTXTRecordData:data];
[self.netService publish];

Replace myapp and 59840 with values of your choosing, and note that it is advisable to choose a more distinctive service name than the plain device name I have used in this example.

For discovery, you create an NSNetServiceBrowser and search for hosts of your service type:

self.browser = [[NSNetServiceBrowser alloc] init];
self.browser.delegate = self;
[self.browser searchForServicesOfType:@"_myapp._tcp" inDomain:@"local"];

You will be notified of any matches by implementing the following NSNetServiceBrowserDelegate protocol callback:

- (void)netServiceBrowser:(NSNetServiceBrowser *)netServiceBrowser didFindService:(NSNetService *)netService moreComing:(BOOL)moreServicesComing
{
  [self.services addObject:netService];
  if (! moreServicesComing) {
    [self.tableView reloadData];
  }
}

In this example, I’ve added the service to an array. This could be an array that is driving a UITableView, for example. (There’s a complete Bonjour browser example available on the iOS Dev Center that includes a browsing UI and discovery and resolution for Bonjour services that these code examples are based on.)

As Jens Alfke correctly points out in the comments, it is important to implement the companion method netServiceBrowser:didRemoveService:moreComing: as well in order to remove a service from the list when it disappears:

- (void)netServiceBrowser:(NSNetServiceBrowser *)netServiceBrowser didRemoveService:(NSNetService *)netService moreComing:(BOOL)moreServicesComing
{
  [self.services removeObject:netService];
  if (! moreServicesComing) {
    [self.tableView reloadData];
  }
}

Once a service is selected in this table view, we try to resolve it:

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
  NSNetService *service = [self.services objectAtIndex:indexPath.row];
  [service setDelegate:self];
  [service resolveWithTimeout:0.0];
}

Finally, we implement the relevant part of the NSNetServiceDelegate protocol to handle the resolved address. This is where we would then update the sync settings for our app, which is encapsulated in the [self updateSyncURL:url] call in this example. This would be the same updateSyncURL: method present in the TouchDB example apps.

- (void)netServiceDidResolveAddress:(NSNetService *)sender {
  // Construct the URL including the port number
  // Also use the path, username and password fields that can be in the TXT record
  // (inet_ntop and struct sockaddr_in below require #import <arpa/inet.h>)
  NSDictionary* dict = [NSNetService dictionaryFromTXTRecordData:[sender TXTRecordData]];
  NSString *host = [sender hostName];
  NSString* user = [self copyStringFromTXTDict:dict which:@"u"];
  NSString* pass = [self copyStringFromTXTDict:dict which:@"p"];
  NSString* portStr = @"";

  // Note that [NSNetService port:] returns an NSInteger in host byte order
  NSInteger port = [sender port];
  if (port != 0 && port != 80) {
    portStr = [[NSString alloc] initWithFormat:@":%d", (int)port];
  }

  NSString* path = [self copyStringFromTXTDict:dict which:@"path"];
  if (!path || [path length]==0) {
    path = [[NSString alloc] initWithString:@"/"];
  } else if (![[path substringToIndex:1] isEqual:@"/"]) {
    NSString *tempPath = [[NSString alloc] initWithFormat:@"/%@",path];
    path = tempPath;
  }

  NSString *ipAddress = nil;
  for (NSData* data in [sender addresses]) {
    char addressBuffer[100];
    struct sockaddr_in* socketAddress = (struct sockaddr_in*) [data bytes];
    int sockFamily = socketAddress->sin_family;
    if (sockFamily == AF_INET /* || sockFamily == AF_INET6 */) {
      const char* addressStr = inet_ntop(sockFamily,
                                         &(socketAddress->sin_addr), addressBuffer,
                                         sizeof(addressBuffer));
      int servicePort = ntohs(socketAddress->sin_port);  // avoid shadowing the outer `port`
      if (addressStr && servicePort) {
        NSLog(@"Found service at %s:%d", addressStr, servicePort);
        ipAddress = [NSString stringWithCString:addressStr encoding:NSASCIIStringEncoding];
      }
    }
  }

  NSString* url = [[NSString alloc] initWithFormat:@"http://%@%@%@%@%@%@%@",
                   user?user:@"",
                   pass?@":":@"",
                   pass?pass:@"",
                   (user||pass)?@"@":@"",
                   ipAddress?ipAddress:host,
                   portStr,
                   path];

  NSLog(@"service: %@", sender);
  NSLog(@"url: %@", url);
  [self updateSyncURL:url];
}

The method above references one simple helper method to access the Bonjour TXT data from the service:

- (NSString *)copyStringFromTXTDict:(NSDictionary *)dict which:(NSString*)which {
  // Helper for getting information from the TXT data
  NSData* data = [dict objectForKey:which];
  NSString *resultString = nil;
  if (data) {
    resultString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
  }
  return resultString;
}

As mentioned above, this Bonjour code is mostly from the Apple BonjourWeb example code, but it required some minor changes. I’ve added the path component to broadcast which database to replicate with. I’ve also commented out the AF_INET6 socket family part, because it did not work with the replication, and for the same reason I’m using the IP address in the URL rather than the host name, because the latter also did not yield a working connection.
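
To make the URL assembly easier to follow, here is the same logic restated in Python (a sketch mirroring the Objective-C format string; the function name is mine):

```python
# Every part except the host is optional: user:pass@ only appears when
# credentials exist, :port only for non-default ports, and path gets a
# leading slash when it is missing one.
def sync_url(host, port=0, user=None, password=None, path=None):
    auth = ""
    if user or password:
        auth = "%s%s%s@" % (user or "", ":" if password else "", password or "")
    port_str = ":%d" % port if port not in (0, 80) else ""
    if not path:
        path = "/"
    elif not path.startswith("/"):
        path = "/" + path
    return "http://%s%s%s%s" % (auth, host, port_str, path)

print(sync_url("10.0.1.5", port=59840, path="mydb"))
# http://10.0.1.5:59840/mydb
```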

Hopefully this post will help people getting started with TouchDB peer-to-peer replication!

Asynchronous, lazy initialization with synchronous accessor

I’ve come to love Grand Central Dispatch and blocks for making it so easy to add asynchronous tasks to your application. Without the overhead of instantiating thread classes or defining callback methods you can send a task into the background and keep your main thread unblocked.

However, sometimes you need a mix of synchronous and asynchronous tasks or more specifically you want to start something asynchronously initially and to block and wait for its completion elsewhere in your code. One example of this could be a unit test of an asynchronous algorithm where you need synchronous access to the results for validation.

Another example is a current project of mine which involves plotting of medical data that is parsed from CSV files. There are four CSV files and each takes about a second to parse. It’s not long but when you try and do it on demand when a plot is about to be displayed on screen, you find that blocking your main thread for a second can be very annoying. The obvious solution is to do the parsing on a background queue but that immediately raises the question: How do you then handle the plotting? Do you show an empty plot which populates later, when the data is available? That doesn’t look good. Another alternative would be to make the whole display plot action asynchronous. But then you’ve decoupled user interaction (user taps a button to bring up a plot) and GUI action (plot actually displays) and will probably find that users tap multiple times until the plot shows up.

Ideally then, the data would be loaded early on and the actual plotting would be synchronous. In my application, the data is loaded and parsed asynchronously in the initializer of a singleton which is used throughout the application for global data. Therefore, as soon as my global is being accessed for the first time the data gets loaded in the background. I can then afford to use blocking access to the data, because there is no (or very little) chance for the user to activate the GUI to display the plot before the data has been parsed. And even if they do, the processing is far along and the delay minimal.

So in summary, the requirements for my use case are:

  • Several initialization tasks need processing
  • Processing can happen in parallel
  • Processing must not block the main thread
  • Access to the results should block while processing is in progress

Here is how it’s implemented:

First off, we have an initializer that does our parsing, slowInitForKey: in the example code. The idea here is that initialization work is based on a key (e.g. a filename) and returns a single result object that can be stored in a results dictionary.

Next we define a singleton Globals which is instantiated early on in our code, for example in viewDidLoad, and has the following init method:

- (id)init {
  self = [super init];
  if (self) {
    valuesSerialQueue = dispatch_queue_create("valuesSerialQueue", NULL);
    self.values = [NSMutableDictionary dictionary];

    [[NSArray arrayWithObjects:@"A", @"B", @"C", @"D", @"E", @"F", nil]
     enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
      dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSString *value = [self slowInitForKey:obj];
        dispatch_async(valuesSerialQueue, ^{
          [self.values setObject:value forKey:obj];
        });
      });
    }];
  }
  return self;
}

What does this do?

  • First, we set up a serial queue and a dictionary for the results. We use a serial queue to make sure that only one thread will access the values dictionary at a time. Think of it as a locking mechanism in GCD terms.
  • Next, we iterate over the initialization keys (A-F in this example – these would be filenames in the CSV parsing example). Each key we send to a concurrent dispatch queue for parallel processing of slowInitForKey:. After processing is finished, the result is written to the dictionary via an async dispatch to our serial queue valuesSerialQueue. Again, this ensures that no two threads access the values dictionary at the same time.
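
The same structure can be sketched with Python threads, using a lock in place of the serial queue (illustration only; names are mine). Note that waiting on the pool here makes the example deterministic, whereas the real app deliberately does not wait:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Initialize values for several keys in parallel, guarding the shared
# dict with a lock (the lock plays the role of the GCD serial queue).
values = {}
lock = threading.Lock()

def slow_init(key):
    result = "value-" + key  # stand-in for the slow parsing work
    with lock:
        values[key] = result

with ThreadPoolExecutor() as pool:
    for key in ["A", "B", "C", "D", "E", "F"]:
        pool.submit(slow_init, key)

# leaving the with-block waits for all workers, so values is complete here
print(sorted(values))  # ['A', 'B', 'C', 'D', 'E', 'F']
```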

Now that initialization is on its way, all that’s left is the synchronous access to the results. This is pretty simple:

- (NSString *)valueForKey:(NSString *)key {
  __block NSString *result = nil;
  do {
    // keep polling until there’s a value
    dispatch_sync(valuesSerialQueue, ^{
      result = [self.values objectForKey:key];
    });
  } while (result == nil);
  return result;
}

All we do is simply poll the results dictionary via the serial queue until there is a value. Of course you need to make sure that your initializer will always set a result – otherwise you would block forever. A safer way would be to set a time limit on how long you block before you eventually break from this method and return nil.

If you are worried that there may be a lot of polling going on until there is a result you could add a little delay after each unsuccessful poll. It’s probably irrelevant though, because polling only happens until initialization is finished.
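
The time-limited accessor suggested above might look like this (a Python sketch; names are mine, and the lock/serial-queue synchronization is elided for brevity):

```python
import time

# Poll with a small delay and give up after `timeout` seconds
# instead of blocking forever.
def value_for_key(values, key, timeout=1.0, interval=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if key in values:
            return values[key]
        time.sleep(interval)
    return None  # initialization never produced a value in time

print(value_for_key({"A": 1}, "A"))                # 1
print(value_for_key({}, "missing", timeout=0.05))  # None
```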

An example project is available on github.

iOS User Accounts

Wouldn’t it be convenient if you could pick up any iPhone or iPad and quickly have it personalized with your settings? This is something that occurred to me last week when my girlfriend had left her iPhone at home and wanted to continue reading her book in iBooks. I had my iPad with me, but of course it is tied to my iTunes account, not hers, and it’s way too much hassle to reconfigure it just for a brief reading session.

But it made me wonder what that feature could look like on iOS and what it would take to make it happen. Basically, you’d want an extension of something that’s already possible on OSX: signing in with an Apple-ID. Once you’re authenticated with your Apple-ID, your content and settings are only a few steps away: iCloud, if you’re using it, has got it and in theory, that’s all you need to restore your device.

I’ve upgraded quite a few devices in the past and so far backup and restore has worked really well. Now imagine there were an (optional) login screen on iOS devices where you could log in to your iCloud account and immediately you’d get your home screen, with your content and settings trickling in in the background - just like it’s happening now when you restore through iTunes or from iCloud. With future devices having more storage space, the OS could cache multiple user accounts so that on subsequent logins your data would only need an update rather than a completely fresh pull. Also, you can imagine some things like big apps being referenced from multiple accounts and therefore needing to be stored only once on a device and not per account.

If that use-case still sounds esoteric to you, because your iPad is yours alone, think about places where iPads could be shared by larger audiences: Schools, universities, sales people, etc. For example, if a school wanted to start using iPads in one course only, say their biology class, they’d only need to get enough iPads for their maximum class size, not for the total number of students attending that class. (Caveat: no iPad based learning at home unless students log in using their private iPad.) Or there could be iPads per course that wouldn’t need to be moved: Your course material appears at your desk wherever you are - you don’t actually carry it there anymore. It would certainly help reduce the risk of iPads being dropped between classes or on the bus.

Technically, I would assume something like that being investigated or even in place already at Apple. It’s probably just a matter of broadband connections catching up to make this a smooth experience. One that Apple would be willing to ship and tout as a new feature.

autotm 0.94 supports local backups

As introduced in a previous blog post, autotm is an OSX system daemon that automatically switches Time Machine targets depending on their availability. The initial version of autotm only supported network based targets but I’ve recently updated the script to also allow locally connected disks (e.g. USB). This update requires some minor changes to your autotm.conf file: The server section is now called destinations and each destination has a type, which can be remote or local. For example:

destinations:
 - type: remote
   hostname: myhomeserver.local
   username: jdoe
   password: s3cr3t
 - type: remote
   hostname: myofficeserver.local
   username: john_doe
   password: pa55
 - type: local
   volume: /Volumes/Time Machine
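
Conceptually, autotm then walks this list in order and picks the first available destination. A hedged sketch in Python of how that selection could look (my illustration, not autotm’s actual code):

```python
import os

# Configured destinations, in priority order (mirrors the config above).
destinations = [
    {"type": "remote", "hostname": "myhomeserver.local"},
    {"type": "remote", "hostname": "myofficeserver.local"},
    {"type": "local", "volume": "/Volumes/Time Machine"},
]

def is_available(dest):
    if dest["type"] == "local":
        return os.path.ismount(dest["volume"])
    return False  # remote reachability check (e.g. ping) elided here

def pick_destination(dests, check=is_available):
    for dest in dests:
        if check(dest):
            return dest
    return None

# With neither server reachable, the local disk wins if it is mounted:
print(pick_destination(destinations, check=lambda d: d["type"] == "local"))
```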

To learn more about autotm, have a look at the README on GitHub. Please file any problems you encounter in the issue tracker there.

Thanks to Andy and Daniel for their help in testing this release!

CouchDB Migrations

A few weeks ago I attended CouchConf in Berlin and during the sessions (and in between) one topic was raised several times: How to migrate data between “schemas” or document versions. I described how we are migrating documents and I want to take a moment to explain the process in more detail. It might sound trivial but there was interest in the description during the conference, so I’m hoping it may prove helpful for others nonetheless.

Since CouchDB is inherently unstructured, there’s no global schema controlling your data’s structure. That’s often a good thing, because it gives you flexibility, but it can also cause problems, for example when you want to access documents without handling all sorts of different “versions” of a document you might have accumulated.

For example, say you have started out with an initial player document (we’re sticking with the RPG theme set in the Couchbase examples ;):

{
  'version' : 1,
  'name' : 'Player A',
  'xp': 1234
}

Now say during testing you find that you need to know a player’s level. You’ve decided that it should always be xp/100 + 1, but you don’t want to recompute this in code all the time but rather store it in the document directly. For various other reasons you’ve also decided against creating a view, and therefore you want to migrate all your documents to this format:

{
  'version' : 2,
  'name' : 'Player A',
  'xp' : 1234,
  'level' : 13
}

Note that the initial document already included a version attribute that we’re using to keep track of our migrations but even if this weren’t the case from the start, it’s easy to simply treat documents without a version attribute as “version 0” so to speak and handle them similarly to the rest of this example.

So how do we migrate from version 1 to version 2 then?

The idea is to create a view that shows all documents with the old version and process them until the view has no more items. The view is defined with the following (trivial) map function:

function(doc) {
  if (doc.version && doc.version == 1) {
    emit(doc._id);
  }
}

Now it’s simply a matter of processing all items in this view, for example with the following python-couchdb method that takes a database object as a parameter:

def migrate_v1_v2(db):
  v1 = db.view('_design/migration/_view/v1')
  for row in v1.rows:
    doc = db[row.key]
    if doc['version'] == 1:
      doc['version'] = 2
      # we want to add the level stat, which is simply xp/100, starting from 1
      doc['level'] = doc['xp']/100 + 1
      db[doc.id] = doc

where v1 is the name of the view we defined above.

The complete example in the form of a unit test is available on GitHub. The only dependency is python-couchdb. It should be trivial to translate this pattern to other client libraries. It might also be useful to extend this concept to a migration framework à la Ruby on Rails.
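
As a rough idea of what such a framework could look like, here is a sketch of a registry of per-version migration steps (my illustration, not an existing library):

```python
# Register one function per version step and apply them in order
# until a document is current.
MIGRATIONS = {}

def migration(from_version):
    def register(fn):
        MIGRATIONS[from_version] = fn
        return fn
    return register

@migration(0)
def add_missing_fields(doc):
    # documents without a version attribute count as "version 0"
    doc.setdefault("xp", 0)
    return doc

@migration(1)
def add_level(doc):
    doc["level"] = doc["xp"] // 100 + 1
    return doc

def migrate(doc):
    version = doc.get("version", 0)
    while version in MIGRATIONS:
        doc = MIGRATIONS[version](doc)
        version += 1
        doc["version"] = version
    return doc

print(migrate({"version": 1, "name": "Player A", "xp": 1234}))
# {'version': 2, 'name': 'Player A', 'xp': 1234, 'level': 13}
```

A migration runner would then feed each row of the view through migrate() and save the result, exactly as in the migrate_v1_v2 example above.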

Using S/MIME on iOS Devices

The following article explains how to set up your iPhone or iPad to send and receive encrypted emails via S/MIME. The prerequisite is an S/MIME certificate from a certificate authority; some CAs provide them free for personal use. The procedure is not very complicated, even though the description may look lengthy due to the many screenshots. The biggest hurdle is picking the correct file format when exporting your S/MIME key on your Mac. (A description of how to export the correct certificate on Windows will follow.)

Set-up for Receiving Encrypted Emails

1. Export your private key in a format that you can import on your iOS devices.

To do this, open “Keychain Access” and find your certificate. Select it and choose “File” / “Export Items”, as shown below.

01 export key

2. Next, save the certificate in p12 format.

In the process of saving the certificate, as detailed below, you will be asked to provide a password to encrypt your key. This will allow you to send it via email without fear of it being intercepted and used by someone else. Depending on your keychain settings you will also be asked to provide your administrator password to read the private key for exporting.

02 save p12

3. Now drag this exported file to your Mail.app icon to send it to yourself.

(Make sure you don’t encrypt it ;)

03 send key

4. Turn to your iOS device to import the certificate.

Open the email you just sent to yourself and tap on the attachment to import your certificate.

04 import on ios 05 unsigned certificate 06 enter password 

5. Enable S/MIME in advanced mail settings and choose your certificate.

On your iOS device go to “Settings” / “Mail, Contacts, Calendars” / “<Your Account>” / “Advanced” (at the very bottom of your account settings) and activate S/MIME. Important: Make sure you leave the account settings by tapping “Done” in the top right of the toolbar. Changes don’t appear to be applied until you do so.

07 enable smime 07b confirm settings

You can also enable signing and encrypting of messages here, but more on that in a moment. What we’ve achieved so far is being able to read messages that have been encrypted with our public key. Unfortunately, sending encrypted messages involves a few more steps and has a few caveats.

Set-up for Sending Encrypted Emails

In order to send an encrypted message, you need to do the following.

1. Import the recipient’s public key.

This happens automatically in Mail.app on OS X but requires some manual interaction on iOS. You may have noticed when looking at signed messages (like the one you sent yourself earlier) that there’s a new little star icon in the blue email address bubble after S/MIME has been activated. This is the UI indicator for signed messages. The address bubble is also a button that you can tap to bring up address and certificate information.

08 address bubble star

Tapping this button will bring up the address info view:

09 address info

Tap install to register this public key, which will allow you to send encrypted emails to the key’s owner. You will need to repeat this procedure once for every recipient.

2. Send email.

There’s not really a step two other than making sure you’re sending to the recipient’s correct email address and from your correct account so that the available keys match up with the email addresses used in the process. You can tell that your message is being encrypted by the “Encrypted” string in the title bar of your message:

10 encrypted message

Caveats

What’s a bit unfortunate is that there’s no easy way to selectively send encrypted emails. The encryption setting is global for the account under “Settings”, meaning that you have to go there and enable/disable encryption for all messages from that account. It would be nice if that were the default only, with an option to override it in the message composition view.

It would also be nice if public key importing were automatic, like it is on the Mac.

But all in all, it’s nice to be able to read encrypted emails on iOS devices now.

EasyPay in the Apple Store 2.0 app

In the latest 5by5 Talk Show John Gruber and Dan Benjamin speculate how EasyPay in the Apple Store 2.0 app works. EasyPay is a feature that allows shoppers to scan an item’s barcode and complete a purchase via their iTunes account without any interaction with the shop’s staff.

John and Dan are puzzled by how Apple prevents someone from just walking out without properly scanning and purchasing an item.

My wild guess would be that the following happens: the item’s price tag contains an RFID chip that allows sensors at the exit to tell when an item is removed from the store. The barcode on the tag contains an ID that is associated with this RFID in the store’s inventory system. When you scan the barcode and purchase an item, the RFID associated with that item is cleared to leave the store and will therefore not raise an alarm.

Maybe that’s one reason this only works in US stores right now.

Automatic Time Machine Switching

With the ubiquity of mobile computers, and especially their dominance among Apple’s product offerings, it’s probably a very common set-up to use a MacBook both at home and at the office. This gives you a lot of flexibility and avoids having to maintain two installations – which can take a lot of time, depending on the amount of customization you apply to your machine. You bring your machine along and therefore never have to sit down at an out-of-date computer.

There’s one problem though: do you also carry along your time machine backup? Because if you don’t, and you spend a significant amount of time at either location, there will be large gaps and opportunities for failure in your backup schedule. (Yes, there’s mobile time machine, but I see that as an option for when you’re really on the road. A same-disk backup is not truly a backup, it’s more like “Trash on steroids”.)

So what are the options? You could carry an external disk around and use that for backups. The problem with this, though, is that it takes a lot of discipline to hook it up every time you sit down in one place in order for the hourly time machine backups to happen. Part of the beauty of time machine is that, if it’s configured to back up to a network volume, you never have to do anything for it to kick in. All you need to do is enter your wifi zone.

Another reason an external disk is not ideal is the fact that it’s not redundant in itself. It’s just a single disk and single disks fail. Ideally, a time machine backup sits on a RAID-5 or some other redundant configuration – none of which is going to be portable.

In my opinion, the ideal solution to this is to have a time machine set-up at each location where you spend a significant amount of time and which you get switched to automatically on joining the respective network. When I saw the macosxhints article about using two time machine backups a few days ago, I knew that all the bits were there to set this up. However, I didn’t want to install extra tools like the article describes (MarcoPolo) and therefore I wrote a little ruby script that does everything automatically.

The script is available on GitHub. The readme file explains most of the details but in a nutshell autotm does the following:

  • autotm looks at your system.log to determine if the last backup failed
  • if it failed, autotm will go through the list of configured servers to look for an alternative
  • if multiple servers respond to pings, autotm will pick the fastest one (your office server may be visible via a presumably slower VPN connection for example but you want to avoid backing up there from home)
  • if your last backup was successful but the server is not available anymore, autotm will check for alternatives and pick the fastest one, as described above
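The server-selection step above can be sketched in Python as a simplified stand-in for what the ruby script does (the function name and timing values are invented):

```python
def pick_fastest(ping_times):
    # ping_times maps server name -> round-trip time in ms, or None if down.
    reachable = {srv: t for srv, t in ping_times.items() if t is not None}
    if not reachable:
        return None
    # The fastest responder wins, e.g. the local server beats one behind a VPN.
    return min(reachable, key=reachable.get)

# Hypothetical measurements: the office server answers over a slow VPN,
# the home server is on the local network, a third server is unreachable.
times = {'office.example.com': 48.0, 'home.local': 1.2, 'backup2': None}
print(pick_fastest(times))  # home.local
```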

So essentially, all you have to do is set up two (or more) time machine backups for your machine and then record their details in the config file. The LaunchDaemon will then trigger autotm every 30 minutes to check if it needs to switch time machine targets, without any action required on your end.

Coffee Disaster

It was bound to happen. I spend countless hours tapping away on my MacBook Pro and I drink lots of coffee while I’m at it. So inevitably, as always when chances are small but opportunities abundant, disaster struck and I managed to pour half a cup of Rosabaya de Columbia on my Macbook’s keyboard. Don’t ask for details. Let’s leave it at too lazy to walk twice, balancing too many things, and the presence of gravity.

So I ended up with half a cup of coffee on the WASD area of my keyboard plus some on the trackpad, going for the edges. What to do? I went for:

  1. Calming down by shouting expletives
  2. Turning the laptop upside down
  3. Shutting it down
  4. Fetching a vacuum cleaner to suck out coffee, while holding the machine upside down (a situation man apparently doesn’t find himself in very often, or evolution would have had us develop a third arm)

Initially that didn’t help much. After rebooting I found that some keys appeared to work while others didn’t. It took me a moment to realize that actually all keys worked except the fn key, which was ‘stuck’ in the on position. Or rather, coffee residue bridged it into a pressed state and it wouldn’t let go.

In a situation like that you find out things you never would otherwise, like:

  • fn + cursor keys does nothing – even though fn + pretty much any other key sends the key
  • I never missed forward delete, on the contrary, I’m more of a backspace person (fn pressed will turn backspace into forward delete and drive you mad)
  • keys with state are a nightmare and I love the fact that you can turn off caps lock on Lion
  • speaking of caps lock, why is it even there and why didn’t it break instead of fn (ok, I wouldn’t notice even if it had, actually)
  • the keys can be removed rather easily (revealing things better left unseen...)

and finally:

  • the aluminum bluetooth keyboard actually fits perfectly on top of a unibody 15" MBP

The battery compartment rests nicely in the groove above the function keys and thanks to the identical key size and layout you end up with a nice piggy-back set-up that you can actually work with:

Strap-on Keyboard

The even better news is that after a day of drying, the fn key has decided to get stuck in the ‘off’ position. That means I can’t control the brightness nor the volume from the keyboard right now but at least the rest of it is back to normal.

Update: And a few days later everything is back to normal!

Integrating git version info in iOS/Cocoa apps

This is a quick reminder on how to add version info from git to your Xcode application – iOS or Cocoa – so you can see in the actual application which repository state this binary was built from.

There’s nothing new in this post really – others have done the same and blogged about it – but it serves as a note to self on how to quickly go about it and, as such, may be helpful to others. It’s really quite a simple two-step process:

  1. Add a script build phase to your build target (at the end, after the other build steps):

    git status
    version=$(git describe --always --tags --dirty)
    echo "version: $version"
    info_plist="$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.app/Info.plist"
    /usr/libexec/PlistBuddy -c "Set :CFBundleVersion $version" "$info_plist"

    The extra git status command was added to refresh the index, because otherwise the repository would sometimes be reported as dirty. I’m not sure why exactly that happens – maybe some temporary build file – but this is the fix.

  2. Add a UILabel to one of your views (or simply NSLog to the console) and show the version:

    - (void)viewDidLoad {
      [super viewDidLoad];
      NSString *version = [[[NSBundle mainBundle] infoDictionary]
                           objectForKey:@"CFBundleVersion"];
      self.versionLabel.text = version;
    }

Thoughts On Unit Testing

In a recent seminar at our company we talked about unit testing and during the discussion that ensued I found that I had a few things to say about the topic. There’s probably no wrong or right here (apart from the fact that you must test!) but there are things that I found work well in practice and others that really only look good on paper. A lot of it has to do with how you end up working, especially when under pressure, and, of course, to some extent with personal preference.

With that out of the way, the following are my observations collected over the course of some relatively large projects.

Don’t bloat test numbers

A lot of IDEs these days provide automation for tasks that are tedious. One of these tasks is setting up your test infrastructure. There are tools that auto-generate tests or test stubs whenever you add a method to a class or something similar. While this is sure to increase your coverage I’m not convinced this is a good idea in the long run.

What’s going to happen is that you generate a large number of trivial tests that you otherwise wouldn’t have written. The kind of trivial tests that come to mind are accessors, for example. What’s the point of testing methods for attribute assignment/read-out? Especially if they’re auto-generated anyway.
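To illustrate, here is the kind of auto-generated accessor test I mean (a made-up Python example):

```python
class Player:
    def __init__(self, name):
        self.name = name

# A test like this exercises nothing but attribute assignment -- it can
# hardly fail unless the language itself is broken, yet it still costs
# run time and maintenance whenever the class changes.
def test_name_accessor():
    p = Player('igor')
    assert p.name == 'igor'

test_name_accessor()
print('ok')
```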

Effectively, what you’ll do is make it harder to manage tests overall. For instance, when your test logs are spammed with meaningless succeeding tests you’re more likely to miss a problem. Also, updating expectation values should your inputs change is going to generate a lot of work (maybe to the effect that you’ll hold off on it, which is never good). But most importantly, your test run time is going to increase and in consequence may make you run the full suite less often, especially when you’re in a hurry.

Similarly, if you yourself have written code that auto-generates other code, don’t auto-generate test code as well. Create tests for your generator and make sure it works but don’t spawn code and tests – you’re essentially re-testing the same thing with predictable outcome.

Make sure your tests add value

In the same vein, make sure your tests actually increase the value of your test suite. There’s no point copy-pasting a test and just changing parameters to have yet another test unless it actually ends up running a new code path or testing an edge case (NULL parameters etc).

This may sound trivial but perhaps a less obvious case is a wrapper method that calls a complicated one. Is it really worthwhile to add a test for a trivial method which basically only replicates the complicated test? Blindly adding tests for the sake of coverage can lead to the problem of long-running test suites with little or no extra value, as mentioned above. Either use the trivial, higher-level method to test the complicated underlying one or test the latter directly.

Coverage is great but don’t over-interpret its value. If full coverage leads to a test suite that is so slow you’ll only run it once a month you’ll end up testing less, not more.

Adapt your test strategy to your type of software

There are different kinds of software and along with them come different approaches to unit testing.

I believe the main distinctions to make are the following:

  • library code / APIs
  • faceless application
  • GUI application

Libraries & APIs

Library code is probably the easiest to tackle with traditional unit testing and the one where the strictest rules apply. I would maintain that every public API has to be covered by unit tests, typically including edge-case parameters. It’s pretty embarrassing to ship an API and find the advertised calls into your library don’t work. The best way to ensure they do is to be your own (and first!) client by running your test suite against the full set of APIs. It doesn’t stop at the public API, of course, but it’s really the place to start, also to get a feeling for whether the interface is good. Test-driven design works great for libraries.

What makes it easier to maintain full or extensive coverage in library unit testing is that there’s typically much less state involved than in application code – it’s much simpler to maintain test data.

Faceless Application

A ‘faceless application’ is an application that interfaces to users by some means other than GUI or code, for example file based exchange or network sockets.

Therefore, testing the interface of a ‘faceless application’ is quite different from library or GUI testing. Where in the case of a library you write code to test your interfaces, here you are going to spend much more of your time setting up your test fixtures in the form of files or network services (or mock interfaces for that matter).

I believe in unit testing it really comes down to what the interfaces to your code are. In the case of a library, it’s really other code that connects with yours, and therefore the best way to test is to write calls into your library. With a faceless application you have quite different interfaces. So while internally, i.e. inside your application, you still use test-driven design to cover your internal interfaces to some extent, you also have to build a different test infrastructure for your public interface.

Typically setting up this kind of test infrastructure is quite a bit more complicated than in the case of library code. You’ll probably end up doing some “test triage”: You can’t be everywhere, so to speak, and I believe the most important coverage is that of your public interface.
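As a toy illustration of testing through a file interface rather than a code interface (the application logic and file names here are invented):

```python
import os
import tempfile

def process_file(in_path, out_path):
    # Hypothetical faceless-application behavior: upper-case each input line.
    with open(in_path) as src, open(out_path, 'w') as dst:
        for line in src:
            dst.write(line.upper())

# The test fixture is a file on disk rather than an in-code object.
with tempfile.TemporaryDirectory() as d:
    in_path = os.path.join(d, 'in.txt')
    out_path = os.path.join(d, 'out.txt')
    with open(in_path, 'w') as f:
        f.write('hello\n')
    process_file(in_path, out_path)
    with open(out_path) as f:
        print(f.read())  # HELLO
```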

GUI Applications

Finally, GUI applications. I have not really found a good way to do GUI application unit testing. Naturally one will have “back-end” code that can and must be tested in “traditional” ways but most importantly you really want to have tests in place for your public interfaces, i.e. do common user tasks so you can be sure that you (or rather your test) has successfully clicked all the essential buttons before you ship.

I know there are tools like “Eggplant” that claim to cover this. They may well do so but I’ve never tried any of them for the few (small) GUI applications I’ve written.

What I did try was use AppleScript automation for an Objective-C/Cocoa/OSX application (I’m sure there are similar scripting tools for other platforms/languages) but in the end it was too slow to allow running a big test suite. Also, this approach is limited in its result checking: One can read out controls (“did the expected text appear when I clicked the ‘Transform’ button?”) but obviously result checking is not always that simple.

Actually, I’m not convinced that automated unit testing is really the proper model for GUI applications. One reason for this is that they allow for so much freedom in how you can chain together actions that it’s virtually impossible to instrument all combinations in unit tests. Plus if you do, you may end up being constrained by your tests not to make GUI changes for fear of breaking a huge suite of unit tests.

You may argue that you’re expecting the same “breaking with habit” from your users, but GUIs are much more about design and “feel” and you sometimes need to “force” the direction. Legacy unit tests can make you keep a “stale” GUI when you should really move ahead. For library code it’s really the other way around: unit tests ensure that you maintain source compatibility and make you think twice about whether an incompatible change is worthwhile, because you’re the first one that’s going to be hit by it: you’re going to have to update all your tests – the same thing you’re asking your software developer clients to do. From how much work this update is going to be for you, you can judge if the change is really worth it. The defining difference is probably that in one case you have persisted behavior (code) whereas in the other it’s transient – the users’ “muscle memory” or habits.

The solution for GUI applications is probably: Rely on good beta-testers. (See also: http://www.wilshipley.com/blog/2005/09/unit-testing-is-teh-suck-urr.html)

Apple Digital Media Library?

Please don’t call it cloud...

I’m setting up new macs quite regularly and I’ve found over the years it’s gotten easier to switch from one work Mac to another. There was a time when I had a strict “main work machine”, a macbook, which would have all my latest data and apps I use with a fallback machine for testing, heavy duty compiling, etc. This was typically a desktop.

The idea was to be able to work everywhere using the laptop and switch to the desktop if required for stuff the laptop couldn’t handle well. Thanks to faster internet connections and better synchronization options I find that nowadays I regularly swap machines depending on whether I want the bigger screen of the desktop or the better mobility of the laptop. I have my source code in a subversion repository off-site, my contacts and calendars sync via MobileMe, and my mail is hosted on an IMAP server. That covers pretty much everything I need for work. Except maybe for open source software, which I install on one machine and rsync to the other.

When I looked at this process I thought I might as well sync my iTunes and iPhoto Libraries with rsync to benefit from the bigger desktop screen and the speakers. And then I thought – why do I have to do this when my contacts, my calendars, and my mail are available on all my macs pretty much out of the box?

And isn’t it odd that we buy not only music but movies, TV shows, and now apps from the iTunes Music Store? When will Apple rename this to Apple Digital Media Store and consolidate our iTunes Music and iPhoto Library into a Digital Media Library, offering local network (maybe even remote network) synching?

LHC: Physicists vs Lawyers

On September 10 the LHC at CERN took up operations with protons in the beam. There has been quite a bit of media coverage lately about the risks the machine might pose to the general population: black mini holes (or mini black holes?) could be produced and subsequently swallow the earth. The latest news is that there’s even been a lawsuit filed at the European Court of Human Rights against CERN to stop the operations.

It’s a pretty interesting constellation: physicists like to report the superlatives this new machine is capable of and now, all of a sudden, these superlatives scare people. There’s an attempt to turn the theories against us! Of course, there are counter-theories why the bad scenarios won’t happen...

So here’s my take: All the black holes we observe in the universe are not created by suns of more than a certain size imploding at the end of their life but rather by physics experiments gone south! It appears that invariably every civilization advances to the point where they can build accelerators of sufficient size to nuke themselves and their courts are simply too slow to stop them in time. Cynics might say these civilizations were still better off than the ones where the courts or lawyers were in fact powerful enough...