Mocking gesture recognisers with OCMock

Full credit for this tip goes to my colleague, Tom Kelly.  He doesn’t have a blog at this point, so I figured I’d blog the tip for him.

You can mock gesture recognisers like any other type of object.  It's not something I'd considered before, but it means that you can unit test the gesture handling functionality on your classes by calling their gesture handling methods directly and passing in your mock gesture recogniser.

Below is an example of a mock tap gesture recogniser:

id gestureMock = [OCMockObject partialMockForObject:[[UITapGestureRecognizer new] autorelease]];
[[[gestureMock stub] andReturnValue:OCMOCK_VALUE(CGPointZero)] locationInView:[OCMArg any]];

Then, in your unit test, you call the tap handler on your class under test:

-(void)testTapBehaviour
{
    // Set up control under test here...

    // Create the mock tap gesture recogniser
    id gestureMock = [OCMockObject partialMockForObject:[[UITapGestureRecognizer new] autorelease]];

    // Stub the gesture location:
    [[[gestureMock stub] andReturnValue:OCMOCK_VALUE(CGPointZero)] locationInView:[OCMArg any]];
    // Set up any expected behaviours on your custom class here...

    // Call the tap handler method on your custom class here...
}
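To make this a little more concrete, here is a sketch of what a complete test might look like.  The view controller, its handleTap: method and its lastTapPoint property are all hypothetical names standing in for your own class under test:

```objc
- (void)testTapMovesMarkerToTapLocation
{
    // Hypothetical class under test, with a handleTap: gesture handler
    MyViewController *controller = [[[MyViewController alloc] init] autorelease];

    // Stub the recogniser so it reports a tap at (10, 20) in any view
    id gestureMock = [OCMockObject partialMockForObject:[[UITapGestureRecognizer new] autorelease]];
    CGPoint tapPoint = CGPointMake(10.f, 20.f);
    [[[gestureMock stub] andReturnValue:OCMOCK_VALUE(tapPoint)] locationInView:[OCMArg any]];

    // Call the gesture handler directly, as if the tap had occurred
    [controller handleTap:gestureMock];

    // Assert against whatever your handler is supposed to do
    STAssertTrue(CGPointEqualToPoint(controller.lastTapPoint, tapPoint),
                 @"The handler should record the tap location");
}
```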

Automating unit tests in Xcode

I'm currently working on an iOS project.  Within the project we have a set of application unit tests.  We are using Jenkins to automate builds of our product.  We have written a build script, which Jenkins runs once it has checked out our code.  As part of that shell script, we would also like to run the project's unit tests, to make sure that the commit hasn't broken anything.
The steps we have followed to achieve this are based on this excellent blog post: http://longweekendmobile.com/2011/04/17/xcode4-running-application-tests-from-the-command-line-in-ios/

By adding this line to our build script:

xcodebuild -target unitTestBundleTargetName -configuration Debug -sdk iphonesimulator5.1 TEST_AFTER_BUILD=YES clean build

and patching the /Developer/Platforms/iPhoneSimulator.platform/Developer/Tools/RunPlatformUnitTests file as described in the blog post above, we are able to run our unit tests.
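Putting it together, the Jenkins shell step ends up looking something like the sketch below; the target name is a placeholder for your own test bundle target:

```shell
#!/bin/bash
set -e  # fail the Jenkins build if any step fails

# Build the test bundle and run the unit tests against the simulator.
# 'unitTestBundleTargetName' is a placeholder for your test bundle target.
xcodebuild -target unitTestBundleTargetName \
           -configuration Debug \
           -sdk iphonesimulator5.1 \
           TEST_AFTER_BUILD=YES \
           clean build
```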

That's not quite it though.  We've updated to Xcode 4.5, which now includes the simulator for iOS 6.  There are problems with iphonesimulator6.0.  If you update the line above to use iphonesimulator6.0, you will get the following error:

Unknown Device Type. Using UIUserInterfaceIdiomPhone based on screen size
Terminating since there is no workspace.

This isn't a problem for us, as we're not using iOS 6 features. However, if you are, and you need to run against iphonesimulator6.0, then I can imagine this will be a major issue.

I'm going to have a little rant at this point.  Using a continuous integration environment to produce your builds is simply good practice.  One of the prerequisites for this is being able to build and test from the command line.  xcodebuild gets you some of the way, in that it will create a build for you, but it feels wrong that developers are having to patch the Apple developer tools in order to run their unit tests from the command line.  And even once they have, they are still unable to test iOS 6-specific features.

Xcode is a powerful tool, but it feels like Apple should be doing more to support good practices like continuous integration.  Coming from a Java background, where this kind of thing is well supported, it is surprising that it is not so in the iOS world.

Creating a screenshot of a view containing Shinobi charts

As part of a project I've been working on, I've had a use case to create a snapshot of a UIView which contains a set of Shinobi charts.  For those of you who haven't used it, Shinobi charts is an excellent framework which allows you to quickly create high performance charts in your application.

The charts it creates are rendered using OpenGL.  OpenGL content is not captured in the same way as UIKit components, so in order to take a screenshot we are going to have to do a bit of work to capture both kinds of element in our view.

The approach I have used is heavily based on the one outlined in an excellent blog post by Stuart Grey of the Shinobi team, on taking a snapshot of a Shinobi chart.  It can be found here: http://www.shinobicontrols.com/blog/posts/2012/03/26/taking-a-shinobichart-screenshot-from-your-app/

In that post, Stuart describes how you can capture the OpenGL content of a chart in one image view, capture the UIKit content of the chart in another, and then merge the two together.

I will be doing something broadly similar, although I will also be capturing any UI components which are outside the chart.

In order to do this work, I have made use of the SChartGLView+Screenshot category which Stuart has written.

First, I will create a category on UIView, which allows the user to create a snapshot of the view, containing a set of OpenGL subviews.

#import "UIView+Screenshot.h"
#import <QuartzCore/QuartzCore.h>

@implementation UIView (Screenshot)

- (UIImage*)snapshotWithOpenGLViews:(NSArray*)openGlImageViews  {

    // Create an image of the whole view
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, [UIScreen mainScreen].scale);
    } else {
        UIGraphicsBeginImageContext(self.frame.size);
    }
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageView *imageView = [[[UIImageView alloc] initWithFrame:self.frame] autorelease];
    [imageView setImage:viewImage];

    //Add our GL captures to our main view capture
    for (UIImageView *glImageView in openGlImageViews)    {
        [imageView addSubview:glImageView];
    }

    //Turn our composite into a single image, preserving the screen scale on retina devices
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, [UIScreen mainScreen].scale);
    } else {
        UIGraphicsBeginImageContext(imageView.bounds.size);
    }
    [imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *completeViewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return completeViewImage;
}

@end

Our category starts off by capturing the whole view, which won't include the OpenGL components, and creates an image view from the resultant image.  The method takes in an array of image views holding the OpenGL captures; the origins of these image views should be set so that the OpenGL content is correctly positioned relative to the main view.  We then add the OpenGL captures as subviews of the main image view, and take a snapshot of the composite.

An example of this method in action is below:

#import "UIView+Screenshot.h"
#import "SChartGLView+Screenshot.h"
#import <ShinobiCharts/SChartCanvas.h>

// Rest of code here

- (void)doSomethingWithSnapshot  {
    UIImageView *chartImageView = [[[UIImageView alloc] initWithImage:[chart.canvas.glView snapshot]] autorelease];
    CGRect chartFrame = chartImageView.frame;
    chartFrame.origin.x += chartView.frame.origin.x + chart.canvas.glView.frame.origin.x;
    chartFrame.origin.y += chartView.frame.origin.y + chart.canvas.frame.origin.y;
    chartImageView.frame = chartFrame;

    UIImage *viewImage = [self.view snapshotWithOpenGLViews:[NSArray arrayWithObject:chartImageView]];

    // Do something with the snapshot here
}

You’ll notice that I’ve had to move the origin of the image view containing the OpenGL capture so it is in the correct place on the screen.  This might be something you’ll have to play around with if you use this technique.

Customising the transition of views within a UINavigationController

I recently had a use case where I needed to customise the transition animation which was used when I pushed a view onto a UINavigationController stack.

I came across this excellent article, which describes a way of doing this using the CALayer of the UINavigationController's view: 
http://freelancemadscience.blogspot.co.uk/2010/10/changing-transition-animation-for.html

Armed with this technique, I created my own subclass of UINavigationController, and used the following code when I pushed a new view onto the stack:

CATransition* transition = [CATransition animation];
transition.duration = 0.4f;
transition.type = kCATransitionReveal;
transition.subtype = kCATransitionFromTop;
[self.view.layer addAnimation:transition forKey:kCATransition];
        
UIViewController *viewController = // create your view controller here
[self pushViewController:viewController animated:NO];
        
[self.view.layer removeAnimationForKey:kCATransition];

In the code above, I set up the custom transition I want to use for displaying the new view (in this case, essentially the default transition, but in a different direction), and then push the new view onto the stack.  Notice that the push operation itself is not animated; the transition is handled by the animation we set on the layer.  Once we're done, I remove the custom animation from the view's layer.
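One way to package this up is as a helper method on the UINavigationController subclass.  The sketch below does exactly what the snippet above does; the method name pushViewController:withTransitionSubtype: is my own invention:

```objc
// In a UINavigationController subclass (hypothetical helper method)
- (void)pushViewController:(UIViewController *)viewController
     withTransitionSubtype:(NSString *)subtype
{
    // Set up the custom transition on the navigation controller's view layer
    CATransition *transition = [CATransition animation];
    transition.duration = 0.4f;
    transition.type = kCATransitionReveal;
    transition.subtype = subtype;   // e.g. kCATransitionFromTop
    [self.view.layer addAnimation:transition forKey:kCATransition];

    // The layer animation drives the transition, so don't animate the push itself
    [self pushViewController:viewController animated:NO];

    // Clear the custom animation once we're done
    [self.view.layer removeAnimationForKey:kCATransition];
}
```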

Implementing a publish/subscribe design using the NSNotificationCenter

My background is mainly in Java, so if I want to create an object which has a set of listeners, I’m used to having to implement that design myself.  In the past, I’ve added a collection of listeners to the observable object, then added some methods to the class’ API to allow other objects to register as listeners for events.

I’m still quite new to iOS, so I haven’t really made a lot of use of NSNotificationCenter up till now, other than for registering for things like keyboard events.  I’ve found that it can really help with the publish/subscribe design pattern, and it reduces the amount of code you have to write.

To register an object to listen for a particular kind of event, you can use the following code:

[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(someMethod) name:kNotificationName object:nil];

You define the method on your listener class and pass in a selector for it, along with the name of the notifications to listen for.  Whenever a notification with that name is posted, every listener registered for that name will be notified.

To post a notification, you can use the following code:

[[NSNotificationCenter defaultCenter] postNotification:[NSNotification notificationWithName:kNotificationName object:nil]];

The other thing to remember is to remove your listener classes as observers from the notification center when they are deallocated.
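Assuming manual reference counting, as elsewhere in this post, the deregistration can live in the listener's dealloc:

```objc
- (void)dealloc
{
    // Stop receiving notifications before this object goes away, otherwise
    // the notification center may message a freed object
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [super dealloc];
}
```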

MacRuby and CoreData

As part of a project I've been working on, we've needed to use CoreData within MacRuby.  MacRuby is an implementation of Ruby which runs on the Objective-C runtime and the CoreFoundation framework.  As such, it allows you to make calls to CoreData from Ruby.

We created a utility class to allow us to make calls to our data store.  It was as follows:

class CoreDataStore
    def create_entity name, props={}, relationships={}
        entity = OSX::NSEntityDescription.insertNewObjectForEntityForName_inManagedObjectContext(name, context)
        props.each do |k,v|
            entity.setValue_forKey v,k
        end

        relationships.each do |k, objects|
            collection = entity.mutableSetValueForKey(k)
            objects.each {|o| collection.addObject o}
        end
        entity
    end

    def get_entity name, key, value
        request = OSX::NSFetchRequest.alloc.init

        description = OSX::NSEntityDescription.entityForName_inManagedObjectContext(name, context)

        request.setEntity(description)

        valueString = "#{value}"
        if (value.is_a? String)
            valueString = "'#{value}'"
        end
        predicateString = "#{key} like[c] #{valueString}"
        predicate = OSX::NSPredicate.predicateWithFormat(predicateString)

        request.setPredicate(predicate)

        results, error = context.executeFetchRequest_error(request)
        raise "Fetch failed: #{error.description}" if error
        results
    end

    def initialize(data_store_path, mom_path)
        @data_store_path = data_store_path
        @mom_path = mom_path
    end

    def context
        @context ||= OSX::NSManagedObjectContext.alloc.init.tap do |context|
            model = OSX::NSManagedObjectModel.alloc.initWithContentsOfURL(
                OSX::NSURL.fileURLWithPath(@mom_path))
            coordinator = OSX::NSPersistentStoreCoordinator.alloc.initWithManagedObjectModel(model)

            result, error = coordinator.addPersistentStoreWithType_configuration_URL_options_error(
                OSX::NSSQLiteStoreType, nil, OSX::NSURL.fileURLWithPath(@data_store_path), nil)
            if !result
                raise "Add persistent store failed: #{error.description}"
            end
            context.setPersistentStoreCoordinator coordinator
        end
    end

    def save
        res, error = context.save_
        if !res
            raise "Save failed: #{error.description}"
        end
        res
    end
end

The utility class allows us to create a new entity within the persistent store, and to make a simple query on the store.  It is by no means comprehensive, but it did everything we needed for our simple use case.
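As an aside, note that the predicate string which get_entity builds quotes string values but leaves numbers bare.  The little pure-Ruby sketch below (extracted from the class so it can run on its own) shows the format it produces:

```ruby
# Mirrors the predicate-building logic in get_entity above
def predicate_string(key, value)
  value_string = value.is_a?(String) ? "'#{value}'" : "#{value}"
  "#{key} like[c] #{value_string}"
end

puts predicate_string("name", "Bob")   # => name like[c] 'Bob'
puts predicate_string("age", 42)       # => age like[c] 42
```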