Binary search through commits with Git bisect

Ever had a problem come up somewhere within a chain of commits, and you’re not sure which change caused it?  Up to now, we’ve addressed this by manually binary-searching between the latest commit we can find where the problem hasn’t yet been introduced and the commit where it first appears.  I consider myself fairly experienced with Git, but I had no idea that it would do this search for you!

The git bisect command is your friend in this case.  An excellent blog post explaining how to use it can be found here:

http://webchick.net/node/99
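
To give a flavour of how it works, here’s a minimal bisect session.  The v1.0 tag and the build-and-test step are placeholders for your own known-good reference point and verification method:

git bisect start
git bisect bad                 # the commit you're on exhibits the problem
git bisect good v1.0           # the last point you know was problem-free

# Git checks out a commit halfway between the two; build and test it,
# then mark it, and repeat until Git names the first bad commit:
git bisect good                # or 'git bisect bad', as appropriate

# Once you have your answer, return the repository to where you started:
git bisect reset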

Mocking gesture recognisers with OCMock

Full credit for this tip goes to my colleague, Tom Kelly.  He doesn’t have a blog at this point, so I figured I’d blog the tip for him.

You can mock gesture recognisers just like any other type of object.  It’s not something I’d considered before, but it means that you can unit test the gesture-handling functionality of your classes by calling their gesture handler methods directly and passing in your mock gesture recogniser.

Below is an example of a mock tap gesture recogniser:

// Create a partial mock around a real tap gesture recogniser...
id gestureMock = [OCMockObject partialMockForObject:[[UITapGestureRecognizer new] autorelease]];
// ...and stub locationInView: so the handler always sees a tap at (0, 0)
[[[gestureMock stub] andReturnValue:OCMOCK_VALUE(CGPointZero)] locationInView:[OCMArg any]];

Then, in your unit test, you call the tap handler on your class under test:

-(void)testTapBehaviour
{
    // Set up control under test here...

    // Create the mock tap gesture handler
    id gestureMock = [OCMockObject partialMockForObject:[[UITapGestureRecognizer new] autorelease]];

    // Stub the location which the mock recogniser reports
    [[[gestureMock stub] andReturnValue:OCMOCK_VALUE(CGPointZero)] locationInView:[OCMArg any]];
    // Set up any expected behaviours on your custom class here...

    // Call the tap handler method on your custom class here...
}
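
To make that skeleton concrete, below is a sketch of a filled-in test.  MyTappableView, its handleTap: method and its tapped property are hypothetical stand-ins for your own class under test and its gesture handling:

-(void)testTapBehaviour
{
    // Hypothetical control under test
    MyTappableView *view = [[[MyTappableView alloc] initWithFrame:CGRectZero] autorelease];

    // Create the mock tap gesture recogniser, stubbed to report a tap at (0, 0)
    id gestureMock = [OCMockObject partialMockForObject:[[UITapGestureRecognizer new] autorelease]];
    [[[gestureMock stub] andReturnValue:OCMOCK_VALUE(CGPointZero)] locationInView:[OCMArg any]];

    // Call the tap handler directly, just as the real recogniser would
    [view handleTap:gestureMock];

    // Assert on whatever state the handler should have changed
    STAssertTrue(view.tapped, @"The view should record that it was tapped");
}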

Automating unit tests in Xcode

I’m currently working on an iOS project.  Within the project we have a set of application unit tests.  We are using the Jenkins CI server to automate builds of our product; Jenkins runs a shell script we have written once it has checked out our code.  As part of that script, we would like to run the project’s unit tests as well, to make sure that the latest commit hasn’t broken anything.
The steps we followed to achieve this are based on this excellent blog post: http://longweekendmobile.com/2011/04/17/xcode4-running-application-tests-from-the-command-line-in-ios/

By adding this line to our build script:

xcodebuild -target unitTestBundleTargetName -configuration Debug -sdk iphonesimulator5.1 TEST_AFTER_BUILD=YES clean build

and patching the /Developer/Platforms/iPhoneSimulator.platform/Developer/Tools/RunPlatformUnitTests file as described in the blog post above, we are able to run our unit tests.
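
For context, the test-running portion of our Jenkins shell script ends up looking roughly like the sketch below.  MyApp and MyAppTests are placeholders for your own application and unit test bundle targets:

#!/bin/bash
set -e

# Build the application target
xcodebuild -target MyApp -configuration Debug -sdk iphonesimulator5.1 clean build

# Build the unit test bundle; TEST_AFTER_BUILD=YES causes xcodebuild to run
# the tests (via the patched RunPlatformUnitTests script) as part of the build
xcodebuild -target MyAppTests -configuration Debug -sdk iphonesimulator5.1 \
    TEST_AFTER_BUILD=YES clean build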

That’s not quite it though.  We’ve updated to Xcode 4.5, which now includes the simulator for iOS 6.  There are problems with iphonesimulator6.0: if you update the line above to use iphonesimulator6.0, you will get the following error:

Unknown Device Type. Using UIUserInterfaceIdiomPhone based on screen size
Terminating since there is no workspace.

This isn’t a problem for us, as we’re not using iOS 6 features.  However, if you are, and you need to run against iphonesimulator6.0, I can imagine this will be a major issue.

I’m going to have a little rant at this point.  It is simply good practice to use a continuous integration environment to produce your builds, and one of the prerequisites for that is being able to build and test from the command line.  xcodebuild gets you some of the way, in that it will create a build for you, but it feels wrong that developers have to patch the Apple developer tools in order to run their unit tests from the command line.  And even when they do, they are currently unable to test iOS 6-specific features.

Xcode is a powerful tool, but it feels like Apple should be doing more to support good practices like continuous integration.  Coming from a Java background, where this kind of thing is well supported, it is surprising that the same is not true in the iOS world.

Creating a screenshot of a view containing Shinobi charts

As part of a project I’ve been working on, I’ve had a use case to create a snapshot of a UIView which contains a set of Shinobi charts.  For those of you who haven’t used it, the Shinobi charts framework is an excellent framework which allows you to quickly create high-performance charts in your application.

The charts it creates are rendered using OpenGL.  OpenGL content is not captured in the same way as UIKit components on the page, so in order to take a screenshot we have to do a bit of work to capture both kinds of element in our view.

The approach I have used is heavily based on the one outlined in the excellent blog post which Stuart Grey of the Shinobi team has produced on capturing a snapshot of a Shinobi chart.  It can be found here: http://www.shinobicontrols.com/blog/posts/2012/03/26/taking-a-shinobichart-screenshot-from-your-app/

In this blog, Stuart describes how you can capture the OpenGL content of a chart in an image view, then capture the UIKit content of the chart in a different image view, and merge the views together.

I will be doing something broadly similar, although I will also be capturing any UI components which are outside the chart.

In order to do this work, I have made use of the SChartGLView+Screenshot category which Stuart has written.

First, I will create a category on UIView which allows the user to create a snapshot of a view containing a set of OpenGL subviews.

#import "UIView+Screenshot.h"
#import <QuartzCore/QuartzCore.h>

@implementation UIView (Screenshot)

- (UIImage*)snapshotWithOpenGLViews:(NSArray*)openGlImageViews  {

    // Create an image of the whole view
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, [UIScreen mainScreen].scale);
    } else {
        UIGraphicsBeginImageContext(self.frame.size);
    }
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageView *imageView = [[[UIImageView alloc] initWithFrame:self.frame] autorelease];
    [imageView setImage:viewImage];

    //Add our GL captures to our main view capture
    for (UIImageView *glImageView in openGlImageViews)    {
        [imageView addSubview:glImageView];
    }

    //Turn our composite into a single image, matching the screen scale as before
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, [UIScreen mainScreen].scale);
    } else {
        UIGraphicsBeginImageContext(imageView.bounds.size);
    }
    [imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *completeViewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return completeViewImage;
}

@end

Our category starts by capturing the whole view, which won’t include the OpenGL content, and creates an image view from the resultant image.  The method takes an array of image views holding the OpenGL captures; their origins should be set so that the OpenGL content is correctly positioned relative to the main view.  We then add those image views as sub-views of the main capture, and take a snapshot of the composite.

An example of this method in action is below:

#import "UIView+Screenshot.h"
#import "SChartGLView+Screenshot.h"
#import <ShinobiCharts/SChartCanvas.h>

// Rest of code here

- (void)doSomethingWithSnapshot  {
    UIImageView *chartImageView = [[[UIImageView alloc] initWithImage:[chart.canvas.glView snapshot]] autorelease];
    CGRect chartFrame = chartImageView.frame;
    chartFrame.origin.x += chartView.frame.origin.x + chart.canvas.glView.frame.origin.x;
    chartFrame.origin.y += chartView.frame.origin.y + chart.canvas.frame.origin.y;
    chartImageView.frame = chartFrame;

    UIImage *viewImage = [self.view snapshotWithOpenGLViews:[NSArray arrayWithObject:chartImageView]];

    // Do something with the snapshot here
}

You’ll notice that I’ve had to move the origin of the image view containing the OpenGL capture so it is in the correct place on the screen.  This might be something you’ll have to play around with if you use this technique.

Customising the transition of views within a UINavigationController

I recently had a use case where I needed to customise the transition animation used when pushing a view controller onto a UINavigationController stack.

I came across this excellent article, which describes a way of doing this using the CALayer of the UINavigationController’s view:
http://freelancemadscience.blogspot.co.uk/2010/10/changing-transition-animation-for.html

Armed with this technique, I created my own subclass of UINavigationController, and used the following code when pushing a new view controller onto the stack:

// Set up the custom transition on the navigation controller's view layer
CATransition* transition = [CATransition animation];
transition.duration = 0.4f;
transition.type = kCATransitionReveal;
transition.subtype = kCATransitionFromTop;
[self.view.layer addAnimation:transition forKey:kCATransition];

UIViewController *viewController = // create your view controller here
[self pushViewController:viewController animated:NO];

// Remove the custom transition so later pushes get the default behaviour
[self.view.layer removeAnimationForKey:kCATransition];

In the code above, I set up the custom transition I want to use for displaying the new view (in this case, a reveal from the top rather than the default push from the right), and then push the new view controller onto the stack.  Notice that we don’t animate the push operation – that is handled by the animation we set on the layer.  Once we’re done, I remove the custom animation from the view layer so that subsequent pushes use the default transition.
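
Popping works in exactly the same way.  Below is a sketch of what the reverse transition might look like; the type and subtype here are my guess at a sensible mirror of the push animation, so you may need to experiment to get the exact effect you want:

// A plausible reverse of the push transition above
CATransition* transition = [CATransition animation];
transition.duration = 0.4f;
transition.type = kCATransitionMoveIn;
transition.subtype = kCATransitionFromBottom;
[self.view.layer addAnimation:transition forKey:kCATransition];

[self popViewControllerAnimated:NO];

[self.view.layer removeAnimationForKey:kCATransition];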