In March 2016, I presented at the try! Swift conference in Tokyo on Core Animation! Here are my slides from the presentation; the slide content itself is in Japanese, but my original English transcript is in the notes.
19. CALayer
Where is it in UIKit?
public class UIView {
public var layer: CALayer { get }
}
UIView
20. Deeply integrated with UIView
public class UIView {
public var frame: CGRect {
get {
return self.layer.frame
}
set {
self.layer.frame = newValue
}
}
}
let newLayer = CALayer()
view.layer.addSublayer(newLayer)
• The CALayer property is exposed on UIView
• 'frame' is computed from CALayer's
'position' and 'bounds' properties.
21. Why is it not a superclass?
• The class of a UIView's layer can sometimes change.
• This would be impossible with a normal subclass implementation.
public class MyGradientClass : UIView {
override class func layerClass() -> AnyClass {
return CAGradientLayer.self
}
}
22. Mapping contents to CALayer
let trySwiftLogo = self.trySwiftLogo() as UIImage
let trySwiftLayer = CALayer()
trySwiftLayer.contents = trySwiftLogo.CGImage
(Animation is also possible!)
23. Managing the scale of CALayer
trySwiftLayer.contentsGravity
kCAGravityResize
kCAGravityResizeAspectFill
kCAGravityResizeAspect
kCAGravityCenter
41. Timing Function
let timingFunction = CAMediaTimingFunction(controlPoints: 0.08, 0.04, 0.08, 0.99)
let myAnimation = CABasicAnimation()
myAnimation.timingFunction = timingFunction
http://cubic-bezier.com
42. Animating a CALayer’s Contents
let imageView = UIImageView()
let onImage = UIImage()
let offImage = UIImage()
let myAnim = CABasicAnimation(keyPath: "contents")
myAnim.fromValue = offImage.CGImage
myAnim.toValue = onImage.CGImage
myAnim.duration = 0.15
imageView.layer.addAnimation(myAnim, forKey: "contents")
imageView.image = onImage
43. CAKeyframeAnimation
let rect = CGRectMake(0, 0, 200, 200)
let circlePath = UIBezierPath(ovalInRect: rect)
let circleAnimation = CAKeyframeAnimation()
circleAnimation.keyPath = "position"
circleAnimation.path = circlePath.CGPath
circleAnimation.duration = 4
// Manually specify keyframe points
// circleAnimation.values = //…
// circleAnimation.keyTimes = //..
let trySwiftLayer = //…
trySwiftLayer.addAnimation(circleAnimation, forKey: "position")
44. CAAnimationGroup
let myPositionAnimation = CABasicAnimation(keyPath: "position")
let myAlphaAnimation = CABasicAnimation(keyPath: "opacity")
let animationGroup = CAAnimationGroup()
animationGroup.timingFunction = CAMediaTimingFunction(name: kCAMediaTimingFunctionEaseInEaseOut)
animationGroup.duration = 2
animationGroup.animations = [myPositionAnimation, myAlphaAnimation]
let trySwiftLayer = CALayer()
trySwiftLayer.addAnimation(animationGroup, forKey: "myAnimations")
45. Animation Completion Handling
// Set a delegate object
let myAnimation = CABasicAnimation()
myAnimation.delegate = self
// Animation completion is sent to 'animationDidStop(anim:finished:)'
// ———
// Set a closure to be executed at the end of this transaction
CATransaction.begin()
CATransaction.setCompletionBlock({
// Logic to be performed, post animation
})
CATransaction.commit()
Good morning / afternoon! Welcome to Advanced Graphics with Core Animation!
Time is short, so let’s get started
Alrighty. We’re going to be covering three major points today
First: So we're all on the same page, a general introduction to Core Animation, and how it differs from UIKit.
Second: How to set up and perform animations in Core Animation
Third: A quick walkthrough on some of the CALayer subclasses available to us on iOS.
Before we get started, let me introduce myself! My name’s Tim Oliver and I’m an engineer from Perth in Western Australia.
In the past, I’ve worked for both web and app design and development agencies, but I now work full-time for Realm, having started in March last year. I’ve been a huge fan of developing iOS apps since the iPhone 3G launched in Australia, and I’ve been doing iOS development in a professional capacity since mid-2009. Also, I love karaoke. Let me know if there’s plans for doing that after this! ;D
I also really like bad puns. :D
I feel I should also mention my relationship with Japan. I love Japan and I’ve been coming here for… a while.
Due to my father's work, my family lived in Japan in 1996, where we made many friends and connections. My sister and I have been studying the language in Australia ever since. I've also lived in Japan twice since then: in 2007, when I did a working holiday in Niigata and Osaka, and in 2013, when I worked as a developer for a company named pixiv.
In my free time, I’m building a comic reader app named iComics.
The goal of the app is to let users read their own digital comics on their devices. Obviously the app is incredibly graphics-heavy, and so I've spent many months playing with various features of UIKit and Core Animation in my quest for a constant 60FPS. This talk is mostly about what I've learned from all of that 'playing'. Additionally, iComics is powered by Realm, making me the only employee of Realm using it in a shipping app. Spoiler alert: it's really good!
So let’s get started! I’m not really sure what level everyone here is at, so I’m hoping these first few slides won’t be too boring for everyone. What exactly is Core Animation?
Simply put, Core Animation is the system framework that handles both the graphics rendering and the animation of native apps on iOS. From your app, it directly handles offloading work created by the CPU to the GPU through a low-level, efficient API. Looking at Apple's chart, UIKit actually sits on top of it, and hooks into Core Animation at a very tightly integrated level. Core Animation itself sits directly on OpenGL, and presumably Metal since iOS 9.
Similarly to UIView, Core Animation is mainly represented as a series of layer objects, which can be added to each other, just like subviews. In their most basic implementation, they are quads that are either a flat color, or can have content, such as a bitmap, directly mapped to them. In my experience doing game development as a hobby, working with graphics on this level feels very close to writing custom game UIs directly with OpenGL.
But with that all being said, why should we even care about Core Animation? UIKit is all we need right? Well, that’s 95% true, but there are a lot of advantages of learning how to work with Core Animation alongside UIKit. It’s the framework actually in charge of rendering the content on your screen, and so understanding how UIKit interacts with it lets you understand what is actually going on in your app. This subsequently lets you optimise your app’s speed as it runs, allowing you to fix performance bottlenecks more easily.
Another advantage is that not all of Core Animation is exposed via UIKit, so the range of effects you can create on the GPU dramatically increases once you know how to incorporate it. All of this serves a single goal: by adding animations and effects and optimising your performance, you can make an app that really stands out and impresses people.
One thing I got confused by A LOT when I was originally starting out with graphics on iOS was: how is Core Animation different from Core Graphics? Surely Core Animation is just the 'animation' part of graphics on iOS, and Core Graphics is the actual 'rendering' part? As it turns out, that's not true, and personally, I think Core Graphics, while relevant, is a slightly confusing name. To show the difference, here's some code I wrote. :)
A small amount of code… it took like 3 minutes to write. XD
This is a mixture of Core Graphics and UIKit code. Can you guess what it does?
It draws the try! Swift logo!
In case you didn't really get what just happened, Core Graphics lets you dynamically draw complex shapes and manipulate image data. This is completely different from Core Animation in that it ALL happens statically on the CPU. This sort of drawing is too complex to perform on the GPU, and so Core Graphics inherently never leaves the CPU. As a result, it can be quite slow, especially on hardware like the A5X chip, whose CPU was disproportionate to its graphics hardware. While I've seen a few sample projects on GitHub rely on Core Graphics to animate their content, manually performing a redraw every 1/60th of a second, this is not great since it's slow and taxes the CPU a lot.
Instead, it’s often great to get Core Graphics and Core Animation to work together. Core Graphics can perform the initial image generation/processing, and then pass the finished content off to Core Animation, which can then manipulate it on the GPU as needed.
By the way, I lied. I didn't write that code. There's a really nice app out there named PaintCode that lets you take vector images, like SVG, and convert them into raw Core Graphics code. I use it very often for icons in my app, since you can dynamically render them at any size, minimising redundant files. I seriously recommend this app.
Like I said earlier, Core Animation is a series of layer objects. Not surprisingly, the base layer class is called CALayer. Creating a layer object is a lot like creating a UIView, except the niceties of UIKit, like UIColor and UIImage, aren't available and need to be converted to their Core Graphics counterparts first. Additionally, it is always necessary to import the QuartzCore framework in order to work with CALayers.
And then to make it even more interesting, you can apply a very quick rounded edge mask to it via the ‘cornerRadius’ feature. We’ll look more at Core Animation masks down the line.
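The two ideas above can be sketched like this (a minimal, illustrative example in the Swift 2 syntax used throughout these slides; the layer name and values are my own):

```swift
import QuartzCore
import UIKit

// Illustrative sketch: a plain red layer with rounded corners.
let redLayer = CALayer()
redLayer.frame = CGRectMake(0, 0, 100, 100)
// UIKit types like UIColor must be bridged down to Core Graphics types:
redLayer.backgroundColor = UIColor.redColor().CGColor
// A quick rounded-edge mask:
redLayer.cornerRadius = 10
```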
In fact, what we actually see when we display a UIView is actually just a CALayer object! Every UIView has a CALayer 'layer' property, and THAT is responsible for what we see on the screen. That isn't to say that UIView is a waste of effort. UIView adds a lot of iOS' core features, such as auto-layout and gesture recognizers, and the ability to transparently configure the layer object via UIKit objects like UIColor. But all that being said, we credit the actual drawing of the view to the screen to Core Animation.
Since CALayer is the visual component of UIView, it makes sense that all of the properties relating to layout actually map straight to the same properties in CALayer. While this probably isn't the exact code that runs, both CALayer's and UIView's 'frame' properties can be used interchangeably. That being said, the 'frame' property of CALayer is computed from two other properties: the 'position' value, which maps to the centre point of the layer, and the 'bounds', which contains the size.
If adding a whole new UIView would be overkill, it's possible to create child layers of a UIView's layer by using the 'addSublayer' method. I use this a lot for elements that don't require the additional overhead of a UIView, like a 'dimming' effect that only appears when a view is tapped.
It’s often asked, given that UIView is basically a higher-level derivative of CALayer, why isn’t UIView simply a subclass of it? The reason for this is that UIKit provides an interesting mechanism where it’s possible to ‘swap out’ the class of a layer for a subclass that provides alternative or additional effects. This is done by overriding the ‘layerClass’ method of your own UIView subclass and providing the class reference to your CALayer subclass of choice.
This wouldn’t be possible with the traditional subclass method, since it would always be locked to just CALayer then.
While setting the background color of a CALayer is a good start, that’s not great for a fully fleshed out UI. Instead, CALayers have a property named ‘contents’ that can be used to map content, usually bitmaps to the layer. This is also where ‘drawRect’ on UIView objects will map to. But for the sake of demonstration, we can take the try! Swift logo we procedurally created earlier, store it as a UIImage, and then directly map it to a CALayer. The result is a very low-level, but functionally identical version of a UIImageView.
Once a bitmap is mapped to a layer, it's possible to configure how it will be rendered. By default, the bitmap will be resized to fit the frame of the layer. If the layer's aspect ratio doesn't match the image's, then it will be distorted to fit. But by changing the 'contentsGravity' property of the layer, it's possible to change the behaviour of this bitmap scaling. Some of the more useful ones are 'resize aspect fill', where the content scales to fill the layer, and 'aspect fit', where the image stays at the right aspect ratio, but changes to fit the layer.
I should mention, this property is also exposed via the ‘contentMode’ of UIView, so if you want to do this sort of behaviour with a UIImageView, it’s not necessary to drop to the Core Animation layer for that.
If you want an example of where this sort of behaviour is useful, I absolutely recommend checking out the Tweetbot app. Hands-down, the most beautiful and elegantly designed app on iOS. When you view a Twitter account’s profile, and scroll beyond the bounds of the top, the background image grows to match the gap, but doesn’t distort. This is a much easier effect to achieve using ‘resize aspect fill’ since you only need to modify the height of the view, and keep the width constant.
Another interesting application of manipulating the content gravity of a layer is a technique I came up with while building the 'page scrubber' view in iComics. I wanted both the track of page numbers, as well as the 'handle' control, to be transparent. Without doing a costly masking operation, I discovered it was far easier and faster to simply map a bitmap of the page numbers track to two separate layers with 'left' and 'right' gravity properties respectively, and to simply resize their frames around the position of the handle control. This created the illusion of the handle control appearing over the page numbers track, but was still transparent, allowing the background content to still come through.
When I said ‘resized’ in the previous slides, I feel like I should have been more specific ‘how’. Since CALayer objects are rendered on the GPU, there’s no chance in there for the CPU to do a proper resampling pass. It’s up to the GPU to perform the bitmap rescaling. This is deferred to the texture resampling feature of the GPU itself (Now we’re REALLY entering game dev land!) via two properties: minificationFilter and magnificationFilter. By default, these properties are set to ‘linear’ which actually refers to the process known as bilinear filtering in graphics programming. This is a quick way of performing texture smoothing at different sizes, but starts to look really bad at very small sizes. Two other alternatives on iOS are ‘nearest’ and ‘trilinear’. ‘Nearest’ performs no smoothing at all and simply upscales and downscales the texels themselves. This can look utterly terrible, but is VERY fast to render, which may make it viable for certain cases. The alternative is ‘trilinear’ filtering, another term often used in game development. In this case, resized copies of the bitmap (named mipmaps) are created on the GPU and then blended together when the bitmap is scaled to certain scales. While this will definitely reduce the visual artefacts caused by bilinear filtering, it’s not great for real-time graphics like scroll views since there will be blocking on the main thread when generating the mipmaps.
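Those two filter properties can be set like so (an illustrative sketch; the image and layer names are my own):

```swift
import QuartzCore
import UIKit

// Sketch: picking a GPU resampling filter for a layer's contents.
let thumbnailLayer = CALayer()
thumbnailLayer.contents = UIImage(named: "Thumbnail")?.CGImage
thumbnailLayer.minificationFilter = kCAFilterTrilinear // mipmapped blending when shrinking
thumbnailLayer.magnificationFilter = kCAFilterNearest  // no smoothing at all; very fast
// The default for both is kCAFilterLinear (bilinear filtering).
```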
An example of where 'nearest' filtering is useful is present in an action we perform on iOS every day! When opening and closing an app, the 'screenshot' of the app is rendered with 'nearest' filtering as it scales. This is because as it gets smaller, it cross-fades with the app icon, so the visual artefacts aren't very easy to see, and it maintains a proper 60FPS while doing so.
Another cool thing that CALayers can do is masking. Taking a CALayer, it’s possible to add another CALayer with an alpha channel as a ‘mask’. This will then clip the original CALayer to the shape of the mask. This can be used to create a wide-range of visual effects that wouldn’t otherwise be easily possible. It’s also the basis of a lot of popover views in iOS that have rounded corners.
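A masking operation like the one described above might look like this (a sketch; the layer names and the circular shape are illustrative):

```swift
import QuartzCore
import UIKit

// Sketch: clip a content layer to a circular mask.
let photoLayer = CALayer()
photoLayer.frame = CGRectMake(0, 0, 100, 100)
photoLayer.contents = UIImage(named: "Photo")?.CGImage

let maskLayer = CAShapeLayer()
maskLayer.path = UIBezierPath(ovalInRect: photoLayer.bounds).CGPath
photoLayer.mask = maskLayer // only pixels inside the oval remain visible
```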
In iComics, I wanted to create a ‘tool tip view’ that visually demonstrated what effect enabling a setting would have on the UI. To do this, I used PaintCode to generate a series of bitmaps, and a masking layer. These were then blended together and animated with Core Animation to create the following effect.
One very cool feature of Core Animation is the ability to add shadows to layers. While iOS 7’s visual aesthetic doesn’t rely on shadows as much as iOS 6 used to, they are still great to enhance the contrast between two elements when color alone isn’t enough.
One thing of note is that it’s very important to set a CGPath to the ‘shadowPath’ property of a layer. If this isn’t done, Core Animation will determine the shape of the shadow by testing the opacity of each pixel in the layer, which is incredibly time-consuming.
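Setting the shadow path is a one-liner once you know the layer's shape (a sketch with illustrative names and values):

```swift
import QuartzCore
import UIKit

// Sketch: an explicit shadowPath means Core Animation doesn't have to
// derive the shadow's shape from the layer's per-pixel opacity.
let cardLayer = CALayer()
cardLayer.frame = CGRectMake(0, 0, 200, 120)
cardLayer.backgroundColor = UIColor.whiteColor().CGColor
cardLayer.shadowOpacity = 0.5
cardLayer.shadowOffset = CGSizeMake(0, 2)
cardLayer.shadowPath = UIBezierPath(rect: cardLayer.bounds).CGPath
```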
Another Core Animation level exclusive property I want to mention is the transform property. Unlike the transform property in UIView, the transform property of CALayer allows a full-blown 3D set of transformations. This means you can manipulate and animate views in 3D space, creating cool looking perspective views.
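A common way to get that perspective look is via the 'm34' component of a CATransform3D (a sketch; the layer name and values are illustrative):

```swift
import QuartzCore
import UIKit

// Sketch: rotating a layer in 3D, with perspective.
let perspectiveLayer = CALayer()
var transform = CATransform3DIdentity
transform.m34 = -1.0 / 500.0 // a small negative m34 supplies the perspective
transform = CATransform3DRotate(transform, CGFloat(M_PI_4), 0, 1, 0) // 45 degrees around the Y axis
perspectiveLayer.transform = transform
```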
Even though iOS 7’s design language eschews most types of 3D graphics in favour of a clean, flat design, the use of 3D translated layers are still put to good use in certain parts of the system.
Another app that famously made good use of 3D transforms was 'Flipboard', which won iPad App of the Year in 2010. It uses 3D transforms for its transition animation when turning between pages.
Finally, one cool little feature of Core Animation is the ability to blend layers on top of each other. This isn't officially supported on iOS, and requires the use of a private API call in order to enable it. However, given that it's simply a string property, you PROBABLY would get away with it. ;) It's possible for CALayers to perform blending operations on top of each other to produce very interesting, dynamic visual effects.
I originally discovered this when curiosity got the better of me, and I decided to deconstruct and introspect the famous iOS 'slide to unlock' visual effect. As it turns out, this effect is achieved through multiple layers being blended on top of each other, which creates a very visually appealing, 'glinty' effect.
On that note, another piece of software I'd like to recommend is 'Reveal' by Itty Bitty Apps. This OS X tool makes it very easy to introspect the visible UI of an app as it's running, making it very easy to debug and tweak UIView layouts in realtime. It was invaluable in letting me deconstruct the original 'slide to unlock' view.
Alright. Hopefully you all learned something new about Core Animation just now! :)
While we’ve got the rendering aspect down now, next we’ll talk briefly about how to go about animating these layers.
One strength of UIKit is its animation APIs. It’s VERY easy to setup a view animation, and Swift has further simplified the syntax with closures. For example, this is all that is needed to move a view 500 points to the left.
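That kind of one-liner might look like this (a sketch in Swift 2 syntax; the view and distance are illustrative, not the exact slide code):

```swift
import UIKit

// Sketch: move a view 500 points to the left with a closure-based animation.
let myView = UIView(frame: CGRectMake(500, 0, 100, 100))
UIView.animateWithDuration(0.5) {
    myView.frame.origin.x -= 500 // slides the view 500 points to the left
}
```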
On Core Animation’s level, creating an animation is a bit more work. This is what’s actually happening behind the scenes on the UIView level. Each ‘change’ to layer is a discrete CABasicAnimation. You create an instance of an animation object, set the before and after values, and then apply it to the layer.
The animation is then applied. It is also worth noting that these changes aren’t applied to the object itself, so it’s usually necessary to update the properties of the object as well.
Unless the property you're animating isn't animatable at the UIView level, it's usually best to do as much as possible at that level. Otherwise, one of the main reasons why you might want to drop down to this level is the fine degree of timing control you get. By explicitly entering the points of a cubic bezier curve, you can control the exact motion curve of an animation. A great site to work out these values is cubic-bezier.com
One awesome little feature of Core Animation is it’s possible to animate the ‘contents’ property of a layer. In practice, this means you can actually get a layer to perform a proper cross-fade animation between two images. This is in contrast to layering two views over each other and animating their alpha values at the same time, which would result in a brief ‘fade out’ artefact when both of the images hit ‘0.5’ in their animation cycle.
Another type of animation is keyframe animations, where you can specify an array of values, and an array of time intervals that the layer will animate through within the given duration.
More importantly, one extremely powerful feature of Core Animation, unavailable with UIView, is the ability to animate layers along CGPaths; in other words, along curves. This can help create much more dynamic animations, and can be used in subtle ways to improve your app’s experience.
It should be apparent by now that for every animation you wish to add to a layer, a separate CAAnimation object must be created. If you want to have multiple values change in a single animation session, it’s possible to group separate animations in a single animation group.
Finally, one of the more appreciated features of UIView animations is an optional closure that can be called when the animation is completed. Traditionally, in Core Animation, this has meant setting a delegate object on the CAAnimation in question, but this can end up being very messy to manage.
Instead, a more modern approach is the ability to encapsulate a group of animations inside a single CATransaction, and set a completion block on it.
As you can see, animating in Core Animation allows for much greater potential, but also requires a lot more code. Just like PaintCode however, certain apps exist that can help automate the amount of code generation required to pull off an effect.
CoreAnimator is one that has been getting a lot of praise lately. It’s been billed as being able to help animate simple app icons, all the way up to building games. I’ve played with it a few times and it looked really promising.
Finally, let’s look at some of the other CALayer types there are on iOS, and how they can be used.
These layers still take advantage of the GPU, so they're still the recommended way over Core Graphics. That being said, some of them DO rely on a Core Graphics pass on the CPU as well, which may need to be considered. As mentioned earlier in this talk, inserting a new CALayer subclass into a UIView is as easy as overriding the layerClass method in your subclass.
Tile layers are really prominent in apps like iBooks. When placed in a scroll view and zoomed in, tile layers observe their zoom level, and at determined zoom levels, will trigger an asynchronous redraw on the CPU of their content at the current zoom level. This means for content like PDF files, the zoomed in, visible portion can be redrawn at a higher resolution. Additionally, tile layers cache these zoomed-in bitmaps and are very efficient at re-using them.
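Hooking a tile layer into a view uses the same layerClass override from earlier (a sketch; the class name and drawing logic are illustrative):

```swift
import QuartzCore
import UIKit

// Sketch: a UIView backed by CATiledLayer, as used for zoomable PDF-style content.
class TiledContentView: UIView {
    override class func layerClass() -> AnyClass {
        return CATiledLayer.self
    }
    override func drawRect(rect: CGRect) {
        // Called asynchronously, one tile at a time, at the current level
        // of detail; draw the visible content with Core Graphics here.
    }
}
```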
Gradient layers do exactly what they say. By providing an array of colours and an array of points, it’s possible to dynamically render a gradient pattern into a layer. This can be used for really good subtle effects. For example, in Safari’s tab view, not only are the layers there being translated into 3D, they also have a subtle dark gradient applied which enhances their depth.
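A subtle depth gradient like Safari's might be sketched like this (illustrative names and values):

```swift
import QuartzCore
import UIKit

// Sketch: a subtle dark gradient fading in over the bottom half of a layer.
let gradientLayer = CAGradientLayer()
gradientLayer.frame = CGRectMake(0, 0, 200, 300)
gradientLayer.colors = [UIColor.clearColor().CGColor,
                        UIColor(white: 0.0, alpha: 0.4).CGColor]
gradientLayer.locations = [0.5, 1.0] // start the fade halfway down
```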
Replicator layers are incredibly efficient. All you need to do is provide a single layer, and the replicator layer will duplicate it on the GPU. This means being able to have a huge number of layers on-screen, without the performance hit you’d suffer managing all of those on the CPU. The replicated layers can be modified in terms of their colour and position, but sadly not their contents. As such, this might be useful for games or a 3D column of thumbnails, but not really useful elsewhere.
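Here's a minimal sketch of the idea: one source layer, duplicated entirely on the GPU (names and values are illustrative):

```swift
import QuartzCore
import UIKit

// Sketch: one dot layer, replicated into a fading row on the GPU.
let replicatorLayer = CAReplicatorLayer()
replicatorLayer.instanceCount = 10
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(20, 0, 0) // each copy shifted 20pt
replicatorLayer.instanceAlphaOffset = -0.1 // each copy slightly more transparent

let dot = CALayer()
dot.frame = CGRectMake(0, 0, 10, 10)
dot.cornerRadius = 5
dot.backgroundColor = UIColor.whiteColor().CGColor
replicatorLayer.addSublayer(dot)
```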
Shape Layers are incredibly useful, and the effect they can produce is very prevalent on iOS 7. By providing a CGPath, it’s possible to get the shape layer to either fill the path, or stroke it with a line. This has most commonly been used for effects such as the loading indicator for apps in the App Store.
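An App Store-style loading ring can be sketched with a stroked shape layer (illustrative names and values):

```swift
import QuartzCore
import UIKit

// Sketch: a circular progress ring drawn by stroking a path.
let progressLayer = CAShapeLayer()
progressLayer.path = UIBezierPath(ovalInRect: CGRectMake(0, 0, 40, 40)).CGPath
progressLayer.fillColor = nil // stroke only
progressLayer.strokeColor = UIColor.blueColor().CGColor
progressLayer.lineWidth = 3
progressLayer.strokeEnd = 0.3 // animatable from 0 to 1 to 'draw' the ring
```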
Most likely spawning from the smoke effects on OS X, emitter layers let you create a layer that displays a series of animated, emitted particles. This emission pattern can be very finely controlled, and depending on the texture used, a wide variety of effects is possible.
There may be a few instances in an app UI where this effect might be useful, but I can say it’s definitely useful for game UIs.
Apart from that, there are a few other layers worth mentioning.
CATextLayer - Similar to a UILabel in that it renders text to the layer.
CAScrollLayer - Scrolls large amounts of content. This is probably more useful on OS X than iOS on account of having UIScrollView.
CATransformLayer - A layer that converts the transformation space into proper 3D.
CAEAGLLayer / CAMetalLayer - The layers that render commands from the OpenGL/Metal APIs. Used as the entry point for game engines.
And there we have it! Hopefully you learned something new!
Thanks a lot for watching! And I hope you have an enjoyable week!