Flatiron students Steven Zhou and Heidi Hansen explain how Core Image works on iOS and OS X, helping developers process images efficiently without dealing with low-level interactions with the GPU or CPU.
What Is Core Image? An Image-Processing Framework on iOS and OS X
Steve Zhou & Heidi Hansen
What is a Digital Image?
• It’s a collection of pixels, and each pixel represents a specific color at a specific location.
• Combining hundreds of thousands of pixels together forms a complete image.
• It’s a 2-dimensional array of pixels: X and Y, or rows and columns.
• “Digital image” usually refers to a raster image or bitmap image.
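The “2-dimensional array of pixels” idea can be sketched in a few lines of Swift. This is a purely illustrative 3×3 grayscale grid (the values are made up), where each entry is one pixel’s intensity:

```swift
// A tiny grayscale "image" as a 2-D array of pixels: rows (Y) by columns (X).
// Each value is one pixel's intensity: 0 = black, 255 = white.
let image: [[UInt8]] = [
    [  0,  64, 128],
    [ 64, 128, 192],
    [128, 192, 255]
]

// Reading the single pixel at column x = 2, row y = 0:
let pixel = image[0][2]
print("pixel at (2, 0):", pixel)   // prints "pixel at (2, 0): 128"
```

A real bitmap works the same way, just with far more pixels and usually multiple color channels (e.g. red, green, blue, alpha) per pixel.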
What is Core Image?
• It’s an image-processing framework on both iOS and OS X. It was previously available only on OS X and was first introduced to iOS in iOS 5.
• It’s designed to do “real-time” image processing at the pixel level for both still images and video.
• It operates on image data types from multiple sources, e.g. Core Graphics, Core Video, and Image I/O.
• It allows developers to process images easily and efficiently without dealing with low-level interactions with the GPU or CPU.
• It provides more than 90 built-in image-processing filters on iOS, feature-detection capability, and support for automatic image enhancement.
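The built-in filters are looked up by name at runtime; Core Image can list them for you. A quick sketch (the exact count printed depends on the OS version):

```swift
import CoreImage

// Ask Core Image for the names of all built-in filters.
let names = CIFilter.filterNames(inCategory: kCICategoryBuiltIn)

// 90+ filters on iOS; OS X ships even more.
print("number of built-in filters:", names.count)
print("first few:", names.prefix(3))
```

Each of these names (for example `"CISepiaTone"`) can be passed to `CIFilter(name:)` to instantiate that filter.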
Key Core Image Classes
CIImage
1. An object that represents an image.
2. You can create a CIImage object from different inputs.
CIFilter
1. An object that represents an effect.
2. Has at least one input parameter.
3. Produces a CIImage object as its output.
CIContext
1. An object that draws the result produced by a CIFilter, using either a CPU or GPU rendering path.
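Put together, the three classes form a simple pipeline: a CIImage feeds a CIFilter, and a CIContext renders the result. A minimal sketch in Swift — the solid-color input, the sepia filter, and the intensity value are just illustrative choices (in a real app you would usually start from `CIImage(contentsOf:)` or a Core Video buffer):

```swift
import CoreImage

// 1. CIImage: build a solid-red 100×100 input image programmatically,
//    so the example is self-contained.
let inputImage = CIImage(color: CIColor(red: 1, green: 0, blue: 0))
    .cropped(to: CGRect(x: 0, y: 0, width: 100, height: 100))

// 2. CIFilter: configure one of the built-in filters by name.
let filter = CIFilter(name: "CISepiaTone")!
filter.setValue(inputImage, forKey: kCIInputImageKey)
filter.setValue(0.8, forKey: kCIInputIntensityKey)

// The filter's output is another CIImage — a recipe for the effect,
// not yet rendered pixels.
let outputImage = filter.outputImage!

// 3. CIContext: performs the actual rendering, on the GPU or the CPU.
let context = CIContext()
let rendered = context.createCGImage(outputImage, from: outputImage.extent)
```

Note that no pixel processing happens until the CIContext renders; chaining several CIFilters just builds up a bigger recipe, which Core Image can then execute in one pass.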