4 October 2017 – I updated the code for Swift 4 and iOS 11. You can find it here.
When an iPhone processes an image from one of its cameras, its main goal is to make the image look good to us. The iPhone has no interest in interpreting the image itself; it just adjusts the brightness and the colours so that we can enjoy the image at its best.
There is, however, one exception. The iPhone can detect whether there is a human face in the image. It is not interested in who it is, merely that a face is present. It can even keep track of multiple faces.
In an earlier post, What can your iPhone see?, I mentioned that the CoreImage framework contains a class CIFaceFeature that allows you to identify and track multiple faces. It also determines the bounding box of each face and the locations of the eyes and the mouth, and it can even detect a wink and a smile. Finally, it can give you the orientation of the line through the eyes in degrees, with 0 being horizontal.
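As a minimal sketch of how this works (the input image and function name are my own; the detector type, accuracy option, and CIFaceFeature properties are the documented CoreImage API):

```swift
import CoreImage
import UIKit

// Sketch: run the Core Image face detector on a UIImage and inspect
// the CIFaceFeature results. "image" is a hypothetical input.
func detectFaces(in image: UIImage) -> [CIFaceFeature] {
    guard let ciImage = CIImage(image: image) else { return [] }
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    // Ask the detector to also classify smiles and closed eyes.
    let features = detector?.features(in: ciImage,
                                      options: [CIDetectorSmile: true,
                                                CIDetectorEyeBlink: true]) ?? []
    for case let face as CIFaceFeature in features {
        print("bounding box: \(face.bounds)")
        if face.hasFaceAngle {
            // Angle of the line through the eyes, in degrees (0 = horizontal).
            print("face angle: \(face.faceAngle) degrees")
        }
        if face.hasSmile { print("smiling") }
        if face.leftEyeClosed != face.rightEyeClosed { print("wink") }
    }
    return features.compactMap { $0 as? CIFaceFeature }
}
```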
What I somehow missed entirely is that since iOS 6 there is yet another face detection class, AVMetadataFaceObject, in the AVFoundation framework. It, too, can detect and track faces. Just like CIFaceFeature, it determines the bounding box and the orientation of the face in the image in degrees. (I keep mentioning the angular units because plenty of classes use radians instead of degrees.)
But here is the kicker. It can also detect the orientation of the face in depth: the yaw angle, the rotation of your head about a vertical axis, as when you shake your head to say no. In the example below it detects a yaw angle of 45 degrees.
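A minimal sketch of how you receive these face objects from a live capture session (the class name FaceTracker is mine; the delegate callback, the .face metadata type, and the roll/yaw properties are the documented AVFoundation API in Swift 4, and the code assumes camera permission has been granted):

```swift
import AVFoundation

// Sketch: track faces from the camera via AVCaptureMetadataOutput.
class FaceTracker: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()

    func start() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureMetadataOutput()
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)
        output.setMetadataObjectsDelegate(self, queue: .main)
        // Must be set after the output is added to the session.
        output.metadataObjectTypes = [.face]
        session.startRunning()
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        for case let face as AVMetadataFaceObject in metadataObjects {
            print("face \(face.faceID), bounds \(face.bounds)")
            if face.hasRollAngle {
                // In-plane rotation, in degrees.
                print("roll: \(face.rollAngle) degrees")
            }
            if face.hasYawAngle {
                // Rotation in depth about the vertical axis, in degrees.
                print("yaw: \(face.yawAngle) degrees")
            }
        }
    }
}
```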
To the best of my knowledge, this is the only class in iOS that gives us a 3D feature of a visual object. That is true 3D computer vision in my book!
You can find the code here.
> You can sign up for my newsletter here.