Apple’s impressive technology stack presently offers only a limited number of computer vision capabilities. The main one that comes to mind is face detection: you can see it at work in the Camera app on your iDevice and in the Photos app on your Mac.
4 October 2017 – I updated the code for Swift 4 and iOS 11. You can find it here.
When an iPhone processes an image from one of its cameras, its main purpose is to make the image look good to us. The iPhone has no interest in interpreting the image itself; it simply adjusts the brightness and the colours so that we can enjoy the image optimally.
There is however one exception. The iPhone can detect whether there is a human face in the image. It is not interested in who it is, merely that there is a face present. It can even keep track of multiple faces. Continue reading “Detecting position and orientation of faces with iOS”
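As a taste of what that looks like in code, here is a minimal sketch using Core Image’s `CIDetector` face detector, which reports both the position and the in-plane rotation of each detected face. The API names are Core Image’s own; the image path is a placeholder, and a real app would load a `CIImage` from the camera instead.

```swift
import CoreImage

// Load an image to scan (placeholder path for illustration).
guard let image = CIImage(contentsOf: URL(fileURLWithPath: "portrait.jpg")) else {
    fatalError("Could not load image")
}

// Create a face detector; high accuracy is slower but finds more faces.
let detector = CIDetector(ofType: CIDetectorTypeFace,
                          context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])!

// The detector returns one CIFaceFeature per face it finds.
for case let face as CIFaceFeature in detector.features(in: image) {
    print("Face at \(face.bounds)")            // position, in image coordinates
    if face.hasFaceAngle {
        print("Tilted by \(face.faceAngle)°")  // rotation of the face in the image plane
    }
}
```

Note that `CIFaceFeature` tells you where a face is and how it is oriented, but nothing about whose face it is, which is exactly the limitation described above.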
The short answer is: not much.
Well. Maybe we first have to talk about what it means to “see”. Vision is an extremely rich natural phenomenon. Most of us humans have the uncanny ability to turn light into meaning – as do many other species in the animal kingdom. Vision is mainly used for navigation and recognition. We use our eyes to detect objects in our environment and use the shapes and layout of these objects to navigate our way through life. Continue reading “What can your iPhone see?”
This week I started learning Swift in earnest. Swift is a new programming language developed by Apple to replace Objective-C. I already like it, and I definitely enjoy the learning process. Some iOS developers I know talk about how much they love Swift. I am not there yet, but I have found three things that may ignite my love. Continue reading “Switching to Swift”