I have been watching Norman Wildberger’s videos on all things mathematics for about 10 years. To say that I have learned a lot is the understatement of the decade.
His most recent video is a talk recorded in July this year, titled “How Chromogeometry transcends Klein’s Erlangen Program for Planar Geometries”. It is fascinating throughout, but my interest was particularly piqued when, at 25:13, he starts talking about ellipses.
Continue reading “Diagonals of an ellipse”
I am building systems that can understand what they see. In this day and age, the necessary hardware is easily accessible since a digital camera and a computer can now be purchased for well under € 100. It is the software that is the real challenge.
Continue reading “Understanding events”
A major assumption in modern computer vision is that you have to track points on surfaces in order to see in 3D. You can use two images from two static cameras (“stereo”), or two images from one moving camera (“motion”). Continue reading “Solving correspondence”
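Once correspondence is solved, recovering depth in the stereo case reduces to similar triangles. The sketch below is a minimal illustration with hypothetical numbers (focal length, baseline, and disparity are made up for the example), not code from the post itself:

```python
# Depth from stereo disparity: a minimal sketch with hypothetical numbers.
# Once correspondence gives the disparity d (in pixels) of a point between
# the left and right image, similar triangles yield the depth:
#     Z = f * B / d
# where f is the focal length in pixels and B the baseline between cameras.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth in metres of a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# Example: f = 800 px, baseline = 0.1 m, disparity = 16 px gives Z = 5 m.
print(depth_from_disparity(800.0, 0.1, 16.0))
```

The same formula shows why correspondence is the hard part: the geometry is trivial once you know which pixel in one image matches which pixel in the other.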
One pleasant surprise for computer vision on a mobile device is that we can detect the 3D orientation of the camera from other sensors. An iPhone has an accelerometer and a gyroscope (among other sensors not discussed here). Continue reading “Internal inertial sensors”
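A common way to fuse these two sensors is a complementary filter: the gyroscope is accurate over short intervals but drifts, while the accelerometer gives a drift-free gravity direction that is noisy. The sketch below illustrates the idea for a single tilt axis, with made-up sample values; it is not the filter iOS uses internally:

```python
import math

# Complementary filter: a common technique for fusing accelerometer and
# gyroscope readings into a stable tilt estimate (one axis, hypothetical
# values). The gyro integral is trusted short-term (weight alpha); the
# accelerometer's gravity angle corrects the drift (weight 1 - alpha).

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Update a tilt angle (radians) from one sensor sample.

    angle       -- previous estimate
    gyro_rate   -- angular velocity from the gyroscope (rad/s)
    accel_angle -- tilt implied by the accelerometer's gravity vector
    dt          -- time step (s)
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# With no rotation, the estimate converges toward the accelerometer angle:
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=math.radians(30), dt=0.01)
```

On iOS you would not implement this yourself: Core Motion already provides a fused attitude estimate. The sketch only shows why combining the two sensors beats using either one alone.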
What components do you need to get Augmented Reality (AR) technology running on your mobile phone? Continue reading “Mobile AR is a 3D revolution”
Our imagination is a powerful cognitive skill. When I walked into the living room of my new apartment, I experienced a rectangular empty space with a dusty concrete floor and hollow sounding acoustics. But in my mind I was already furnishing and decorating. I imagined a blue carpet on the floor, the walls lined with bookcases, a large table on the far end, and a comfortable couch near the window. Continue reading “Virtual furniture at the right scale”
Here is an interview that journalist Jim Jansen of the newspaper Het Parool had with me. He wanted to know how scientists spend their summer holiday. The interview is in Dutch, but I have added an English translation. Continue reading “Interview with newspaper “Het Parool””
Apple’s impressive technology stack presently offers only a limited set of computer vision capabilities. The main one that comes to mind is face detection. You can use it in the Camera app on your iDevice and in the Photos app on your Mac.
Continue reading “All eyes on WWDC 2016”
When I teach my workshop on 3D vision, the students also play with a 3D graphics framework called Scene Kit. It is very powerful and easy to use. With just a few lines of code they can create a vivid 3D scene. At some point in the tutorial, the students have created a scene with a red sphere on a grey shiny floor against a deep blue sky. Then they start playing with the controls, changing the distance, rotating the scene, or panning to the side. Invariably someone pushes the sphere to one of the corners of the image and then checks with me whether there is a mistake in the code. Continue reading “Spheres in perspective”
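There is no mistake: a flat image plane stretches a sphere’s outline as it moves off-axis. The silhouette of a sphere is a cone of half-angle asin(r/d) around the direction of its centre, and slicing that cone with the image plane gives an ellipse whose radial extent grows towards the corners. A small numeric sketch (hypothetical camera with focal length 1) makes this concrete:

```python
import math

# Why an off-centre sphere looks stretched in perspective (image plane at
# z = 1, hypothetical numbers). The sphere's silhouette cone has half-angle
# beta = asin(r / d); its intersection with a flat image plane widens in
# the radial direction as the sphere moves away from the optical axis.

def radial_extent(offset_angle, r, d):
    """Half-width of a sphere's image along the direction away from centre.

    offset_angle -- angle between the optical axis and the sphere centre (rad)
    r, d         -- sphere radius and distance of its centre from the camera
    """
    beta = math.asin(r / d)
    return (math.tan(offset_angle + beta) - math.tan(offset_angle - beta)) / 2.0

on_axis = radial_extent(0.0, 1.0, 10.0)
off_axis = radial_extent(math.radians(40), 1.0, 10.0)
# The off-axis image is noticeably wider radially than the on-axis one.
```

So the elongated sphere in the corner is the mathematically correct perspective projection; it only looks wrong because our eyes rarely inspect the periphery of a wide-angle image head-on.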
4 October 2017 – I updated the code for Swift 4 and iOS 11. You can find it here.
When an iPhone is processing an image from one of its cameras, the main purpose is to make it look good for us. The iPhone has no interest in interpreting the image itself. It just wants to adjust the brightness and the colours, so that we can optimally enjoy the image.
There is however one exception. The iPhone can detect whether there is a human face in the image. It is not interested in who it is, merely that there is a face present. It can even keep track of multiple faces. Continue reading “Detecting position and orientation of faces with iOS”