Researchers Develop Tiny Depth Sensor Inspired by Spider Eyes

Your smartphone might have a few different depth-sensing technologies for features like face unlock and portrait-mode photos. The exact method of measuring the distance to a subject varies, but all of them could one day be replaced by a new type of sensor based on nature. A team of Harvard researchers has designed the new 3D sensor using the same technique as a jumping spider.

Most depth-sensing systems in use today rely on stereo vision (multiple sensors a set distance apart) or projected light (IR illumination). A jumping spider has eight eyes, but it doesn't use stereo vision the way humans do to estimate distance; it doesn't even have the brainpower to process vision as we do. Instead, each eye uses a multilayered retina to capture images with different degrees of blur depending on distance. As a result, jumping spiders can judge the distance to their prey with remarkable accuracy across a wide field of view.

The Harvard team used this as a model for its new "metalens" sensor, which can calculate distance without any traditional optical elements. It doesn't have layers like a spider eye, but it does split incoming light to form two differently defocused images on a single photosensor. This approach is known as "depth from defocus."

Of course, the key to the jumping spider's hunting prowess is the way its nervous system interprets the blurred images as a depth map. The team developed an AI-powered version of that, too. Data from the metalens feeds into a custom algorithm that compares the two split images and generates a real-time depth map showing how far away the target is. Like the spider's vision processing, this process is highly efficient: no bulky sensors or powerful CPUs are needed to generate the distance map. The metalens sensor used in the experiment is only three millimeters across.
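The idea described above can be sketched in a few lines of code. This is a toy illustration of depth from defocus, not the Harvard team's actual algorithm (which uses a trained neural network on metalens data): the blur model (`defocus_sigma`), the step-edge test scene, and the sharpness-ratio comparison are all hypothetical assumptions chosen to keep the example simple. The principle is the same, though — two captures of one scene at different focus settings blur by different amounts, and comparing them recovers distance.

```python
# Toy depth-from-defocus sketch (illustrative only; the blur model and
# scene below are assumptions, not the researchers' real pipeline).
import math

def gaussian_kernel(sigma, radius=8):
    # Normalized 1D Gaussian blur kernel.
    ks = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(ks)
    return [k / s for k in ks]

def blur(signal, sigma):
    # Convolve a 1D signal with a Gaussian, clamping at the edges.
    kernel = gaussian_kernel(sigma)
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def sharpness(signal):
    # Mean absolute gradient as a crude sharpness measure.
    return sum(abs(b - a) for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

# Hypothetical blur model: defocus grows with distance from the focal plane.
def defocus_sigma(depth, focal_depth):
    return 0.5 + abs(depth - focal_depth)

# A simple step-edge scene standing in for real image content.
scene = [0.0] * 32 + [1.0] * 32

def estimate_depth(img_near, img_far, candidates, focal_near, focal_far):
    # Pick the candidate depth whose predicted sharpness ratio between the
    # two focus settings best matches the observed ratio.
    observed = sharpness(img_near) / sharpness(img_far)
    return min(candidates, key=lambda d: abs(
        sharpness(blur(scene, defocus_sigma(d, focal_near))) /
        sharpness(blur(scene, defocus_sigma(d, focal_far))) - observed))

# Simulate two captures of a scene at depth 2.0 with focal planes 1.0 and 4.0.
true_depth = 2.0
img_near = blur(scene, defocus_sigma(true_depth, 1.0))
img_far = blur(scene, defocus_sigma(true_depth, 4.0))
print(estimate_depth(img_near, img_far, [1.0, 2.0, 3.0], 1.0, 4.0))  # → 2.0
```

The key design point is that neither image alone reveals depth (a blurry image could simply be a blurry scene); only the *relative* defocus between the two captures encodes distance, which is why the metalens splits light into two images rather than capturing one.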

The researchers see potential for metalens depth sensing in self-driving cars and robots. Rather than relying on a few cameras spread around a vehicle and complex algorithms to generate depth maps, a large number of tiny metalenses could quickly and easily tell the computer how far away everything is. The technology could also come to phones in the future, replacing bulky multi-sensor 3D platforms like Apple's Face ID and Google's Face Match.