Perceives depth using a single camera by iterating through its available focal lengths.


We aim to provide a tool that eases depth perception from single-sensor cameras. This solution is cheaper and less wasteful than manufacturing new camera arrays to replace existing camera systems.

To perceive depth from a single position, the camera changes its focal length to bring objects in and out of focus. How sharp an object appears relative to others is directly related to its position relative to the focal plane.

Over the weekend, we are producing an API and an Android implementation. Future expansions could include optimizing depth of field while keeping enough exposure to accurately interpret clarity, support for more platforms such as iOS and PC, and drivers for DSLRs and other higher-performance cameras.

Logic Process:

  • Takes multiple pictures, each at a different focal length
  • Divides each frame into 256 congruent regions
  • Compares each pixel to all adjacent pixels
  • Calculates a “focus score” for each region
  • Compares the score of the same region across all frames
  • Selects the best frame for analysis
  • Correlates the focal length required for optimal clarity with depth (distance to object)
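The steps above can be sketched in Python. This is a minimal illustration, not the project's actual implementation: frames are assumed to be grayscale 2-D lists, the 256 regions come from a 16×16 grid, the focus score is a simple sum of squared differences against right/down neighbours, and all names are ours:

```python
from typing import List

Frame = List[List[float]]  # grayscale image as rows of pixel intensities


def focus_scores(frame: Frame, grid: int = 16) -> List[List[float]]:
    """Divide the frame into grid*grid congruent regions and score each
    region's sharpness as the sum of squared differences between every
    pixel and its right and lower neighbours (a local-contrast measure)."""
    h, w = len(frame), len(frame[0])
    rh, rw = h // grid, w // grid
    scores = [[0.0] * grid for _ in range(grid)]
    for y in range(h - 1):
        for x in range(w - 1):
            d = ((frame[y][x] - frame[y][x + 1]) ** 2
                 + (frame[y][x] - frame[y + 1][x]) ** 2)
            scores[min(y // rh, grid - 1)][min(x // rw, grid - 1)] += d
    return scores


def best_frame_per_region(frames: List[Frame], grid: int = 16) -> List[List[int]]:
    """For each region, return the index of the frame (i.e. focal-length
    setting) whose score is highest -- the frame where that region is
    sharpest.  Mapping that index back to a focal length yields depth."""
    all_scores = [focus_scores(f, grid) for f in frames]
    return [[max(range(len(frames)), key=lambda i: all_scores[i][r][c])
             for c in range(grid)]
            for r in range(grid)]


# Tiny demo: a flat (defocused) frame vs. a checkerboard (sharp) frame.
blurry = [[0.0] * 4 for _ in range(4)]
sharp = [[float((x + y) % 2) for x in range(4)] for y in range(4)]
print(best_frame_per_region([blurry, sharp], grid=2))  # every region picks frame 1
```

Real implementations typically use a Laplacian- or gradient-based focus measure for robustness to noise, but the neighbour-difference score keeps the sketch self-contained.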

Project Information

License: GNU General Public License version 3.0 (GPL-3.0)

Source Code/Project URL:


Presentation -
Music by BenSound -


  • Matt Merrill
  • Jacob Wilson
  • Rohun Atluri