Perrone, J. A., Cree, M. J., Hedayati, M., & Corlett, D. (2018). Testing a biologically-based system for extracting depth from brief monocular 2-D video sequences. In 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ). Auckland, New Zealand: IEEE. https://doi.org/10.1109/IVCNZ.2018.8634781
Permanent Research Commons link: https://hdl.handle.net/10289/12324
Knowledge of the 3-D layout in front of a moving robot or vehicle is essential for obstacle avoidance and navigation. Currently, the most common methods for acquiring that information rely on ‘active’ technologies that project light into the world (e.g., LIDAR). Some passive (non-emitting) systems use stereo cameras, but only a relatively small number of techniques attempt to solve the 3-D layout problem using the information from a single video camera. A single camera offers many advantages, such as lighter weight and fewer video streams to process. The visual motion occurring in brief monocular video sequences contains information regarding the movement of the camera and the structure of the scene. Extracting that information is difficult, however, because it relies on accurate estimates of the image motion velocities (optical flow) and knowledge of the camera motion, especially the heading direction. We have solved these two problems and can now obtain image flow and heading direction using mechanisms based on the properties of motion-sensitive neurones in the brain. This allows us to recover depth information from monocular video sequences, and here we report on a series of tests that assess the accuracy of this novel approach to 3-D depth recovery.
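The abstract's core idea — that image flow plus a known heading direction yields depth — can be illustrated with the standard geometry of a translating pinhole camera: flow points radially away from the focus of expansion (FOE, the image of the heading direction), with a magnitude inversely proportional to scene depth. The following is a minimal sketch of that relation only, not the paper's neural implementation; the function name and the pure-translation assumption are illustrative.

```python
import math

def depth_from_flow(px, py, ux, uy, foe_x, foe_y, tz=1.0):
    """Estimate depth Z at image point (px, py), up to the scale of the
    camera's forward speed tz, from measured optical flow (ux, uy).

    Assumes pure camera translation with a known focus of expansion
    (heading) at (foe_x, foe_y). Under that assumption the flow magnitude
    is |v| = tz * r / Z, where r is the distance from the FOE, so
    Z = tz * r / |v|.
    """
    r = math.hypot(px - foe_x, py - foe_y)   # radial distance from the FOE
    speed = math.hypot(ux, uy)               # flow magnitude (pixels/frame)
    if speed == 0.0:
        return float('inf')                  # no image motion: point at infinity
    return tz * r / speed

# Example: a point 10 pixels right of the FOE moving 2 pixels/frame
# lies at depth 5 (in units of the distance travelled per frame).
z = depth_from_flow(10.0, 0.0, 2.0, 0.0, 0.0, 0.0, tz=1.0)
```

Note that monocular depth is recoverable only up to this overall scale: without knowing the camera's true speed, the same flow field is consistent with a larger, faster-receding scene, which is why the result is expressed in units of tz.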
This is an author’s accepted version of an article published in the Proceedings of 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ). © 2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.