
Using the properties of Primate Motion Sensitive Neurons to extract camera motion and depth from brief 2-D Monocular Image Sequences

Abstract
Humans and most animals can run or fly and navigate efficiently through cluttered environments while avoiding obstacles in their way. Replicating this advanced skill in autonomous robotic vehicles currently requires a vast array of sensors coupled with computers that are bulky, heavy and power-hungry. The human eye and brain have had millions of years to develop an efficient solution to the problem of visual navigation, and we believe they form the best system to reverse engineer. Our brain and visual system appear to solve the visual odometry problem very differently from most computer vision approaches. We show how a neural-based architecture is able to extract self-motion information and depth from monocular 2-D video sequences, and we highlight how this approach differs from standard computer vision techniques. We previously demonstrated how our system works during pure translation of a camera. Here, we extend the approach to the case of combined translation and rotation.
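For readers unfamiliar with why adding rotation makes the problem harder, the following is a minimal sketch of the classical perspective image-motion model (in the style of Longuet-Higgins and Prazdny); this is standard background material and the notation is ours, not taken from the paper. For a camera translating with velocity T = (T_x, T_y, T_z) and rotating with angular velocity \omega = (\omega_x, \omega_y, \omega_z), the flow (u, v) at image point (x, y) (focal length normalised to 1) viewing a scene point at depth Z is:

u = \frac{x T_z - T_x}{Z} + xy\,\omega_x - (1 + x^2)\,\omega_y + y\,\omega_z
v = \frac{y T_z - T_y}{Z} + (1 + y^2)\,\omega_x - xy\,\omega_y - x\,\omega_z

Only the translational terms depend on the depth Z; the rotational terms do not. So under pure translation the flow field directly encodes heading and relative depth, whereas with combined translation and rotation the rotational component must first be estimated and discounted before depth can be recovered, which is the extension addressed here.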
Type
Conference Contribution
Citation
Perrone, J. A., Cree, M. J., & Hedayati, M. (2019). Using the properties of Primate Motion Sensitive Neurons to extract camera motion and depth from brief 2-D Monocular Image Sequences. In International Conference on Computer Analysis of Images and Patterns (Vol. 11678, pp. 600–612). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-29888-3_49
Date
2019
Publisher
Springer
Rights
© Springer Nature Switzerland AG 2019. This is the author's accepted version. The final publication is available at Springer via dx.doi.org/10.1007/978-3-030-29888-3_49