Research Commons

      Using the properties of Primate Motion Sensitive Neurons to extract camera motion and depth from brief 2-D Monocular Image Sequences

      Perrone, John A.; Cree, Michael J.; Hedayati, Mohammad
      Files
      caip2019-paper-perrone-etal-2019.pdf
Accepted version, 4.979 MB
      DOI
       10.1007/978-3-030-29888-3_49
      Citation
      Perrone, J. A., Cree, M. J., & Hedayati, M. (2019). Using the properties of Primate Motion Sensitive Neurons to extract camera motion and depth from brief 2-D Monocular Image Sequences. In International Conference on Computer Analysis of Images and Patterns (Vol. 11678, pp. 600–612). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-29888-3_49
      Permanent Research Commons link: https://hdl.handle.net/10289/13019
      Abstract
Humans and most animals can run or fly and navigate efficiently through cluttered environments while avoiding obstacles in their way. Replicating this advanced skill in autonomous robotic vehicles currently requires a vast array of sensors coupled with computers that are bulky, heavy and power-hungry. The human eye and brain have had millions of years to develop an efficient solution to the problem of visual navigation, and we believe it is the best system to reverse engineer. Our brain and visual system appear to use a very different solution to the visual odometry problem compared to most computer vision (CV) approaches. We show how a neural-based architecture is able to extract self-motion information and depth from monocular 2-D video sequences and highlight how this approach differs from standard CV techniques. We previously demonstrated how our system works during pure translation of a camera; here, we extend this approach to the case of combined translation and rotation.
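For context on the geometry underlying this kind of monocular self-motion and depth estimation, the following is a generic sketch based on the standard Longuet-Higgins and Prazdny pinhole motion-field model, not necessarily the exact formulation used in the paper. A camera with focal length f, translational velocity T = (T_x, T_y, T_z) and rotational velocity ω = (ω_x, ω_y, ω_z) produces, at image point (x, y) where the scene depth is Z(x, y), the image-flow components

u(x, y) = \frac{x T_z - f T_x}{Z(x, y)} + \frac{x y}{f}\,\omega_x - \left(f + \frac{x^2}{f}\right)\omega_y + y\,\omega_z

v(x, y) = \frac{y T_z - f T_y}{Z(x, y)} + \left(f + \frac{y^2}{f}\right)\omega_x - \frac{x y}{f}\,\omega_y - x\,\omega_z

The rotational terms do not depend on depth, while the translational terms scale with 1/Z(x, y). Once the self-motion parameters (T, ω) are estimated from the measured flow, depth at each image point can be recovered from the residual translational component. This also shows why combined translation and rotation is a harder case than the pure camera translation treated in the authors' earlier work: the depth-independent rotational flow must first be separated from the depth-carrying translational flow.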
      Date
      2019
      Type
      Conference Contribution
      Publisher
      Springer
      Rights
      © Springer Nature Switzerland AG 2019. This is the author's accepted version. The final publication is available at Springer via dx.doi.org/10.1007/978-3-030-29888-3_49
      Collections
      • Science and Engineering Papers [3069]
      • Arts and Social Sciences Papers [1403]

      Usage

Downloads, last 12 months: 77
