
      Testing a biologically-based system for extracting depth from brief monocular 2-D video sequences

      Perrone, John A.; Cree, Michael J.; Hedayati, Mohammad; Corlett, Dale
      Files
      testing-biologically-perrone.pdf
Accepted version, 13.82 MB
      DOI
       10.1109/IVCNZ.2018.8634781
      Citation
      Perrone, J. A., Cree, M. J., Hedayati, M., & Corlett, D. (2018). Testing a biologically-based system for extracting depth from brief monocular 2-D video sequences. In 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ). Auckland, New Zealand: IEEE. https://doi.org/10.1109/IVCNZ.2018.8634781
      Permanent Research Commons link: https://hdl.handle.net/10289/12324
      Abstract
Knowledge of the 3-D layout in front of a moving robot or vehicle is essential for obstacle avoidance and navigation. Currently the most common methods for acquiring that information rely on ‘active’ technologies which project light into the world (e.g., LIDAR). Some passive (non-emitting) systems use stereo cameras, but only a relatively small number of techniques attempt to solve the 3-D layout problem using the information from a single video camera. A single camera offers many advantages, such as lighter weight and fewer video streams to process. The visual motion occurring in brief monocular video sequences contains information regarding the movement of the camera and the structure of the scene. Extracting that information is difficult, however, because it relies on accurate estimates of the image motion velocities (optical flow) and knowledge of the camera motion, especially the heading direction. We have solved these two problems and can now obtain image flow and heading direction using mechanisms based on the properties of motion-sensitive neurones in the brain. This allows us to recover depth information from monocular video sequences, and here we report on a series of tests that assess the accuracy of this novel approach to 3-D depth recovery.
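The abstract's core claim, that depth follows once optical flow and heading direction are known, can be illustrated with a short sketch. This is not the authors' biologically-based system; it is a minimal motion-parallax example, assuming a purely translating camera whose heading (focus of expansion, FOE) and per-pixel flow are already given. The function name, array shapes, and the synthetic flow field are illustrative assumptions.

```python
# Hedged sketch (not the paper's method): for a purely translating camera,
# the translational flow at a pixel points away from the FOE with magnitude
# proportional to (distance from FOE) / depth, so relative depth is the
# distance from the FOE divided by the flow magnitude.
import numpy as np

def relative_depth_from_flow(flow, foe):
    """Estimate relative depth, up to the unknown translation speed.

    flow : (H, W, 2) array of optical-flow vectors (pixels/frame).
    foe  : (x, y) image coordinates of the focus of expansion (heading).
    Returns an (H, W) map of relative depth; larger = farther.
    """
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Vector from the FOE to each pixel.
    dx, dy = xs - foe[0], ys - foe[1]
    dist_from_foe = np.hypot(dx, dy)
    flow_mag = np.hypot(flow[..., 0], flow[..., 1])
    # Guard against division by zero where the flow is (near) zero.
    return dist_from_foe / np.maximum(flow_mag, 1e-6)

# Example with a synthetic radial flow field about an assumed central FOE.
h, w = 48, 64
foe = (w / 2, h / 2)
ys, xs = np.mgrid[0:h, 0:w]
true_depth = 5.0 + 0.05 * xs                  # depth increasing to the right
flow = np.stack(((xs - foe[0]) / true_depth,
                 (ys - foe[1]) / true_depth), axis=-1)
depth = relative_depth_from_flow(flow, foe)   # matches true_depth (up to the
                                              # unknown speed in real footage)
```

In real footage the rotational flow component must be removed first and the heading estimated, which is where the paper's biologically-based flow and heading mechanisms come in; this sketch only covers the final depth-from-parallax step.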
      Date
      2018
      Type
      Conference Contribution
      Publisher
      IEEE
      Rights
      This is an author’s accepted version of an article published in the Proceedings of 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ). © 2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
      Collections
      • Science and Engineering Papers [3073]
      • Arts and Social Sciences Papers [1403]