Show simple item record  

dc.contributor.author: Perrone, John A. [en_NZ]
dc.contributor.author: Cree, Michael J. [en_NZ]
dc.contributor.author: Hedayati, Mohammad [en_NZ]
dc.contributor.author: Corlett, Dale [en_NZ]
dc.coverage.spatial: Auckland, New Zealand [en_NZ]
dc.date.accessioned: 2019-02-12T22:16:37Z
dc.date.available: 2018 [en_NZ]
dc.date.available: 2019-02-12T22:16:37Z
dc.date.issued: 2018 [en_NZ]
dc.identifier.citation: Perrone, J. A., Cree, M. J., Hedayati, M., & Corlett, D. (2018). Testing a biologically-based system for extracting depth from brief monocular 2-D video sequences. In 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ). Auckland, New Zealand: IEEE. https://doi.org/10.1109/IVCNZ.2018.8634781 [en]
dc.identifier.isbn: 978-1-7281-0125-5 [en_NZ]
dc.identifier.uri: https://hdl.handle.net/10289/12324
dc.description.abstract: Knowledge of the 3-D layout in front of a moving robot or vehicle is essential for obstacle avoidance and navigation. Currently the most common methods for acquiring that information rely on 'active' technologies which project light into the world (e.g., LIDAR). Some passive (non-emitting) systems use stereo cameras, but only a relatively small number of techniques attempt to solve the 3-D layout problem using the information from a single video camera. A single camera offers many advantages, such as lighter weight and fewer video streams to process. The visual motion occurring in brief monocular video sequences contains information regarding the movement of the camera and the structure of the scene. Extracting that information is difficult, however, because it relies on accurate estimates of the image motion velocities (optical flow) and knowledge of the camera motion, especially the heading direction. We have solved these two problems and can now obtain image flow and heading direction using mechanisms based on the properties of motion-sensitive neurones in the brain. This allows us to recover depth information from monocular video sequences, and here we report on a series of tests that assess the accuracy of this novel approach to 3-D depth recovery. [en_NZ]
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: IEEE [en_NZ]
dc.rights: This is an author's accepted version of an article published in the Proceedings of 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ). © 2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
dc.subject: Cameras [en_NZ]
dc.subject: Estimation
dc.subject: Mathematical model
dc.subject: Video sequences
dc.subject: Streaming media
dc.subject: Visualization
dc.subject: Optical imaging
dc.subject: monocular visual sensor
dc.subject: image motion
dc.subject: depth-from-motion
dc.title: Testing a biologically-based system for extracting depth from brief monocular 2-D video sequences [en_NZ]
dc.type: Conference Contribution
dc.identifier.doi: 10.1109/IVCNZ.2018.8634781 [en_NZ]
dc.relation.isPartOf: 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ) [en_NZ]
pubs.elements-id: 235276
pubs.finish-date: 2018-11-21 [en_NZ]
pubs.publication-status: Published [en_NZ]
pubs.start-date: 2018-11-19 [en_NZ]
dc.identifier.eissn: 2151-2205 [en_NZ]
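The abstract describes recovering depth from optical flow once the heading direction is known. The paper's biologically-based flow and heading mechanisms are not reproduced here; the sketch below shows only the standard geometric relation underlying depth-from-motion for a purely translating camera, where flow magnitude at an image point scales with its distance from the focus of expansion (FOE) divided by depth. The function name and the synthetic example are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def relative_depth_from_flow(points, flows, foe):
    """Relative depth (up to the unknown translation speed) for a purely
    translating camera. For translation with forward component Tz, the flow
    at image point p is (Tz / Z) * (p - foe), so Z is proportional to
    ||p - foe|| / ||flow||."""
    points = np.asarray(points, dtype=float)
    flows = np.asarray(flows, dtype=float)
    radial = np.linalg.norm(points - foe, axis=1)   # distance from the FOE
    speed = np.linalg.norm(flows, axis=1)           # flow magnitude
    return radial / speed

# Synthetic check: camera translating along the optical axis (FOE at the
# image origin), two points at true depths 2 and 4 units (Tz = 1).
foe = np.array([0.0, 0.0])
pts = np.array([[10.0, 0.0], [10.0, 0.0]])
Z_true = np.array([2.0, 4.0])
flow = (1.0 / Z_true)[:, None] * (pts - foe)        # forward model
Z_rel = relative_depth_from_flow(pts, flow, foe)    # recovers [2.0, 4.0]
```

Because only the ratio Tz / Z is observable from a single camera, the recovered depths are relative; an absolute scale would require knowing the camera's translation speed.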

