Research Commons
      A neural-based code for computing image velocity from small sets of middle temporal (MT/V5) neuron inputs

      Perrone, John A.
      DOI
       10.1167/12.8.1
      Citation
Perrone, J. A. (2012). A neural-based code for computing image velocity from small sets of middle temporal (MT/V5) neuron inputs. Journal of Vision, 12(8), 1-31.
      Permanent Research Commons link: https://hdl.handle.net/10289/6823
      Abstract
It is still not known how the primate visual system is able to measure the velocity of moving stimuli such as edges and dots. Neurons have been found in the Medial Superior Temporal (MST) area of the primate brain that respond at a rate proportional to the speed of the stimulus, but it is not clear how this property is derived from the speed-tuned Middle Temporal (MT) neurons that precede area MST along the visual motion pathway. I show that a population code based on the outputs from a number of MT neurons is susceptible to errors if the MT neurons are tuned to a broad range of spatial frequencies and have receptive fields that span a wide range of sizes. I present a solution that uses the activity of just three MT units within a velocity channel to estimate the velocity using a weighted vector average (centroid) technique. I use a range of velocity channels (1, 2, 4, and 8°/s) with inhibition between them so that only a single channel passes the velocity estimate on to the next stage of processing (MST). I also include a contrast-dependent redundancy-removal stage which provides tighter spatial resolution for the velocity estimates under conditions of high contrast but which trades off spatial compactness for greater sensitivity at low contrast. The new model produces an output signal proportional to the stimulus input velocity (consistent with MST neurons) and its input stages have properties closely tied to those of neurons in areas V1 and MT.
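
The weighted vector average (centroid) step described in the abstract can be illustrated with a small numerical sketch. The Python snippet below is not the paper's implementation: the log-Gaussian speed tuning curves, the particular preferred speeds, the three-unit layout around each channel, and the simple winner-take-all rule across channels are all assumptions made here purely for illustration.

      import numpy as np

      def mt_response(stimulus_speed, preferred_speed, bandwidth=1.0):
          # Illustrative log-Gaussian speed tuning, a common descriptive
          # form for MT speed-tuned cells (assumed here, not from the paper).
          return np.exp(-(np.log2(stimulus_speed / preferred_speed) ** 2)
                        / (2 * bandwidth ** 2))

      def channel_estimate(stimulus_speed, channel_speed):
          # Three hypothetical MT units flanking the channel's preferred speed.
          preferred = np.array([channel_speed / 2, channel_speed, channel_speed * 2])
          weights = mt_response(stimulus_speed, preferred)
          # Centroid: preferred speeds weighted by each unit's activity.
          centroid = float(np.sum(weights * preferred) / np.sum(weights))
          return centroid, float(weights.sum())

      def estimate_velocity(stimulus_speed, channel_speeds=(1, 2, 4, 8)):
          # Simple winner-take-all stand-in for the inhibition between channels:
          # only the most active channel passes its estimate on.
          estimates = [channel_estimate(stimulus_speed, c) for c in channel_speeds]
          best_estimate, _ = max(estimates, key=lambda e: e[1])
          return best_estimate

      if __name__ == "__main__":
          for s in (1.5, 3.0, 6.0):
              print(f"stimulus {s:.1f} deg/s -> estimate {estimate_velocity(s):.2f} deg/s")

Run as a script, this prints one speed estimate per test stimulus; the point is only to show how a handful of speed-tuned responses can be combined into a single velocity read-out by a centroid rule, not to reproduce the model's actual tuning or inhibition.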
      Date
      2012
      Type
      Journal Article
      Publisher
      ARVO
      Collections
      • Arts and Social Sciences Papers [1423]