      The architecture of an optimistic CPU: the WarpEngine

      Cleary, John G.; Pearson, Murray W.; Kinawi, Husam
Files: uow-cs-wp-1994-16.pdf (2.54 MB)
      Citation
Cleary, J. G., Pearson, M. W., & Kinawi, H. (1994). The architecture of an optimistic CPU: the WarpEngine (Working Paper 94/16). Hamilton, New Zealand: University of Waikato, Department of Computer Science.
      Permanent Research Commons link: https://hdl.handle.net/10289/1145
      Abstract
      The architecture for a shared memory CPU is described. The CPU allows for parallelism down to the level of single instructions and is tolerant of memory latency. All executable instructions and memory accesses are time stamped. The TimeWarp algorithm is used for managing synchronisation. This algorithm is optimistic and requires that all computations can be rolled back. The basic functions required for implementing the control and memory system used by TimeWarp are described. The memory model presented to the programmer is a single linear address space modified by a single thread of control. Thus, at the software level there is no need for explicit synchronising actions when accessing memory. The physical implementation, however, is multiple CPUs with their own caches and local memory with each CPU simultaneously executing multiple threads of control.
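The rollback mechanism the abstract refers to can be illustrated with a small, self-contained sketch of Time Warp-style optimistic execution. This is not the WarpEngine design from the paper; the class and method names below are hypothetical, and the sketch covers only timestamped writes with state checkpoints, omitting anti-messages, global virtual time, and re-execution of rolled-back work.

    # Hypothetical sketch (not the WarpEngine implementation): optimistic,
    # timestamp-ordered writes to a flat memory, with rollback when an
    # earlier-timestamped operation (a "straggler") arrives late.

    class OptimisticCore:
        def __init__(self):
            self.memory = {}      # simplified single linear address space
            self.clock = 0        # virtual time of the most recent applied write
            self.history = []     # (write_ts, prev_clock, prev_memory) checkpoints

        def write(self, timestamp, address, value):
            """Apply a timestamped write optimistically; roll back first if it is late."""
            if timestamp < self.clock:
                self.rollback(timestamp)
            # Checkpoint the pre-write state so this step can itself be undone later.
            self.history.append((timestamp, self.clock, dict(self.memory)))
            self.memory[address] = value
            self.clock = timestamp

        def rollback(self, timestamp):
            """Undo every write whose timestamp is >= the straggler's timestamp."""
            while self.history and self.history[-1][0] >= timestamp:
                _, self.clock, self.memory = self.history.pop()

    core = OptimisticCore()
    core.write(10, 0x100, 1)   # executes optimistically
    core.write(30, 0x104, 2)   # also optimistic
    core.write(20, 0x100, 3)   # straggler: the t=30 write is rolled back first

In the full algorithm the rolled-back work would be re-executed after the straggler is applied; the sketch stops at the rollback itself.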
Date: 1994-09
Type: Working Paper
Series: Computer Science Working Papers
Report No.: 94/16
Collections: 1994 Working Papers [18]