Research Commons

      Using the online cross-entropy method to learn relational policies for playing different games

      Sarjant, Samuel; Pfahringer, Bernhard; Driessens, Kurt; Smith, Tony C.
Files
Using the online cross-entropy method.pdf (908.4 KB)
      DOI
       10.1109/CIG.2011.6032005
      Link
       cilab.sejong.ac.kr
      Citation
Sarjant, S., Pfahringer, B., Driessens, K. & Smith, T. (2011). Using the online cross-entropy method to learn relational policies for playing different games. In Proceedings of the 2011 IEEE Conference on Computational Intelligence and Games, Seoul, South Korea, 31 August - 3 September (pp. 182-189).
      Permanent Research Commons link: https://hdl.handle.net/10289/5837
      Abstract
      By defining a video-game environment as a collection of objects, relations, actions and rewards, the relational reinforcement learning algorithm presented in this paper generates and optimises a set of concise, human-readable relational rules for achieving maximal reward. Rule learning is achieved using a combination of incremental specialisation of rules and a modified online cross-entropy method, which dynamically adjusts the rate of learning as the agent progresses. The algorithm is tested on the Ms. Pac-Man and Mario environments, with results indicating the agent learns an effective policy for acting within each environment.
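The paper's full algorithm combines incremental rule specialisation with a modified online cross-entropy method; the core cross-entropy idea the abstract refers to can be illustrated in isolation. The sketch below maintains a selection probability per candidate rule, samples rule subsets as policies, and shifts the probabilities toward the highest-reward ("elite") samples. All names and parameters here are illustrative assumptions, not taken from the paper.

```python
import random

def cross_entropy_step(probs, evaluate, n_samples=50, elite_frac=0.1, alpha=0.6):
    """One iteration of a cross-entropy update over candidate rules (illustrative).

    probs    -- list of selection probabilities, one per candidate rule
    evaluate -- callable mapping a rule subset (list of indices) to a reward
    alpha    -- step size; the paper adjusts this dynamically, here it is fixed
    """
    samples = []
    for _ in range(n_samples):
        # Sample a policy: include each rule independently with its probability.
        subset = [i for i, p in enumerate(probs) if random.random() < p]
        samples.append((evaluate(subset), subset))
    # Keep the elite fraction of highest-reward samples.
    samples.sort(key=lambda s: s[0], reverse=True)
    elite = [s for _, s in samples[:max(1, int(elite_frac * n_samples))]]
    # Move each probability toward that rule's frequency in the elite set.
    for i in range(len(probs)):
        freq = sum(i in subset for subset in elite) / len(elite)
        probs[i] = (1 - alpha) * probs[i] + alpha * freq
    return probs
```

Repeating this step concentrates probability on rules that appear in high-reward policies; in the online setting of the paper, the evaluations come from the agent's ongoing play rather than a batch of rollouts.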
      Date
      2011
      Type
      Conference Contribution
      Publisher
      IEEE
      Rights
      © 2011 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
      Collections
      • Computing and Mathematical Sciences Papers [1454]

Usage
Downloads, last 12 months: 192

The University of Waikato - Te Whare Wānanga o Waikato