Research Commons

      Symbol grounding and its implications for artificial intelligence

      Mayo, Michael
      Files
      Symbol grounding and its implications for artificial intelligence.pdf
143.6 KB
      Link
       crpit.com
      Citation
      Mayo, M. (2003). Symbol grounding and its implications for artificial intelligence. In Proceedings of the twenty-sixth Australasian computer science conference on Conference in research and practice in information technology, Adelaide, Australia (pp. 55-60). Darlinghurst, Australia: Australian Computer Society, Inc.
      Permanent Research Commons link: https://hdl.handle.net/10289/1391
      Abstract
      In response to Searle's well-known Chinese room argument against Strong AI (and more generally, computationalism), Harnad proposed that if the symbols manipulated by a robot were sufficiently grounded in the real world, then the robot could be said to literally understand. In this article, I expand on the notion of symbol groundedness in three ways. Firstly, I show how a robot might select the best set of categories describing the world, given that fundamentally continuous sensory data can be categorised in an almost infinite number of ways. Secondly, I discuss the notion of grounded abstract (as opposed to concrete) concepts. Thirdly, I give an objective criterion for deciding when a robot's symbols become sufficiently grounded for "understanding" to be attributed to it. This deeper analysis of what symbol groundedness actually is weakens Searle's position in significant ways; in particular, whilst Searle may be able to refute Strong AI in the specific context of present-day digital computers, he cannot refute computationalism in general.
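The abstract's first point — that fundamentally continuous sensory data can be categorised in an almost infinite number of ways, and a robot must select some finite set of categories — can be illustrated with a toy clustering example. The paper itself does not prescribe an algorithm; the k-means sketch below is purely an assumed illustration of how continuous readings might be carved into a small set of category prototypes, and all names in it (`kmeans`, `readings`) are hypothetical.

```python
# Illustrative sketch only: the paper does not specify this algorithm.
# Toy 1-D k-means showing one way continuous sensory readings can be
# reduced to a finite set of categories (candidate "grounded symbols").
import random

def kmeans(points, k, iters=20):
    # Initialise centroids from the data itself.
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each reading joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (leave it in place if the cluster came up empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Continuous "sensory data": readings from two distinct regimes,
# standing in for e.g. two perceptually different kinds of object.
random.seed(0)
readings = ([random.gauss(0.2, 0.05) for _ in range(50)] +
            [random.gauss(0.8, 0.05) for _ in range(50)])

symbols = kmeans(readings, k=2)
print(symbols)  # two category prototypes, one per regime
```

The choice of `k` here is exactly the problem the abstract highlights: nothing in the data alone dictates two categories rather than three or thirty, which is why the paper asks how the *best* set of categories might be selected.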
      Date
      2003
      Type
      Conference Contribution
      Publisher
Australian Computer Society, Inc., Darlinghurst, Australia
      Rights
This is an author’s version of an article published in Proceedings of the twenty-sixth Australasian computer science conference on Conference in research and practice in information technology. Copyright © 2003 Australian Computer Society, Inc.
      Collections
      • Computing and Mathematical Sciences Papers [1431]

The University of Waikato - Te Whare Wānanga o Waikato