Abstract
In response to Searle's well-known Chinese room argument against Strong AI (and more generally, computationalism), Harnad proposed that if the symbols manipulated by a robot were sufficiently grounded in the real world, then the robot could be said to literally understand. In this article, I expand on the notion of symbol groundedness in three ways. Firstly, I show how a robot might select the best set of categories describing the world, given that fundamentally continuous sensory data can be categorised in an almost infinite number of ways. Secondly, I discuss the notion of grounded abstract (as opposed to concrete) concepts. Thirdly, I give an objective criterion for deciding when a robot's symbols become sufficiently grounded for "understanding" to be attributed to it. This deeper analysis of what symbol groundedness actually is weakens Searle's position in significant ways; in particular, whilst Searle may be able to refute Strong AI in the specific context of present-day digital computers, he cannot refute computationalism in general.
Type
Conference Contribution
Citation
Mayo, M. (2003). Symbol grounding and its implications for artificial intelligence. In Proceedings of the Twenty-Sixth Australasian Computer Science Conference (Conferences in Research and Practice in Information Technology), Adelaide, Australia (pp. 55-60). Darlinghurst, Australia: Australian Computer Society, Inc.
Date
2003
Publisher
Darlinghurst, Australia: Australian Computer Society, Inc.
Rights
This is the author’s version of an article published in Proceedings of the Twenty-Sixth Australasian Computer Science Conference (Conferences in Research and Practice in Information Technology). Copyright © 2003 Australian Computer Society, Inc.