The Chinese Room (CR) argument exposes one of the core problems of formal symbolic logic: the symbol-grounding problem. That is, how do symbols come to correspond with semantic content? This problem has grave implications for machine intelligence and natural language understanding. While human-like imitation may allow a machine to pass the Turing test, imitation alone does not allow a machine to experience meaning (as the CR argument shows). An alternative to the Turing test, the Winograd Schema Challenge (WSC), illustrates how a human agent can outperform a machine when faced with problems of meaning that lack defined statistical solutions: resolving the pronoun in "The trophy doesn't fit in the suitcase because it is too big" requires commonsense understanding rather than pattern frequency. The WSC supports the view that human language understanding cannot be fully explained by formal symbolic logic.
Drawing on UC Berkeley’s Neural Theory of Language (NTL), I present an alternative. I show how human agents solve Winograd problems almost effortlessly, and what machine intelligence might achieve if we reconsider the nature of natural language. Using evolutionary psychology, I design a thought experiment based on ‘evaluative perception’, and argue that meaning is grounded in embodied sensorimotor experience. More specifically, I argue that for a machine to experience meaning, we must first develop a parametric of value grounded in the material conditions of embodied experience. Finally, I offer a dangerous solution: a machine must know threat and death if it is ever to know what it means to really mean anything.