Natural Semantics for a Robot notes:
1) In conventional functionalism, the robot receives inputs and gives outputs that seem meaningful
to us because we are measuring them by our own standards. “All meanings are our meanings; the
machine has no independent understanding of anything.” It can spit out responses that make it
seem like it knows what it is talking about, but really it is like the fuzzy logic used by Matt: the
computer knows nothing, yet it can build a knowledge base composed of pre-programmed
response schemes for certain questions.
2) A natural semantic system “is one that acquires and maintains meanings for itself.” It would
be able to interact with the environment, assign meanings to things, and know both that it is
the one assigning the meaning and that it is the one doing the interacting.
a. Cohen thinks that reinforcement learning is capable of producing a rudimentary natural
semantic system, but we use it to produce conventional systems.
b. Reinforcement learning, from Wikipedia: reinforcement learning is a sub-area of
machine learning concerned with how an agent ought to take actions in an environment
so as to maximize some notion of long-term reward. Reinforcement learning algorithms
attempt to find a policy that maps states of the world to the actions the agent ought to
take in those states (a toy sketch follows this item): http://en.wikipedia.org/wiki/Reinforcement_learning
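To make the state-to-action policy idea concrete, here is a minimal tabular Q-learning sketch in Python. The toy corridor world, constants, and reward are invented for illustration; nothing here comes from Cohen's system.

    # Minimal tabular Q-learning sketch: learn a policy mapping states to
    # actions that maximizes discounted long-term reward. Toy world only.
    import random
    from collections import defaultdict

    ACTIONS = ["left", "right"]
    N_STATES, GOAL = 5, 4                  # 1-D corridor: states 0..4, reward at 4
    Q = defaultdict(float)                 # Q[(state, action)] -> expected return
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    def step(state, action):
        nxt = max(0, min(N_STATES - 1, state + (1 if action == "right" else -1)))
        return nxt, (1.0 if nxt == GOAL else 0.0)

    def greedy(state):                     # best-valued action, ties broken at random
        best = max(Q[(state, a)] for a in ACTIONS)
        return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

    for _ in range(200):                   # training episodes
        s = 0
        while s != GOAL:
            a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
            s2, r = step(s, a)
            # Update toward observed reward plus discounted best future value.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2

    print({s: greedy(s) for s in range(GOAL)})   # states 0..3 all map to "right"

After training, the learned policy sends the agent right from every non-goal state, i.e., toward the long-term reward.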
c. This problem of natural semantics is tied to the problem of roles: having the robot know
that it is an interacting agent and that the things it is applying semantics to are objects it
is interacting with.
i. However, I think that if we were able to provide a feedback loop between the
preposition representation space and the robot, it would have an understanding
of self-agency (a hypothetical sketch follows this list).
ii. This project could link to Steve Dee’s project in that respect. After the initial
phase of hooking up the linguistic software, we could try to connect Steve Dee’s
virtual space as a more complex representation space that would provide a
robust feedback loop for the robot.
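One hedged way to cash out “self-agency through a feedback loop” is a forward model: the robot predicts how its own command should change its representation space and attributes the change to itself only when prediction and observation agree. Everything below (the function names, the two commands, the tolerance) is hypothetical, not part of the project:

    # Hypothetical self-agency test: compare the change the robot predicted
    # from its own motor command with the change it actually observed.
    def predicted_change(command):
        # Toy forward model: expected effect of each command on the percept.
        return {"forward": (0.0, 1.0), "turn_left": (-1.0, 0.0)}.get(command, (0.0, 0.0))

    def self_caused(command, observed_delta, tolerance=0.3):
        px, py = predicted_change(command)
        ox, oy = observed_delta
        # Prediction matches observation -> the robot caused the change itself.
        return abs(px - ox) < tolerance and abs(py - oy) < tolerance

    print(self_caused("forward", (0.05, 0.98)))   # True: its own motion
    print(self_caused("forward", (0.90, -0.20)))  # False: something else moved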
3) Haugeland said, “take care of the syntax, and the semantics will take care of itself,” but Cohen
notes that “taking care of the syntax is very hard.”
a. We have a tool that is able to handle syntax: Stemma. In later phases of the project, we
may be able to implement the parsing program so that the robot can use this system of
syntax to acquire meaning. If the claim is that semantics will follow syntax, maybe we
can plug in our syntax system and see what happens.
4) How Cohen's robot perceives:
a. Clustering the robot's experiences of 40 sensed values every 100 msec by running a
statistical procedure to find common sequences that correspond to activities such as
bumping into a wall and grasping a cup.
i. They call these clusters prototypes (a clustering sketch follows this item).
b. They then transform these prototypes into planning algorithms, which they claim is not
hard. Next, let the robot roam and explore.
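A rough sketch of the prototype step, assuming k-means over fixed-length windows of the sensor stream; the notes only say “a statistical procedure,” so the choice of k-means, the window length, and the stand-in data are all assumptions:

    # Sketch: cluster fixed-length windows of a 40-value sensor stream
    # (sampled every 100 ms) and treat cluster centers as "prototypes."
    import numpy as np

    rng = np.random.default_rng(0)
    stream = rng.normal(size=(1000, 40))    # stand-in sensor log: 1000 ticks x 40 values
    WIN = 10                                # 10 ticks = one second of experience

    # One data point per window: the flattened 10 x 40 slice of the stream.
    windows = np.array([stream[i:i + WIN].ravel()
                        for i in range(0, len(stream) - WIN, WIN)])

    def kmeans(X, k, iters=20):
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
            centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                                else centers[j] for j in range(k)])
        return centers, labels

    prototypes, labels = kmeans(windows, k=5)   # five "prototype" experiences
    print(prototypes.shape)                     # (5, 400): one center per cluster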
5) Roles Revisited
a. Using infants as an inspiration.
b. Ability to recognize an object as the same thing when encountered in different
conditions.
c. We mentioned that this might be handled by holding a map and then checking around
to see whether things are the same, measuring the error rates, and going from there (a
rough sketch follows this list).
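A minimal sketch of the map-and-check idea from 5c: store each object's last known position and accept a re-encounter as the same object when the observed position falls within an error tolerance. The objects, positions, and threshold are invented:

    # Sketch of the map-and-check idea: accept a re-encountered object as
    # "the same thing" when the observation error is small enough.
    import math

    object_map = {"cup": (2.0, 3.0), "wall": (0.0, 5.0)}   # last known positions

    def same_object(name, observed_pos, max_error=0.5):
        known = object_map.get(name)
        if known is None:
            return False
        # Within the error tolerance -> treat it as the same object.
        return math.dist(known, observed_pos) <= max_error

    print(same_object("cup", (2.1, 3.2)))   # True: small error, same cup
    print(same_object("cup", (5.0, 1.0)))   # False: too far to be the same one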
6) Image Schema
a. Infants skim information from the world in a process called perceptual redescription.
b. “They select or extract a subset of information from their percepts, at the cost of other
information”
i. Image Schemas
c. The robot will have many of the same actions as we do (a schema-extraction sketch
follows this list):
i. Horizontal movement
ii. Moving things
iii. Turning
iv. In front of, behind
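A small sketch of perceptual redescription in the spirit of 6b and 6c: reduce a rich percept to one coarse spatial relation (in front of / behind) and discard everything else. The geometry and the 90-degree cutoff are illustrative assumptions:

    # Sketch of perceptual redescription: keep one coarse spatial relation
    # from a richer percept and discard the rest (color, size, distance...).
    import math

    def spatial_schema(heading_deg, robot_pos, object_pos):
        dx = object_pos[0] - robot_pos[0]
        dy = object_pos[1] - robot_pos[1]
        bearing = math.degrees(math.atan2(dy, dx)) - heading_deg
        bearing = (bearing + 180) % 360 - 180   # normalize to [-180, 180)
        return "in_front_of" if abs(bearing) <= 90 else "behind"

    print(spatial_schema(0.0, (0, 0), (3, 1)))    # in_front_of
    print(spatial_schema(0.0, (0, 0), (-2, 0)))   # behind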
7) Cohen’s real progress:
a. “We have a robot that creates prototypes that correspond to its common experiences. It
can cluster these prototypes and discover, entirely without our supervision, common
experiences like moving past a cup or bumping into a wall. It can also recognize words
and link them up with experiences. Recently, it turned some of its prototypes into
planning operators, and built a plan for a goal of its own choosing. The missing piece of
our story concerns roles. The prototypes learned by the robot correspond to the
sensory experience of doing something, but they are not denoting representations. For
example, the robot knows how its sensors react to driving into a wall, but it has no
concept of wall, and it does not represent the episode as one entity doing something to
another.”
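To make “turned some of its prototypes into planning operators” concrete, here is a hedged sketch assuming STRIPS-style operators (a precondition set and an effect set) chained by breadth-first forward search. The operator names and facts are invented; the notes do not specify Cohen's actual encoding:

    # Hedged sketch of "prototypes into planning operators": each operator
    # has a precondition set and an effect set; a breadth-first forward
    # search chains operators to reach a goal. Facts and names are invented.
    OPERATORS = {
        "approach_cup": ({"sees_cup"}, {"near_cup"}),    # (preconditions, effects)
        "grasp_cup":    ({"near_cup"}, {"holding_cup"}),
    }

    def plan(state, goal, ops=OPERATORS):
        frontier = [(frozenset(state), [])]
        seen = {frozenset(state)}
        while frontier:
            s, path = frontier.pop(0)
            if goal <= s:                    # all goal facts achieved
                return path
            for name, (pre, eff) in ops.items():
                if pre <= s:                 # operator is applicable
                    s2 = frozenset(s | eff)
                    if s2 not in seen:
                        seen.add(s2)
                        frontier.append((s2, path + [name]))
        return None

    print(plan({"sees_cup"}, {"holding_cup"}))   # ['approach_cup', 'grasp_cup']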