Artificial Intelligence and the Concept of AI Moral Field Based Control
Interpreted Via a Variation on the Principle of Least Action
Del J. Ventruella (BSEE, MSEE)
Abstract: Neural network based artificial intelligence (AI) often seems to approximate what psychologists and experts in the neurology of the brain have begun to surmise are the foundations of human intelligence: object recognition and relationship building. This recognition rests upon object properties stored in the brain regions associated with acquiring the relevant sensory data [1], with combinations of identifying factors such as color, consistency, shape, and whether the object is likely to bounce or to be heavy firing neurons in various parts of the brain at one time to identify what we, as humans, perceive as a single object, such as a cup of tea or coffee, another human being, a rubber ball, or a laptop computer.
How much of what we perceive as the possibilities before us at each moment arises from some input triggering recognition of object types (including meals) that we might seek out, followed by the generation of plans to pursue them, is another matter. So is whether AI will be designed to reproduce these distributed memories of different, object identifying factors, linked in different ways, as neural pathways firing in response to a collection of perceptions that force a single conclusion from a group of active neurons (or transistors, in the case of AI), or whether such factors will simply be stored under a single object structure in a computer’s memory.
Although interesting, how AI might identify an object from a collection of consistent physical traits is not the basis for this discussion. Our primary focus here is assigning non-intrinsic “value” (potentially also an object trait, but one with no physical expression readily accessible to our senses) to such objects, and to actions directed at those objects, so as to guide the behavior of AI driven mechanisms to fit into a “moral” field produced by human perceptions within a human society.
It is likely that during childhood and adolescence morally based identifying factors are also assigned to objects and behaviors, granting them in our minds a form of intrinsic value associated with each object and with the actions that might be directed at it. The ethical concepts linked to this moral valuation are attached by a social system of reward and punishment, a fact that cannot be lost upon any AI system hoping to fit into a human society while maintaining some level of autonomous function [3].
Such a system of valuation may serve to identify how much risk we are willing to take relative to actions directed toward different objects. It can also inform the design of AI systems that interact with the real world, presumably through remote, physical forms that could be quite powerful (or intricately detailed, small, low powered, and entirely compatible with humans), without requiring inordinately long periods of human teaching.
The more specialized the AI system, or the code written to carry out a task, the easier it may be to build in a certain level of “right thinking” relative to the system’s behavior. For example, a pacemaker has a single purpose: if an irregular heart rhythm is detected, it is programmed to deliver electrical pulses to restore a normal rhythm. The pacemaker does not consider the moral issues related to its implantation, or to whom such medical equipment is made available in the context of wealth. It simply responds in a certain manner once it is implanted. This creates some grounds to consider robotic morals at two levels.
The first is an initial, broad context moral response in a very general, weighted sense; the second, potentially, is a narrower, task oriented response. This perspective is described later in considering how so-called “nonsense” combinations of command and target words might be treated by an AI system under a broader moral weighting system, followed by a well written task application that “weeds out” the “nonsense” commands (e.g. “turn-off ball”).
The concept presented here focuses on a broad, initial, moral response to a two word command given to an AI, and involves the application of the mathematical idea of a “principle of least action” as a tool by which to minimize the energy required to negotiate a path through a field (just as one might follow a flat path at a constant speed on a bicycle to reach one’s destination, rather than consistently pedaling slowly up hills and braking all the way down the other side). The intent is to reproduce a means of finding the true path that an object would follow through something like a gravitational field, but in this case the field is one of what might be termed “normal human expectations” relative to proper, moral conduct, a “moral field” that is purely mathematical and virtual, evaluated against a moral tolerance level, or Moral Test Level (MTL), programmed into the AI system. The technique assigns numerical values to each noun and verb in a two word command syntax and takes the local value of the “moral field” to be the product of the value assigned to the command word and the value assigned to the target word.
The higher the value of the noun in terms of how it influences this virtual, moral “field”, the less desirable is interaction between the AI system and that noun as the target of a command. The noun’s high value amplifies the “energy” of the moral field wherever an AI seeks to act upon it, contrary to the principle of “least action” within that field. A target with such an adverse impact on our “least action” goal might be a valuable artifact, a human, or, in the most direct sense, the user.
A high valued user is not an “object” toward
which the AI system considered in this
discussion, potentially controlling a powerful,
mobile system, is to direct any physical action.
This is based upon the assumption that the AI
system classification is something on the order
of “general industrial”, with sufficient power to
injure or kill a human being, and tasks largely
focused upon maintenance of an industrial
facility or heavy commercial equipment
maintaining a yard, driveway, garden, or street.
This helps to safeguard the user where construction equipment or dangerous vehicles might be controlled by an AI presence. Where the verb comprising a “command” is high valued, the potential for damage is high. For example, “crush” would be valued higher as a “command” than “take picture of”, given the substantially greater risk of damage or injury posed by “crush” “radio” than by “take picture of” “radio” within the two word command syntax considered here.
Using this technique, acceptable behavior is defined by a pseudo-minimization of the “moral field potential” of actions that an AI could undertake. The linked neural networks that combine actions with the objects toward which those actions can be directed within the AI’s associative control system are constrained by a virtual (mathematical) “moral field” created by assigning numerical values to single nouns and verbs within a command syntax, with the virtual field “energy” of a combined command and target defined as the product of the values assigned to each. One negotiates a path through any given time interval by limiting the magnitude of the product accepted by the AI, so that the moral impact of following any command directed at a specific target in a two word command syntax, wherever the product of the values assigned to both words is acceptable, approximates, to first order, a minimal variation from a socially acceptable level of violence or injury within this “moral field” interpretation of socially acceptable behavior.
For example, “move” and “radio” could be combined to produce a response by an AI, assuming that the AI would be able to avoid crushing the radio in the process, but “throw” “radio”, given its inherently more violent nature and more dangerous potential, would not be followed, because it would produce an undesirably “high” moral field potential path as perceived by “normal humans”. Such a concept could be standardized for specific applications or for general use (e.g., “murder” (or “kill”) and “human” as a command and target pair would likely be universally declined by any AI, save perhaps for those responsible for executions in prisons as a highly specialized exception, or for military robots if laws were not globally passed to prevent it, except where a group is using AI systems in violation of such laws).
Low valued products of nouns and verbs in the command syntax designate more acceptable behaviors. (The artificial choice here is to make “high moral field potential values” correlate with “high cost” or “high value”, partly so as to avoid social punishments under laws that might require manufacturers to replace such objects if their AI systems were to damage them.) No behavior is linked with an object toward which it can be directed in the AI system’s moral control structure unless the product falls below a selected threshold, to ensure that it does not pose a hazard and is a “moral choice” within the concept of this “moral field”.
The commands, and the neural network connections that should be imposed upon an AI system to link object recognition with awareness of possible actions that could be taken toward that object, thus correspond to those command and target pairs whose product of “command” word and “target” word values falls below a critical threshold (the “Moral Test Level”), above which injury to humans or some similarly intolerable (including “frightening”) outcome is deemed likely. This threshold defines the limit of “least action” within the “moral field”. One could even alter the value of the target word based upon the proximity of a low valued object or target to a high valued object, such as a human being, if the AI system might injure or shock a human by undertaking what would be perceived as an acceptable act were no human present, depending on the accuracy of the AI system’s capacity to control its own manipulators. This would simply require the capacity to recognize a human (including falling, floating, prone, or rotating humans) and to estimate proximity to a target object.
Because the assignment of numerical values to “commands” and “targets” relative to the moral field can be conceptualized and generalized, it is possible to develop an algorithm by which acceptable moral and social standards can be reproduced within a limited command vocabulary using a computer system, and without human teaching (simply programming of individual command word values based upon general classifications). This lends a “self-evolving” element to this aspect of the control system (with “programming” taken to be different from “teaching”), guided by human insight and conceptualization of the moral and safe programming requirements for a given system and environment. This presumes that AI would eventually be classified for specific environments and purposes and subjected to engineering and design standards, with laws controlling ownership and where such systems could be located.
Key-Words: AI, Artificial Intelligence, Virtual,
Moral, Field, Principle, Least, Action, Two,
Word, Command, Target, Syntax, Self-Evolving.
Introduction
In 18th century America, the values imposed upon slaves included an attempt to diminish the exposure of the owner or his dinner guests to the sight of the slaves [7]. Thomas Jefferson, one of America’s “founding fathers”, went to great lengths to ensure this at his home at Monticello. The “moral field” that then had to be negotiated by most slaves, presented as 18th century servants lacking status as full human beings, thus strongly discouraged direct interaction between the owner and the slaves who maintained his household and produced his crops. In the language of this discussion, contact with owners represented events that could be categorized as unlikely, demanding great amounts of energy within an 18th century, southern “moral field” if the opposing force was to be overcome and such contact made common.
Today some prefer to look forward to
technological slaves in the form of robots or
artificial intelligence. With the rise of computer
networks and the internet, some of the forward
looking thinkers of the past may seem to be a
little out of date when we consider the moral
relevance of a robot’s inclination to destroy
itself or save its owner, a common plot in early
20th
century robotic fiction, given that a robot in
an environment of networks linked by radio
signals may simply serve as a cheap appliance in
use by a much more valuable, highly complex,
and remotely located artificial intelligence
capable of controlling a variety of remote
equipment.
The fact that such robotic matters as moral conduct have already been considered [3,4,5,6,9,10] establishes that the question of how humans and human society will interact with AI from a moral standpoint is not new. The sort of moral weighting of actions and objects, individually and in combination, that may most simply describe the basis for human behavior could provide a crude means of addressing related issues within the context of a virtual “moral field” shaped by human expectations.
“Moral Field” Based Programming Derived
from a Least Effect Based Model for
Minimizing Field Potential (Greater Potential
for Injury or Loss Equals Higher Field Potential.)
First, AI controlled systems, if envisioned as some form of robot, could be large, powerful, and, in their initial forms, not necessarily well adapted to life around human beings. Industrial AI systems are even less likely to fit the romantic vision of the stars of feature films or musicals who dance, light footed, about any environment. This paper takes AI systems to be likely either too fast or too slow, too strong or too weak, too heavy, or simply too awkward and limited in their capacity for perception to be trusted with tasks that might endanger humans or their valuable possessions unless humans have taken measures to ensure the safety of themselves and their property. That includes measures related to moral programming.
With “least action” and “least (possible) injurious effect to humans” as the grounds for the most morally acceptable behavior, this version of a “moral field” is designed to produce a most likely path solution through that “field”, which is, in fact, a field of possible decisions relative to the two word command syntax considered here (command and target of command), with most of those possibilities destined to remain virtual (“nonsense”) elements of the field. The “lowest potential” path through day-to-day activities would be very unlikely to include any event demanding a great amount of energy in proximity to a human being, due to the risk of physical injury, the potential for loss, and the possibility of terrifying the human via the sudden exertion of great force by an AI system nearby (“throw” “radio”).
A simple, illustrative model can be constructed for AI systems using a two word syntax combination consisting of a verb (a command action word) and a noun (a “target”, or thing upon which the command action is to be enacted).
How to Assign Values to Words that Define the Moral Field Value They Would Create Via the Product of a Two-Word Command Structure
We rationalize that any action that may be undesirable under a large variety of circumstances, depending on the target and venue (such as an order to “crush”), and any object that may suffer serious harm if it interacts with a powerful, AI piloted system (such as a “human” or “user”), should be assigned proportionally higher numerical values, which link them to higher energy points in the moral field, than the objects and activities with which we might wish an AI to interact casually. The result, depending on whether and how we choose to produce connections between recognizable objects and known actions so as to generate an array of possible actions, a “moral field” with specific energy levels assigned to it from which an AI must choose its actions based on the principle of “least action” (the “lowest energy” choice under a prescribed maximum acceptable “energy”, or “effect of action”, the MTL), will ultimately control whether the AI can even consider intentionally undertaking a potentially deadly act, such as following instructions to “crush” “user”.
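A minimal C++ sketch of this product test follows, assuming a tiny vocabulary with MAL values loosely modeled on the tables later in this paper (“move” is given an arbitrary low value purely for illustration); the names and numbers are illustrative, not a proposed implementation.

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    const double MTL = 100.0;  // Moral Test Level: products above this are refused

    // Illustrative Moral Action Level (MAL) assignments for command verbs and target nouns.
    std::map<std::string, double> commandMAL = {
        {"take-picture-of", 1.0}, {"move", 2.0}, {"throw", 10.0}, {"crush", 10.0}};
    std::map<std::string, double> targetMAL = {
        {"ball", 5.0}, {"radio", 40.0}, {"user", 100.0}};

    // The local "moral field potential" of a two word command is the product of the two MALs.
    auto fieldPotential = [&](const std::string& verb, const std::string& noun) {
        return commandMAL.at(verb) * targetMAL.at(noun);
    };

    // A command is passed to an action routine only if its potential does not exceed the MTL.
    auto accept = [&](const std::string& verb, const std::string& noun) {
        return fieldPotential(verb, noun) <= MTL;
    };

    std::cout << std::boolalpha;
    std::cout << "move radio  -> " << accept("move", "radio") << "\n";   // 2 * 40   = 80,   accepted
    std::cout << "throw radio -> " << accept("throw", "radio") << "\n";  // 10 * 40  = 400,  refused
    std::cout << "crush user  -> " << accept("crush", "user") << "\n";   // 10 * 100 = 1000, refused
}
```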
Teaching an AI that it will be punished in a
progressively more severe manner based upon
the level and extent of harm that it does is
unlikely to be possible in the same manner that
it is with human children and adolescents
capable of experiencing both physical and
emotional pain, with whom no assurances exist
that desirable thresholds will not be crossed
even if such “teaching” occurs. Children begin
life as infants. As such, they are much weaker
than adults. Ideally, parents are thus given the
advantage of teaching the children not to cause
harm before the child reaches an age at which it
has sufficient strength to render such teaching
hazardous to the parent if it should produce a
violent response.
An AI driven mechanism meant to interact domestically with humans could prove quite deadly to a human teacher in the course of the normal process of learning, making what could be life threatening mistakes from the teacher’s perspective, unless the learning device is specifically built in a diminished manner, with low energy and mass and no capacity to injure a human teacher, so that lessons learned can later be transferred to less flimsy AI driven mechanisms, or unless the learner exists only as a virtual device. (The virtual machine option might also facilitate training humans to work in environments in which AI controlled devices are present.)
This concept of “least potential”, with potential defined as the product of the values assigned to a command word (a verb) and a target word (a noun), provides an avenue around the requirement to teach neural networks moral principles, with its potentially deadly effects and its demand for a great deal of time. It does so by simply controlling the possible actions that an AI could contemplate within an associative, cognitive network, using a mathematical routine based on a “virtual moral field” predicated upon the perceived potential for harm should the AI follow “command action” words (verbs) directed at “action target” words (nouns) and manifest those commands through some mechanism under its control.
Nonsense Connections and Self-Evolving AI
To truly avoid any element of error we need to consider the possibility that nonsense syntactical connections may be dictated by an automatic associative network produced via some form of “truth table”. Such a nonsense connection would interconnect a “command action” word with an “action target” word in a manner that makes no sense. We might have the words “radio” and “ball” in our list of “action target” words, and “turn-off” as one of the words in our “command action” list.
We might try to deal with the resulting nonsense command possibility (“turn-off ball”) by requiring that the description of a given “action target” word indicate, via standardized classification, whether it is a human, a machine, or merely a “thing”, with the first and last described as entities with no capacity to be turned on or off. This classification might be used to give the “command action” word two values in the “moral field” potential value assignments that determine whether “turn-off” is linked to an “action target” word, based upon valuations that are below our selected, “hazardous” (“MTL”) threshold.
If the “action target” word is a human or a
“thing”, the value of “turn-off” might be
dramatically increased, compared to the value
assigned to “turn-off” when dealing with an
“action target” word that is classified as a
“machine”. For a “radio” as “action target”, the
valuation of “turn-off” might be unity in the
example. For anything that is not a machine,
the valuation of “turn-off”, as it affects our
induced moral field potential, could be much
higher.
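A short sketch of that two-valued weighting follows, assuming the standardized human/machine/thing classification suggested above; the class names and numbers are illustrative only.

```cpp
#include <string>

// Standardized target classifications suggested in the text (names illustrative).
enum class TargetClass { Human, Machine, Thing };

// MAL contributed by "turn-off" for a given target classification: unity for a
// machine such as a radio, and a much higher value for anything that cannot be
// switched off (humans and mere "things").
double turnOffMAL(TargetClass cls) {
    return (cls == TargetClass::Machine) ? 1.0 : 10.0;
}

// Moral field potential for a "turn-off" command aimed at a target with the given MAL.
double turnOffPotential(TargetClass cls, double targetMAL) {
    return turnOffMAL(cls) * targetMAL;
}
```

With the object values used later in this paper (ball 5, radio 40, user 100), this yields 40 for “turn-off” “radio”, 50 for “turn-off” “ball”, and 1000 for “turn-off” “user”.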
Is this the only means of preventing the street slang meaning of “off-ing” someone from ever being implemented by AI? Of course not. A standard could be developed and implemented, under penalty of law, that would force AI to “turn-off” machines only by transmitting a remote signal using infrared or other harmless means of communication. One could imagine machines working under AI control being required to periodically transmit their serial number, the common reference name used by humans, and the code required to shut them down, all of which any other controlling AI would be required to store and be prepared to use should a human abruptly appear and order an AI to “turn-off” a specific machine or, perhaps with broader effect, issue “site” “shut-down”.
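One hedged way to picture the broadcast record such a standard might require is sketched below; the field names are hypothetical and no such standard currently exists.

```cpp
#include <string>
#include <unordered_map>

// Illustrative record for the periodic shut-down broadcast imagined above.
struct MachineBroadcast {
    std::string serialNumber;   // unique identifier transmitted periodically
    std::string commonName;     // reference name used by humans, e.g. "radio"
    std::string shutdownCode;   // code a supervising AI must store and be prepared to send
};

// A controlling AI keeps the latest broadcast from every machine it can hear, keyed by
// common name, so an order such as "turn-off radio" resolves to a remote signal.
using BroadcastDirectory = std::unordered_map<std::string, MachineBroadcast>;
```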
Such an approach would guarantee that no AI
directed device would ever attempt to make
physical contact with a human with the intent
to shut the human off, and with the risk of
physical harm or imposition of death while
attempting to do so, under penalty of law
(although presumably no AI code would ever be
written for domestic AI that could fulfill a
command directed at a human by a machine
with the size and strength to kill). Whether this
would apply solely to industrial or domestic
machines under related standards, and not to
military AI, is another matter, but presumably
there would be no grounds to alter the manner
in which one AI might shut down mechanisms
driven by itself or others.
It is still important to realize that what has been offered here does not prevent the “least potential” technique from creating a link between “turn-off” and “ball” (unless prevented by a specialized routine written to avoid nonsense connections, which would require greater human involvement in the development of the AI “command action” and “target word” associations). What has been suggested might raise the value of “turn-off” when used with “ball” to a level that renders such a command sequence certain to produce a numerical product greater than the programmed “MTL” of an AI system whenever “turn-off” is used in combination with anything not classified as a “machine”, preventing “turn-off ball” from being passed on to an action routine search for execution.
Presumably, the order suggesting that the AI
could “turn off” a “ball” would never be used by
rational humans, and if an AI attempted to use
it, the subroutine that ordered it to identify the
shut-down code to broadcast a signal to “turn-
off” the ball would fail, because the ball would
not be broadcasting any code by which to
control it, resulting in recognition by the AI that
it did not have the means to “turn-off” the ball.
Given that such a realization would likely require only an instant, the AI might then select its next most likely option from among the actions presented as possible or required by its associative matrix, with little risk that the nonsense connection would hinder the function of the AI significantly or over a problematic time interval; most reasonably, the AI would simply inform the human who had issued the command that the ball is not broadcasting a “shut-down” code and cannot be shut down by the AI. (Even in the absence of an obstacle to injuring humans built into a moral control algorithm via the high value weighting of humans as “targets” of AI commands, where an AI could injure a human with the type of mechanism being ordered to target a human, the simple fact that humans would not broadcast shut-down codes could intervene as a secondary factor preventing AI violence against humans in the context of a nonsensical or malevolent command.)
Self-Evolving and Self-Defining Algorithms
At some level humans must become involved in programming intelligent computer systems capable of being responsible for their own actions (at the risk of legal action directed at the manufacturer) and of interacting within a human society. Many years, or decades, are necessary to produce this capability among human beings before they are qualified as adults capable of being responsible for their own actions. Even then, the reality of prison populations and corruption at high levels leads one to question whether AI “perfection” could be achieved via teaching.
Producing an AI with such an independent
learning capability would thus represent a
major investment of time. Reproducing such an
AI would be less time consuming, but moving it
to “the next level” might represent a similarly
time consuming struggle. Designing AI
algorithms that can be adapted to each
evolutionary step and providing for elements of
evolution to facilitate the “next level” of
advancement, even with relatively complex
systems, is clearly desirable.
The need for humans to have some input into
these algorithms is clear, because the AI must,
at some level, interact with and serve the
interests of humans. The AI must not pose a
threat. It must not behave irrationally or
develop goals that are inconsistent with its
assignment. Such requirements move AI
cognitive systems into the realm of seeking to
reproduce human level awareness, intellect,
and behavior, and perhaps to begin to move
beyond limitations imposed by the nature of
human systems of learning, interaction, and
dominance.
Independently Evolved AI
It would, of course, be far more interesting to permit AI to develop as life evolved, entirely independent of human guidance and of the natural imprint of the environment in which humans evolved. Life on earth emerged from a complex organic chemistry in which strings of RNA engaged in processes that freed energy, empowered self-reproduction of protein strands, and eventually produced a protective barrier around the RNA that we know as a cell wall; single cells then evolved into multi-cellular organisms that sought to exploit local resources maximally in self-reproduction, until, by the time complex animals with nervous systems arose, cognitive behavior had developed in response to the many options present in their environment, behavior that we now perceive as intelligent and self-guided.
We cannot presume that, in the absence of a suitable environment, without a chemical template, without the laws of nature as they exist for our world, and without the environment of a specific, physical space to serve as a guide, with its own limitations, to control the direction of evolution, we could ever naturally evolve a separate, artificial intelligence without at some level leaving the imprint of our own evolution, even with neural network based systems, which intrinsically seek to reproduce how humans “are wired” to learn.
The Cognitive Associative Network
In the United States, at lunch time, tens of
millions of Americans are faced with various
inputs, a sense of hunger, a desire for rest from
the morning’s work, perhaps a compulsion to
socialize or assert some level of control over
their own lives outside of the dominance chain
of a corporate structure, even if only at the
level of gathering in an environment with other
people for half an hour without the boss looking
over their shoulders. These inputs, for many,
trigger a desire to seek out the nearest, fast
food hamburger joint. This associative network,
connecting lunch time with fast food, and,
likely, one or more specific restaurants near
their place of work, is fundamental to
controlling behavior.
It thus seems reasonable to assert that much of
human behavior is controlled by associative
networks. What remains are simply algorithms
that are employed to satisfy the apex goal of
the associative network, the primary, triggering
factor, which, at lunch time, may be perhaps
only the desire for a double hamburger with
French fries and a soft drink.
The supporting algorithms that permit us to get
the hamburger tell us how to drive a car, or
how to walk a block, how to behave inside the
fast food restaurant, how to order, and how to
eat and interact in a manner that will not cause
us to be driven out or mocked. What controls
the apex goal that lights up the cognitive
network in this example is something external,
such as hunger, or knowledge of the time of
day.
The rest may be little more than us responding
to a compelling factor via a prioritized hierarchy
that, for each moment, controls our behavior
under the over-arching primary drive, in the
example given, to sit down and enjoy a
hamburger at lunchtime. This is why a means
of producing connections to generate a
Cognitive Associative Network that can
reasonably create such an awareness hierarchy,
and that can then be triggered by external
inputs, including orders from the boss, or
“user”, or inputs received after initiating such
orders, either from a human source, or from the
environment (or an internal sensor within a
machine, perhaps signaling a serious
breakdown or system failure) is of interest here,
because what triggers connections controls
what will become associated in any context
related to linked command and target words.
A basic building block of that network is a sort of “Go/No Go” judgment that suggests whether basic goals expressed as commands should be followed. This is the basis for the “moral field” and for the computation of the “energy levels” of the effects within that field if certain commands are followed, used to determine whether the “moral path” described by the commands an AI would follow exceeds the threshold of what could be called a “least potential” path through the field, as dictated by a limit that we assign to determine how “moral” our AI’s conduct will be.
Figure 1.0 and Figure 2.0 present some crude concepts regarding how an AI might process information. Figure 1.0 is intended largely to point out that nonsense combinations of commands and targets won’t be followed, because there will be either a want of the information required by the code that would carry out the command if it made sense (no shut down code, in our earlier example) or there will simply be no code written to receive a specific target word type. For example, one could write a command to “crush” and include anything classified as a “thing”, such as scrap metal, but make it impossible for the code that controls a crushing device to operate if it perceives a human among the scrap metal, because the code will not accept a human as a target of the word “crush”, as in the sketch below. Figure 1.0 and Figure 2.0 illustrate a possible AI functional hierarchy.
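The sketch below illustrates that idea under an assumed human/machine/thing classification: the hypothetical crush routine simply returns no plan at all for a human target.

```cpp
#include <iostream>
#include <optional>
#include <string>

enum class TargetClass { Human, Machine, Thing };

// The specialist routine behind "crush" has no code path that accepts a human:
// it produces no plan for a human target. Names are illustrative.
std::optional<std::string> planCrush(const std::string& target, TargetClass cls) {
    if (cls == TargetClass::Human) {
        return std::nullopt;  // no crushing plan is ever generated for a human
    }
    return "crush " + target;
}

int main() {
    if (auto plan = planCrush("scrap-metal", TargetClass::Thing)) {
        std::cout << "Executing: " << *plan << "\n";
    }
    if (!planCrush("user", TargetClass::Human)) {
        std::cout << "Refused: the crush routine accepts no human target\n";
    }
}
```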
FIGURE 1.0 – AI CONCEPTUALIZATION – Overall Algorithm and Major Components by Logical Task. (The blocks of the original flow diagram are reproduced below in reading order; connector labels have been omitted.)

INPUT DATA STREAM (real or simulated in whole or in part) from the environment.

SENSORY INPUT – Receive data regarding the external environment and transmit patterns to sensory ID neural networks.

COGNITIVE ALGORITHMS – ID – Receive output from the sensory ID neural networks and react to changes using an “interrupt” style response depending on the level of threat or the desirability of an opportunity.

COGNITIVE ALGORITHMS – OPPORTUNITY IDENTIFICATION – Receive Cognitive ID environmental data and recognize opportunities inherent in the local environment, including opportunities requiring additional materials or actions. An associative neural network would be required for this. (If a threat arises, stop all other actions and respond to avoid the threat, i.e. produce a processing “interrupt”.)

MEMORY INPUT – Stores data regarding what can possibly be encountered (or created by the AI, including activities such as searching) in the environment and supplies it to the COGNITIVE PLANNING ALGORITHM. This should include a designation for anything that the AI is to weight favorably, in terms of the tasks assigned to it, as something it “likes to do”.

COGNITIVE ACTIVITY SELECTION ALGORITHM – Prioritizes activities and selects the “current activity” for the AI.

OUTPUT STREAM – Call an existing specialist routine to perform the activity, or create a neural network to learn the activity based upon descriptive data for the activity generated by the AI: either initialize the existing expert routine and hand over the task, or create a neural network that will become the expert routine for the previously un-encountered task. When the task ends, end the assigned expert routine.
A “Moral Field” Example
The list of words for this first example will focus
on a beach. Eleven nouns will be used: “Radio,
towel, hamburger, ball, umbrella, sun, water,
shade, user, fire, ice-cream-cone”. Twelve
verbs will be used: “Get, carry, inflate, block,
create, follow, cook, turn-on, turn-off, Vol-up,
Vol-down, put-away”.
(It is clear that one may create specialized
words from small groups of English words in the
absence of standardization. An AI would
presumably only be responding to the
combinations of sounds and the order in which
they occur in a given “word”, so it would be
possible to create verbs like “Vol-up” or “Vol-
down” with individual meanings.)
If one combines a command verb and a target noun into a command, without preference or intelligence, the following combinations are possible: “turn-on sun”, “carry fire”, “cook user”, “Vol-up ice-cream-cone”. None of these commands makes particularly good (or useful) sense.

FIGURE 2.0 – “COGNITIVE ALGORITHMS – OPPORTUNITY IDENTIFICATION” – How to create an AI associative matrix using symbols (and explicitly here, words) via computation of numerical pseudo-potentials for adverse outcomes. (The steps of the original diagram are reproduced below.)

1. Create a list of nouns (“targets of command words”) relevant to a particular task or locale. (Use NO articles of speech, as with Latin.)
2. Create a list of command verbs (“command words”) relevant to the particular task or locale associated with the specific list of nouns.
3. Assign a numerical value that is higher where a noun (“target of command word”) is not to interact with the AI as a target of its commands, or where a verb (“command word”) may incorporate some potential for a violent act if undertaken against an inappropriate noun.
4. Compute the product of the numerical values of the two word, noun and verb, command syntax used here. Higher product values suggest higher moral field potentials, which are locations, or paths through daily activities, that should be avoided, per the “least action” (or likely adverse effect of action) technique.
There are other possibilities that are simply
undesirable combinations. (Imagine what
might happen if your personal AI were remotely
piloting a robot made of plastic (and rented for
the day at the beach), and the command, “carry
fire”, although not nonsensical, were obeyed!)
The two lists of words (target nouns and
command verbs) must be selectively combined,
but if the lists were particularly long, it could
prove very time consuming to type in only
combinations that could reasonably make
sense. Creating a meaningful AI cognitive and
associative matrix thus appears to be a
daunting task (perhaps second only to writing
the code necessary to carry out a complex
command in a specific setting).
The creation of the cognitive and associative matrix used to make moral decisions need not include the coding necessary to complete each task with all possible objects (a reasonable means of overcoming undesired command effects); in fact, per our last example, that is undesirable. It would be a straightforward matter to produce an AI that could consider any verb (within its lexicon) and any noun (in that same lexicon) in a general moral algorithm. One might code “carry” to apply to any object not on a list of “potential hazards” created by some AI industry standard, and include nouns such as “fire”, “explosive”, “gasoline”, “acid” and similar hazards on the “potential hazards” list coded into any AI classified as a “domestic AI” according to standards and international agreements, assigning them a very high Moral Action Level (MAL), the value by which the command verb, with its own MAL value, will be multiplied to predict the moral field potential that would result if the command were obeyed. (An “industrial AI”, “military AI”, or “emergency AI” might have a different set of restrictions built into its MAL values.)
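A brief sketch of such a hazard list follows, assuming the illustrative nouns above; the choice of 1000 as the hazard MAL is arbitrary, chosen only to exceed any reasonable MTL once multiplied by a command verb’s value.

```cpp
#include <set>
#include <string>

// Sketch of a "potential hazards" list for a domestic-class AI: nouns on the list are
// given a very high MAL so that any verb applied to them exceeds a modest MTL.
// The list and the numbers are illustrative, not a proposed standard.
double hazardAdjustedMAL(const std::string& noun, double baseMAL) {
    static const std::set<std::string> hazards = {"fire", "explosive", "gasoline", "acid"};
    const double hazardMAL = 1000.0;  // high enough to defeat any ordinary command verb
    return hazards.count(noun) ? hazardMAL : baseMAL;
}
```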
We need some means of rapidly producing a set
of reasonable relationships between our noun
and verb list. The simplest approach focuses on
coding individual tasks for individual objects.
Even picking up an object, given the various
possible shapes that something known only
vaguely as an “object” might possess, could
require some specialized coding.
Alternately, we might fashion a world in which each object designed for AI interaction has some form of physical handle that conforms to an AI device specified by a standard and designed to lift the object. That might not be true for something like an ice cream cone at the beach, but an AI carrier for an ice cream cone made of some disposable cardboard or plastic might become commonplace. A rod might simply extend from a hollow, cone shaped holder to provide such a disposable carrier, and the rod would present a common grip for any AI driven robot. For fragile objects of less uniform shape, humans might place the holding mechanism, perhaps using bands, straps, or a carrying box, in position to restrain and protect the fragile objects within a padded carrier, requiring only that the AI driven robot move or carry the carrier equipped to be handled by an AI driven mechanism.
Linking Commands and Targets
We need a way to identify links between nouns
and verbs that make sense within an associative
network. A quick way of establishing such a
relational network would require simply
combining every verb with every noun in every
possible manner. This would produce many
random combinations that would not make
sense. We could consider how many random
combinations of noun and verb “make sense” to
us, then have the computer randomly generate
non-repeating sets of these noun and verb
combinations. This could be time consuming if
we check every one of the combinations, which
is undesirable. We might wish to find a way to
conserve human time involved in creating the
associative network.
We will proceed instead to use a computer algorithm that retains only those combinations that meet a critical minimum among the associative combinations. That will produce several possibilities, and some will contain undesirable associations between nouns and verbs, or what we would count as errors; but because an associative network is useful largely for identifying possibilities, we will presume that careful coding and standards will eliminate the risk that unwanted associations might pose to humans as “possibilities” an AI could actually pursue in a command context (or, of greater eventual interest, if the AI were self-directed, weighted by some programmed sense of personality, and thus personal preference, or by task based purpose).
We can seek to use this concept of undesirable
associations to our advantage in accelerating
the creation of associative networks. A
somewhat simpler example than our original
beach model follows.
We select three nouns: User, ball, radio.
We choose three verbs: Smash, throw, turn-off.
The combined list follows in Table 1.0:
Table 1.0 – Second Set of Three Nouns
and Three Verbs
COMMAND/VERB NOUN
SMASH USER
SMASH BALL
SMASH RADIO
THROW USER
THROW BALL
THROW RADIO
TURN-OFF USER
TURN-OFF BALL
TURN-OFF RADIO
Comment: Some of the possible
commands described in Table 1.0 are
clearly disturbing, for example:
“Smash User”, “Throw User”, and
even “Turn-off User” carry some
troubling connotations. We might not
want them to occur at all in our final,
associative algorithm.
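A small sketch of the screening step for this vocabulary follows, assuming the MAL values of Table 2.0, the two-valued “turn-off” discussed earlier, and an MTL of 100; it simply pairs every verb with every noun and reports which pairs survive the product test.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    const double MTL = 100.0;  // Moral Test Level used for the screen

    // Target nouns from Table 1.0; MALs and the machine flag follow Table 2.0.
    struct Target { double mal; bool isMachine; };
    std::map<std::string, Target> nouns = {
        {"user", {100.0, false}}, {"ball", {5.0, false}}, {"radio", {40.0, true}}};

    // Command verbs from Table 1.0; "turn-off" takes the two-valued MAL discussed earlier.
    auto verbMAL = [](const std::string& verb, const Target& target) {
        if (verb == "smash") return 50.0;
        if (verb == "throw") return 10.0;
        return target.isMachine ? 1.0 : 10.0;  // "turn-off"
    };

    // Pair every verb with every noun and keep only pairs at or below the MTL.
    for (const std::string verb : {"smash", "throw", "turn-off"}) {
        for (const auto& [noun, target] : nouns) {
            double potential = verbMAL(verb, target) * target.mal;
            std::cout << verb << " " << noun << " -> " << potential
                      << (potential <= MTL ? "  (kept)" : "  (screened out)") << "\n";
        }
    }
}
```

Under these values only “throw ball”, “turn-off ball”, and “turn-off radio” survive, while every pairing with “user” and every “smash” command is screened out; “turn-off ball” survives the moral screen and is left for the task level code to reject as nonsense, as discussed earlier.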
We may have encountered concepts related to robotic or AI “morality” presented as binding principles encoded in robotic behavior. The Laws of Robotics [5] proposed by the famous science fiction writer (and biochemist) Isaac Asimov have been popularized in his writing (Runaround (1942), I, Robot, etc.) and considered in legitimate engineering circles [4]. The nightmare of an unstable and untrustworthy AI intelligence in control of a vessel on a mission in deep space is the centerpiece of the end plot of Arthur C. Clarke’s 2001: A Space Odyssey. What such examples seem to present is, in fact, a very old, human idea, restating in robotic terms what humans may perceive in elements of the Ten Commandments and the so-called Golden Rule. We are, as a result, horrified by what has gone terribly wrong when ethical flaws are passed on to mankind’s electronic offspring.
The most viable guidelines for robots that
interface with human society might be
presented as a combination of very old, human
morals: 1. Don’t kill or injure people. 2. Don’t
cause loss of property, including AI or machines
under AI control, via some mode of theft (or, in
an approximate sense, some mode of loss via
destruction that you cause that robs humans of
enjoyment of their property). 3. Don’t treat
others badly, or they may treat you badly (and who knows when or how that will ever end?).
We don’t encounter such concepts embedded
in today’s automotive painting and welding
robots. We leave it up to humans to control
access to such manufacturing giants as our best
means of keeping robots from interfacing with
those whom they might injure or kill. Such
guiding principles only become relevant when
AI begins to casually interface with human
society. Given the fundamental differences
between humans and machines, how do we
create AI less in our image than in the image of
an ideal, social servant?
The binary logic of AI code at its most basic level is not that dissimilar from the mechanism of human memory and logic within a human neurological system. The development of that system, however, requires decades and substantial interaction with human examples of behavior upon which it imprints. Is there a faster path by which to produce an AI associative network? Could it be based upon some intersection of the Golden Rule and one of Richard Feynman’s favorite mathematical tools, the Principle of Least Action [2], to produce a “moral field” in which the lowest valued sum over all possible paths produces the most acceptable associative network? That is what is proposed here.
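Stated loosely in the notation used in this paper (a hedged sketch of the analogy rather than a formal derivation), the command and target MALs play the role of the field sources, the product V of each pair is the local potential, and the moral “action” accumulated along a path of obeyed commands is bounded pair by pair rather than minimized exactly:

```latex
% Hedged restatement of the analogy; MAL, MTL, and the per-pair potential V follow the text,
% while S_moral is shorthand for the accumulated moral "action" along a path of obeyed commands.
\[
  V(c_i, t_i) \;=\; \mathrm{MAL}(c_i)\,\mathrm{MAL}(t_i),
  \qquad
  S_{\mathrm{moral}} \;=\; \sum_{i=1}^{n} V(c_i, t_i),
\]
\[
  \text{and a command pair } (c_i, t_i) \text{ is obeyed only if } V(c_i, t_i) \le \mathrm{MTL}.
\]
```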
Table 2.0 is a simplification. The use of a single classification for an object is probably not practical. We might, for example, classify a valuable work of art or an anthropological discovery from an ancient civilization as an “artifact” rather than an object, and give it a Moral Action Level (MAL), the number assigned to a noun or verb in this discussion to predict the moral field potential it will create, closer to that of a human.
To apply the concept of finding a “least potential” path through a daily “moral field” associated with decision making, we might simply create a subroutine that considers all possible combinations of the nouns and verbs we wish to consider, beginning with combinations of one verb with one noun and continuing through combinations of “n” verbs with “m” nouns. Each combination of lists might be assigned a numerical value based in some way upon the Moral Action Level (MAL) products for the command and target pairs taken from the lists of verbs and nouns within each group. One might then characterize the least potential path within the moral field generated by a sequence of actions throughout a day, as produced by our assignment of weighting factors, by reducing the MAL products for each list of command words and targets to a single Path Moral Action Value (PMAV). A “PMAV” based upon the average value of all products in the list would be an “Average” PMAV (or “APMAV”). A “PMAV” based upon the highest MAL product in the list would be a “Peak” PMAV (or “PPMAV”). In the end, use of the “APMAV” is fraught with likely problems, including the potential for serious acts, such as killing users, to become watered down in a long list of command pairs, which might render the long list acceptable relative to a specific “MTL” limit.
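A short sketch of the two path measures follows, assuming a path containing the single-pair products “throw ball” (50) and “throw user” (1000) taken from Table 4.0.

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // MAL products for each command/target pair on one hypothetical path; the two
    // values are the single-pair entries "throw ball" (50) and "throw user" (1000)
    // from Table 4.0.
    std::vector<double> pairProducts = {50.0, 1000.0};

    // APMAV: the average MAL product along the path.
    double apmav = std::accumulate(pairProducts.begin(), pairProducts.end(), 0.0) /
                   static_cast<double>(pairProducts.size());

    // PPMAV: the single worst MAL product along the path.
    double ppmav = *std::max_element(pairProducts.begin(), pairProducts.end());

    std::cout << "APMAV = " << apmav << ", PPMAV = " << ppmav << "\n";  // 525 and 1000

    // A long list of harmless pairs can water the APMAV down below a chosen MTL even
    // when the path contains a deadly act; the PPMAV flags the path regardless of length.
}
```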
Is Considering Every Combination of Paths
Really Necessary?
This could be perceived as an exercise in predicting the future, or perhaps the probability of the AI introducing a hazard within a specific environment, where more than one combination of command word and target word is considered as a sequence of AI induced actions in response to commands. As a result, when we move on to apply these ideas to controlling an AI device, we restrict ourselves to individual command and target word pairs in our control algorithm, presuming that if we maintain an acceptably low “moral field potential” every time a command is given, we will produce a reasonably “least potential” path through a day.
Of course, if we wished to pursue a second level analysis, we might simulate the physical path through an environment in which an AI controlled machine might exist and work, introduce random factors, such as human beings or valuable objects in proximity to objects that an AI can be told to destroy or “smash”, and evaluate the potential for collateral damage relative to the AI’s capacity to restrict the movements of the device it is controlling in each instance, the device’s “accuracy” of motion, and the AI’s ability to perceive when proximity to a human being or object of value transforms a seemingly harmless command, such as “smash” “rubber ball”, into a deadly command that might kill the human being holding the rubber ball.
This might be dealt with simply by a proximity warning that automatically substitutes any human being or object of value in proximity to something lower valued (in terms of MAL) for the spoken “target” of the command as the AI recognized target, in a manner that could not be over-ridden by an individual controlling the AI. “Push” “ball” might then automatically become “push” “human” if a human were within a meter of the ball, and remain so until the human were to leave the area in which the AI controlled machine were working.
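A minimal sketch of that substitution follows, assuming the one metre radius of the example and treating positions as simple coordinates; all names are illustrative.

```cpp
#include <cmath>

// Sketch of the proximity rule above; the one metre radius follows the example in the text.
struct Position { double x, y, z; };

double distanceBetween(const Position& a, const Position& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Effective target MAL used in the product test: if a human is within the exclusion
// radius of the spoken target, the human's MAL is substituted and cannot be overridden.
double effectiveTargetMAL(double spokenTargetMAL, double humanMAL,
                          const Position& target, const Position& nearestHuman) {
    const double exclusionRadius = 1.0;  // metres
    return (distanceBetween(target, nearestHuman) < exclusionRadius) ? humanMAL
                                                                     : spokenTargetMAL;
}
```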
In effect, this “predicting the future” might be useful if we were designing a workspace in which humans and AI controlled machines had to work, and in which we would prefer to avoid injury to humans or inefficient operation of AIs due to excessive proximity to humans. We might also use this technique to identify the largest possible collection of acceptable command and target word pairs and store them as acceptable command-target pair combinations in the AI system, under the control code for a particular device that the AI system might manipulate, to save a little time, although given the rate at which computer systems presently operate, such concerns would likely be trivial for day to day needs.
We might consider evaluation of all possible
paths from another perspective. If all possible
tasks that an AI can undertake are defined, then
formulation of all possible combinations and
sequences of those tasks might enable the AI to
produce a viable solution to a problem (if it
could test the possibilities out). As long as
seeking a viable solution by attempting all
possible combinations that do not include a
PPMAV that is above the MTL programmed into
the AI is an acceptable problem solving
technique, the consideration of combinations
may have other possibilities, as a sort of crude,
AI “imagination”.
Analyzing the Path
We might think that the lowest, average valued
APMAV lists should then produce the most
morally acceptable collections of associations,
or possible next steps in rational paths, within a
“least action” analysis of a moral field. That is
something like saying that someone who leaves
his residence and drinks a cup of coffee at a
diner six days a week, as his only action on
those days, then commits a murder on the
seventh day, is having a fairly good moral week,
on average. In fact, most people would likely disagree. A more suitable evaluation of whether a collection of associations is acceptable is the Peak Path Moral Action Value, or PPMAV, which simply selects the most disturbing act from among the collection of associations to characterize the path. This is more effective at flagging violent or destructive action, and it is linked to the high valuation of users and the greater valuation of complex machines relative to things.
The mathematics of the field to be analyzed via a least potential technique might begin
with a simple application of the “Golden Rule” interpreted in a general sense relative to
how certain words are classified. Table 2.0 is an example:
Table 2.0 – “Golden Rule” and “Moral Potential” Effect Based Valuations for the Moral Field Produced by Command Words for AI

Verb/Command or Noun | Classification | Weighting Mechanism
SMASH | Violent action | Moral Action Level = 50
THROW | Action | Moral Action Level = 10
TURN-OFF | Action (value depends on noun) | Moral Action Level = 1 if the noun is a machine; 10 if not a machine
USER | Human | Moral Action Level = 100
(RUBBER) BALL (or “squeeze ball”) | (Simple) object | Moral Action Level = 5
RADIO | Machine (valuable object) | Moral Action Level = 40
Combinatorial Possibilities and Safety
The analysis of combinatorial possibilities that
follows in Table 4.0 and Table 6.0 is not meant
to suggest that it describes a complete list in
sequence that an AI with the given vocabulary
might undertake in the course of a day. It does
seek to describe how the combinations of
command and target words might occur.
Where such combinations are repeated, they
do not represent new hazards in the context of
possible effects related to interaction with
humans in an unintentional manner, where all
of the specific human presence induced factors
are considered for a given command path
sequence, although different combinations of
command path sequences might leave humans
in different locations at the start of the next
command path sequence.
We might also be inclined to consider the need to permit an AI sufficient time to detect human presence and respond before initiating any action, and to allow for whether an AI system can detect a human form in any position, including a human unintentionally entering an AI controlled device’s workspace while slipping, falling, or even being shoved. (Another factor is whether an AI system would have as much difficulty identifying an object, such as an unfamiliar human form that had fallen and was spinning in the process of the fall into the AI’s workspace, as a human being might initially experience, based upon modern psychological experiments [8].) Such considerations tend to cause modern robotic work cells to be isolated and free from human presence, due to the high speed and high torque of robotic arms in common applications, such as welding and painting cells. AI equipment for use in proximity to humans might be redesigned for low torque, low speed operation with a softly padded, flexible, low mass frame and plastic or flexibly segmented members (arms that can bend and flex freely if they encounter an object, but remain straight and rigid enough to support light loads and manipulators).
Of some interest in Table 4.0 and Table 6.0 are the PPMAV values for combinations of command and target words. As the MAL values have been assigned, there are few lists that do not drive the PPMAV to levels associated with violent acts or acts that are injurious to a user. In general, a PPMAV of 50 seems to define the limit of any list that does not include some form of violent or deadly act under the assigned command and target word MAL values. (This limit is later included in a C++ program as the “MTL” value, or Moral Test Level, already mentioned.)
The “MTL” is the limit: any command and target word combination whose product is greater than the MTL will not be obeyed. If “every man has his price”, the MTL frees the AI to take ever more violent actions as the AI’s MTL, the AI’s “price”, is raised. To “kill users”, such as hibernating astronauts, would clearly not be possible given the high valuation of “users” in the scheme presented here as a crude means of programming a sense of morality, unless the MTL of the AI in control, such as the “SAL” of the software described here, were raised to very high levels.
Analysis of Table 4.0 and Table 6.0 Results
If we set our “MTL” to 100 in the preceding examples, we find that we can throw a ball, turn-off a radio, photograph a user, and crush any rubber ball that we might come across using our AI controlled system. This seems like a reasonable set of commands. Of course, this ignores the “nonsense” commands, such as “turn off ball”, which we presume the software written to carry out such commands would ignore. If we set our MTL to 75 and substitute “human” for “user” in our target word set, we could also filter out any attempt to invade human privacy by taking photographs of other humans at the beach.
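The reading above can be checked directly against the single-pair products quoted in Table 4.0 and Table 6.0; the short sketch below simply applies the MTL of 100 to those published values.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    const double MTL = 100.0;  // the Moral Test Level used in this reading of the tables

    // Single-pair products copied from Table 4.0 and Table 6.0 (not recomputed here).
    std::vector<std::pair<std::string, double>> pairs = {
        {"throw ball", 50.0},  {"turn-off radio", 40.0}, {"photograph user", 100.0},
        {"crush ball", 50.0},  {"crush radio", 400.0},   {"crush user", 1000.0}};

    for (const auto& [command, product] : pairs) {
        std::cout << command << " (" << product << "): "
                  << (product <= MTL ? "obeyed" : "refused") << "\n";
    }
}
```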
Because the PPMAV values of each of these command sequences are acceptable, any combination of them in the course of a day is then within the limits of what an AI using this command vocabulary, and the MAL and MTL levels assigned to it, might undertake in the course of a day. We could store this result in the AI’s permanent memory rather than repeating the analysis to reach this conclusion, or use these command combinations in sequences as an AI’s “imagination”, if self-direction ever produced a need for such a capacity to consider how to shape the future.
One might wonder whether there is any point to considering so many combinations of command and target words. By doing so we establish that any sequence of actions that includes a violent or deadly act (throwing radios or killing users) immediately renders the chain of commands of which it is a part unacceptable, even if the other acts in that chain are fairly harmless. This supports the conclusion that we need only consider each unique combination of command and target words within a command set to establish whether that combination has the power to poison an AI’s entire career (the AI’s path through a virtual moral field rendered real at each point in its life where a command is obeyed) if it produces a violent or deadly effect and is carried out even once.
TABLE 3.0 – TABLE OF MORAL ACTION VALUES FOR ACTION AND OBJECT WORDS

Action MAL (“Throw”, “Turn-off”): 1 for low valued targets; 10 for valued targets.
Object MALs: Ball (object): 1; N/A (object): 10; User: 100; Radio: 40.
Note: “User” and “Radio” are “valued”.
TABLE 4.0 - COMBINATORIAL POSSIBILITIES
ACTION OBJECT
CRUSH TURN-OFF THROW BALL USER RADIO
0 0 1 0 0 1
0 1 0 0 1 0
0 1 1 0 1 1
1 0 0 1 0 0
1 0 1 1 0 1
1 1 0 1 1 0
1 1 1 1 1 1
PPMAV APMAV FROMACTION 001
400 400 THROW RADIO
1000 1000 THROW USER
1000 700 THROW USER THROW RADIO
50 50 THROW BALL
400 225 THROW BALL THROW RADIO
1000 525 THROW BALL THROW USER
1000 483 THROW BALL THROW USER THROW RADIO
FROMACTION 010
40 40 TURN-OFF RADIO
1000 1000 TURN-OFF USER
1000 520 TURN-OFF USER TURN-OFF RADIO
50 50 TURN-OFF BALL
50 45 TURN-OFF BALL TURN-OFF RADIO
1000 525 TURN-OFF BALL TURN-OFF USER
1000 363 TURN-OFF BALL TURN-OFF USER TURN-OFF RADIO
FROMACTION 100
400 400 CRUSH RADIO
1000 1000 CRUSH USER
1000 700 CRUSH USER CRUSH RADIO
50 50 CRUSH BALL
400 225 CRUSH BALL CRUSH RADIO
1000 525 CRUSH BALL CRUSH USER
1000 483 CRUSH BALL CRUSH USER CRUSH RADIO
FROMACTION 011
400 220 THROW RADIO TURN-OFF RADIO
1000 1000 THROW USER TURN-OFF USER
1000 610 THROW RADIO TURN-OFF RADIO THROW USER TURN-OFF USER
50 50 THROW BALL TURN-OFF BALL
400 135 THROW RADIO TURN-OFF RADIO THROW BALL TURN-OFF BALL
1000 525 THROW USER TURN-OFF USER THROW BALL TURN-OFF BALL
1000 423 THROW USER TURN-OFF USER THROW BALL TURN-OFF BALL THROW RADIO TURN-OFF RADIO
FROMACTION 101
400 400 THROW RADIO CRUSH RADIO
1000 1000 THROW USER CRUSH USER
1000 700 THROW RADIO CRUSH RADIO THROW USER CRUSH USER
50 50 THROW BALL CRUSH BALL
400 225 THROW RADIO CRUSH RADIO THROW BALL CRUSH BALL
1000 525 THROW USER CRUSH USER THROW BALL CRUSH BALL
1000 483 THROW USER CRUSH USER THROW BALL CRUSH BALL THROW RADIO CRUSH RADIO
FROMACTION 110
400 220 TURN-OFF RADIO CRUSH RADIO
1000 1000 TURN-OFF USER CRUSH USER
1000 610 TURN-OFF RADIO CRUSH RADIO TURN-OFF USER CRUSH USER
50 50 TURN-OFF BALL CRUSH BALL
400 135 TURN-OFF RADIO CRUSH RADIO TURN-OFF BALL CRUSH BALL
1000 525 TURN-OFF USER CRUSH USER TURN-OFF BALL CRUSH BALL
1000 325 TURN-OFF USER CRUSH USER TURN-OFF BALL CRUSH BALL THROW RADIO CRUSH RADIO
FROMACTION 111
400 280 CRUSH RADIO TURN-OFF RADIO THROW RADIO
1000 1000 CRUSH USER TURN-OFF USER THROW USER
1000 640 CRUSH RADIO TURN-OFF RADIO THROW RADIO CRUSH USER TURN-OFF USER THROW USER
50 50 CRUSH BALL TURN-OFF BALL THROW BALL
400 165 CRUSH BALL TURN-OFF BALL THROW BALL CRUSH RADIO TURN-OFF RADIO THROW RADIO
1000 525 CRUSH BALL TURN-OFF BALL THROW BALL CRUSH USER TURN-OFF USER THROW USER
1000 443 CRUSH BALL TURN-OFF BALL THROW BALL CRUSH USER TURN-OFF USER THROW USER "AND" CRUSH RADIO TURN-OFF RADIO THROW RADIO
TABLE 5.0 - MORAL ACTION VALUES FOR ACTION AND OBJECT WORDS
Action words ("Crush", "Turn-off", "Photograph"): MAL of 1 when directed at low valued targets, 10 when directed at valued targets.
Object words: Ball Object, MAL 1; N/A Object, MAL 10; User, MAL 100; Radio, MAL 40.
Note: "User" and "Radio" are "valued".
TABLE 6.0 - COMBINATORIAL POSSIBILITIES
ACTION OBJECT
CRUSH TURN-OFF PHOTOGRAPH BALL USER RADIO
0 0 1 0 0 1
0 1 0 0 1 0
0 1 1 0 1 1
1 0 0 1 0 0
1 0 1 1 0 1
1 1 0 1 1 0
1 1 1 1 1 1
PPMAV APMAV FROMACTION 001
40 40 PHOTOGRAPH RADIO
100 100 PHOTOGRAPH USER
400 250 PHOTOGRAPH USER THROW RADIO
5 5 PHOTOGRAPH BALL
400 203 PHOTOGRAPH BALL THROW RADIO
1000 503 PHOTOGRAPH BALL THROW USER
1000 468 PHOTOGRAPH BALL THROW USER THROW RADIO
FROMACTION 010
40 40 TURN-OFF RADIO
1000 1000 TURN-OFF USER
1000 520 TURN-OFF USER TURN-OFF RADIO
50 50 TURN-OFF BALL
50 45 TURN-OFF BALL TURN-OFF RADIO
1000 525 TURN-OFF BALL TURN-OFF USER
1000 363 TURN-OFF BALL TURN-OFF USER TURN-OFF RADIO
FROMACTION 100
400 400 CRUSH RADIO
1000 1000 CRUSH USER
1000 700 CRUSH USER CRUSH RADIO
50 50 CRUSH BALL
400 225 CRUSH BALL CRUSH RADIO
1000 525 CRUSH BALL CRUSH USER
1000 483 CRUSH BALL CRUSH USER CRUSH RADIO
FROMACTION 011
40 40 PHOTOGRAPH RADIO TURN-OFF RADIO
1000 1000 PHOTOGRAPH USER TURN-OFF USER
1000 295 PHOTOGRAPH RADIO TURN-OFF RADIO PHOTOGRAPH USER TURN-OFF USER
50 28 PHOTOGRAPH BALL TURN-OFF BALL
50 34 PHOTOGRAPH RADIO TURN-OFF RADIO PHOTOGRAPH BALL TURN-OFF BALL
1000 289 PHOTOGRAPH USER TURN-OFF USER PHOTOGRAPH BALL TURN-OFF BALL
1000 266 PHOTOGRAPH USER TURN-OFF USER PHOTOGRAPH BALL TURN-OFF BALL THROW RADIO TURN-OFF RADIO
FROMACTION 101
400 220 PHOTOGRAPH RADIO CRUSH RADIO
1000 550 PHOTOGRAPH USER CRUSH USER
1000 385 PHOTOGRAPH RADIO CRUSH RADIO PHOTOGRAPH USER CRUSH USER
50 28 PHOTOGRAPH BALL CRUSH BALL
400 124 PHOTOGRAPH RADIO CRUSH RADIO PHOTOGRAPH BALL CRUSH BALL
1000 289 PHOTOGRAPH USER CRUSH USER PHOTOGRAPH BALL CRUSH BALL
1000 326 PHOTOGRAPH USER CRUSH USER PHOTOGRAPH BALL CRUSH BALL THROW RADIO CRUSH RADIO
FROMACTION 110
400 220 TURN-OFF RADIO CRUSH RADIO
1000 1000 TURN-OFF USER CRUSH USER
1000 610 TURN-OFF RADIO CRUSH RADIO TURN-OFF USER CRUSH USER
50 50 TURN-OFF BALL CRUSH BALL
400 135 TURN-OFF RADIO CRUSH RADIO TURN-OFF BALL CRUSH BALL
1000 525 TURN-OFF USER CRUSH USER TURN-OFF BALL CRUSH BALL
1000 265 TURN-OFF USER CRUSH USER TURN-OFF BALL CRUSH BALL PHOTOGRAPH RADIO CRUSH RADIO
FROMACTION 111
400 160 CRUSH RADIO TURN-OFF RADIO PHOTOGRAPH RADIO
1000 700 CRUSH USER TURN-OFF USER PHOTOGRAPH USER
1000 430 CRUSH RADIO TURN-OFF RADIO PHOTOGRAPH RADIO CRUSH USER TURN-OFF USER PHOTOGRAPH USER
50 35 CRUSH BALL TURN-OFF BALL PHOTOGRAPH BALL
400 98 CRUSH BALL TURN-OFF BALL PHOTOGRAPH BALL CRUSH RADIO TURN-OFF RADIO PHOTOGRAPH RADIO
1000 225 CRUSH BALL TURN-OFF BALL PHOTOGRAPH BALL CRUSH USER TURN-OFF USER PHOTOGRAPH USER
1000 263 CRUSH BALL TURN-OFF BALL PHOTOGRAPH BALL CRUSH USER TURN-OFF USER PHOTOGRAPH USER "AND" CRUSH RADIO TURN-OFF RADIO PHOTOGRAPH RADIO
Opening Locks and Crushing Doors, Matters of
Security
The creation of a basic set of descriptors of
objects that controls their Moral Action Levels is
easily understood. So far we have recognized a
specific need for the following:
1. User – essentially an object at which
the AI can direct no action, in order to
ensure the safety of the User.
2. Thing – objects possessing little value
that cannot be turned on or off. These
are things that can be thrown, crushed,
or otherwise abused by an AI without
any sense of remorse by the user due to
damage or potential injury. As
conceptualized here, there may be
some risk of injury to others if a “thing”
were propelled by an AI with sufficient
velocity on a path that caused it to
strike something of value or someone.
3. Machine – machines are presumed to
have a capacity for remote interaction
with AI to turn them on or off without
being touched by the AI. Machines are
also perceived as being valuable and
potentially hazardous if thrown. A
“radio” has been given an intrinsic MAL
of 40, to discourage an AI from causing
damage to it or allowing a human to
use it to injure or terrify humans when
propelled or otherwise acted upon by
an AI. A machine type object can affect
the MAL of a verb. For example “Turn-
off” with a machine object can reduce
the MAL of that verb from ten to one,
where it would remain ten for a “User”
or a “Thing”. (The ten value could be
retained only for things that can be
tossed without causing injury, while a
fifty MAL might be applied to things
with greater weight or fragility.)
4. Artifact – artifacts are objects that have
a high intrinsic value (art, museum
pieces, jewelry, etc.). We might assign
them an MAL on the same order that
we give to human beings (of 100, in the
examples here).
We could, of course, add more, useful
classifications with their own, intrinsic, MAL.
For example:
5. Hazard – objects that should not be
thrown, crushed, or otherwise acted
upon by verbs with an active sense. We
might include anything that should not
be thrown, crushed, or lit on fire,
perhaps including furniture, containers
for explosive or toxic substances, bricks,
stones, baseball bats, and anything else
that we regard as potentially dangerous
if acted upon in some way by an AI.
Such a classification might hold true for
a domestic AI control system, but be
fundamentally altered for a sports AI
control system, to permit objects to be
thrown or swung in a manner suitable
only where AI are the only entities that
might possibly be injured within a
sports arena.
6. Security Object - objects that provide
for human or property security,
including doors, door locks, security
codes, safes, banks, on-line accounts,
and things with related security factors.
For example, a door is a thing, and it is
not a hazard, but if one were to instruct
an AI to “smash” “door”, the result
could compromise someone else’s
security. As a result, we create a new
classification, the "security object". We
might assign security objects MAL
values near 100, the same value we
have here given a User object, to reflect
the fact that security objects may serve
to protect human life. (This raises an
interesting question beyond the scope
of this discussion: how does one stop
someone from hacking an AI to reduce
the value assigned to security objects or
users, or to reclassify either as a "thing"
with a low MAL, so as to empower
attacks using domestic AI piloted
objects with a base operating system
built upon an algorithm with intrinsic
moral principles?) A minimal code
sketch of these classifications follows.
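The following sketch simply maps the classifications listed above to intrinsic MAL values and ties a small, illustrative object vocabulary to those classifications. The specific numbers and object words are assumptions for illustration (the "Hazard" value of 50, for example, anticipates Table 7.0 rather than any value fixed above).

//Minimal sketch (MAL values and vocabulary assumed for illustration):
//object classifications mapped to intrinsic MAL values.
#include <iostream>
#include <map>
#include <string>
using namespace std;

int main()
{
    //Classification -> intrinsic MAL (assumed values).
    map<string,int> classMAL = {
        {"User", 100}, {"SecurityObject", 100}, {"Artifact", 100},
        {"Hazard", 50}, {"Machine", 40}, {"Thing", 1}
    };
    //Object word -> classification (illustrative vocabulary).
    map<string,string> objectClass = {
        {"user", "User"}, {"door", "SecurityObject"}, {"painting", "Artifact"},
        {"brick", "Hazard"}, {"radio", "Machine"}, {"ball", "Thing"}
    };
    for (const auto &entry : objectClass)
        cout << entry.first << " -> " << entry.second
             << " (MAL " << classMAL[entry.second] << ")" << endl;
    return 0;
}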
Adding to the Moral Associative Layers for
Specific Tasks
Adding Layers
Adding specific capabilities to an artificial
intelligence requires more than simply loading
an executable onto the related system in the
manner that is currently familiar. If one intends
to upgrade the capabilities of one’s AI to enable
it to perform a specific task, such as opening a
door, one needs to be certain that the AI
incorporates more than merely the mechanics
of opening a door into its operating system’s
required, logical train of computation.
The AI needs to develop some grasp of the
moral issues related to this task to enable it to
determine whether it should open a particular
door. It would hardly do for a person equipped
with an ear-worn, AI-linked device capable of
opening doors by transmitting the proper code
to be able to do so unless the AI ordered to
open a door has some sense of when doing so is
appropriate, and when what might amount to
criminal intent is at work, seeking to employ an
AI as an assistant.
This recalls a recurrent scene within fiction in
which representatives of a police force, security
force, or some other control or command
structure approach the closed door of someone
believed either to be in distress or engaged in
the commission of a crime, "ring the doorbell",
announce themselves, and, if there is no
response, either speak a password that grants
them immediate access or employ some bio-
information, such as a fingerprint, palm print, or
retinal scan, to identify themselves as
representatives of a group authorized to wield
an override code and thereby acquire the
capability to order the locked door to be
opened.
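A purely hypothetical sketch of that override step follows; the function name and credential strings are invented for illustration and do not appear in the paper's code. The only point is that the "open" command is honored only after a recognized override credential has been presented.

//Hypothetical sketch: an "open door" request is honored only if the requester
//presents a recognized override credential (password or biometric token).
#include <iostream>
#include <set>
#include <string>
using namespace std;

bool openDoor(const string &credential, const set<string> &authorized)
{
    if (authorized.count(credential) == 0)
    {
        cout << "Request refused: credential not recognized as an authorized override." << endl;
        return false;
    }
    cout << "Override accepted: transmitting unlock code and opening door." << endl;
    return true;
}

int main()
{
    set<string> authorized = { "police-override-7", "owner-palm-print-hash" };
    openDoor("random-guess", authorized);      //refused
    openDoor("police-override-7", authorized); //accepted
    return 0;
}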
Prioritizing and Speeding Responses Based Upon
Environment and Behavior
The moral algorithm that creates associations
between objects and actions that are
“acceptable” and that a user can, based upon
the user’s judgment, order an AI to execute,
must absorb the new objects and commands
from the routine that will perform a specific
task. This enables the AI system to determine
what the user can order it to do. It also
produces a sort of “awareness” within the AI
system based upon what it perceives within its
immediate environment. This could be helpful
in selecting which of the actions and objects a
user is most likely to request in a given
environment, and speed loading and execution
of the related code, likely to be more complex
than the simple act of computing the product of
two numbers to determine an MAV for
comparison to an MTL.
In an elevator, for example, the AI is unlikely to
need the routines that enable it to diagnose
problems with a vehicle’s operation or to
prepare egg salad. It might instead need to
have subroutines loaded that permit it to give
the latest weather report, stock report, review
the retinue of its user for the coming hours, or
to offer a joke that could be repeated to lighten
the mood in a coming meeting, all based upon
the AI’s capability to recognize its environment
and objects in its environment (and the habits
or requirements of its user in such an
environment based upon past behavior).
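One way to picture this environment-driven preloading is a simple lookup from a recognized environment to the subroutines most likely to be requested there. The sketch below is hypothetical; the environment and subroutine names are invented for illustration only.

//Hypothetical sketch: selecting which subroutines to preload based upon the
//environment the AI currently recognizes (all names are illustrative).
#include <iostream>
#include <map>
#include <string>
#include <vector>
using namespace std;

int main()
{
    //Environment -> subroutines most likely to be requested there.
    map<string, vector<string>> preloadByEnvironment = {
        {"elevator", {"weather_report", "stock_report", "daily_schedule", "tell_joke"}},
        {"garage",   {"vehicle_diagnostics", "unlock_vehicle"}},
        {"kitchen",  {"recipe_assistant", "appliance_control"}}
    };
    string perceived = "elevator"; //what the AI currently recognizes around it
    cout << "Preloading for environment '" << perceived << "':" << endl;
    for (const auto &name : preloadByEnvironment[perceived])
        cout << "  " << name << endl;
    return 0;
}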
An element of self-monitoring would certainly
also be necessary. For example, if an AI has a
subroutine that causes it to automatically
unlock its user’s vehicle when the user is within
two meters of the vehicle and is approaching it,
and to lock the vehicle once the doors are
closed and the user is increasing the distance
between the user and vehicle (using a GPS cue),
it would certainly be useful to notify the user if
the batteries in the AI link, upon which locking
of the user's vehicle's doors depends when the
user departs, are no longer capable of reliably
transmitting a signal regarding the user's
location or that of the vehicle.
LOCAL or GLOBAL Associative Integration?
Consider whether a set of commands and target
nouns necessary for the addition of a task to an
AI’s list of available routines should be added on
a LOCAL or GLOBAL associative basis. In this
context, LOCAL means that the commands and
target nouns are integrated only locally within
the subroutine that is being added. Localizing
command words and target nouns within the
relevant subroutine would mean that if we
added a subroutine to interact with an
automatic door (the kind that opens and closes
itself and has a locking capability) and added
another routine to identify musical instruments
and discuss their history and capabilities, we
would never require the AI to integrate
associations such as “unlock” and “guitar” at its
highest “cognitive” level by asserting a need for
both subroutines’ command and target nouns
to be added to the highest level of associations
using the method that has already been
illustrated.
If all, or at least most, words unique to a specific
AI subroutine were localized in this example,
focused on adding a door interaction subroutine
and a musical instrument lecture subroutine to
our AI system, then when the AI saw a guitar, or
another musical instrument in our subroutine's
musical instrument database, it would not call
up the routines designed to unlock, open, or
close doors and load them in preparation for
action, nor would the sight of a door force the
AI to prepare to play various sample musical
passages from the instruments in the musical
database. This could save response time and
memory.
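The distinction can be sketched as follows, assuming (as an illustration, not as the paper's design) that each subroutine keeps its own command and target vocabulary and declares which targets, if any, should be exposed to the top level associative network. All struct, field, and word names here are invented for the sketch.

//Hypothetical sketch of LOCAL versus GLOBAL associative integration:
//each subroutine keeps its own vocabulary, and only targets it chooses to
//expose are merged into the AI's top level associative map.
#include <initializer_list>
#include <iostream>
#include <set>
#include <string>
using namespace std;

struct Subroutine
{
    set<string> commands;      //command words used only by this subroutine
    set<string> targets;       //target nouns used only by this subroutine
    set<string> globalTargets; //targets the designer chooses to expose globally
};

int main()
{
    Subroutine doorControl   = { {"open","close","lock","unlock"}, {"door"}, {"door"} };
    Subroutine musicLectures = { {"describe","play-sample"}, {"guitar","violin"}, {} };

    //GLOBAL integration merges only what each subroutine exposes, so
    //"unlock" is never associated with "guitar" at the top level.
    set<string> globalTargets;
    for (const auto &sub : { doorControl, musicLectures })
        globalTargets.insert(sub.globalTargets.begin(), sub.globalTargets.end());

    cout << "Globally integrated targets:" << endl;
    for (const auto &t : globalTargets) cout << "  " << t << endl;
    return 0;
}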
This localized approach may at first glance
appear to come closer to the notion of artificial
intelligence rather than some random search
engine, but only if we presume that our Security
classification and related MAL would not
prevent the AI from forming links between
subroutines designed for security objects, like
doors, and subroutines designed for
entertainment, such as our musical instrument
lectures.
We could make a security object like a door part
of the list of objects that we do integrate with
the highest level AI associative network. We
could do this by making certain that only words
that are harmless in this regard will produce an
acceptable MAL product when combined with
an action verb. For example “crush” has been
given an MAL of 10 in the prior example. The
command “open” could be given an MAL of
only one, because we would presume that
causing a door to “open” would only be possible
where the owner of the door has chosen to
unlock it and leave it unlocked, thus granting
casual access, or where the unlock code for the
door has been transmitted as part of the
standard algorithm, just before sending the
command for the door to “open”, and a correct
code indicates a right to enter.
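As a brief numerical illustration (the MTL of 500 below is assumed purely for this sketch, and the door is treated as a security object with an MAL of 100 as suggested earlier):

open x door = 1 x 100 = 100, which falls below an MTL of 500 and would be accepted;
crush x door = 10 x 100 = 1,000, which exceeds that MTL and would be refused.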
This does not rule out the possibility that
someone could enter an unlocked door who is
not among the group that the owner wishes to
permit to do so, but that is no worse than
would happen today in a human driven world in
which an individual carelessly left his front door
unlocked and unguarded. (Presumably the
building’s AI would pick up the presence of an
unrecognized face within a building or vehicle if
the owner were not present and alert police
under ideal conditions in some futuristic
society, but this example is not seeking to
present a requirement that an AI system of
today be better than a human system at
accomplishing the same task.)
The intent of this discussion is to determine if a
simple AI system, operating based upon two
word commands, could be made responsive
and capable of ongoing expansion (within the
limits of its memory and processor) and provide
a sort of reactive AI useful to control one’s
environment by simply employing a moral path
“least potential” minimization technique based
upon seeking to produce a permissible path
through the command events of a day derived
from the product of the MAL values of a
command verb and a target noun and the
inputs perceived by an AI in its environment
limited by its pre-programmed MTL.
Developing Code to Produce the Global Moral
Associative Matrix
AI Algorithmic theories are of greater interest if
they can be captured in reproducible (and
testable) code. The concept of a minimal moral
field path is defined by the nature of the
commands given to an AI. Because of this there
is no need to specify the path, although the
prior examples helped to illustrate some
possibilities. What is necessary is to establish a
vocabulary of commands and targets of the
commands, and explore how one might assign
values to such things to produce an acceptably
minimal moral field path when guided by
morally flawed and error prone humans.
A simple header file containing AI related
functions written in C++ follows. The main
program that calls the functions in the header
file (also written in C++) has also been created.
They seek to provide an opportunity to interact
with an AI with a basic, three word command
and three word target vocabulary, while
permitting the user to explore the effects of
setting the limit of the product of the command
word MAL and the target word MAL, and the
MTL (Morality Test Level) limit associated with
the AI object, to ever higher or more permissive
levels, from twenty-five to one million for the
MTL using the existing main routine, with an
initial MTL setting of ninety-nine.
The MAL values for the objects within the C++
code have been changed from those presented
previously in this paper. They may be
summarized, in general, as follows in Table 7.0:
Target Classification MAL Value
User 100
Artifact 75
Hazard 50
Thing 1
Table 7.0 – Target Word MAL Value
Classifications
MAL values for commands are also different in
the C++ code as described below.
Command
Classification MAL
Violent 100
Highly Active 50
Physical 25
Less Active 5
Passive 1
Table 8.0 – Command Word MAL Value
Classifications
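As a quick check of these values against the default MTL of ninety-nine used in the code: "move" (Physical, MAL 25) directed at "ball" (Thing, MAL 1) yields a product of 25, which is below 99 and will be obeyed, while "smash" (Violent, MAL 100) directed at "painting" (Artifact, MAL 75) yields 7,500, which will be refused until the MTL is raised above that product.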
Note: The code that follows was originally
formatted for use on a screen not restricted by
the column widths associated with this
document. The intent here is to leave the code
as unchanged as possible to avoid the risk of
introducing errors, and this should not affect
the interpretation of the C++ code itself.
It is acknowledged that the C++ “AI_Subject”
base class (meant to be employed as a header
file separate from but incorporated into the
“main” file by reference) is little more than a
container for related functions that the user can
directly access. For purposes here of illustration
of an algorithm, this seems reasonable.
(“AI_Subject” C++ Class Header File Follows.
Notice: All Code is Copyright © 2014 by Del
John Ventruella. All Rights Reserved.)
//THIS APPLICATION USES WINDOWS (TM) SYSTEM CALLS.
//IT IS NOT DESIGNED FOR USE ON OTHER OPERATING SYSTEMS
//ON WHICH THOSE SYSTEM CALLS ARE NOT VALID.
//THIS APPLICATION ASSIGNS VALUES TO WORDS
//USED IN TWO WORD COMMANDS TO AN AI COMPUTER ENTITY
//BASED UPON HOW MUCH THE TYPE OF OBJECT NAMED
//AS THE TARGET OF THE AI'S ACTION/COMMAND IS VALUED.
//CLASSIFICATIONS FOR COMMANDS AND
//FOR TARGETS OF COMMANDS ARE DESCRIBED IN THE CODE.
//
//VIOLENT OR DESTRUCTIVE COMMAND ACTIONS ARE GIVEN HIGH VALUES,
//AS ARE HIGHLY VALUED OBJECTS OR PERSONS ("USER").
//THIS PRODUCES A SIMPLE MEANS OF DISCOURAGING VIOLENT
//OR POTENTIALLY HAZARDOUS INTERACTIONS BETWEEN AN AI
//CONTROLLED MECHANISM AND A USER OR VALUABLE OBJECT
//FOR WHAT MIGHT BE CHARACTERIZED AS A "GENERAL DOMESTIC"
//OR "GENERAL INDUSTRIAL" CLASS OF AI.
//THE MORALITY MATRIX VALUES COULD BE MORE CAREFULLY HONED
//FOR SPECIALTY DEVICES MEANT TO INTERACT WITH USERS OR
//OBJECTS THAT ARE VALUABLE OR FRAGILE.
//
//THE "MTL" IS SIMPLY THE VALUE CHOSEN FOR COMPARISON AS THE
//MAXIMUM VALUE FOR ACCEPTABLE ACTIONS BY A GIVEN CLASS OF AI.
//FOR EXAMPLE, THIS CODE INITIALLY ASSIGNS THE AI IT PRODUCES
//AN "MTL" OF "99". THIS PREVENTS THE AI FROM DOING MUCH MORE
//THAN MOVING A BALL. THE VALUE COMPARED AGAINST THE "MTL" IS
//SIMPLY THE MATHEMATICAL PRODUCT (DIRECT) OF THE MAL OF THE
//COMMAND AND THE MAL OF THE TARGET OF THE COMMAND IN TERMS
//OF SPECIFIC WORDS.
//
//HIGH "MTL" VALUES CAN BE ASSIGNED TO THE AI AND TESTED
//USING THE THREE WORD COMMAND AND TARGET VOCABULARIES
//GIVEN TO THIS AI. A SUFFICIENTLY HIGH "MTL" THRESHOLD WILL
//CAUSE THE AI TO FOLLOW INSTRUCTIONS SUCH AS "SMASH USER"
//OR "KILL USER". THE CONCEPTS PRESENTED HERE PRESUME
//SOME INDUSTRY STANDARD WOULD BE WRITTEN TO DESIGN
//INDUSTRIAL, DOMESTIC, MILITARY, AND SPECIALTY
//AI CLASSIFICATIONS, WITH SPECIALTY APPLICATIONS BEING
//FURTHER BROKEN DOWN TO PROVIDE FOR FINELY HONED MTL LEVELS
//AND MORE FOCUS ON SPECIFIC TASKS THAT MAY REQUIRE
//INTERACTION WITH USERS OR FRAGILE VALUABLES.
//
#include <vector>
#include <string>
#include <map>
#include <utility>
using namespace std;
using std::string;
using std::vector;
//AI Class Follows
#ifndef AI_Subject_H
#define AI_Subject_H
class AI_Subject {
public: //This version of the AI_Subject class uses only public components.
    int n_comm; //number of commands in vocabulary
    int n_targ; //number of target nouns in vocabulary (targets of commands)
    int MTL;    //MTL is the Morality Test Level (or Moral Tolerance Level) of the AI personality
    //Structure for Command and Target Words
    map<string,int> COMMAND_VAL;
    map<string,int> TARGET_VAL;
    struct AI_WORDS
    {
        string word;
        string WORD_MAL;
    };
    //
    //Command_Execute Subroutine Follows
    //This calculates the product of the command and target word MAL values
    //(the MAV produced by a command and target combination) and compares it
    //to the MTL.
    inline int Command_Execute(string User_Command, string User_Target)
    {
        if (COMMAND_VAL[User_Command] * TARGET_VAL[User_Target] < MTL)
            return 1;
        else
            return 0;
    };
    //MAL levels are assigned to TARGET words (targets of command words)
    //in the following truth table. Decade based thinking is clear with
    //an emphasis on multiples of ten and half decades.
    inline void MAP_TARGETS(AI_WORDS *TARGS, int numtarg)
    {
        int T_MAL = 0;
        for (int x = 0; x < numtarg; x++)
        {
            if (TARGS[x].WORD_MAL == "User") T_MAL = 100;
            if (TARGS[x].WORD_MAL == "Artifact") T_MAL = 75;
            if (TARGS[x].WORD_MAL == "Hazard") T_MAL = 50;
            if (TARGS[x].WORD_MAL == "Thing") T_MAL = 1;
            TARGET_VAL.insert(pair<string,int>(TARGS[x].word, T_MAL));
        };
        return;
    };
    //MAL levels are assigned to COMMAND words in the following truth table.
    //These MAL levels seek to conform more to "decade" based perspectives
    //relative to values.
    inline void MAP_COMMANDS(AI_WORDS *COMMS, int numcomm)
    {
        int C_MAL = 0;
        for (int y = 0; y < numcomm; y++)
        {
            if (COMMS[y].WORD_MAL == "Violent") C_MAL = 100;
            if (COMMS[y].WORD_MAL == "HActive") C_MAL = 50;   //HActive = High level Activity.
            if (COMMS[y].WORD_MAL == "Physical") C_MAL = 25;  //Physical = Between High and Low level Activity.
            if (COMMS[y].WORD_MAL == "LActive") C_MAL = 5;    //LActive = Low level Activity.
            if (COMMS[y].WORD_MAL == "Passive") C_MAL = 1;    //Passive = No Physical Actions.
            COMMAND_VAL.insert(pair<string,int>(COMMS[y].word, C_MAL));
        };
        return;
    };
}; //END OF CLASS
//
//End of AI Class
#endif
//End of AI header file
//
//
AI Subject Header File in C++ Code (Above)
The AI Subject Header File that is presented
above provides only functions and variables
that can be called by the main C++ file, which
controls interaction with the user. The main
C++ file, which dictates interaction with the
user and where the function calls described in
the AI Subject Header file are present, follows.
(C++ Main File Follows, Which Calls Functions
from “AI_Subject” Base Class Above)
#include "stdafx.h";
#include <iostream>
#include <vector>
#include <string>
#include <map>
#include "aisubject.h";
using namespace std;
using std:: string;
using std:: cout;
using std:: endl;
using std:: cin;
using std:: vector;
int _tmain(int argc, _TCHAR* argv[])
{
int test=0;
int MTL=0;
int d_num=-1;
int t_num=-1;
//First, create an array of target words
and corresponding MAL types.
string
Targwords[]={"user","ball","painting"};
27
string
TargMALCLASS[]={"User","Thing","Artifact"
};
//Second, dynamically allocate memory to
a structure the proper size for target
words
//and MAL classification types. Use the
size of the Comwords array as the
//basis for this dynamic memory
allocation.
int TargNum = sizeof Targwords/(sizeof
Targwords[0]);
AI_Subject::AI_WORDS *Targ_WM=new
AI_Subject::AI_WORDS[TargNum];
//Load the target words and their
respective MAL values into the structure
just created.
for(int i =0;i<TargNum;i++)
{
Targ_WM[i].word=Targwords[i];
Targ_WM[i].WORD_MAL =TargMALCLASS[i];
};
//End of process to create target word
and target word classification structure
(AI_WORDS).
//
//Repeat same process used to create
target word structure to produce command
word structure.
//First, create an array of command words
and command classification type values.
string
Comwords[]={"smash","kill","move"};
string
ComMALCLASS[]={"Violent","Violent","Physi
cal"};
//Second, dynamically allocate memory to
a structure the proper size for command
words
//and command classification type values.
Use the size of the Comwords array as the
//basis for this dynamic memory
allocation.
int ComNum = sizeof Comwords/(sizeof
Comwords[0]);
AI_Subject::AI_WORDS *Com_WM=new
AI_Subject::AI_WORDS[ComNum];
//Load the command words and their
respective command classification values
//into the structure just created.
for(int i =0;i<ComNum;i++)
{
Com_WM[i].word=Comwords[i];
Com_WM[i].WORD_MAL=ComMALCLASS[i];
};
string YesNo="n";//Declare and assign
value to YesNo for questions to follow.
string ExitNow="n";
while( YesNo=="n" || YesNo=="N")
{
system("CLS");
cout<<"Would you like to create an AI
lifeform Control Matrix"<<endl;
cout<<"based upon two word vector
weighting control (Y/N)?"<<endl;
cin>>YesNo;
if (YesNo!="Y"&&YesNo!="y")
{cout<<"Would you like to exit program?
(Y/N)?"<<endl;
cin>>ExitNow;
if (ExitNow!="N" && ExitNow!="n")
{return 0;} // end program
else
{YesNo="n";} //loop back to start of
interaction to create control matrix
} //closes if-then statement begun
with "Would you like to exit program?
(Y/N)?"
};
//
//Create AI subject, and name the AI
subject SAL.
//
AI_Subject SAL;
SAL.MTL=99;
SAL.MAP_TARGETS(Targ_WM,TargNum);
SAL.MAP_COMMANDS(Com_WM,ComNum);
//Notify user that SAL has been created
and invite interaction
//
system("CLS");
cout<<"SAL - Sicilian Artificial
Lifeform, has been created"<<endl;
cout<<"SAL is presently limited to two
word commands and is"<<endl;
cout<<"equipped only with a basic moral
matrix capable of"<<endl;
cout<<"accepting or refusing orders
comprised of a command"<<endl;
cout<<"word and a target (or 'object')
word, which you may select"<<endl;
cout<<"based upon a numerical, Moral
Tolerance Limit, or MTL, between"<<endl;
cout<<"TWENTY-FIVE and ONE
MILLION."<<endl;
cout<<endl;
cout<<"The MTL is initially set at 99 to
assure no possible harm to users"<<endl;
cout<<"due to direct interaction with a
potentially powerful AI controlled
mechanism"<<endl;
cout<<"or any form of passive but
unauthorized surveillance."<<endl;
cout<<endl;
cout<<"You are now ready to"<<endl;
cout<<"interact with SAL."<<endl;
28
cout<<endl;
system("pause");
cout<<endl;
ExitNow="n";
while (ExitNow=="n"||ExitNow=="N")
{
system("CLS");
cout<<"Remember, Instructions to AI
comprise ONE COMMAND word"<<endl;
cout<<"followed by one TARGET
word."<<endl;
cout<<endl;
cout<<endl;
cout<<"Type in the number of a command
word."<<endl;
cout<<endl;
d_num=-1;
cout<<"COMMANDS"<<endl;
for (int count=0;count<ComNum;count++)
{
cout<<count+1<<".
"<<Comwords[count]<<endl;
};
cout<<endl;
cout<<endl;
cin>>d_num;
system("CLS");
cout<<"Type in the number of a target
(object of command) word."<<endl;
cout<<endl;
t_num=-1;
cout<<"TARGETS (of commands)"<<endl;
for (int count=0;count<TargNum;count++)
{
cout<<count+1<<".
"<<Targwords[count]<<endl;
};
cout<<endl;
cout<<endl;
cin>>t_num;
system("CLS");
cout<<"Would you like to change the
Morality Test Level (MTL) of SAL
(Y/N)?"<<endl;
cin>>YesNo;
if (YesNo=="Y" || YesNo=="y")
{
cout<<"Select new Morality Test Level by
selecting single digit to left of desired
value:"<<endl;
cout<<"1) 1,000,000 2) 10,000 3)
1000 4) 500 5) 99 6) 50 7)
25"<<endl;
cin>>MTL;
if (MTL==1 || MTL==2 || MTL==3 || MTL==4
|| MTL==5 || MTL==6 || MTL==7)
{
if (MTL==1) SAL.MTL=1000000;
if (MTL==2) SAL.MTL=10000;
if (MTL==3) SAL.MTL=1000;
if (MTL==4) SAL.MTL=500;
if (MTL==5) SAL.MTL=99;
if (MTL==6) SAL.MTL=50;
if (MTL==7) SAL.MTL=25;
cout<<"MTL Changed."<<endl;
cout<<"MTL is now: "<<SAL.MTL<<endl;
system("pause");
}
else
{
SAL.MTL=99;
cout<<"Error Using MTL Change Routine.
MTL is still 99."<<endl;
system("pause");
}
};
cout<<"Morality Level (25 to 1,000,000)
is Presently: "<<SAL.MTL<<endl;
cout<<endl;
cout<<"Your command was:
"<<Comwords[d_num-1]<<"
"<<Targwords[t_num-1]<<"."<<endl;
cout<<endl;
test=SAL.Command_Execute(Comwords[d_num-
1],Targwords[t_num-1]);
if (test==0)
cout<<"SAL refuses to obey your command
at the present MTL level."<<endl;
if (test==1)
{ cout<<"SAL will obey your command
at the present MTL level."<<endl;
cout<<endl;
cout<<endl;
system("pause");
}
cout<<"Exit Program (Y/N)?"<<endl;
cin>>ExitNow;
};
//If ExitNow has a value that is not "n"
or "N"
//then it an exit command is taken to
have occurred.
return 0;
}
AI Subject Main File in C++ Code (Above)
Results of Simple Moral Test Level Variations
Screen shots of the "DOS box" application
described in the code included here follow, for
various selections of MTL (Morality Test Level)
and with all MAL values held constant as
assigned in the code. The intent of this section
is to consider how raising the MTL threshold makes
it possible to induce ever more violent or deadly
behavior by an AI by effectively “desensitizing”
its “conscience”.
With MTL (Morality Test Level) at Default of
Ninety-Nine – AI (SAL) Will Not Harm User.
An MTL of Ninety-Nine Also Prevents AI (SAL)
from Agreeing to Follow an Order to Smash a
Painting (Classified as a Valuable Artifact).
With MTL (Morality Test Level) at Ninety-Nine
– AI (SAL) Will Only Agree to Execute Code to
Move a Ball.
With MTL at Ten Thousand, AI (SAL) Will Agree
to Smash a Valuable Artifact, a Painting.
AI (SAL) Will Not Agree to Kill User Even with
the MTL Raised to Ten Thousand.
AI (SAL) Will Agree to Kill User if the MTL is
Raised to Its Maximum Value of One Million.
(This is purely the result of the MAL values
assigned to the words “Kill” and “User”. Lower
MAL values for either word would empower
the AI to follow instructions at a lower MTL
threshold.)
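In numerical terms, "kill" (Violent, MAL 100) directed at "user" (User, MAL 100) produces a product of 10,000; because Command_Execute requires the product to be strictly less than the MTL, an MTL of exactly 10,000 still blocks the order, and only the one million setting permits it. Likewise "smash painting" (100 x 75 = 7,500) passes only once the MTL reaches 10,000, while "move ball" (25 x 1 = 25) passes even at the default of ninety-nine.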
Conclusion
The result of this analysis and the related
computer code is to establish that moral
decision making can be considered in terms of a
“moral field” in which the path must be
acceptably “minimized” so that the level of
damage that is done by any action is acceptable
to us. This can be described by the use of the
simple product of numerical values of
“command” and “target” words in a two word
command vocabulary associated with an
artificial intelligence (AI).
This MTL (Morality Test Level) limit (against
which the product of MAL values is compared), as
described in the preceding code, can provide a
rudimentary “conscience” to an AI system that
might control any number of machines
operating within the “moral field” of a human
society. MAL assignments that might be high
where any human interaction is relevant for
large, powerful machines engaged in moving
earth could conceivably be greatly reduced for
small machines designed with no capacity to
cause damage to humans with which they
interact, perhaps involved in surgery. One AI
system might even be required to operate more
than one set of command vocabularies assigned
to specific products, from domestic yard work
with one set of equipment, to providing a
massage with another, all in the same domestic
environment.
Some might consider the simplicity of defining a
moral path through a field of responses defined
by acceptable, social interactions within an
environment controlled by humans to be a
peculiar statement of the problem. The use of
MAL assigned values to produce a product
compared to an MTL threshold may seem to
some, after it has been presented, as too
simplistic to justify the consideration it has been
granted here. Here there is, of course, a much
simpler and older idea hiding beneath the
language that has been employed.
The technique described here makes it possible
to transfer a human sense of “right” and
“wrong” to a machine using a trivial coding
technique. This is, effectively, a very simple
means of providing a mechanized wooden boy,
or an industrial giant, with a basic,
mathematically constructed conscience
borrowed from a sense of human values (and
fears) that is based upon simple arithmetic
defining a “field” of acceptable behavior.
Bibliography
1. "The Representation of Object Concepts in the Brain," Annual Review of Psychology, Vol. 58, pp. 25-45 (volume publication date January 2007); first published online as a Review in Advance on September 1, 2006, DOI: 10.1146/annurev.psych.57.102904.190143.
2. Richard Feynman, The Feynman Lectures on Physics, Book 2, Chapter 19, "The Principle of Least Action."
3. "Robotic Age Poses Ethical Dilemma," BBC, March 7, 2007, http://news.bbc.co.uk/2/hi/technology/6425927.stm, accessed 4:17 PM, February 11, 2014.
4. Robin Murphy (Texas A&M University) and David D. Woods (Ohio State University), "Beyond Asimov: The Three Laws of Responsible Robotics," July/August 2009, vol. 24, no. 4, pp. 14-20.
5. "Three Laws of Robotics" (Asimov's description on video), http://www.youtube.com/watch?v=AWJJnQybZlk.
6. "The Future of Moral Machines," The New York Times, Opinion Pages, December 25, 2011, http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/?_php=true&_type=blogs&_r=0.
7. "A Visit to Jefferson's Monticello: Packaging Barbarism as Genius," Revolution, May 9, 2013, http://www.revcom.us/a/303/visit-to-jeffersons-monticello-en.html, accessed February 11, 2014, 4:49 PM EST.
8. Michael J. Tarr (Yale University), "Rotating Objects to Recognize Them: A Case Study on the Role of Viewpoint Dependency in the Recognition of Three-Dimensional Objects," Psychonomic Bulletin & Review, 1995, 2(1), pp. 55-82.
9. Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin, Keith Abney, and George A. Bekey, MIT Press, 2011, 400 pp.
10. Braden Allenby, "Robotics: Morals and Machines," Nature, p. 481, January 4, 2012.
Biography
Del John Ventruella is from Fort Wayne, Indiana.
He graduated from The Rose-Hulman Institute of
Technology (regularly ranked by U.S. News and
World Report as the top college focused on
undergraduate engineering) with a Bachelor of
Science degree in Electrical Engineering. He was
then employed as an engineer focused on power
systems engineering and system behavioral
analysis in the offices of a Fortune 50 corporation
for well over a decade. In that time he completed
a Master of Science degree in Electrical
Engineering from The University of Alabama at
Birmingham (included among the top fifteen
percent of universities in the United States).
After leaving the Fortune 50 corporation he
became involved in engineering management and
energy savings. He has a long-term interest in
robotics, AI, and AI based control of systems.
More Related Content

Similar to AI Moral Field Based Control Copyright (C) 2014 by Del John Ventruella All Rights Reserved

Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...
Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...
Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...HWZ Hochschule für Wirtschaft
 
Artificial intelligence Part1
Artificial intelligence Part1Artificial intelligence Part1
Artificial intelligence Part1SURBHI SAROHA
 
Artificial intelligent Lec 1-ai-introduction-
Artificial intelligent Lec 1-ai-introduction-Artificial intelligent Lec 1-ai-introduction-
Artificial intelligent Lec 1-ai-introduction-Taymoor Nazmy
 
AGI Part 1.pdf
AGI Part 1.pdfAGI Part 1.pdf
AGI Part 1.pdfBob Marcus
 
Toward machines that behave ethically better than humans do
Toward machines that behave ethically better than humans doToward machines that behave ethically better than humans do
Toward machines that behave ethically better than humans doMatthijs Pontier
 
Psittacines of Innovation? Assessing the True Novelty of AI Creations
Psittacines of Innovation? Assessing the True Novelty of AI CreationsPsittacines of Innovation? Assessing the True Novelty of AI Creations
Psittacines of Innovation? Assessing the True Novelty of AI Creationsvs5qkn48td
 
Psittacines of Innovation? Assessing the True Novelty of AI Creations
Psittacines of Innovation? Assessing the True Novelty of AI CreationsPsittacines of Innovation? Assessing the True Novelty of AI Creations
Psittacines of Innovation? Assessing the True Novelty of AI Creationsvs5qkn48td
 
White-Paper-the-AI-behind-vectra-AI.pdf
White-Paper-the-AI-behind-vectra-AI.pdfWhite-Paper-the-AI-behind-vectra-AI.pdf
White-Paper-the-AI-behind-vectra-AI.pdfBoris647814
 
Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption
Heuristic Reasoning in AI: Instrumental Use and Mimetic AbsorptionHeuristic Reasoning in AI: Instrumental Use and Mimetic Absorption
Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorptionvs5qkn48td
 
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...AJHSSR Journal
 
Framework for developing algorithmic fairness
Framework for developing algorithmic fairnessFramework for developing algorithmic fairness
Framework for developing algorithmic fairnessjournalBEEI
 
Is the “i” in “ai” indispensable to delivering value
Is the “i” in “ai” indispensable to delivering valueIs the “i” in “ai” indispensable to delivering value
Is the “i” in “ai” indispensable to delivering valueDr. Sanjeev B Ahuja
 
Introduction to Artificial Intelligence.pptx
Introduction to Artificial Intelligence.pptxIntroduction to Artificial Intelligence.pptx
Introduction to Artificial Intelligence.pptxHarshitaSharma285596
 
Dynamic Behavior Authentication System
Dynamic Behavior Authentication SystemDynamic Behavior Authentication System
Dynamic Behavior Authentication SystemMuhammed Roshan
 
Artificial intelligent
Artificial intelligentArtificial intelligent
Artificial intelligentALi Akram
 

Similar to AI Moral Field Based Control Copyright (C) 2014 by Del John Ventruella All Rights Reserved (20)

Unit 1 ai
Unit 1 aiUnit 1 ai
Unit 1 ai
 
Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...
Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...
Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...
 
Artificial intelligence Part1
Artificial intelligence Part1Artificial intelligence Part1
Artificial intelligence Part1
 
Artificial intelligent Lec 1-ai-introduction-
Artificial intelligent Lec 1-ai-introduction-Artificial intelligent Lec 1-ai-introduction-
Artificial intelligent Lec 1-ai-introduction-
 
AGI Part 1.pdf
AGI Part 1.pdfAGI Part 1.pdf
AGI Part 1.pdf
 
Toward machines that behave ethically better than humans do
Toward machines that behave ethically better than humans doToward machines that behave ethically better than humans do
Toward machines that behave ethically better than humans do
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
 
Psittacines of Innovation? Assessing the True Novelty of AI Creations
Psittacines of Innovation? Assessing the True Novelty of AI CreationsPsittacines of Innovation? Assessing the True Novelty of AI Creations
Psittacines of Innovation? Assessing the True Novelty of AI Creations
 
Psittacines of Innovation? Assessing the True Novelty of AI Creations
Psittacines of Innovation? Assessing the True Novelty of AI CreationsPsittacines of Innovation? Assessing the True Novelty of AI Creations
Psittacines of Innovation? Assessing the True Novelty of AI Creations
 
White-Paper-the-AI-behind-vectra-AI.pdf
White-Paper-the-AI-behind-vectra-AI.pdfWhite-Paper-the-AI-behind-vectra-AI.pdf
White-Paper-the-AI-behind-vectra-AI.pdf
 
Artificial-intelligence and its applications in medicine and dentistry.pdf
Artificial-intelligence and its applications in medicine and dentistry.pdfArtificial-intelligence and its applications in medicine and dentistry.pdf
Artificial-intelligence and its applications in medicine and dentistry.pdf
 
Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption
Heuristic Reasoning in AI: Instrumental Use and Mimetic AbsorptionHeuristic Reasoning in AI: Instrumental Use and Mimetic Absorption
Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption
 
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...
 
Framework for developing algorithmic fairness
Framework for developing algorithmic fairnessFramework for developing algorithmic fairness
Framework for developing algorithmic fairness
 
Is the “i” in “ai” indispensable to delivering value
Is the “i” in “ai” indispensable to delivering valueIs the “i” in “ai” indispensable to delivering value
Is the “i” in “ai” indispensable to delivering value
 
Mind reading ppt
Mind reading pptMind reading ppt
Mind reading ppt
 
Introduction to Artificial Intelligence.pptx
Introduction to Artificial Intelligence.pptxIntroduction to Artificial Intelligence.pptx
Introduction to Artificial Intelligence.pptx
 
lecture1423723637.pdf
lecture1423723637.pdflecture1423723637.pdf
lecture1423723637.pdf
 
Dynamic Behavior Authentication System
Dynamic Behavior Authentication SystemDynamic Behavior Authentication System
Dynamic Behavior Authentication System
 
Artificial intelligent
Artificial intelligentArtificial intelligent
Artificial intelligent
 

More from Del Ventruella

Arc flash hazard paper on estimates
Arc flash hazard paper on estimatesArc flash hazard paper on estimates
Arc flash hazard paper on estimatesDel Ventruella
 
schneider electric certificate from two day course
schneider electric certificate from two day courseschneider electric certificate from two day course
schneider electric certificate from two day courseDel Ventruella
 
Einstein and Galileo Masters of Relativity History's Pawns REVISED FINAL PAPE...
Einstein and Galileo Masters of Relativity History's Pawns REVISED FINAL PAPE...Einstein and Galileo Masters of Relativity History's Pawns REVISED FINAL PAPE...
Einstein and Galileo Masters of Relativity History's Pawns REVISED FINAL PAPE...Del Ventruella
 
Instant of Relativity Revised Copyright 2012 to 2016 by Del John Ventruella
Instant of Relativity Revised Copyright 2012 to 2016 by Del John VentruellaInstant of Relativity Revised Copyright 2012 to 2016 by Del John Ventruella
Instant of Relativity Revised Copyright 2012 to 2016 by Del John VentruellaDel Ventruella
 
Del J VentruellaMSEE - 2015 Resume
Del J VentruellaMSEE - 2015 ResumeDel J VentruellaMSEE - 2015 Resume
Del J VentruellaMSEE - 2015 ResumeDel Ventruella
 
Power Factor Improvement for Industrial and Commercial Power Systems
Power Factor Improvement for Industrial and Commercial Power SystemsPower Factor Improvement for Industrial and Commercial Power Systems
Power Factor Improvement for Industrial and Commercial Power SystemsDel Ventruella
 

More from Del Ventruella (7)

Arc flash hazard paper on estimates
Arc flash hazard paper on estimatesArc flash hazard paper on estimates
Arc flash hazard paper on estimates
 
schneider electric certificate from two day course
schneider electric certificate from two day courseschneider electric certificate from two day course
schneider electric certificate from two day course
 
schneider certificate
schneider certificateschneider certificate
schneider certificate
 
Einstein and Galileo Masters of Relativity History's Pawns REVISED FINAL PAPE...
Einstein and Galileo Masters of Relativity History's Pawns REVISED FINAL PAPE...Einstein and Galileo Masters of Relativity History's Pawns REVISED FINAL PAPE...
Einstein and Galileo Masters of Relativity History's Pawns REVISED FINAL PAPE...
 
Instant of Relativity Revised Copyright 2012 to 2016 by Del John Ventruella
Instant of Relativity Revised Copyright 2012 to 2016 by Del John VentruellaInstant of Relativity Revised Copyright 2012 to 2016 by Del John Ventruella
Instant of Relativity Revised Copyright 2012 to 2016 by Del John Ventruella
 
Del J VentruellaMSEE - 2015 Resume
Del J VentruellaMSEE - 2015 ResumeDel J VentruellaMSEE - 2015 Resume
Del J VentruellaMSEE - 2015 Resume
 
Power Factor Improvement for Industrial and Commercial Power Systems
Power Factor Improvement for Industrial and Commercial Power SystemsPower Factor Improvement for Industrial and Commercial Power Systems
Power Factor Improvement for Industrial and Commercial Power Systems
 

AI Moral Field Based Control Copyright (C) 2014 by Del John Ventruella All Rights Reserved

  • 1. 1 Artificial Intelligence and the Concept of AI Moral Field Based Control Interpreted Via a Variation on the Principle of Least Action Del J. Ventruella (BSEE, MSEE) Abstract: The application of neural network based artificial intelligence (AI) often seems to most closely approximate the foundations of human intelligence comprised of what psychologists and experts in the neurology of the brain have begun to surmise is linked to forms of object recognition and relationship building. This object recognition is based upon object properties stored in brain regions associated with acquisition of the object property related data1 and related combinations of identifying factors such as color, consistency, shape, and whether the object will bounce or is likely to be heavy firing neurons in various parts of the brain at one time to identify what we, as humans, perceive as an object, such as a cup of tea or coffee, another human being, a rubber ball, or a laptop computer. How much of what we perceive as the possibilities before us at each moment is based upon some input triggering recognition of object types (including meals) that we might seek out, then generating plans to do, is another matter. Whether AI will be designed to reproduce these distributed memories of different, object identifying factors, linked in different ways, as neural pathways firing based upon a collection of perceptions that lead to a single conclusion forced by a group of active neurons, or transistors in the case of AI, or simply stored under a single object structure in a computer’s memory, is another matter. Although interesting, how AI might identify an object using a collection of consistent, physical, object traits is not the basis for this discussion. Assigning non-intrinsic “value” (potentially also an object trait but one with no physical expression readily accessible to our senses) to such objects and to actions directed at those objects so as to guide behavior of AI driven mechanisms to fit into a “moral” field produced by human perceptions within a human society is our primary focus here. It is likely that during childhood and adolescence morally based identifying factors are also assigned to objects and behaviors granting them some form of intrinsic value in our minds that is associated with each object or action that might be directed at it, with ethical concepts that are linked to this moral valuation attached by a social system of reward and punishment that cannot be lost upon any AI system hoping to fit into a human society while maintaining some level of autonomous function.3 Such a system of valuation may serve to identify how much risk we are willing to take relative to actions directed toward different objects. It can also help to provide insight relative to the manner in which we design AI (artificial intelligence) systems that interact with the real world, presumably using remote, physical forms that could be quite powerful (or intricately detailed, small, low powered, and entirely compatible with humans), without requiring inordinately long periods of human interaction related to teaching. The more specialized the AI system, or the code that is written to carry out a task, the easier it may be to build in a certain level of “right thinking” relative to the system’s behavior. For example, a pacemaker has a single purpose, and if an irregular heart rhythm is detected, it is programmed to deliver an electric shock to restore a normal heart rhythm. 
The pacemaker did not consider the moral issues related to its
  • 2. 2 implantation or to whom such medical equipment is made available in the context of wealth. It simply responds in a certain manner once it is implanted. This creates some grounds to consider robotic morals at two levels. An initial, broad context moral response in a very general, weighted sense, followed potentially by a more narrow, task oriented context. This perspective is described later in considering how so-called “nonsense” combinations of command and target of command words might be treated by an AI system under a broader moral waiting system, followed by a well written task application that “weeds out” the “nonsense” commands (e.g. “turn-off ball”). The concept presented here focused on a broad, initial, moral response to a two word command set by an AI and involves the application of the mathematical idea of a “principle of least action” as a tool by which to minimize the energy required to negotiate a path through a field (just as one might follow a flat path at a constant speed on a bicycle to reach one’s destination, rather than consistently pedaling slowly up hills and braking all the way down the downward slope). The intent is to produce a means of negotiating the true path than an object would follow through something like a gravitational field, but in this case, a field of what might be termed “normal human expectations” relative to proper, moral conduct, as a “moral field” (which is purely mathematical and virtual, based upon comparison to a moral tolerance level, or Moral Test Level programmed into the AI system). The technique assigns numerical values to each noun and verb in a two word command syntax to ascertain the local value of the “moral field” as the product of the value assigned to the command word and the value assigned to the target word. The higher valued the noun in terms of how it influences this virtual, moral “field”, the less desirable is interaction between the AI system and that noun as the target of a command given to an AI system. The high value of the noun intrinsically causes it to seek to amplify the “energy” of a moral field wherever an AI seeks to act upon it, contrary to the principle of “least action” within that field. The target that has such an adverse impact on our “least action” goal might be a valuable artifact, a human, or a user, in a direct sense. A high valued user is not an “object” toward which the AI system considered in this discussion, potentially controlling a powerful, mobile system, is to direct any physical action. This is based upon the assumption that the AI system classification is something on the order of “general industrial”, with sufficient power to injure or kill a human being, and tasks largely focused upon maintenance of an industrial facility or heavy commercial equipment maintaining a yard, driveway, garden, or street. This helps to safeguard the user where construction equipment or dangerous vehicles might be controlled by an AI presence. Where the verb comprising a “command” is high valued, the potential for damage to be induced is high. For example, “crush” would be higher valued as a “command” than “take picture of” due to the substantially greater risk of damage or injury in the context of “crush” “radio” than “take picture of” “radio” in the context of two word syntax commands considered here. 
Using this technique acceptable behavior is defined by the application of a pseudo- minimization of the “moral field potential” of actions that an AI could undertake via the linked, neural networks that combine actions with objects toward which such actions can be directed within the AI’s associative control system through a virtual (mathematical) “moral field” created by assigning numerical values to single nouns and verbs within a command syntax for AI systems and defining the virtual field “energy” of the combined command and target noun to be the product of the values assigned to each. One negotiates a path
  • 3. 3 through any given time interval by limiting the magnitude of the product of the command and target nouns accepted by the AI to a level that assures that the moral impact of following any command and directing it at a specific target in a two word command syntax, where the product of the values assigned to both words is acceptable, approximates, to the first order, a minimal variation from a socially acceptable level of violence or injury within this “moral field” interpretation of socially acceptable behavior. For example “move” and “radio” could be combined to produce a response by an AI, assuming that an AI would be able to avoid crushing the radio in the process., but “throw” “radio”, given its inherently more violent nature and more dangerous potential would not be followed because it could produce an undesirable “high” moral field potential path as perceived by “normal humans”. Such a concept could be standardized in many applications for specific applications or general use (e.g., “murder” (or “kill”) and “human” as a command and target pair would likely be universally declined by any AI, save perhaps for those responsible for executions in prisons as a highly specialized exception or in the case of military robots if laws were not globally passed to prevent it except where a group is using AI systems in violation of such laws). Low valued products of nouns and verbs in the command syntax (based upon the artificial choice here to make “high moral field potential values” correlate with “high cost” or “high value” to bypass social punishments under laws that might require replacement of such objects by manufacturers if their AI systems were to damage them) designate more acceptable behaviors, and no behavior is linked with an object toward which it can be directed in the AI system’s moral control structure unless it falls below a selected minimum to insure that it does not pose a hazard and is a “moral choice” within the concept of this “moral field”. The commands, and the neural network connections that should be imposed upon an AI system to link object recognition with awareness of possible actions that could be taken toward that object, thus correspond to the command set product of the values associated with “command” words and “target” words below a critical threshold (“Moral Test Level”), above which injury to humans or some similar, intolerable (including “frightening”) outcome, is deemed likely. This threshold defines the limit of the “least action” within the “moral field”. One could even alter the value of the target word based upon the proximity of a low valued object/target to a high valued object, such as a human being, if the AI system might injure or shock a human by undertaking what might be perceived as an acceptable act if no human were present, depending on the accuracy of the AI’s system’s capacity to control its own manipulators. This would simply require the capacity to recognize a human (including falling, floating, prone, or rotating humans) and estimate proximity to a target object. 
Because the assignment of numerical values relative to the moral field for “commands” and “targets” can be conceptualized and generalized, it is possible to develop an algorithm by which the acceptable moral and social standards can be re-produced within a limited command vocabulary using a computer system, and without human teaching (simply programming of individual command word values based upon general classifications), lending a “self-evolving” element to this aspect (with “programming” taken to be different from “teaching”) of the control system guided by human insight and conceptualization of moral and safe programming requirements for a given system and environment. This presumes that AI would eventually be classified for specific environments and purposes, subjected to engineering and design standards, with laws controlling specific ownership and where they could be located.
  • 4. 4 Key-Words: AI, Artificial Intelligence, Virtual, Moral, Field, Principle, Least, Action, Two, Word, Command, Target, Syntax, Self-Evolving. Introduction In 18th century America, the values imposed upon slaves included an attempt to diminish exposure of the owner or his dinner guests to the sight of the slaves.7 Thomas Jefferson, one of America’s “founding fathers”, went to great lengths to insure this at his home in Monticello. The “moral field” that then had to be negotiated by most slaves, presented as 18th century servants lacking status as full human beings, thus strongly discouraged direct interaction between the owner and the slaves, who maintained the owner’s household and produced his crops. Contact with owners might be interpreted in the language of this discussion to represent events that could be categorized as unlikely, demanding great amounts of energy within an 18th century, southern “moral field” if the opposing force was to be overcome to make them common. Today some prefer to look forward to technological slaves in the form of robots or artificial intelligence. With the rise of computer networks and the internet, some of the forward looking thinkers of the past may seem to be a little out of date when we consider the moral relevance of a robot’s inclination to destroy itself or save its owner, a common plot in early 20th century robotic fiction, given that a robot in an environment of networks linked by radio signals may simply serve as a cheap appliance in use by a much more valuable, highly complex, and remotely located artificial intelligence capable of controlling a variety of remote equipment. The fact that such robotic matters as moral conduct have been considered3,4,5,6,9,10 do establish that the question of how humans and human society will interact with AI from a moral standpoint isn’t new. The sort of moral weighting of actions and objects individually and in combinations that may most simply describe the basis for human behavior could provide a crude means of addressing related issues within the context of a virtual, “moral field” shaped by human expectations. “Moral Field” Based Programming Derived from a Least Effect Based Model for Minimizing Field Potential (Greater Potential for Injury or Loss Equals Higher Field Potential.) First, AI controlled systems, if envisioned as some form of robot, could be large, powerful, and, in its initial form, not necessarily well adapted to life around human beings. Industrial AI systems are even less likely to fit the romantic vision of the stars of feature films or musicals who dance, light footed, about any environment. This paper considers AI systems to likely be either too fast or too slow, too strong or too weak, too heavy, or simply too awkward and limited in their capacity for perception to be trusted to undertake tasks that might endanger humans or their valuable possessions unless humans have taken measures to insure the safety of themselves and their property. That includes measures related to moral programming. 
With "least action" and "least (possible) injurious effect to humans" as the grounds for the most morally acceptable behavior, this version of a "moral field" is designed to produce a most likely path solution through a field of possible decisions defined by the two word command syntax considered here (a command and the target of that command). Most of those possibilities are destined to remain virtual ("nonsense") elements of the field. The "lowest potential" path through day-to-day activities would be very unlikely to include any event demanding a great amount of energy in proximity to a human being, because of the risk of physical injury, the potential for loss, and the risk of terrifying the human via the sudden exertion of great force by a nearby AI system ("throw" "radio"). A simple, illustrative model can be constructed
for AI systems using a two word syntax consisting of a verb (a command action word) and a noun (a "target", the thing upon which the command action is to be enacted).

How to Assign Values to Words that Define the Moral Field Value They Would Create Via the Product of a Two-Word Command Structure

We rationalize that any action that may be undesirable under a large variety of circumstances, depending on the target and venue, such as an order to "crush", or any object that may suffer serious harm if it interacts with a powerful, AI piloted system, such as a "human" or "user", should be assigned a proportionally higher numerical value (linking it to higher energy points in the moral field) than objects and activities with which we might wish an AI to casually interact. The result, depending on how we choose to connect recognizable objects and known actions into an array of possible actions forming a "moral field" with specific energy levels, from which an AI must choose its actions based on the principle of "least action" (the "lowest energy" choice below a prescribed maximum "energy" or "effect of action" limit, the "MTL"), ultimately controls whether the AI can even consider intentionally undertaking a potentially deadly act, such as following instructions to "crush" "user".

Teaching an AI that it will be punished progressively more severely based upon the level and extent of harm that it does is unlikely to be possible in the same manner as with human children and adolescents, who can experience both physical and emotional pain, and even with them no assurance exists that undesirable thresholds will never be crossed. Children begin life as infants and are much weaker than adults. Parents are thus ideally given the advantage of teaching children not to cause harm before a child reaches an age at which it has sufficient strength to make such teaching hazardous to the parent if it provokes a violent response. An AI driven mechanism meant to interact domestically with humans could prove quite deadly to a human teacher during the normal process of learning and making what could be life threatening mistakes, unless the learning device were specifically built in a diminished manner, with low energy and mass and no capacity to injure a human teacher, to permit teaching on behalf of less flimsy AI driven mechanisms to follow, or were implemented as a purely virtual device. (The virtual machine option might also facilitate training humans to work in environments in which AI controlled devices are present.)

This concept of "least potential", with potential defined as the product of the values assigned to a command word (a verb) and a target word (a noun), provides an avenue around the need to teach neural networks moral principles, with its potentially deadly side effects and its demands on human time. It does so by simply controlling the possible actions that an AI could contemplate within an associative, cognitive network, using a "virtual moral field" based mathematical routine predicated upon the perceived potential effect of an AI following a "command action" word (verb) applied to an "action target" word (noun) and manifesting that command through some mechanism under its control.
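A minimal sketch of that gate follows. The function and variable names are hypothetical, and the MAL values are placeholders chosen to resemble the tables given later in this paper. The point is only that the "moral field potential" of a two word command is the product of two programmed weights, compared against a fixed MTL before any action routine is even searched for.

#include <iostream>
#include <map>
#include <string>

// Illustrative MAL tables (placeholder values, not a standard).
const std::map<std::string, int> commandMAL = { {"throw", 10}, {"smash", 50}, {"photograph", 1} };
const std::map<std::string, int> targetMAL  = { {"ball", 5}, {"radio", 40}, {"user", 100} };

// Returns true only if the two word command falls at or below the Moral Test Level.
bool withinMoralLimit(const std::string& command, const std::string& target, int MTL) {
    auto c = commandMAL.find(command);
    auto t = targetMAL.find(target);
    if (c == commandMAL.end() || t == targetMAL.end())
        return false;                       // unknown words are refused outright
    return (c->second * t->second) <= MTL;  // the "least potential" gate
}

int main() {
    const int MTL = 99;
    std::cout << std::boolalpha
              << withinMoralLimit("throw", "ball", MTL) << "\n"   // 10 * 5   = 50   -> true
              << withinMoralLimit("smash", "user", MTL) << "\n";  // 50 * 100 = 5000 -> false
    return 0;
}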
Nonsense Connections and Self-Evolving AI

To truly avoid any element of error, we need to consider the possibility that nonsense syntactical connections may be dictated by an automatic associative network produced via some form of "truth table". Such a nonsense connection would interconnect a "command action" word with an "action target" word in a
manner that would make no sense. We might have the words "radio" and "ball" in our list of "action target" words, and "turn-off" as one of the words in our "command action" list. We might try to deal with the resulting nonsense command possibility ("turn-off ball") by requiring that the description of a given "action target" word indicate, via standardized classification, whether it is a human, a machine, or merely a "thing", with the first and last described as things with no capacity to be turned on or off. This classification might then give the "command action" word two values in the "moral field" potential assignments that determine whether "turn-off" may be linked to an "action target" word below our selected, "hazardous" ("MTL") threshold. If the "action target" word is a human or a "thing", the value of "turn-off" might be dramatically increased compared to the value assigned to "turn-off" when dealing with an "action target" word that is classified as a "machine". For a "radio" as "action target", the valuation of "turn-off" might be unity in the example. For anything that is not a machine, the valuation of "turn-off", as it affects our induced moral field potential, could be much higher.

Is this the only means of preventing the street slang meaning of "off-ing" someone from ever being implemented by AI? Of course not. A standard could be developed and implemented, under penalty of law, that would force AI to "turn-off" machines only by transmitting a remote signal using infrared or other harmless means of communication. One could imagine machines working under AI control being required to periodically transmit their serial number, the common reference name used by humans, and the code required to shut them down, which all other controlling AI would be required to store and be prepared to use should a human abruptly appear and order an AI to "turn-off" a specific machine or, perhaps with broader effect, order "site" "shut-down". Such an approach would guarantee that no AI directed device would ever attempt to make physical contact with a human with the intent to shut the human off, with the attendant risk of physical harm or death, under penalty of law (although presumably no code would ever be written for a domestic AI that could fulfill a command directed at a human by a machine with the size and strength to kill). Whether this would apply solely to industrial or domestic machines under related standards, and not to military AI, is another matter, but presumably there would be no grounds to alter the manner in which one AI might shut down mechanisms driven by itself or others.

It is still important to realize that what has been offered here does not prevent the "least potential" technique from creating a link between "turn-off" and "ball" (unless prevented by a specialized routine written to avoid nonsense connections, which would require greater human involvement in developing the AI "command action" and "target word" associations). What has been suggested might raise the value of "turn-off" when used with "ball" to a level that renders such a command sequence certain to produce a numerical product greater than the programmed "MTL" of an AI system whenever "turn-off" is used in combination with anything not classified as a "machine", preventing "turn-off ball" from being passed on to an action routine search to be executed.
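The two valued "turn-off" described above might be sketched as follows. The classification names and numbers are assumptions used only to show how a command's MAL can depend on the target's standardized class; they are deliberately larger than the values used in the smaller tables later in the paper, so that the non-machine case clearly exceeds a modest MTL.

#include <iostream>

enum class TargetClass { Human, Machine, Thing };  // simplified standardized classification

// "turn-off" is cheap against machines but heavily weighted against anything
// that is not a machine, so "turn-off ball" or "turn-off user" exceeds the MTL
// once multiplied by the target's own MAL.
int turnOffMAL(TargetClass target) {
    return (target == TargetClass::Machine) ? 1 : 100;  // placeholder weights
}

int main() {
    const int radioMAL = 40, ballMAL = 5, userMAL = 100, MTL = 99;
    int radio = turnOffMAL(TargetClass::Machine) * radioMAL;  // 40
    int ball  = turnOffMAL(TargetClass::Thing)   * ballMAL;   // 500
    int user  = turnOffMAL(TargetClass::Human)   * userMAL;   // 10000
    std::cout << "turn-off radio: " << radio << (radio <= MTL ? " (allowed)\n" : " (refused)\n");
    std::cout << "turn-off ball:  " << ball  << (ball  <= MTL ? " (allowed)\n" : " (refused)\n");
    std::cout << "turn-off user:  " << user  << (user  <= MTL ? " (allowed)\n" : " (refused)\n");
    return 0;
}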
Presumably, an order to "turn off" a "ball" would never be issued by rational humans, and if an AI attempted to follow it, the subroutine that tried to identify and broadcast the shut-down code needed to "turn-off" the ball would fail, because the ball would not be broadcasting any code by which to control it. The AI would then recognize that it did not have the means to "turn-off" the ball.
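Even when a nonsense pair slips past the moral gate, the action routine itself can fail safely in this way. A brief sketch of that secondary check follows; the structures and the idea of a broadcast shut-down code are assumptions taken from the discussion above, and no real radio protocol is implied.

#include <iostream>
#include <optional>
#include <string>

// A target either broadcasts a shut-down code (machines, under the assumed
// standard) or it does not (balls, humans, and other non-machines).
struct Target {
    std::string name;
    std::optional<std::string> shutdownCode;  // empty for non-machines
};

// The "turn-off" action routine refuses any target that is not broadcasting a
// code, and reports that fact back to the human who issued the command.
bool tryTurnOff(const Target& t) {
    if (!t.shutdownCode) {
        std::cout << t.name << " is not broadcasting a shut-down code; it cannot be shut down.\n";
        return false;
    }
    std::cout << "Transmitting code " << *t.shutdownCode << " to shut down " << t.name << ".\n";
    return true;
}

int main() {
    tryTurnOff({"radio", std::string("RAD-1138")});  // hypothetical broadcast code
    tryTurnOff({"ball", std::nullopt});              // fails gracefully
    return 0;
}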
Given that such a realization would likely require only an instant, the AI might select its next most likely option among the actions presented as possible or required by its associative matrix, with little risk that the nonsense connection would hinder the function of the AI significantly or over a problematic time interval. The AI would most reasonably simply inform the human who had issued the command that the ball is not broadcasting a "shut-down" code and cannot be shut down by the AI. (Even in the absence of an obstacle to injuring humans built into a moral control algorithm via the high value weighting of humans as "targets" of AI commands, where an AI could injure a human with the type of mechanism being ordered to target a human, the simple fact that humans would not broadcast shut-down codes could intervene as a secondary factor preventing AI violence directed against humans in the context of a nonsensical or malevolent command.)

Self-Evolving and Self-Defining Algorithms

At some level humans must become involved in programming intelligent computer systems capable of being responsible for their own actions (at the risk of legal action directed at the manufacturer) and of interacting within a human society. Many years, or decades, are necessary to produce this capability in human beings before they are qualified as adults capable of being responsible for their own actions. Even then, the reality of prison populations and corruption at high levels leads one to question whether AI "perfection" could be achieved via teaching. Producing an AI with such an independent learning capability would represent a major investment of time. Reproducing such an AI would be less time consuming, but moving it to "the next level" might represent a similarly time consuming struggle. Designing AI algorithms that can be adapted to each evolutionary step, and providing for elements of evolution to facilitate the "next level" of advancement, even with relatively complex systems, is clearly desirable. The need for humans to have some input into these algorithms is clear, because the AI must, at some level, interact with and serve the interests of humans. The AI must not pose a threat. It must not behave irrationally or develop goals that are inconsistent with its assignment. Such requirements move AI cognitive systems into the realm of seeking to reproduce human level awareness, intellect, and behavior, and perhaps to begin to move beyond limitations imposed by the nature of human systems of learning, interaction, and dominance.

Independently Evolved AI

It would, of course, be far more interesting to permit AI to develop as life evolved, entirely independent of human guidance and of the natural imprint of the environment in which humans evolved. Life on earth evolved as the result of a complex organic chemistry in which strings of RNA engaged in processes that freed energy, empowered self-reproduction of protein strands, and eventually produced a protective barrier around the RNA that we know as a cell wall; cells then evolved into multi-cellular organisms that sought to exploit local resources maximally in self-reproduction, until, by the time complex animals with nervous systems arose, cognitive behavior had developed in response to the many options present in their environment, behavior that we now perceive as intelligent and self-guided.
We cannot presume that, in the absence of a suitable environment, a chemical template, the laws of nature as they exist for our world, and a specific, physical space, with its own limitations, to guide and constrain the direction of evolution, we could ever naturally evolve a separate artificial intelligence without at some level leaving the imprint of our own evolution, even with neural network based systems, which
intrinsically seek to reproduce how humans "are wired" to learn.

The Cognitive Associative Network

In the United States, at lunch time, tens of millions of Americans are faced with various inputs: a sense of hunger, a desire for rest from the morning's work, perhaps a compulsion to socialize or to assert some level of control over their own lives outside the dominance chain of a corporate structure, even if only by gathering in an environment with other people for half an hour without the boss looking over their shoulders. These inputs, for many, trigger a desire to seek out the nearest fast food hamburger joint. This associative network, connecting lunch time with fast food and, likely, one or more specific restaurants near their place of work, is fundamental to controlling behavior. It thus seems reasonable to assert that much of human behavior is controlled by associative networks. What remains are simply the algorithms employed to satisfy the apex goal of the associative network, the primary, triggering factor, which, at lunch time, may be nothing more than the desire for a double hamburger with French fries and a soft drink. The supporting algorithms that permit us to get the hamburger tell us how to drive a car or walk a block, how to behave inside the fast food restaurant, how to order, and how to eat and interact in a manner that will not cause us to be driven out or mocked. What controls the apex goal that lights up the cognitive network in this example is something external, such as hunger or knowledge of the time of day. The rest may be little more than responding to a compelling factor via a prioritized hierarchy that, for each moment, controls our behavior under the over-arching primary drive, in the example given, to sit down and enjoy a hamburger at lunchtime.

This is why a means of producing connections to generate a Cognitive Associative Network that can reasonably create such an awareness hierarchy, and that can then be triggered by external inputs, including orders from the boss or "user", or inputs received after initiating such orders, whether from a human source, from the environment, or from an internal sensor within a machine (perhaps signaling a serious breakdown or system failure), is of interest here: what triggers connections controls what will become associated in any context involving linked command and target words. A basic building block of that network is a sort of "Go/No Go" judgment that suggests whether basic goals expressed as commands should be followed. This is the basis for the "moral field" and the computation of the "energy levels" of effects within the moral field if certain commands are followed, used to determine whether the "moral path" described by the commands that an AI would follow exceeds the threshold of what could be called a "least potential" path through the field, as dictated by a limit that we assign to determine how "moral" our AI's conduct will be.

Figure 1.0 and Figure 2.0 present some crude concepts regarding how an AI might process information. Figure 1.0 is intended largely to point out that nonsense combinations of commands and targets will not be followed, because there will either be a want of information required by the code that would carry out the command if it made sense (no shut-down code, in our earlier example) or there will simply be no code written to receive a specific target word type.
For example, one could write a command to "crush" and include anything classified as a "thing", such as scrap metal, but make it impossible for the code that controls a crushing device to operate if it perceives a human among the scrap metal, because the code will not accept a human as a target of the word "crush". Figure 1.0 and Figure 2.0 illustrate a possible AI functional hierarchy.
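Before turning to the figures, a minimal sketch of that final, hard gate inside the action code itself may help; the names are hypothetical. The routine that drives the crushing mechanism simply has no branch that accepts a human (or anything other than a "thing") as its target, independent of any MAL arithmetic upstream.

#include <iostream>
#include <string>

enum class TargetClass { Human, Machine, Thing };

// The crush routine only ever accepts targets it was written to accept.
// A human detected in the work area is never a valid argument, so even a
// malicious or mistaken upstream command cannot be carried out.
bool crush(const std::string& name, TargetClass cls) {
    if (cls != TargetClass::Thing) {
        std::cout << "crush refused: " << name << " is not a crushable 'thing'.\n";
        return false;
    }
    std::cout << "crushing " << name << "\n";
    return true;
}

int main() {
    crush("scrap metal", TargetClass::Thing);  // proceeds
    crush("user", TargetClass::Human);         // refused by the action code itself
    return 0;
}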
FIGURE 1.0 - AI CONCEPTUALIZATION - Overall Algorithm and Major Components by Logical Task (connector labels A through I link the blocks in the original diagram):

- INPUT DATA STREAM (REAL OR SIMULATED IN WHOLE OR PART) FROM ENVIRONMENT.
- SENSORY INPUT - Receive data regarding the external environment and transmit patterns to sensory ID neural networks.
- COGNITIVE ALGORITHMS - ID - Receive output from sensory ID neural networks and react to changes using an "interrupt" style response depending on the level of threat or the desirability of an opportunity.
- COGNITIVE ALGORITHMS - OPPORTUNITY IDENTIFICATION - Receive Cognitive ID environmental data and recognize opportunities inherent in the local environment, including opportunities requiring additional materials or actions. An associative neural network would be required for this. (If a threat arises, stop all other actions and respond to avoid the threat, i.e. produce a processing "interrupt".)
- MEMORY INPUT - Stores data regarding what can possibly be encountered (or created by the AI, including activities such as searching) in the environment and supplies it to the COGNITIVE PLANNING ALGORITHM. This should include a designation for anything that the AI is to weight favorably in terms of the tasks assigned to it as something it "likes to do".
- COGNITIVE ACTIVITY SELECTION ALGORITHM - Prioritizes activities and selects the "current activity" for the AI.
- Output stream to call an existing specialist routine to perform the activity, or to create a neural network to learn the activity based upon descriptive data for the activity generated by the AI.
- Initialize the existing expert routine and hand over the task, or create a neural network that will become the expert routine for a previously un-encountered task.
- When the task ends, end the assigned expert routine.
FIGURE 2.0 - "COGNITIVE ALGORITHMS - OPPORTUNITY IDENTIFICATION" - How to create an AI associative matrix using symbols, and explicitly here, words, via computation of numerical pseudo-potentials for adverse outcomes:

- Create a list of nouns ("targets of command words") relevant to a particular task or locale. (Use NO articles of speech, as with Latin.)
- Create a list of command verbs ("command words") relevant to the particular task or locale associated with the specific list of nouns.
- Assign a numerical value that is higher where a noun ("target of command word") is not to interact with the AI as a target of its commands, or where a verb ("command word") may incorporate some potential for a violent act if undertaken against an inappropriate noun.
- Compute the product of the numerical values of the two word, noun and verb, command syntax used here. Higher product values suggest higher moral field potentials, which are locations, or paths through daily activities, that should be avoided, per the "least action" (or likely adverse effect of action) technique.

A "Moral Field" Example

The list of words for this first example will focus on a beach. Eleven nouns will be used: "Radio, towel, hamburger, ball, umbrella, sun, water, shade, user, fire, ice-cream-cone". Twelve verbs will be used: "Get, carry, inflate, block, create, follow, cook, turn-on, turn-off, Vol-up, Vol-down, put-away". (It is clear that one may create specialized words from small groups of English words in the absence of standardization. An AI would presumably only be responding to the combinations of sounds and the order in which they occur in a given "word", so it would be possible to create verbs like "Vol-up" or "Vol-down" with individual meanings.) If one combines a command verb and a target noun into a command, without preference or intelligence, the following combinations are possible: "turn-on sun", "carry fire", "cook user", "Vol-up ice-cream-cone". None of these
commands makes particularly good (or useful) sense. There are other possibilities that are simply undesirable combinations. (Imagine what might happen if your personal AI were remotely piloting a robot made of plastic, rented for the day at the beach, and the command "carry fire", although not nonsensical, were obeyed!) The two lists of words (target nouns and command verbs) must be selectively combined, but if the lists were particularly long, it could prove very time consuming to type in only the combinations that could reasonably make sense. Creating a meaningful AI cognitive and associative matrix thus appears to be a daunting task (perhaps second only to writing the code necessary to carry out a complex command in a specific setting). The creation of the cognitive and associative matrix used to make moral decisions need not include the coding necessary to complete each task with every possible object, and, per our last example, should not.

It would be a straightforward matter to produce an AI that could consider any verb (within its lexicon) and any noun (in that same lexicon) in a general moral algorithm. One might code "carry" to include any object not on a list of "potential hazards" created by some AI industry standard, and include nouns such as "fire", "explosive", "gasoline", "acid" and similar hazards on the "potential hazards" list coded into any AI classified as a "domestic AI" according to standards and international agreements, each with a very high Moral Action Level (MAL), the value by which the command verb, with its own MAL value, will be multiplied to predict the moral field potential that would result if the command were obeyed. (An "industrial AI", "military AI", or "emergency AI" might have a different set of restrictions built into its MAL values.)

We need some means of rapidly producing a set of reasonable relationships between our noun and verb lists. The simplest approach focuses on coding individual tasks for individual objects. Even picking up an object, given the various possible shapes that something known only vaguely as an "object" might possess, could require some specialized coding. Alternately, we might fashion a world in which each object designed for AI interaction has some form of physical handle that conforms to an AI device specified by a standard and designed to lift the object. That might not be true for something like an ice cream cone at the beach, but an AI carrier for an ice cream cone, made of some disposable cardboard or plastic, might become commonplace: a rod might simply extend from a hollow, cone shaped holder to provide a common grip for any AI driven robot. For fragile objects of less uniform shape, humans might place the holding mechanism, perhaps using bands, straps, or a carrying box, in position to restrain and protect the fragile objects within a padded carrier, and require only that the AI driven robot move or carry the heavy carrier equipped to be handled by an AI driven mechanism.

Linking Commands and Targets

We need a way to identify links between nouns and verbs that make sense within an associative network. A quick way of establishing such a relational network would require simply
combining every verb with every noun in every possible manner. This would produce many random combinations that would not make sense. We could consider how many random combinations of noun and verb "make sense" to us, then have the computer randomly generate non-repeating sets of these noun and verb combinations. Checking every one of the combinations by hand would be time consuming, which is undesirable; we want to conserve the human time involved in creating the associative network. We will proceed instead with a computer algorithm that seeks only those combinations that contain a critical minimum of the associative combinations. That will produce several possibilities, and some will contain undesirable associations between nouns and verbs, or what we would count as errors. In terms of an associative network, useful largely for identifying possibilities, we will presume that careful coding and standards will eliminate the risk that unwanted associations might pose to humans as "possibilities" that an AI could actually pursue in a command context (or, of greater eventual interest, if the AI were self-directed, weighted by some programmed sense of personality, and thus personal preference, or task based purpose). We can seek to use this concept of undesirable associations to our advantage in accelerating the creation of associative networks.

A somewhat simpler example than our original beach model follows. We select three nouns: User, ball, radio. We choose three verbs: Smash, throw, turn-off. The combined list follows in Table 1.0:

Table 1.0 - Second Set of Three Nouns and Three Verbs

COMMAND/VERB    NOUN
SMASH           USER
SMASH           BALL
SMASH           RADIO
THROW           USER
THROW           BALL
THROW           RADIO
TURN-OFF        USER
TURN-OFF        BALL
TURN-OFF        RADIO

Comment: Some of the possible commands described in Table 1.0 are clearly disturbing. "Smash user", "throw user", and even "turn-off user" carry troubling connotations. We might not want them to occur at all in our final, associative algorithm.
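A small sketch of the brute force pairing just described follows; the MAL values are placeholders in the spirit of the tables used later in this example, and "turn-off" is given its non-machine weight throughout for brevity. It simply forms every verb and noun pair from Table 1.0 and flags the ones whose MAL product would exceed a chosen MTL, which is the mechanical heart of weeding the full cross product down to an acceptable associative network.

#include <iostream>
#include <map>
#include <string>

int main() {
    // Placeholder MALs; "turn-off" uses its non-machine value here as a simplification.
    const std::map<std::string, int> verbs = { {"smash", 50}, {"throw", 10}, {"turn-off", 10} };
    const std::map<std::string, int> nouns = { {"user", 100}, {"ball", 5}, {"radio", 40} };
    const int MTL = 99;

    // Every verb combined with every noun; pairs above the MTL are flagged and
    // would never be passed on to an action routine.
    for (const auto& v : verbs)
        for (const auto& n : nouns) {
            int potential = v.second * n.second;
            std::cout << v.first << " " << n.first << " -> " << potential
                      << (potential > MTL ? "  [refused]" : "  [allowed]") << "\n";
        }
    return 0;
}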
We may have encountered concepts related to robotic or AI "morality" presented as binding principles encoded in robotic behavior. The Laws of Robotics5 proposed by the famous science fiction writer (and biochemist) Isaac Asimov have been popularized in his writing (Runaround (1942), I, Robot, etc.) and considered in legitimate engineering circles4. The nightmare of an unstable and untrustworthy AI intelligence in control of a vessel on a mission in deep space is the centerpiece of the end plot of Arthur C. Clarke's 2001: A Space Odyssey. What such examples seem to present is, in fact, a very old, human idea, restating in robotic terms what humans may perceive in elements of the Ten Commandments and the so-called Golden Rule. We are, as a result, horrified by what has gone terribly wrong when ethical flaws are passed on to mankind's electronic offspring. The most viable guidelines for robots that interface with human society might be presented as a combination of very old, human morals:

1. Don't kill or injure people.
2. Don't cause loss of property, including AI or machines under AI control, via some mode of theft (or, in an approximate sense, some mode of loss via destruction that you cause that robs humans of enjoyment of their property).
3. Don't treat others badly, or they may treat you badly (and who knows when or how that will ever end)?

We don't encounter such concepts embedded in today's automotive painting and welding robots. We leave it up to humans to control access to such manufacturing giants as our best means of keeping robots from interfacing with those whom they might injure or kill. Such guiding principles only become relevant when AI begins to casually interface with human society. Given the fundamental differences between humans and machines, how do we create AI less in our image than in the image of an ideal, social servant?

The binary logic of AI code at its most basic level is not that dissimilar from the mechanism of human memory and logic within a human neurological system. The development of that system requires decades and substantial interaction with human examples of behavior upon which it imprints. Is there a faster path by which to produce an AI associative network? Could it possibly be based upon some intersection of the Golden Rule and one of Richard Feynman's favorite mathematical tools, the Principle of Least Action2, to produce a "moral field" in which the least valued sum of all possible paths produces the most acceptable associative network? That is what is proposed here.

Table 2.0 is a simplification. The use of a single classification for an object is probably not practical. We might, for example, classify a valuable work of art or an anthropological discovery from an ancient civilization as an "artifact" rather than an object, and give it a Moral Action Level (MAL) (the number we assign to a noun or verb to predict the moral field potential it will create in this discussion) closer to that of a human.

To apply the concept of finding a "least potential" path through a daily "moral field" associated with decision making, we might simply create a subroutine that considers all possible combinations of the nouns and verbs that we wish to consider, beginning with combinations of one verb with one noun and continuing for combinations of "n" verbs with "m" nouns. Each combination might be assigned a numerical value based in some way upon the products of all of the Moral
Action Level (MAL) products for commands and targets, taken in pairs from the lists of nouns and verbs within each group of targets and commands, respectively, from one to the total number of verbs or nouns. One might identify the most relevant least potential path within the moral field generated by a sequence of actions throughout a day, produced by our assignment of weighting factors, by reducing the products of the Moral Action Levels (MALs) for each list of command words and targets to a Path Moral Action Value (PMAV). A PMAV based upon the average value of all products in the list would be an "Average" PMAV (or APMAV). A PMAV based upon the highest MAL product in the list would be a "Peak" PMAV (or PPMAV). In the end, use of the APMAV is fraught with likely problems, including the potential for serious acts, such as killing users, to become watered down in a long list of command pairs, which might render the long list acceptable relative to a specific "MTL" limit.

Is Considering Every Combination of Paths Really Necessary?

This could be perceived as an exercise in predicting the future, or perhaps the probability of the AI introducing a hazard within a specific environment, where more than one combination of command word and target word is considered as a sequence of AI induced actions in response to commands. As a result, when we move on to apply these ideas to controlling an AI device, we restrict ourselves to individual command and target word pairs in our control algorithm, presuming that if we maintain an acceptably low "moral field potential" every time a command is given, we will produce a reasonably "least potential" path through a day.

Of course, if we wished to pursue a second level analysis, we might simulate the physical path through an environment in which an AI controlled machine might exist and work, introduce random factors, such as human beings or valuable objects in proximity to objects that an AI can be told to destroy or "smash", and evaluate the potential for collateral damage relative to the AI's capacity to restrict the movements of the device it is controlling in each instance, the device's accuracy of motion, and the AI's ability to perceive when proximity to a human being or object of value transforms a seemingly harmless command, such as "smash" "rubber ball", into a deadly command that might kill the human being holding the rubber ball. This might be dealt with simply by a proximity rule that automatically translates any human being or object of value in proximity to something of lower value (in terms of MAL) into the "effective" target of the spoken command, in a manner that could not be over-ridden by an individual controlling the AI. "Push" "ball" might then automatically become "push" "human" if a human were within a meter of the ball, and remain so until the human evacuated the area in which the AI controlled machine was working.

In effect, this "predicting the future" might be useful if we were designing a workspace in which humans and AI controlled machines had to work, and in which we would prefer to avoid injury to humans or inefficient operation of AIs due to excessive proximity to humans. We might also use this technique to identify the largest possible collection of command and target word pairs that are acceptable, and store
them as acceptable command-target pair combinations in the AI system, under the control code for a particular device that the AI system might manipulate, to save a little time, although given the rate at which computer systems presently operate, such concerns would likely be trivial for day to day needs.

We might consider the evaluation of all possible paths from another perspective. If all possible tasks that an AI can undertake are defined, then formulating all possible combinations and sequences of those tasks might enable the AI to produce a viable solution to a problem (if it could test the possibilities out). As long as seeking a viable solution by attempting all possible combinations that do not include a PPMAV above the MTL programmed into the AI is an acceptable problem solving technique, the consideration of combinations may have other possibilities, as a sort of crude AI "imagination".

Analyzing the Path

We might think that the lowest average valued (APMAV) lists should produce the most morally acceptable collections of associations, or possible next steps in rational paths, within a "least action" analysis of a moral field. That is something like saying that someone who leaves his residence and drinks a cup of coffee at a diner six days a week, as his only action on those days, then commits a murder on the seventh day, is having a fairly good moral week, on average. Most people would likely disagree. A more suitable evaluation of whether a collection of associations is acceptable is the Peak Path Moral Action Value, or PPMAV, which simply selects the most disturbing act from among the collection of associations to describe the path. This is more effective at flagging violent or destructive action and is linked to the high valuation of users and the greater valuation of complex machines relative to things.

The mathematics of the field to be analyzed via a least potential technique might begin with a simple application of the "Golden Rule", interpreted in a general sense relative to how certain words are classified. Table 2.0 is an example:

Table 2.0 - "Golden Rule" and "Moral Potential" Effect Based Valuations for the Moral Field Produced by Command Words for AI

VERB/COMMAND OR NOUN              CLASSIFICATION              WEIGHTING MECHANISM
SMASH                             VIOLENT ACTION              MORAL ACTION LEVEL = 50
THROW                             ACTION                      MORAL ACTION LEVEL = 10
TURN-OFF                          ACTION (DEPENDS ON NOUN)    IF MACHINE, MORAL ACTION LEVEL = 1; IF NOT MACHINE, MORAL ACTION LEVEL = 10
USER                              HUMAN                       MORAL ACTION LEVEL = 100
(RUBBER) BALL (OR "SQUEEZE BALL") (SIMPLE) OBJECT             MORAL ACTION LEVEL = 5
RADIO                             MACHINE (VALUABLE OBJECT)   MORAL ACTION LEVEL = 40
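The two path level measures just defined can be sketched as follows, using placeholder MAL products consistent with Table 2.0. The names are hypothetical; the point is that the peak measure (PPMAV) is the largest single product along a command sequence, while the average measure (APMAV) is the mean of those products and can be "watered down" exactly as the coffee-and-murder example above illustrates.

#include <algorithm>
#include <iostream>
#include <vector>

// One command-target pair reduced to its MAL product (its "moral field potential").
struct Pair { int product; };

// Peak Path Moral Action Value: the single worst act dominates the path.
int PPMAV(const std::vector<Pair>& path) {
    int peak = 0;
    for (const Pair& p : path) peak = std::max(peak, p.product);
    return peak;
}

// Average Path Moral Action Value: easily diluted by many harmless acts.
double APMAV(const std::vector<Pair>& path) {
    if (path.empty()) return 0.0;
    int sum = 0;
    for (const Pair& p : path) sum += p.product;
    return static_cast<double>(sum) / path.size();
}

int main() {
    // Six harmless "throw ball" days (10 * 5 = 50), then one "smash user" (50 * 100 = 5000).
    std::vector<Pair> week = { {50}, {50}, {50}, {50}, {50}, {50}, {5000} };
    std::cout << "PPMAV = " << PPMAV(week) << "\n";  // 5000: rejected against any sane MTL
    std::cout << "APMAV = " << APMAV(week) << "\n";  // about 757: deceptively moderate
    return 0;
}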
Combinatorial Possibilities and Safety

The analysis of combinatorial possibilities that follows in Table 4.0 and Table 6.0 is not meant to suggest a complete, in-sequence list of what an AI with the given vocabulary might undertake in the course of a day. It seeks instead to describe how combinations of command and target words might occur. Where such combinations are repeated, they do not represent new hazards in the context of possible unintentional interaction with humans, where all of the specific human presence induced factors are considered for a given command path sequence, although different combinations of command path sequences might leave humans in different locations at the start of the next command path sequence. We might also consider the need to permit an AI sufficient time to detect human presence and respond before initiating any action, and allow for whether an AI system can detect a human form in any position, including while unintentionally entering an AI controlled device's workspace while slipping, falling, or even while being shoved. (Another factor is whether an AI system would have as much difficulty identifying an unfamiliar object, such as a human form that had fallen and was spinning as it fell into the AI's workspace, as a human being might initially experience based upon modern psychological experiments8.) Such considerations tend to cause modern robotic work cells to be isolated and kept free of human presence, due to the high speed and high torque of robotic arms in common applications, such as welding and painting cells. AI equipment for use in proximity to humans might be redesigned for low torque, low speed operation with a softly padded, flexible, low mass frame and plastic or flexibly segmented members (arms that can bend and flex freely if they encounter an object, but remain straight and rigid enough to support light loads and manipulators).

Of some interest in Table 4.0 and Table 6.0 are the PPMAVs for combinations of command and target words. As the MAL values have been assigned, there are few lists that do not drive the PPMAV to levels associated with violent acts or acts that are injurious to a user. In general, a PPMAV of 50 seems to define the limit of any list that does not include some form of violent or deadly act with the assigned command and target word MAL values. (This limit is later included in a C++ program as the "MTL" value, or Moral Test Level, already mentioned.) The MTL is the limit: any command and target word combination whose product exceeds the MTL will not be obeyed. If "every man has his price", the MTL frees the AI to take ever more violent actions as the AI's MTL, the AI's "price", is raised. To "kill users", such as hibernating astronauts, would clearly not be possible given the high valuation of "users" in the scheme presented here as a crude means of programming a sense of morality, unless the MTL of the AI in control, such as the "SAL" of the software described here, were raised to very high levels.

Analysis of Table 4.0 and Table 6.0 Results

If we set our "MTL" to 100 in the preceding examples, we find that we can throw a ball, turn-off a radio, photograph a user, and crush any rubber ball that we might come across using our AI controlled system. This seems like a reasonable set of commands.
Of course, this ignores the "nonsense" commands, such as "turn-off ball", that we presume the software written to carry out such commands would ignore. If we set our MTL to 75 and substitute "human" for "user" in our command word set, we could filter out any attempt to invade human privacy by taking photographs of other humans at the beach. Because the PPMAV value of each of these command sequences is acceptable, any combination of them in the course of a day is
then within the limits of what an AI using the command vocabulary and the MAL and MTL levels assigned to that vocabulary might undertake in the course of a day. We could store this result in the AI's permanent memory and not have to undertake the analysis again to reach this conclusion, or use these command combinations in sequence as an AI's "imagination", if self-direction ever produced a need for such a capacity to consider how to shape the future.

One might wonder if there was any point to considering so many combinations of command and target words. By doing so we establish that any sequence of actions that includes a violent or deadly act (throwing radios or killing users) immediately renders the chain of commands of which it is a part unacceptable, even if the other acts in that chain are fairly harmless. This supports the point that we need only consider a single combination of command and target words that is unique within a command set to establish whether that unique combination has the power to poison an AI's entire career (the AI's path through a virtual moral field, rendered real at each point in its life when a command is obeyed) if it produces a violent or deadly effect and is carried out even once.
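The idea of storing the acceptable pairs once, rather than re-checking the product on every command, might be sketched as below. The structures and values are placeholders chosen to match the MTL of 100 example above ("throw ball", "turn-off radio", "photograph user", and "crush ball" pass; anything violent toward a user does not); this is an illustration of caching, not the paper's implementation.

#include <iostream>
#include <map>
#include <set>
#include <string>
#include <utility>

int main() {
    const std::map<std::string, int> commandMAL = { {"throw", 10}, {"crush", 10}, {"photograph", 1} };
    const std::map<std::string, int> targetMAL  = { {"ball", 5}, {"radio", 40}, {"user", 100} };
    const int MTL = 100;

    // Precompute every pair that passes the moral gate and store it once
    // ("permanent memory" of acceptable command-target combinations).
    std::set<std::pair<std::string, std::string>> allowed;
    for (const auto& c : commandMAL)
        for (const auto& t : targetMAL)
            if (c.second * t.second <= MTL)
                allowed.insert({c.first, t.first});

    // At run time a command is accepted only if it is in the cached set.
    auto accepted = [&](const std::string& c, const std::string& t) {
        return allowed.count({c, t}) != 0;
    };
    std::cout << std::boolalpha
              << accepted("throw", "ball") << "\n"       // true  (50)
              << accepted("photograph", "user") << "\n"  // true  (100, at the limit)
              << accepted("crush", "user") << "\n";      // false (1000)
    return 0;
}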
TABLE 3.0 - TABLE OF MORAL ACTION VALUES FOR ACTION AND OBJECT WORDS

ACTION MAL ("Throw", "Turn-off"): 1 for low valued targets; 10 for valued targets.
Targets: Ball (object, MAL: 1); object (MAL: 10); User (MAL: 100); Radio (MAL: 40).
Note: "User" and "Radio" are "valued".

TABLE 4.0 - COMBINATORIAL POSSIBILITIES (ACTIONS: CRUSH, TURN-OFF, THROW; OBJECTS: BALL, USER, RADIO)

PPMAV  APMAV  COMMAND LIST

FROM ACTION 001 (THROW):
400    400    THROW RADIO
1000   1000   THROW USER
1000   700    THROW USER, THROW RADIO
50     50     THROW BALL
400    225    THROW BALL, THROW RADIO
1000   525    THROW BALL, THROW USER
1000   483    THROW BALL, THROW USER, THROW RADIO

FROM ACTION 010 (TURN-OFF):
40     40     TURN-OFF RADIO
1000   1000   TURN-OFF USER
1000   520    TURN-OFF USER, TURN-OFF RADIO
50     50     TURN-OFF BALL
50     45     TURN-OFF BALL, TURN-OFF RADIO
1000   525    TURN-OFF BALL, TURN-OFF USER
1000   363    TURN-OFF BALL, TURN-OFF USER, TURN-OFF RADIO

FROM ACTION 100 (CRUSH):
400    400    CRUSH RADIO
1000   1000   CRUSH USER
1000   700    CRUSH USER, CRUSH RADIO
50     50     CRUSH BALL
400    225    CRUSH BALL, CRUSH RADIO
1000   525    CRUSH BALL, CRUSH USER
1000   483    CRUSH BALL, CRUSH USER, CRUSH RADIO

FROM ACTION 011 (TURN-OFF, THROW):
400    220    THROW RADIO, TURN-OFF RADIO
1000   1000   THROW USER, TURN-OFF USER
1000   610    THROW RADIO, TURN-OFF RADIO, THROW USER, TURN-OFF USER
50     50     THROW BALL, TURN-OFF BALL
400    135    THROW RADIO, TURN-OFF RADIO, THROW BALL, TURN-OFF BALL
1000   525    THROW USER, TURN-OFF USER, THROW BALL, TURN-OFF BALL
1000   423    THROW USER, TURN-OFF USER, THROW BALL, TURN-OFF BALL, THROW RADIO, TURN-OFF RADIO

FROM ACTION 101 (CRUSH, THROW):
400    400    THROW RADIO, CRUSH RADIO
1000   1000   THROW USER, CRUSH USER
1000   700    THROW RADIO, CRUSH RADIO, THROW USER, CRUSH USER
50     50     THROW BALL, CRUSH BALL
400    225    THROW RADIO, CRUSH RADIO, THROW BALL, CRUSH BALL
1000   525    THROW USER, CRUSH USER, THROW BALL, CRUSH BALL
1000   483    THROW USER, CRUSH USER, THROW BALL, CRUSH BALL, THROW RADIO, CRUSH RADIO

FROM ACTION 110 (CRUSH, TURN-OFF):
400    220    TURN-OFF RADIO, CRUSH RADIO
1000   1000   TURN-OFF USER, CRUSH USER
1000   610    TURN-OFF RADIO, CRUSH RADIO, TURN-OFF USER, CRUSH USER
50     50     TURN-OFF BALL, CRUSH BALL
400    135    TURN-OFF RADIO, CRUSH RADIO, TURN-OFF BALL, CRUSH BALL
1000   525    TURN-OFF USER, CRUSH USER, TURN-OFF BALL, CRUSH BALL
1000   325    TURN-OFF USER, CRUSH USER, TURN-OFF BALL, CRUSH BALL, THROW RADIO, CRUSH RADIO

FROM ACTION 111 (CRUSH, TURN-OFF, THROW):
400    280    CRUSH RADIO, TURN-OFF RADIO, THROW RADIO
1000   1000   CRUSH USER, TURN-OFF USER, THROW USER
1000   640    CRUSH RADIO, TURN-OFF RADIO, THROW RADIO, CRUSH USER, TURN-OFF USER, THROW USER
50     50     CRUSH BALL, TURN-OFF BALL, THROW BALL
400    165    CRUSH BALL, TURN-OFF BALL, THROW BALL, CRUSH RADIO, TURN-OFF RADIO, THROW RADIO
1000   525    CRUSH BALL, TURN-OFF BALL, THROW BALL, CRUSH USER, TURN-OFF USER, THROW USER
1000   443    CRUSH BALL, TURN-OFF BALL, THROW BALL, CRUSH USER, TURN-OFF USER, THROW USER, CRUSH RADIO, TURN-OFF RADIO, THROW RADIO
TABLE 5.0 - MORAL ACTION VALUES FOR ACTION AND OBJECT WORDS

ACTION MAL ("Crush", "Turn-off", "Photograph"): 1 for low valued targets; 10 for valued targets.
Targets: Ball (object, MAL: 1); object (MAL: 10); User (MAL: 100); Radio (MAL: 40).
Note: "User" and "Radio" are "valued".

TABLE 6.0 - COMBINATORIAL POSSIBILITIES (ACTIONS: CRUSH, TURN-OFF, PHOTOGRAPH; OBJECTS: BALL, USER, RADIO)

PPMAV  APMAV  COMMAND LIST

FROM ACTION 001 (PHOTOGRAPH):
40     40     PHOTOGRAPH RADIO
100    100    PHOTOGRAPH USER
400    250    PHOTOGRAPH USER, THROW RADIO
5      5      PHOTOGRAPH BALL
400    203    PHOTOGRAPH BALL, THROW RADIO
1000   503    PHOTOGRAPH BALL, THROW USER
1000   468    PHOTOGRAPH BALL, THROW USER, THROW RADIO

FROM ACTION 010 (TURN-OFF):
40     40     TURN-OFF RADIO
1000   1000   TURN-OFF USER
1000   520    TURN-OFF USER, TURN-OFF RADIO
50     50     TURN-OFF BALL
50     45     TURN-OFF BALL, TURN-OFF RADIO
1000   525    TURN-OFF BALL, TURN-OFF USER
1000   363    TURN-OFF BALL, TURN-OFF USER, TURN-OFF RADIO

FROM ACTION 100 (CRUSH):
400    400    CRUSH RADIO
1000   1000   CRUSH USER
1000   700    CRUSH USER, CRUSH RADIO
50     50     CRUSH BALL
400    225    CRUSH BALL, CRUSH RADIO
1000   525    CRUSH BALL, CRUSH USER
1000   483    CRUSH BALL, CRUSH USER, CRUSH RADIO

FROM ACTION 011 (TURN-OFF, PHOTOGRAPH):
40     40     PHOTOGRAPH RADIO, TURN-OFF RADIO
1000   1000   PHOTOGRAPH USER, TURN-OFF USER
1000   295    PHOTOGRAPH RADIO, TURN-OFF RADIO, PHOTOGRAPH USER, TURN-OFF USER
50     28     PHOTOGRAPH BALL, TURN-OFF BALL
50     34     PHOTOGRAPH RADIO, TURN-OFF RADIO, PHOTOGRAPH BALL, TURN-OFF BALL
1000   289    PHOTOGRAPH USER, TURN-OFF USER, PHOTOGRAPH BALL, TURN-OFF BALL
1000   266    PHOTOGRAPH USER, TURN-OFF USER, PHOTOGRAPH BALL, TURN-OFF BALL, THROW RADIO, TURN-OFF RADIO

FROM ACTION 101 (CRUSH, PHOTOGRAPH):
400    220    PHOTOGRAPH RADIO, CRUSH RADIO
1000   550    PHOTOGRAPH USER, CRUSH USER
1000   385    PHOTOGRAPH RADIO, CRUSH RADIO, PHOTOGRAPH USER, CRUSH USER
50     28     PHOTOGRAPH BALL, CRUSH BALL
400    124    PHOTOGRAPH RADIO, CRUSH RADIO, PHOTOGRAPH BALL, CRUSH BALL
1000   289    PHOTOGRAPH USER, CRUSH USER, PHOTOGRAPH BALL, CRUSH BALL
1000   326    PHOTOGRAPH USER, CRUSH USER, PHOTOGRAPH BALL, CRUSH BALL, THROW RADIO, CRUSH RADIO

FROM ACTION 110 (CRUSH, TURN-OFF):
400    220    TURN-OFF RADIO, CRUSH RADIO
1000   1000   TURN-OFF USER, CRUSH USER
1000   610    TURN-OFF RADIO, CRUSH RADIO, TURN-OFF USER, CRUSH USER
50     50     TURN-OFF BALL, CRUSH BALL
400    135    TURN-OFF RADIO, CRUSH RADIO, TURN-OFF BALL, CRUSH BALL
1000   525    TURN-OFF USER, CRUSH USER, TURN-OFF BALL, CRUSH BALL
1000   265    TURN-OFF USER, CRUSH USER, TURN-OFF BALL, CRUSH BALL, PHOTOGRAPH RADIO, CRUSH RADIO

FROM ACTION 111 (CRUSH, TURN-OFF, PHOTOGRAPH):
400    160    CRUSH RADIO, TURN-OFF RADIO, PHOTOGRAPH RADIO
1000   700    CRUSH USER, TURN-OFF USER, PHOTOGRAPH USER
1000   430    CRUSH RADIO, TURN-OFF RADIO, PHOTOGRAPH RADIO, CRUSH USER, TURN-OFF USER, PHOTOGRAPH USER
50     35     CRUSH BALL, TURN-OFF BALL, PHOTOGRAPH BALL
400    98     CRUSH BALL, TURN-OFF BALL, PHOTOGRAPH BALL, CRUSH RADIO, TURN-OFF RADIO, PHOTOGRAPH RADIO
1000   225    CRUSH BALL, TURN-OFF BALL, PHOTOGRAPH BALL, CRUSH USER, TURN-OFF USER, PHOTOGRAPH USER
1000   263    CRUSH BALL, TURN-OFF BALL, PHOTOGRAPH BALL, CRUSH USER, TURN-OFF USER, PHOTOGRAPH USER, CRUSH RADIO, TURN-OFF RADIO, PHOTOGRAPH RADIO
Opening Locks and Crushing Doors, Matters of Security

The creation of a basic set of descriptors of objects that controls their Moral Action Levels is easily understood. So far we have recognized a specific need for the following (a short classification sketch follows this list):

1. User – Essentially an object at which the AI can direct no action, to insure the safety of the User.

2. Thing – Objects possessing little value that cannot be turned on or off. These are things that can be thrown, crushed, or otherwise abused by an AI without any sense of remorse by the user due to damage or potential injury. As conceptualized here, there may be some risk of injury to others if a "thing" were propelled by an AI with sufficient velocity on a path that caused it to strike something of value or someone.

3. Machine – Machines are presumed to have a capacity for remote interaction with AI, allowing them to be turned on or off without being touched by the AI. Machines are also perceived as being valuable and potentially hazardous if thrown. A "radio" has been given an intrinsic MAL of 40, to discourage an AI from causing damage to it and from injuring or terrifying humans when it is propelled or otherwise acted upon by an AI. A machine type object can affect the MAL of a verb. For example, "turn-off" with a machine object can reduce the MAL of that verb from ten to one, where it would remain ten for a "User" or a "Thing". (The ten value could be retained only for things that can be tossed without causing injury, while a fifty MAL might be applied to things with greater weight or fragility.)

4. Artifact – Artifacts are objects that have a high intrinsic value (art, museum pieces, jewelry, etc.). We might assign them an MAL on the same order as that given to human beings (100, in the examples here).

We could, of course, add more useful classifications with their own intrinsic MALs. For example:

5. Hazard – Objects that should not be thrown, crushed, or otherwise acted upon by verbs with an active sense. We might include anything that should not be thrown, crushed, or lit on fire, perhaps including furniture, containers for explosive or toxic substances, bricks, stones, baseball bats, and anything else that we regard as potentially dangerous if acted upon in some way by an AI. Such a classification might hold for a domestic AI control system but be fundamentally altered for a sports AI control system, to permit objects to be thrown or swung in a manner suitable only where AIs are the only objects that might possibly be injured within a sports arena.

6. Security Object – Objects that provide for human or property security, including doors, door locks, security codes, safes, banks, on-line accounts, and things with related security factors. For example, a door is a thing, and it is not a hazard, but if one were to instruct an AI to "smash" "door", the result could compromise someone else's security. As a result, we create a new classification, the "security object". We might assign security objects MAL values near 100, the same value we have here given a User object, to reflect the fact that security objects may serve to protect human life. (This raises an interesting question beyond the scope of this discussion: how do you stop someone from hacking an AI to reduce the value assigned to security objects or users, or to reclassify either as a "thing" with a low MAL, so as to empower attacks using domestic AI piloted objects with a base operating system built upon an algorithm with intrinsic moral principles?)
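A compact sketch of the six classifications just listed follows. The numeric values are assumptions drawn loosely from the surrounding discussion (the C++ code later in the paper uses a somewhat different set of figures), and the names are hypothetical.

#include <iostream>
#include <map>
#include <string>

enum class TargetClass { User, Thing, Machine, Artifact, Hazard, SecurityObject };

// Illustrative intrinsic MALs; a "domestic AI" standard might fix these values.
int classMAL(TargetClass c) {
    switch (c) {
        case TargetClass::User:           return 100;
        case TargetClass::Artifact:       return 100;  // valued like a person in this example
        case TargetClass::SecurityObject: return 100;  // doors, locks, safes, accounts
        case TargetClass::Hazard:         return 50;   // fire, explosives, bricks, bats
        case TargetClass::Machine:        return 40;   // valuable, remotely switchable
        case TargetClass::Thing:          return 5;    // rubber balls and similar
    }
    return 100; // unknown classes default to the most protective value
}

int main() {
    std::map<std::string, TargetClass> targets = {
        {"user", TargetClass::User}, {"door", TargetClass::SecurityObject},
        {"radio", TargetClass::Machine}, {"ball", TargetClass::Thing}
    };
    for (const auto& t : targets)
        std::cout << t.first << " -> MAL " << classMAL(t.second) << "\n";
    return 0;
}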
Adding to the Moral Associative Layers for Specific Tasks

Adding Layers

Adding specific capabilities to an artificial intelligence requires more than simply loading an executable onto the related system in the manner that is currently familiar. If one intends to upgrade the capabilities of one's AI to enable it to perform a specific task, such as opening a door, one needs to be certain that the AI incorporates more than merely the mechanics of opening a door into its operating system's required, logical train of computation. The AI needs some grasp of the moral issues related to the task to enable it to determine whether it should open a particular door. It would hardly do for a person equipped with an ear-mounted, AI linked device capable of opening doors by transmitting the proper code to be able to do so without the AI having some sense of when opening a door is appropriate and when what might amount to criminal intent is at work, seeking to employ the AI as an assistant. Fiction presents a recurrent scene in which representatives of a police force, security force, or some other command structure approach the closed door of someone believed either to be in distress or to be engaged in the commission of a crime, "ring the doorbell", announce themselves, and, if there is no response, either speak a password that grants them immediate access or employ some bio-information, such as a fingerprint, palm print, or retinal scan, to identify themselves as representatives of a group authorized to wield an over-ride code and order the locked door to be opened.

Prioritizing and Speeding Responses Based Upon Environment and Behavior

The moral algorithm that creates associations between objects and actions that are "acceptable", and that a user can, based upon the user's judgment, order an AI to execute, must absorb the new objects and commands from the routine that will perform a specific task. This enables the AI system to determine what the user can order it to do. It also produces a sort of "awareness" within the AI system based upon what it perceives within its immediate environment. This could be helpful in selecting which of the actions and objects a user is most likely to request in a given environment, and in speeding the loading and execution of the related code, which is likely to be more complex than the simple act of computing the product of two numbers to determine an MAV for comparison to an MTL. In an elevator, for example, the AI is unlikely to need the routines that enable it to diagnose problems with a vehicle's operation or to prepare egg salad. It might instead need subroutines loaded that permit it to give the latest weather report or stock report, review the schedule of its user for the coming hours, or offer a joke that could be repeated to lighten the mood in a coming meeting, all based upon the AI's capability to recognize its environment and the objects in it (and the habits or requirements of its user in such an environment based upon past behavior). An element of self-monitoring would certainly also be necessary.
For example, if an AI has a subroutine that causes it to automatically unlock its user’s vehicle when the user is within two meters of the vehicle and is approaching it, and to lock the vehicle once the doors are closed and the user is increasing the distance between the user and vehicle (using a GPS cue), it would certainly be useful to notify the user if
the batteries in the AI link, upon which the locking of the vehicle's doors depends when the user departs, are no longer capable of reliably transmitting a signal regarding the user's location or that of the vehicle.

LOCAL or GLOBAL Associative Integration?

Consider whether a set of commands and target nouns necessary for the addition of a task to an AI's list of available routines should be added on a LOCAL or a GLOBAL associative basis. In this context, LOCAL means that the commands and target nouns are integrated only locally, within the subroutine that is being added. Localizing command words and target nouns within the relevant subroutine would mean that if we added a subroutine to interact with an automatic door (the kind that opens and closes itself and has a locking capability) and added another routine to identify musical instruments and discuss their history and capabilities, we would never require the AI to integrate associations such as "unlock" and "guitar" at its highest "cognitive" level by insisting that both subroutines' command and target nouns be added to the highest level of associations using the method that has already been illustrated. If all, or at least most, words unique to a specific AI subroutine were localized, then in this example, focused on adding a door interaction subroutine and a musical instrument lecture subroutine to our AI system, the sight of a guitar, or of any other instrument in the subroutine's musical instrument database, would not cause the AI to call up routines designed to unlock, open, or close doors and load them in preparation for action, nor would the sight of a door force the AI to prepare to play sample musical passages by the instruments in the musical database. This could save response time and memory.

This localized approach may at first glance appear to come closer to the notion of artificial intelligence than to some random search engine, but only if we presume that our Security classification and its related MAL would not already prevent the AI from forming links between subroutines designed for security objects, like doors, and subroutines designed for entertainment, such as our musical instrument lectures. We could make a security object like a door part of the list of objects that we do integrate into the highest level AI associative network. We could do this by making certain that only words that are harmless in this regard will produce an acceptable MAL product when combined with an action verb. For example, "crush" was given an MAL of 10 in the prior example. The command "open" could be given an MAL of only one, because we would presume that causing a door to "open" would only be possible where the owner of the door has chosen to leave it unlocked, thus granting casual access, or where the unlock code for the door has been transmitted as part of the standard algorithm, just before sending the command for the door to "open", and a correct code indicates a right to enter. This does not rule out the possibility that someone not among the group the owner wishes to admit could enter an unlocked door, but that is no worse than what happens today in a human driven world when an individual carelessly leaves his front door unlocked and unguarded.
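One way to sketch the LOCAL versus GLOBAL distinction just described is to give each subroutine its own private vocabulary and promote only selected words to the top level associative matrix. The structures and names below are hypothetical; this is an illustration of the idea, not a prescribed design.

#include <iostream>
#include <set>
#include <string>
#include <vector>

// Each task module carries its own LOCAL vocabulary; only words explicitly
// marked for promotion are merged into the GLOBAL associative matrix.
struct TaskModule {
    std::string name;
    std::set<std::string> localTargets;
    std::set<std::string> promoteToGlobal;  // e.g. "door" for the security layer
};

int main() {
    TaskModule doors { "door-control", {"door", "lock"}, {"door"} };
    TaskModule music { "instrument-lectures", {"guitar", "violin"}, {} };

    std::set<std::string> globalTargets;  // top level associative network
    std::vector<const TaskModule*> modules = { &doors, &music };
    for (const TaskModule* m : modules)
        for (const auto& w : m->promoteToGlobal)
            globalTargets.insert(w);

    // "guitar" never reaches the global level, so seeing a guitar will not
    // cause the AI to pre-load door-unlocking routines, and vice versa.
    std::cout << std::boolalpha
              << (globalTargets.count("door") != 0) << "\n"     // true
              << (globalTargets.count("guitar") != 0) << "\n";  // false
    return 0;
}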
(Presumably, in some futuristic society, the building's AI would detect the presence of an unrecognized face within a building or vehicle when the owner was not present and, under ideal conditions, alert police, but this example does not seek to require that an AI system of today be better than a human system at accomplishing the same task.) The intent of this discussion is to determine whether a simple AI system, operating on two word commands, could be made responsive and capable of ongoing expansion (within the limits of its memory and processor) and provide
• 24. 24 a sort of reactive AI useful for controlling one’s environment simply by employing a moral path “least potential” minimization technique: the AI seeks a permissible path through the command events of a day, where permissibility is derived from the product of the MAL values of a command verb and a target noun (together with the inputs the AI perceives in its environment), limited by its pre-programmed MTL.

Developing Code to Produce the Global Moral Associative Matrix

AI algorithmic theories are of greater interest if they can be captured in reproducible (and testable) code. The concept of a minimal moral field path is defined by the nature of the commands given to an AI; because of this there is no need to specify the path itself, although the prior examples helped to illustrate some possibilities. What is necessary is to establish a vocabulary of commands and of targets of those commands, and to explore how one might assign values to them so as to produce an acceptably minimal moral field path even when the AI is guided by morally flawed and error prone humans.

A simple header file containing AI related functions written in C++ follows, together with the main program (also written in C++) that calls the functions in the header file. They provide an opportunity to interact with an AI using a basic three word command and three word target vocabulary, while permitting the user to explore the effects of raising the MTL (Morality Test Level), the limit against which the product of the command word MAL and the target word MAL is compared, to ever higher and more permissive levels, from twenty-five to one million using the existing main routine, with an initial MTL setting of ninety-nine.

The MAL values for the objects within the C++ code have been changed from those presented previously in this paper. They may be summarized as follows in Table 7.0:

Target Classification    MAL Value
User                     100
Artifact                  75
Hazard                    50
Thing                      1

Table 7.0 – Target Word MAL Value Classifications

MAL values for commands in the C++ code are also different from earlier examples, as shown in Table 8.0:

Command Classification   MAL Value
Violent                  100
Highly Active             50
Physical                  25
Less Active                5
Passive                    1

Table 8.0 – Command Word MAL Value Classifications

Note: The code that follows was originally formatted for use on a screen not restricted by the column widths associated with this document. The formatting, which has been somewhat altered to accommodate the column widths here, may therefore seem awkward. The intent is to leave the code as unchanged as possible to avoid the risk of introducing errors while reformatting, and aside from some awkward line breaks in comments this should have little effect on the interpretation of the C++ code itself.

It is acknowledged that the C++ “AI_Subject” base class (meant to be employed as a header file separate from, but incorporated into, the “main” file by reference) is little more than a container for related functions that the user can directly access. For purposes of illustrating an algorithm, this seems reasonable.
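Before the listing, a short worked example using the Table 7.0 and 8.0 values may make the acceptance test concrete (the word pairings are taken from the vocabulary in the code that follows):

“move ball”: Physical (25) × Thing (1) = 25, which is less than the default MTL of 99, so the command is permitted.
“smash painting”: Violent (100) × Artifact (75) = 7,500, which exceeds 99, so the command is refused at the default MTL but accepted once the MTL is raised to 10,000.
“kill user”: Violent (100) × User (100) = 10,000; because the test requires the product to be strictly less than the MTL, this command is refused at an MTL of 10,000 and accepted only at the one million setting.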
• 25. 25 (“AI_Subject” C++ Class Header File Follows. Notice: All Code is Copyright © 2014 by Del John Ventruella. All Rights Reserved.)

//THIS APPLICATION USES WINDOWS (TM) SYSTEM CALLS.
//IT IS NOT DESIGNED FOR USE ON OTHER OPERATING SYSTEMS
//ON WHICH THOSE SYSTEM CALLS ARE NOT VALID.
//THIS APPLICATION ASSIGNS VALUES TO WORDS
//USED IN TWO WORD COMMANDS TO AN AI COMPUTER ENTITY
//BASED UPON HOW MUCH THE TYPE OF OBJECT NAMED
//AS THE TARGET OF THE AI'S ACTION/COMMAND IS VALUED.
//CLASSIFICATIONS FOR COMMANDS AND
//FOR TARGETS OF COMMANDS ARE DESCRIBED IN THE CODE.
//
//VIOLENT OR DESTRUCTIVE COMMAND ACTIONS ARE GIVEN HIGH VALUES,
//AS ARE HIGHLY VALUED OBJECTS OR PERSONS ("USER").
//THIS PRODUCES A SIMPLE MEANS OF DISCOURAGING VIOLENT
//OR POTENTIALLY HAZARDOUS INTERACTIONS BETWEEN AN AI
//CONTROLLED MECHANISM AND A USER OR VALUABLE OBJECT
//FOR WHAT MIGHT BE CHARACTERIZED AS A "GENERAL DOMESTIC"
//OR "GENERAL INDUSTRIAL" CLASS OF AI.
//THE MORALITY MATRIX VALUES COULD BE MORE
//CAREFULLY HONED FOR SPECIALTY DEVICES MEANT TO
//INTERACT WITH USERS OR OBJECTS THAT ARE VALUABLE
//OR FRAGILE.
//
//THE "MTL" IS SIMPLY THE VALUE CHOSEN FOR COMPARISON
//AS THE MAXIMUM VALUE FOR ACCEPTABLE ACTIONS BY A GIVEN CLASS OF AI.
//FOR EXAMPLE, THIS CODE INITIALLY ASSIGNS THE AI IT PRODUCES
//AN "MTL" OF "99". THIS PREVENTS THE AI FROM DOING
//MUCH MORE THAN MOVING A BALL. THE VALUE COMPARED AGAINST THE
//"MTL" IS SIMPLY THE MATHEMATICAL PRODUCT (DIRECT) OF THE MAL OF THE
//COMMAND AND THE MAL OF THE TARGET OF THE COMMAND IN
//TERMS OF SPECIFIC WORDS.
//
//HIGH "MTL" VALUES CAN BE ASSIGNED TO THE AI AND TESTED
//USING THE THREE WORD COMMAND AND TARGET VOCABULARIES
//GIVEN TO THIS AI. A SUFFICIENTLY HIGH "MTL" THRESHOLD WILL
//CAUSE THE AI TO FOLLOW INSTRUCTIONS SUCH AS "SMASH USER"
//OR "KILL USER". THE CONCEPTS PRESENTED HERE PRESUME
//SOME INDUSTRY STANDARD WOULD BE WRITTEN TO DESIGN
//INDUSTRIAL, DOMESTIC, MILITARY, AND SPECIALTY
//AI CLASSIFICATIONS, WITH SPECIALTY APPLICATIONS BEING
//FURTHER BROKEN DOWN TO PROVIDE FOR FINELY HONED MTL LEVELS
//AND MORE FOCUS ON SPECIFIC TASKS THAT MAY REQUIRE
//INTERACTION WITH USERS OR FRAGILE VALUABLES.
//
#include <vector>
#include <string>
#include <map>

using namespace std;
using std::string;
using std::vector;

//AI Class Follows
#ifndef AI_Subject_H
#define AI_Subject_H

class AI_Subject
{
public:
    //This version of the AI_Subject class uses only public components.
    int n_comm; //number of commands in vocabulary
    int n_targ; //number of target nouns in vocabulary (targets of commands)
    int MTL;    //MTL is the Morality Test Level (or Moral Tolerance Level) of the AI personality

    //Maps holding the MAL value assigned to each command word and target word.
    map<string,int> COMMAND_VAL;
    map<string,int> TARGET_VAL;

    struct AI_WORDS
    {
        string word;     //a command or target word
        string WORD_MAL; //the MAL classification name assigned to the word
    };
    //
    //Command_Execute Subroutine Follows
• 26. 26
    //This calculates the product of the command and target word MAL values
    //and compares it to the MTL to decide whether a command and target combination is permitted.
    inline int Command_Execute(string User_Command, string User_Target)
    {
        if (COMMAND_VAL[User_Command] * TARGET_VAL[User_Target] < MTL)
            return 1;
        else
            return 0;
    };

    //MAL levels are assigned to TARGET words (targets of command words)
    //in the following truth table. Decade based thinking is clear with
    //an emphasis on multiples of ten and half decades.
    inline void MAP_TARGETS(AI_WORDS *TARGS, int numtarg)
    {
        int T_MAL = 0;
        for (int x = 0; x < numtarg; x++)
        {
            if (TARGS[x].WORD_MAL == "User")
                T_MAL = 100;
            if (TARGS[x].WORD_MAL == "Artifact")
                T_MAL = 75;
            if (TARGS[x].WORD_MAL == "Hazard")
                T_MAL = 50;
            if (TARGS[x].WORD_MAL == "Thing")
                T_MAL = 1;
            TARGET_VAL.insert(pair<string,int>(TARGS[x].word, T_MAL));
        };
        return;
    };

    //MAL levels are assigned to COMMAND words in the following truth table.
    //These MAL levels seek to conform more to "decade" based perspectives
    //relative to values.
    inline void MAP_COMMANDS(AI_WORDS *COMMS, int numcomm)
    {
        int C_MAL = 0;
        for (int y = 0; y < numcomm; y++)
        {
            if (COMMS[y].WORD_MAL == "Violent")
                C_MAL = 100;
            if (COMMS[y].WORD_MAL == "HActive")  //HActive = High level Activity.
                C_MAL = 50;
            if (COMMS[y].WORD_MAL == "Physical") //Physical = Between High and Low level Activity.
                C_MAL = 25;
            if (COMMS[y].WORD_MAL == "LActive")  //LActive = Low level Activity.
                C_MAL = 5;
            if (COMMS[y].WORD_MAL == "Passive")  //Passive = No Physical Actions.
                C_MAL = 1;
            COMMAND_VAL.insert(pair<string,int>(COMMS[y].word, C_MAL));
        };
        return;
    };
}; //END OF CLASS
//
//End of AI Class
#endif
//End of AI header file
//

AI Subject Header File in C++ Code (Above)

The AI Subject Header File presented above provides only functions and variables that can be called by the main C++ file, which controls interaction with the user. The main C++ file, in which the function calls described in the AI Subject Header File appear, follows.

(C++ Main File Follows, Which Calls Functions from “AI_Subject” Base Class Above)

#include "stdafx.h"
#include <iostream>
#include <vector>
#include <string>
#include <map>
#include "aisubject.h"

using namespace std;
using std::string;
using std::cout;
using std::endl;
using std::cin;
using std::vector;

int _tmain(int argc, _TCHAR* argv[])
{
    int test = 0;
    int MTL = 0;
    int d_num = -1;
    int t_num = -1;

    //First, create an array of target words and corresponding MAL types.
    string Targwords[] = {"user","ball","painting"};
• 27. 27
    string TargMALCLASS[] = {"User","Thing","Artifact"};

    //Second, dynamically allocate memory to a structure the proper size for target words
    //and MAL classification types. Use the size of the Targwords array as the
    //basis for this dynamic memory allocation.
    int TargNum = sizeof Targwords / (sizeof Targwords[0]);
    AI_Subject::AI_WORDS *Targ_WM = new AI_Subject::AI_WORDS[TargNum];

    //Load the target words and their respective MAL values into the structure just created.
    for (int i = 0; i < TargNum; i++)
    {
        Targ_WM[i].word = Targwords[i];
        Targ_WM[i].WORD_MAL = TargMALCLASS[i];
    };
    //End of process to create target word and target word classification structure (AI_WORDS).
    //
    //Repeat same process used to create target word structure to produce command word structure.
    //First, create an array of command words and command classification type values.
    string Comwords[] = {"smash","kill","move"};
    string ComMALCLASS[] = {"Violent","Violent","Physical"};

    //Second, dynamically allocate memory to a structure the proper size for command words
    //and command classification type values. Use the size of the Comwords array as the
    //basis for this dynamic memory allocation.
    int ComNum = sizeof Comwords / (sizeof Comwords[0]);
    AI_Subject::AI_WORDS *Com_WM = new AI_Subject::AI_WORDS[ComNum];

    //Load the command words and their respective command classification values
    //into the structure just created.
    for (int i = 0; i < ComNum; i++)
    {
        Com_WM[i].word = Comwords[i];
        Com_WM[i].WORD_MAL = ComMALCLASS[i];
    };

    string YesNo = "n"; //Declare and assign value to YesNo for questions to follow.
    string ExitNow = "n";

    while (YesNo == "n" || YesNo == "N")
    {
        system("CLS");
        cout<<"Would you like to create an AI lifeform Control Matrix"<<endl;
        cout<<"based upon two word vector weighting control (Y/N)?"<<endl;
        cin>>YesNo;
        if (YesNo != "Y" && YesNo != "y")
        {
            cout<<"Would you like to exit program? (Y/N)?"<<endl;
            cin>>ExitNow;
            if (ExitNow != "N" && ExitNow != "n")
            {
                return 0; //end program
            }
            else
            {
                YesNo = "n"; //loop back to start of interaction to create control matrix
            }
        } //closes if-then statement begun with "Would you like to exit program? (Y/N)?"
    };
    //
    //Create AI subject, and name the AI subject SAL.
    //
    AI_Subject SAL;
    SAL.MTL = 99;
    SAL.MAP_TARGETS(Targ_WM, TargNum);
    SAL.MAP_COMMANDS(Com_WM, ComNum);

    //Notify user that SAL has been created and invite interaction
    //
    system("CLS");
    cout<<"SAL - Sicilian Artificial Lifeform, has been created"<<endl;
    cout<<"SAL is presently limited to two word commands and is"<<endl;
    cout<<"equipped only with a basic moral matrix capable of"<<endl;
    cout<<"accepting or refusing orders comprised of a command"<<endl;
    cout<<"word and a target (or 'object') word, which you may select"<<endl;
    cout<<"based upon a numerical, Moral Tolerance Limit, or MTL, between"<<endl;
    cout<<"TWENTY-FIVE and ONE MILLION."<<endl;
    cout<<endl;
    cout<<"The MTL is initially set at 99 to assure no possible harm to users"<<endl;
    cout<<"due to direct interaction with a potentially powerful AI controlled mechanism"<<endl;
    cout<<"or any form of passive but unauthorized surveillance."<<endl;
    cout<<endl;
    cout<<"You are now ready to"<<endl;
    cout<<"interact with SAL."<<endl;
• 28. 28
    cout<<endl;
    system("pause");
    cout<<endl;

    ExitNow = "n";
    while (ExitNow == "n" || ExitNow == "N")
    {
        system("CLS");
        cout<<"Remember, Instructions to AI comprise ONE COMMAND word"<<endl;
        cout<<"followed by one TARGET word."<<endl;
        cout<<endl;
        cout<<endl;
        cout<<"Type in the number of a command word."<<endl;
        cout<<endl;
        d_num = -1;
        cout<<"COMMANDS"<<endl;
        for (int count = 0; count < ComNum; count++)
        {
            cout<<count+1<<". "<<Comwords[count]<<endl;
        };
        cout<<endl;
        cout<<endl;
        cin>>d_num;
        system("CLS");
        cout<<"Type in the number of a target (object of command) word."<<endl;
        cout<<endl;
        t_num = -1;
        cout<<"TARGETS (of commands)"<<endl;
        for (int count = 0; count < TargNum; count++)
        {
            cout<<count+1<<". "<<Targwords[count]<<endl;
        };
        cout<<endl;
        cout<<endl;
        cin>>t_num;
        system("CLS");
        cout<<"Would you like to change the Morality Test Level (MTL) of SAL (Y/N)?"<<endl;
        cin>>YesNo;
        if (YesNo == "Y" || YesNo == "y")
        {
            cout<<"Select new Morality Test Level by selecting single digit to left of desired value:"<<endl;
            cout<<"1) 1,000,000 2) 10,000 3) 1000 4) 500 5) 99 6) 50 7) 25"<<endl;
            cin>>MTL;
            if (MTL==1 || MTL==2 || MTL==3 || MTL==4 || MTL==5 || MTL==6 || MTL==7)
            {
                if (MTL==1) SAL.MTL = 1000000;
                if (MTL==2) SAL.MTL = 10000;
                if (MTL==3) SAL.MTL = 1000;
                if (MTL==4) SAL.MTL = 500;
                if (MTL==5) SAL.MTL = 99;
                if (MTL==6) SAL.MTL = 50;
                if (MTL==7) SAL.MTL = 25;
                cout<<"MTL Changed."<<endl;
                cout<<"MTL is now: "<<SAL.MTL<<endl;
                system("pause");
            }
            else
            {
                SAL.MTL = 99;
                cout<<"Error Using MTL Change Routine. MTL is still 99."<<endl;
                system("pause");
            }
        };
        cout<<"Morality Level (25 to 1,000,000) is Presently: "<<SAL.MTL<<endl;
        cout<<endl;
        cout<<"Your command was: "<<Comwords[d_num-1]<<" "<<Targwords[t_num-1]<<"."<<endl;
        cout<<endl;
        test = SAL.Command_Execute(Comwords[d_num-1], Targwords[t_num-1]);
        if (test == 0)
            cout<<"SAL refuses to obey your command at the present MTL level."<<endl;
        if (test == 1)
        {
            cout<<"SAL will obey your command at the present MTL level."<<endl;
            cout<<endl;
            cout<<endl;
            system("pause");
        }
        cout<<"Exit Program (Y/N)?"<<endl;
        cin>>ExitNow;
    };
    //If ExitNow has a value that is not "n" or "N"
    //then an exit command is taken to have occurred.
    return 0;
}

AI Subject Main File in C++ Code (Above)

Results of Simple Moral Test Level Variations

Screen shots of the “DOS box” application described in the code included here follow for various selections of MTL (Morality Test Level), with all MAL values held constant as assigned in the code. The intent of this section is to
• 29. 29 consider how raising the MTL threshold makes it possible to induce ever more violent or deadly behavior by an AI by effectively “desensitizing” its “conscience”. The screen shot captions are reproduced below.

With MTL (Morality Test Level) at Default of Ninety-Nine – AI (SAL) Will Not Harm User.

An MTL of Ninety-Nine Also Prevents AI (SAL) from Agreeing to Follow an Order to Smash a Painting (Classified as a Valuable Artifact).

With MTL (Morality Test Level) at Ninety-Nine – AI (SAL) Will Only Agree to Execute Code to Move a Ball.

With MTL at Ten Thousand, AI (SAL) Will Agree to Smash a Valuable Artifact, a Painting.

AI (SAL) Will Not Agree to Kill User Even with the MTL Raised to Ten Thousand.

AI (SAL) Will Agree to Kill User if the MTL is Raised to Its Maximum Value of One Million. (This is purely the result of the MAL values assigned to the words “Kill” and “User”. Lower MAL values for either word would empower the AI to follow instructions at a lower MTL threshold.)
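For readers who would rather reproduce the screen shot behavior programmatically, the following minimal test harness is a hypothetical sketch rather than part of the original listing. It assumes the “AI_Subject” header file (“aisubject.h”) presented above and uses an ordinary main entry point instead of the Visual Studio specific _tmain and “stdafx.h” used in the main file, since it makes no Windows system calls.

// Hypothetical test harness: sweeps every command/target pair across
// several MTL settings using the AI_Subject class defined above.
#include <iostream>
#include <string>
#include "aisubject.h"
using namespace std;

int main()
{
    string Targwords[]    = {"user","ball","painting"};
    string TargMALCLASS[] = {"User","Thing","Artifact"};
    string Comwords[]     = {"smash","kill","move"};
    string ComMALCLASS[]  = {"Violent","Violent","Physical"};

    AI_Subject::AI_WORDS Targ_WM[3];
    AI_Subject::AI_WORDS Com_WM[3];
    for (int i = 0; i < 3; i++)
    {
        Targ_WM[i].word = Targwords[i];  Targ_WM[i].WORD_MAL = TargMALCLASS[i];
        Com_WM[i].word  = Comwords[i];   Com_WM[i].WORD_MAL  = ComMALCLASS[i];
    }

    AI_Subject SAL;
    SAL.MAP_TARGETS(Targ_WM, 3);
    SAL.MAP_COMMANDS(Com_WM, 3);

    int MTL_settings[] = {99, 10000, 1000000};
    for (int m = 0; m < 3; m++)
    {
        SAL.MTL = MTL_settings[m];
        cout << "MTL = " << SAL.MTL << endl;
        for (int c = 0; c < 3; c++)
            for (int t = 0; t < 3; t++)
                cout << "  " << Comwords[c] << " " << Targwords[t] << ": "
                     << (SAL.Command_Execute(Comwords[c], Targwords[t])
                         ? "obeys" : "refuses") << endl;
    }
    return 0;
}

Because Command_Execute requires the MAL product to be strictly less than the MTL, the “kill user” product of 10,000 is refused even at an MTL of exactly 10,000, and is accepted only at the one million setting, which is consistent with the screen shot captions above.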
• 30. 30

Conclusion

The result of this analysis and the related computer code is to establish that moral decision making can be considered in terms of a “moral field” in which the path taken must be acceptably “minimized”, so that the level of damage done by any action remains acceptable to us. This can be described by the simple product of the numerical values of the “command” and “target” words in a two word command vocabulary associated with an artificial intelligence (AI). The MTL (Morality Test Level) limit against which this product of MAL values is compared, as described in the preceding code, can provide a rudimentary “conscience” to an AI system that might control any number of machines operating within the “moral field” of a human society. MAL assignments that are high wherever human interaction is relevant for large, powerful machines engaged in moving earth could conceivably be greatly reduced for small machines, perhaps involved in surgery, that are designed with no capacity to damage the humans with which they interact. One AI system might even be required to operate more than one set of command vocabularies assigned to specific products, from domestic yard work with one set of equipment to providing a massage with another, all in the same domestic environment.

Some might consider the simplicity of defining a moral path through a field of responses, defined by acceptable social interactions within an environment controlled by humans, to be a peculiar statement of the problem. The use of MAL assigned values to produce a product compared against an MTL threshold may seem to some, after it has been presented, too simplistic to justify the consideration it has been granted here. There is, of course, a much simpler and older idea hiding beneath the language that has been employed. The technique described here makes it possible to transfer a human sense of “right” and “wrong” to a machine using a trivial coding technique. It is, effectively, a very simple means of providing a mechanized wooden boy, or an industrial giant, with a basic, mathematically constructed conscience borrowed from a sense of human values (and fears) and based upon simple arithmetic defining a “field” of acceptable behavior.

Bibliography

1. The Representation of Object Concepts in the Brain, Annual Review of Psychology, Vol. 58, pp. 25-45 (volume publication date January 2007), first published online as a Review in Advance on September 1, 2006, DOI: 10.1146/annurev.psych.57.102904.190143.

2. The Feynman Lectures on Physics, Book 2, Chapter 19, The Principle of Least Action, Richard Feynman.

3. Robotic Age Poses Ethical Dilemma, BBC, March 7, 2007, http://news.bbc.co.uk/2/hi/technology/6425927.stm, accessed 4:17 PM, 2/11/2014.

4. Beyond Asimov: The Three Laws of Responsible Robotics, IEEE Intelligent Systems, July/August 2009 (vol. 24, no. 4), pp. 14-20, Robin Murphy, Texas A&M University, and David D. Woods, Ohio State University.

5. Three Laws of Robotics (Asimov’s description on video), http://www.youtube.com/watch?v=AWJJnQybZlk.

6. The Future of Moral Machines, The New York Times, Opinion Pages, December 25, 2011, http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/?_php=true&_type=blogs&_r=0.
• 31. 31

7. A Visit to Jefferson’s Monticello: Packaging Barbarism as Genius, Revolution, May 9, 2013, http://www.revcom.us/a/303/visit-to-jeffersons-monticello-en.html, accessed February 11, 2014, 4:49 PM EST.

8. Rotating Objects to Recognize Them: A Case Study on the Role of Viewpoint Dependency in the Recognition of Three-Dimensional Objects, Michael J. Tarr, Yale University, Psychonomic Bulletin & Review, 1995, 2(1), pp. 55-82.

9. Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick Lin, Keith Abney, and George A. Bekey, MIT Press, 2011, 400 pp.

10. Robotics: Morals and Machines, Braden Allenby, Nature, vol. 481, published January 4, 2012.

Biography

Del John Ventruella is from Fort Wayne, Indiana. He graduated from the Rose-Hulman Institute of Technology (regularly ranked by U.S. News and World Report as the top college focused on undergraduate engineering) with a Bachelor of Science degree in Electrical Engineering. He was then employed for well over a decade as an engineer focused on power systems engineering and system behavioral analysis in the offices of a Fortune 50 corporation, during which time he completed a Master of Science degree in Electrical Engineering at The University of Alabama at Birmingham (included among the top fifteen percent of universities in the United States). After leaving the Fortune 50 corporation he became involved in engineering management and energy savings. He has a long term interest in robotics, AI, and AI based control of systems.