Unit 2 ENG 120
Critique Assignment
Due: 11:59 PM EST Sunday
The word critique may have a negative connotation, but a critique does not have to be a strictly negative piece. After reading the assigned materials about Artificial Intelligence and about how to write critiques, choose one of the required readings (listed below) as the subject of your critique. The critique should be two to three pages in length and follow APA format.
Follow these guidelines to write your critique:
- Your critique should be two to three pages in length. Choose one of the following articles:
  o "What Did the Watson Computer Do?"
  o "Computer Wins on 'Jeopardy!': Trivial, It's Not"
  o "The AI Revolution Is On"
  o "The Man Who Would Teach Machines to Think"
  o "Mind in the Machine"
- Use in-text citations wherever you reference the text.
Students: Be sure to read the criteria by which your paper/project will be evaluated before you write, and again after you write.
Critique Rubric

Organization
Unsatisfactory – 1: The critique lacks a topic sentence, does not state the main idea of the original selection, and lacks an appropriate conclusion.
Developing – 2: The critique does not begin with a topic sentence that states the main idea of the original selection but has an effective concluding statement.
Proficient – 3: The critique begins with a topic sentence that states the main idea of the original selection but lacks a concluding statement that brings the critique to a close.
Exemplary – 4: The critique begins with a clear topic sentence that states the main idea of the original selection. A concluding sentence effectively brings the critique to a close.

Support
Unsatisfactory – 1: The critique states few major ideas and does not use a logical order. The writing lacks unity, coherence, and supporting details.
Developing – 2: The critique does not include all major ideas, but the writing is unified and coherent. Most supporting details are missing.
Proficient – 3: The writing is unified and coherent and all major ideas are represented, but it is lacking in some supporting details.
Exemplary – 4: All major ideas are arranged in logical order. The writing is unified and coherent throughout. Includes all supporting details.

Elements of Critique
Unsatisfactory – 1: Student neither evaluates nor analyzes the piece and offers no commentary or personal interpretation.
Developing – 2: Student does little to evaluate or analyze the piece. Offers little commentary or personal interpretation.
Proficient – 3: Student evaluates and analyzes the piece with some insightful commentary. Uses some personal interpretation to strengthen claims.
Exemplary – 4: Student appropriately evaluates and analyzes the piece, accurately presents insightful commentary on the validity of the piece, and uses personal interpretation to strengthen claims.

APA Format
Unsatisfactory – 1: There is no evidence of proper APA style anywhere in the critique.
Developing – 2: The critique contains more than seven APA errors in title page, objective tone, citation, and active voice.
Proficient – 3: The critique contains four to six errors in APA style title page, objective tone, citation, and active voice.
Exemplary – 4: The critique contains fewer than three errors in APA style title page, objective tone, citation, and active voice.

Grammar, Mechanics
Unsatisfactory – 1: Errors in mechanics make the critique difficult to understand.
Developing – 2: There are seven or more errors in mechanics, usage, grammar, or spelling.
Proficient – 3: There are four to six errors in mechanics, usage, grammar, or spelling.
Exemplary – 4: There are fewer than three errors in mechanics, usage, grammar, or spelling.
Point-to-percentage conversion: 20 = 100%, 19 = 98%, 18 = 96%, 17 = 93%, 16 = 90%, 15 = 87%, 14 = 84%, 13 = 81%, 12 = 79%, 11 = 77%, 10 = 75%, 9 = 73%, 8 = 70%, 7 = 68%, 6 = 66%, 5 = 64%
*A zero can be earned if the above criteria are not met.
Title: Mind in the MACHINE. By: PIORE, ADAM, Discover, 02747529, Jun 2013, Vol. 34, Issue 5
Database: Academic Search Premier

Mind in the MACHINE
A visionary engineer aims to transform computing with
technology modeled on the human brain
The day he got the news that would transform his life,
Dharmendra Modha, 17, was supervising a team of laborers
scraping paint
off iron chairs at a local Mumbai hospital. He felt happy to have
the position, which promised steady pay and security -- the most
a
poor teen from Mumbai could realistically aspire to in 1986.
Modha's mother sent word to the job site shortly after lunch:
The results from the statewide university entrance exams had
come
in. There appeared to be some sort of mistake, because a
perplexing telegram had arrived at the house. Modha's scores
hadn't
just placed him atop the city, the most densely inhabited in
India -- he was No. 1 in math, physics and chemistry for the
entire
province of Maharashtra, population 100 million. Could he
please proceed to the school to sort it out?
Back then, Modha couldn't conceive what that telegram might
mean for his future. Both his parents had ended their schooling
after the 11th grade. He could count on one hand the number of
relatives who went to college. But Modha's ambitions have
expanded considerably in the years since those test scores paved
his way to one of India's most prestigious technical
academies, and a successful career in computer science at IBM's
Almaden Research Center in San Jose, Calif.
Recently, the diminutive engineer with the bushy black
eyebrows, closely cropped hair and glasses sat in his Silicon
Valley office
and shared a vision to do nothing less than transform the future
of computing. "Our mission is clear," said Modha, now 44,
holding
up a rectangular circuit board featuring a golden square.
"We'd like these chips to be everywhere -- in every corner, in
everything. We'd like them to become absolutely essential to the
world."
Traditional chips are sets of miniaturized electrical components
on a small plate used by computers to perform operations. They
often consist of millions of tiny circuits capable of encoding
and storing information while also executing programmed
commands.
Modha's chips do the same thing, but at such enormous energy
savings that the computers they comprise would handle far more
data, by design. With the new chips as linchpin, Modha has
envisioned a novel computing paradigm, one far more powerful
than
anything that exists today, modeled on the same magical entity
that allowed an impoverished laborer from Mumbai to ascend to
one of the great citadels of technological innovation: the human
brain.
TURNING TO NEUROSCIENCE
The human brain consumes about as much energy as a 20-watt
bulb -- a billion times less energy than a computer that
simulates
brainlike computations. It is so compact it can fit in a two-liter
soda bottle. Yet this pulpy lump of organic material can do
things no
modern computer can. Sure, computers are far superior at
performing pre-programmed computations -- crunching payroll
numbers or calculating the route a lunar module needs to take to
reach a specific spot on the moon. But even the most advanced
computers can't come close to matching the brain's ability to
make sense out of unfamiliar sights, sounds, smells and events,
and quickly understand how they relate to one another. Nor can
such machines equal the human brain's capacity to learn from
experience and make predictions based on memory.
Five years ago, Modha concluded that if the world's best
engineers still hadn't figured out how to match the brain's
energy
efficiency and resourcefulness after decades of trying using the
old methods, perhaps they never would. So he tossed aside
many of the tenets that have guided chip design and software
development over the past 60 years and turned to the literature
of
neuroscience. Perhaps understanding the brain's disparate
components and the way they fit together would help him build
a
smarter, more energy-efficient silicon machine.
These efforts are paying off. Modha's new chips contain silicon
components that crudely mimic the physical layout of, and
connections between, microscopic carbon-based brain cells.
(See "Inside Modha's Neural Chip," page 55.) Modha is
confident
that his chips can be used to build a cognitive computing system
on the scale of a human brain for only 100 times more power,
making it 10 million times more energy efficient than the
computers of today.
Already, Modha's team has demonstrated some basic
capabilities. Without the help of a programmer explicitly telling
them what to
do, the chips they've developed can learn to play the game
Pong, moving a bar along the bottom of the screen and
anticipating the
exact angle of a bouncing ball. They can also recognize the
numbers zero through nine as a lab assistant scrawls them on a
pad
with an electronic pen.
Of course, plenty of engineers have pulled off such feats -- and
far more impressive ones. An entire subspecialty known as
machine learning is devoted to building algorithms that allow
computers to develop new behaviors based on experience. Such
machines have beaten the world's best minds in chess and
Jeopardy! But while machine learning theorists have made
progress
in teaching computers to perform specific tasks within a strict
set of parameters -- such as how to parallel park a car or plumb
encyclopedias for answers to trivia questions -- their programs
don't enable computers to generalize in an open-ended way.
Modha hopes his energy-efficient chips will usher in change.
"Modern computers were originally designed for three
fundamental
problems: business applications, such as billing; science, such
as nuclear physics simulation; and government programs, such
as Social Security," Modha states. The brain, on the other hand,
was forged in the crucible of evolution to quickly make sense
of
the world around it and act upon its conclusions. "It has the
ability to pick out a prowling predator in huge grasses, amid a
huge
amount of noise, without being told what it is looking for. It
isn't programmed. It learns to escape and avoid the lion."
Machines with similar capabilities could help solve one of
mankind's most pressing problems: the overload of information.
Between 2005 and 2012, the amount of digital information
created, replicated and consumed worldwide increased over
2,000
percent -- exceeding 2.8 trillion gigabytes in 2012. By some
estimates, that's almost as many bits of information as there are
stars
in the observable universe. The arduous task of writing the code
that instructs today's computers to make sense of this flood of
information -- how to order it, analyze it, connect it, what to do
with it -- is already far outstripping the abilities of human
programmers.
Cognitive computers, Modha believes, could plug the gap. Like
the brain, they will weave together inputs from multiple sensory
streams, form associations, encode memories, recognize
patterns, make predictions and then interpret, perhaps even act -
- all
using far less power than today's machines.
Drawing on data streaming in from a multitude of sensors
monitoring the world's water supply, for instance, the computer
might
learn to recognize changes in pressure, temperature, wave size
and tides, then issue tsunami warnings, even though current
science has yet to identify the constellation of variables
associated with the monster waves. Brain-based computers
could help
emergency department doctors render elusive diagnoses even
when science has yet to recognize the collection of changes in
body temperature, blood composition or other variables
associated with an underlying disease.
"You will still want to store your salary, your gender, your
Social Security number in today's computers," Modha says. "But
cognitive computing gives us a complementary paradigm for a
radically different kind of machine."
LIGHTING THE NETWORK
Modha is hardly the first engineer to draw inspiration from the
brain. An entire field of computer science has grown out of
insights
derived from the way the smallest units of the brain -- cells
called neurons -- perform computations. It is the firing of
neurons that
allows us to think, feel and move. Yet these abilities stem not
from the activity of any one neuron, but from networks of
interconnected neurons sending and receiving simple signals
and working in concert with each other.
The potential for brainlike machines emerged as early as 1943,
when neurophysiologist Warren McCulloch and mathematician
Walter Pitts proposed an idealized mathematical formulation for
the way networks of neurons interact to cause one another to
fire,
sending messages throughout the brain.
In a biological brain, neurons communicate by passing
electrochemical signals across junctions known as synapses.
Often the
process starts with external stimuli, like light or sound. If the
stimulus is intense enough, voltage across the membrane of
receiving neurons exceeds a given threshold, signaling
neurochemicals to fly across the synapses, causing more
neurons to fire
and so on and so forth. When a critical mass of neurons fire in
concert, the input is perceived by the cognitive regions of the
brain.
With enough neurons firing together, a child can learn to ride a
bike and a mouse can master a maze.
McCulloch and Pitts pointed out that no matter how many inputs
their idealized neuron might receive, it would always be in one
of
only two possible states -- activated or at rest, depending upon
whether the threshold for excitement had been passed. Because
neurons follow this "all-or-none law," every computation the brain performs can be reduced to a series of true or false expressions, where true and false can be represented by 1 and 0, respectively. Modern computers are also based on logic systems using 1s and 0s, with information coming from electric switches instead
of the outside environment.
McCulloch and Pitts had captured a fundamental similarity
between brains and computers. If endowed with the capacity to
ask
enough yes-or-no questions, either one should presumably
eventually arrive at the solution to even the most complicated of
questions. As an example, to draw a boundary between a group
of red dots and blue dots, one might ask of each dot if it is red
(yes/no) or blue (yes/no). Then one might ask if two
neighboring pairs of dots are of differing colors (yes/no). With
enough layers
of questions and answers, one might answer almost any complex
question at all.
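To make the all-or-none idea concrete, here is a minimal sketch (not from the article, and with illustrative weights and thresholds) of a McCulloch-Pitts style unit in Python. Each unit answers a single yes/no question, and wiring a few of them together already yields simple logic such as AND and OR.

```python
# Illustrative sketch only: a McCulloch-Pitts style neuron reduces its
# decision to a single true/false (1/0) output, as described above.
def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) only if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Layering such yes/no units answers more complex questions; the weights and
# thresholds below are assumptions chosen to realize AND and OR.
AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(1, 0), OR(0, 0))    # 1 0
```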
Yet this kind of logical ability seemed far removed from the
capacity of brains, made of networks of neurons, to encode
memories
or learn. That capacity was explained in 1949 by Canadian
psychologist Donald Hebb, who hypothesized that when two
neurons
fire in close succession, connections between them strengthen.
"Neurons that fire together wire together" is the catchy phrase
that emerged from his pivotal work.
Connections between neurons explain how narrative memory is
formed. In a famous literary example, Marcel Proust's
childhood
flooded back when he dipped a madeleine in his cup of tea and
took a bite. The ritual was one he had performed often during
childhood. When he repeated it years later, neurons fired in the
areas of the brain storing these taste and motor memories. As
Hebb had suggested, those neurons had strong physical
connections to other neurons associated with other childhood
memories. Thus when Proust tasted the madeleine, the neurons
encoding those memories also fired -- and Proust was flooded
with so many associative memories he filled volumes of his
masterwork, In Search of Lost Time.
By 1960, computer researchers were trying to model Hebb's
ideas about learning and memory. One effort was a crude brain
mock-up called the perceptron. The perceptron contained a
network of artificial neurons, which could be simulated on a
computer
or physically built with two layers of electrical circuits. The
space between the layers was said to represent the synapse.
When
the layers communicated with each other by passing signals
over the synapse, that was said to model (roughly) a living
neural
net. One could adjust the strength of signals passed between the
two layers -- and thus the likelihood that the first layer would
activate the second (much like one firing neuron activates
another to pass a signal along). Perceptron learning occurred
when the
second layer was instructed to respond more powerfully to some
inputs than others. Programmers trained an artificial neural
network to "read," activating more strongly when shown
patterns of light depicting certain letters of the alphabet and
less strongly
when shown others.
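For readers who want to see the mechanism, the following is a hedged, minimal sketch of a two-layer, perceptron-style learner in Python. It is not the historical perceptron code; the training rule, learning rate, and the toy four-pixel "letter" patterns are illustrative assumptions, but the idea of nudging connection strengths after each wrong response is the one described above.

```python
# Minimal perceptron-style sketch (illustrative, not the original program):
# adjust the weights between the input layer and the output unit whenever the
# output responds incorrectly to a pattern.
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    n = len(samples[0])
    w = [0.0] * n      # connection strengths between the two layers
    b = 0.0            # bias of the output unit
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            # Strengthen (or weaken) connections so the right inputs activate
            # the output layer more strongly next time.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy "patterns of light": two 4-pixel patterns standing in for letters.
patterns = [[1, 0, 1, 0], [0, 1, 0, 1]]
labels = [1, 0]
weights, bias = train_perceptron(patterns, labels)
print(weights, bias)
```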
The idea that one could train a computer to categorize data
based on experience was revolutionary. But the perceptron was
limited: Consisting of a mere two layers, it could only recognize
a "linearly separable" pattern, such as a plot of black dots and
white dots that can be separated by a single straight line (or, in
more graphic terms, a cat sitting next to a chair). But show it a
plot
of black and white dots depicting something more complex, like
a cat sitting on a chair, and it was utterly confused.
It wasn't until the 1980s that engineers developed an algorithm
capable of taking neural networks to the next level. Now
programmers could adjust the weights not just between two
layers of artificial neurons, but also a third, a fourth -- even a
ninth
layer -- in between, representing a universe where many more
details could live. This expanded the complexity of questions
such
networks could answer. Suddenly neural networks could render
squiggly lines between black and white dots, recognizing both
the
cat and the chair it was sitting in at the same time.
OUT OF BOMBAY
Just as the neural net revival was picking up steam, Modha
entered India's premier engineering school, the Indian Institute
of
Technology in Bombay. He graduated with a degree in computer
science and engineering in 1990.
As Modha looked to continue his education, few areas seemed
as hot as the reinvigorated field of neural networks. In theory,
the
size of neural networks was limited only by the size of
computers and the ingenuity of programmers.
In one powerful example of the new capabilities around that
time, Carnegie Mellon graduate student Dean Pomerleau used
simulated images of road conditions to teach a neural network
to interpret live road images picked up by cameras attached to a
car's onboard computer. Traditional programmers had been
stumped because even subtle changes in angle, lighting or other
variables threw off preprogrammed software coded to recognize
exact visual parameters.
Instead of trying to precisely code every possible image or road
condition, Pomerleau simply showed a neural network different
kinds of road conditions. Once it was trained to drive under
specific conditions, it was able to generalize to drive under
similar but
not identical conditions. Using this method, a computer could
recognize a road with metal dividers based on its similarities to
a
road without dividers, or a rainy road based on its similarity to
a sunny road -- an impossibility using traditional coding
techniques.
After being shown images of various left-curving and right-
curving roads, it could recognize roads curving at any angle.
Other programmers designed a neural network to detect credit
card fraud by exposing it to purchase histories of good versus
fraudulent card accounts. Based on the general spending
patterns found in known fraudulent accounts, the neural network
was
able to recognize the behavior and flag new fraud cases.
The neural networking mecca was San Diego -- in 1987, about
1,500 people met there for the first significant conference on
neural networking in two decades. And in 1991, Modha arrived
at the University of California, San Diego to pursue his Ph.D.
He
focused on applied math, constructing equations to examine
how many dimensions of variables certain systems could
handle,
and designing configurations to handle more.
By the time Modha was hired by IBM in 1997 in San Jose,
another computing trend was taking center stage: the explosion
of the
World Wide Web. Even back then, it was apparent that the flood
of new data was overwhelming programmers. The Internet
offered a vast trove of information about human behavior,
consumer preferences and social trends. But there was so much
of it:
How did one organize it? How could you begin to pick patterns
out of files that could be classified based on tens of thousands
of
characteristics? Current computers consumed way too much
energy to ever handle the data or the massive programs required
to
take every contingency into account. And with a growing array
of sensors gathering visual, auditory and other information in
homes, bridges, hospital emergency departments and
everywhere else, the information deluge would only grow.
A CANONICAL PATH
The more Modha thought about it, the more he became
convinced that the solution might be found by turning back to
the brain, the
most effective and energy-efficient pattern recognition machine
in existence. Looking to the neuro-scientific literature for
inspiration, he found the writings of MIT neuroscientist
Mriganka Sur. Sur had severed the neurons connecting the eyes
of
newborn ferrets to the brain's visual cortex; then he reconnected
those same neurons to the auditory cortex. Even with eyes
connected to the sound-processing areas of the brain, the
rewired animals could still see as adults.
To Modha, this revealed a fascinating insight: The neural
circuits in Sur's ferrets were flexible -- as interchangeable, it
seemed, as
the back and front tires of some cars. Sur's work implied that to
build an artificial cortex on a computer, you only needed one
design to create the "circuit" of neurons that formed all its
building blocks. If you could crack the code of that circuit --
and embody
it in computation -- all you had to do was repeat it.
Programmers wouldn't have to start over every time they wanted
to add a new
function to a computer, using pattern recognition algorithms to
make sense of new streams of data. They could just add more
circuits.
"The beauty of this whole approach," Modha enthusiastically
explains, "is that if you look at the mammalian cerebral cortex
as a
road map, you find that by adding more and more of these
circuits, you get more and more functionality."
In search of a master neural pattern, Modha discovered that
European researchers had come up with a mathematical
description
of what appeared to be the same as the circuit Sur investigated
in ferrets, but this time in cats. If you unfolded the cat cortex
and
unwrinkled it, you would find the same six layers repeated
again and again. When connections were drawn between
different
groups of neurons in the different layers, the resulting diagrams
looked an awful lot like electrical circuit diagrams.
Modha and his team began programming an artificial neural
network that drew inspiration from these canonical circuits and
could
be replicated multiple times. The first step was determining how
many of these virtual circuits they could link together and
run
on IBM's traditional supercomputers at once. Would it be
possible to reach the scale of a human cortex?
At first Modha and his team hit a wall before they reached 40
percent of the number of neurons present in the mouse cerebral
cortex: roughly 8 million neurons, with 6,300 synaptic
connections apiece. The truncated circuitry limited the learning,
memory
and creative intelligence their simulation could achieve.
So they turned back to neuroscience for solutions. The actual
neurons in the brain, they realized, only become a factor in the
organ's overall computational process when they are activated.
When inactive, neurons simply sit on the sidelines, expending
little
energy and doing nothing. So there was no need to update the
relationship between 8 million neurons 1,000 times a second.
Doing so only slowed the system down. Instead, they could
emulate the brain by instructing the computer to focus attention
only
on neurons that had recently fired and were thus most likely to
fire again. With this adjustment, the speed at which the
supercomputer could simulate a brain-based system increased a
thousandfold. By November 2007, Modha had simulated a
neural network on the scale of a rat cortex, with 55 million
neurons and 442 billion synapses.
Two years later his team scaled it up to the size of a cat brain,
simulating 1.6 billion neurons and almost 9 trillion synapses.
Eventually they scaled the model up to simulate a system of 530
billion neurons and 100 trillion synapses, a crude approximation
of the human brain.
BUILDING A SILICON BRAIN
The researchers had simulated hundreds of millions of
repetitions of the kind of canonical circuit that might one day
enable a new
breed of cognitive computer. But it was just a model, running at
a maddeningly slow speed on legacy machines that could never
be brainlike, never step up to the cognitive plate. In 2008, the
federal Defense Advanced Research Projects Agency (DARPA)
announced a program aimed at building the hardware for an
actual cognitive computer. The first grant was the creation of an
energy-efficient chip that would serve as the heart and soul of
the new machine -- a dream come true for Modha.
With DARPA's funding, Modha unveiled his new, energy-
efficient neural chips in summer 2011. Key to the chips' success
was
their processors, chip components that receive and execute
instructions for the machine. Traditional computers contain a
small
number of very fast processors (modern laptops usually have
two to four processors on a single chip) that are almost always
working. Every millisecond, these processors scan millions of
electrical switches, monitoring and flipping thousands of
circuits
between two possible states, 1 and 0 -- activated or not.
To store the patterns of ones and zeros, today's computers use a
separate memory unit. Electrical signals are conveyed between
the processor and memory over a pathway known as a memory
bus. Engineers have increased the speed of computing by
shortening the length of the bus. Some servers can now loop
from memory to processor and back around a few hundred million
times per second. But even the shortest buses consume energy
and create heat, requiring lots of power to cool.
The brain's architecture is fundamentally different, and a
computer based on the brain would reflect that. Instead of a
small
number of large, powerful processors working continuously, the
brain contains billions of relatively slow, small processors -- its
neurons -- which consume power only when activated. And
since the brain stores memories in the strength of connections
between neurons, inside the neural net itself, it requires no
energy-draining bus.
The processors in Modha's new chip are the smallest units of a
computer that works like the brain: Every chip contains 256
very
slow processors, each one representing an artificial neuron. (By
comparison, a roundworm brain consists of about 300 neurons.)
Only activated processors consume significant power at any one
time, making energy consumption low. But even when
activated,
the processors need far less power than their counterparts in
traditional computers because the tasks they are designed to
execute are far simpler: Whereas a traditional computer
processor is responsible for carrying out all the calculations and
operations that allow a computer to run, Modha's tiny units only
need to sum up the number of signals received from other
virtual
neurons, evaluate their relative weights and determine whether
there are enough of them to prompt the processor to emit a
signal
of its own.
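As a rough illustration of that sum-and-threshold behavior, here is a small Python sketch. It is an assumption-laden toy, not IBM's design: the class name, threshold value, and weights are invented for the example, but it mirrors the description above of a processor that stays idle until enough weighted spikes arrive and then emits a spike of its own.

```python
# Hedged sketch of the behavior described above for each simple processor:
# accumulate weighted spikes from other virtual neurons and fire only when
# the total crosses a threshold. All names and values here are illustrative.
class VirtualNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.accumulated = 0.0  # idle neurons just sit here, doing no work

    def receive(self, spikes_and_weights):
        # spikes_and_weights: iterable of (spike, synaptic_weight) pairs
        self.accumulated += sum(s * w for s, w in spikes_and_weights)

    def step(self):
        # Emit a spike only if enough weighted input has arrived.
        if self.accumulated >= self.threshold:
            self.accumulated = 0.0
            return 1
        return 0

neuron = VirtualNeuron(threshold=1.0)
neuron.receive([(1, 0.6), (1, 0.5)])  # two incoming spikes
print(neuron.step())  # 1 -- threshold crossed, the neuron fires
```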
Modha has yet to link his new chips and their processors in a
large-scale network that mimics the physical layout of a brain.
But
when he does, he is convinced that the benefits will be vast.
Evolution has invested the brain's anatomy with remarkable
energy
efficiencies by positioning those areas most likely to
communicate closer together; the closer neurons are to one
another, the less
energy they need to push a signal through. By replicating the
big-picture layout of the brain, Modha hopes to capture these
and
other unanticipated energy savings in his brain-inspired
machines. He has spent years poring over studies of long-
distance
connections in the rhesus macaque monkey brain, ultimately
creating a map of 383 different brain areas, connected by 6,602
individual links. (See "Mapping the Monkey Brain," page 58.)
The map suggests how many cognitive computing chips should
be
allocated to the different regions of any artificial brain, and
which other chips they should be wired to. For instance, 336
links begin
at the main vision center of the brain. An impressive 1,648 links
emerge from the frontal lobe, which contains the prefrontal
cortex,
a centrally located brain structure that is the seat of
decisionmaking and cognitive thought. As with a living brain,
the neural
computer would have most connections converging on a central
point.
Of course, even if Modha can build this brainiac, some
question whether it will have any utility at all. Geoff Hinton, a
leading neural
networking theorist, argues the hardware is useless without the
proper "learning algorithm" spelling out which factors change
the
strength of the synaptic connections and by how much. Building
a new kind of chip without one, he argues, is "a bit like building
a
car engine without first figuring out how to make an explosion
and harness the energy to make the wheels go round."
But Modha and his team are undeterred. They argue that they
are complementing traditional computers with cognitive-
computing-
like abilities that offer vast savings in energy, enabling capacity
to grow by leaps and bounds. The need grows more urgent by
the
day. By 2020, the world will generate 14 times the amount of
digital information it did in 2012. Only when computers can
spot
patterns and make connections on their own, says Modha, will
the problem be solved.
Creating the computer of the future is a daunting challenge. But
Modha learned long ago, halfway across the world as a teen
scraping the paint off of chairs, that if you tap the power of the
human brain, there is no telling what you might do.
Photo captions:
- Gathering in front of the brain wall this February are the Cognitive Computing Lab team members (from left) John Arthur, Paul Merolla, Bill Risk, Dharmendra Modha, Bryan Jackson, Myron Flickner and Steve Esser.
- Dharmendra Modha and team member Bill Risk stand by a supercomputer at the IBM Almaden facility. Using the supercomputers at Almaden and Lawrence Livermore National Laboratory, the group simulated networks that crudely approximated the brains of mice, rats, cats and humans.
- Dharmendra Modha stands alongside the brain wall, used by his cognitive computing team to simulate brain activity and model neural chips at IBM. In his hand is a neurosynaptic chip, the core component of a new generation of computers based on the architecture of the brain. The neon swirl on the opposite page was inspired by the neural architecture of a rhesus macaque brain, used by Modha to help him design the chip. (Initials around the swirl's rim indicate discrete regions in the macaque brain.)
~~~~~~~~
By ADAM PIORE
PHOTOGRAPH BY MAJED ABOLFAZLI
Adam Piore is a contributing editor at DISCOVER.
© 2013 Discover Magazine
The Man Who Would Teach Machines to Think
When he was 35, in 1980, Douglas Hofstadter won the Pulitzer Prize for his book on how the mind works, Gödel, Escher, Bach. It became the bible of artificial intelligence, the book of the future. Then the future moved somewhere else. What if the best ideas about AI are yellowing in a drawer in Bloomington?

By James Somers
Photographs by Greg Ruffing
"IT DEPENDS ON what you mean by artificial intelligence." Douglas Hofstadter is in a grocery store in Bloomington, Indiana, picking out salad ingredients. "If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn't go this far—but they might say this is some of the only good work that's ever been done."
Hofstadter says this with an
easy deliberateness, and he says it
that way because for him, it is an
uncontroversial conviction that
the most-exciting projects in modern artificial intelligence,
the stuff the public maybe sees as stepping stones on the way
to science fiction—like Watson, IBM's Jeopardy-playing super-
computer, or Siri, Apple's iPhone assistant—in fact have very
little to do with intelligence. For the past 30 years, most of them spent in an old house just northwest of the Indiana University
campus, he and his graduate students have been picking up the
slack: trying to figure out how our thinking works, by writing
computer programs that think.
Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself. Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions.
So if the endeavor succeeds, it will be a double victory: we will
finally come to know the exact mechanics of our selves—and
we'll have made intelligent machines.
THE IDEA THAT CHANGED Hofstadter's existence, as he has explained over the years, came to him on the road, on a break from graduate school in particle physics. Discouraged by the way his doctoral thesis was going at the University of Oregon, feeling "profoundly lost," he decided in the summer of 1972 to pack his things into a car he called Quicksilver and drive eastward across the continent.
Each night he pitched his tent somewhere new ("sometimes
in a forest, sometimes by a lake") and read by flashlight. He
was free to think about whatever he wanted; he chose to think
about thinking itself. Ever since he was about 14, when he
found out that his youngest sister, Molly, couldn't understand
language, because she "had something deeply wrong with her
brain" (her neurological condition probably dated from birth,
and was never diagnosed), he had been quietly obsessed by the
relation of mind to matter. The father of psychology, William
James, described this in 1890 as "the most mysterious thing in
the world" : How could consciousness be physical? How could
a few pounds of gray gelatin give rise to our very thoughts and
selves?
Roaming in his 1956 Mercury, Hofstadter thought he had
found the answer—that it lived, of all places, in the kernel of a
mathematical proof. In 1931, the Austrian-born logician Kurt
Gödel had famously shown how a mathematical system could
make statements not just about numbers but about the system
itself. Consciousness, Hofstadter wanted to say, emerged via
just the same kind of "level-crossing feedback loop." He sat
down one afternoon to sketch his thinking in a letter to a friend.
But after 30 handwritten pages, he decided not to send it; in-
stead he'd let the ideas germinate a while. Seven years later,
they
had not so much germinated as metastasized into a 2.9-pound,
777-page book called Gödel, Escher, Bach: An Eternal Golden
Braid, which would earn for Hofstadter—only 35 years old, and
a
first-time author—the 1980 Pulitzer Prize for general
nonfiction.
GEB, as the book became known, was a sensation. Its suc-
cess was catalyzed by Martin Gardner, a popular columnist
for Scientific American, who very unusually devoted his space
in the July 1979 issue to discussing one book—and wrote a
glowing review. "Every few decades," Gardner began, "an un-
known author brings out a book of such depth, clarity, range,
wit, beauty and originality that it is recognized at once as a
major literary event." The first American to earn a doctoral
degree in computer science (then labeled "communication
sciences"), John Holland, recalled that "the general response
amongst people I know was that it was a wonderment."
Hofstadter seemed poised to become an indelible part of
the culture. GEB was not just an influential book, it was a book
fully of the future. People called it the bible of artificial intelligence, that nascent field at the intersection of computing,
cognitive science, neuroscience, and psychology. Hofstadter's
account of computer programs that weren't just capable but
creative, his road map for uncovering the "secret software
structures in our minds," launched an entire generation of
eager young students into AI.
But then AI changed, and Hofstadter didn't change with it,
and for that he all but disappeared.
GEB ARRIVED ON THE SCENE at an inflection point in AI's history. In the early 1980s, the field was retrenching:
funding for long-term "basic science" was dry-
ing up, and the focus was shifting to practical
systems. Ambitious AI research had acquired a
bad reputation. Wide-eyed overpromises were
the norm, going back to the birth of the field
in 1956 at the Dartmouth Summer Research
Project, where the organizers—including the man who coined
the term artificial intelligence, John McCarthy—declared that
"if a carefully selected group of scientists work on it together
for a summer," they would make significant progress toward
creating machines with one or more of the following abilities:
the ability to use language; to form concepts; to solve prob-
lems now solvable only by humans; to improve themselves.
McCarthy later recalled that they failed because "AI is harder
than we thought."
With wartime pressures mounting, a chief underwriter of
AI research—the Defense Department's Advanced Research
Projects Agency (ARPA)—tightened its leash. In 1969,
Congress
passed the Mansfield Amendment, requiring that Defense
support only projects with "a direct and apparent relationship
to a specific military function or operation." In 1972, ARPA be-
came DARPA, the D for "Defense," to reflect its emphasis on
projects with a military benefit. By the middle of the decade,
the agency was asking itself: What concrete improvements in
national defense did we just buy, exactly, with 10 years and
$50 million worth of exploratory research?
By the early 1980s, the pressure was great enough that AI,
which had begun as an endeavor to answer yes to Alan Turing's
famous question, "Can machines think?," started to mature—
or mutate, depending on your point of view—into a subfield
of software engineering, driven by applications. Work was in-
creasingly done over short time horizons, often with specific
buyers in mind. For the military, favored projects included
"command and control" systems, like a computerized in-flight
assistant for combat pilots, and programs that would automati-
cally spot roads, bridges, tanks, and silos in aerial photographs.
In the private sector, the vogue was "expert systems," niche
products like a pile-selection system, which helped designers
choose materials for building foundations, and the Automated
Cable Expertise program, which ingested and summarized
telephone-cable maintenance reports.
In GEB, Hofstadter was calling for an approach to AI con-
cerned less with solving human problems intelligently than
with understanding human intelligence—at precisely the mo-
ment that such an approach, having borne so little fruit, was
being abandoned. His star faded quickly. He would increas-
ingly find himself out of a mainstream that had embraced a
new imperative: to make machines perform in any way pos-
sible, with little regard for psychological plausibility.
Take Deep Blue, the IBM supercomputer that bested the
chess grandmaster Garry Kasparov. Deep Blue won by brute
force. For each legal move it could make at a given point in
the game, it would consider its opponent's responses, its own
responses to those responses, and so on for six
or more steps down the line. With a fast evalua-
tion function, it would calculate a score for each
possible position, and then make the move that
led to the best score. What allowed Deep Blue to
beat the world's best humans was raw computa-
tional power. It could evaluate up to 330 million
positions a second, while Kasparov could evalu-
ate only a few dozen before having to make a
decision.
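The strategy the article describes is essentially minimax search with an evaluation function. The sketch below is illustrative only (a toy game, not chess, and certainly not Deep Blue's code): it looks a fixed number of moves ahead, scores the resulting positions, and picks the move leading to the best score.

```python
# Minimal minimax sketch under stated assumptions: `legal_moves`, `apply_move`,
# and `evaluate` are supplied by the caller; `depth` limits the look-ahead.
def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), None
    best_score, best_move = (float("-inf"), None) if maximizing else (float("inf"), None)
    for m in moves:
        score, _ = minimax(apply_move(position, m), depth - 1, not maximizing,
                           legal_moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# Toy demo: each move adds 1, 2, or 3 to a running total; the evaluation is the
# total itself (the maximizer wants it high, the minimizer wants it low).
moves_fn = lambda pos: [1, 2, 3] if pos < 10 else []
apply_fn = lambda pos, m: pos + m
eval_fn = lambda pos: pos
print(minimax(0, 4, True, moves_fn, apply_fn, eval_fn))
```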
Hofstadter wanted to ask: Why conquer a task
if there's no insight to be had from the victory?
"Okay," he says, "Deep Blue plays very good chess—so what?
Does that tell you something about how we play chess? No.
Does
it tell you about how Kasparov envisions, understands a chess-
board?" A brand of AI that didn't try to answer such questions—
however impressive it might have been—was, in Hofstadter's
mind, a diversion. He distanced himself from the field almost as
soon as he became a part of it. "To me, as a fledgling AI
person,"
he says, "it was self-evident that I did not want to get involved
in
that trickery. It was obvious: I don't want to be involved in
pass-
ing off some fancy program's behavior for intelligence when I
know that it has nothing to do with intelligence. And I don't
know
why more people aren't that way."
One answer is that the AI enterprise went from being worth
a few million dollars in the early 1980s to billions by the end of
the decade. (After Deep Blue won in 1997, the value of IBM's
stock increased by $18 billion.) The more staid an engineer-
ing discipline AI became, the more it accomplished. Today, on
the strength of techniques bearing little relation to the stuff of
thought, it seems to be in a kind of golden age. AI pervades
heavy industry, transportation, and finance. It powers many
of Google's core functions, Netflix's movie recommendations,
Watson, Siri, autonomous drones, the self-driving car.
"The quest for 'artificial flight' succeeded when the Wright
brothers and others stopped imitating birds and started... learn-
ing about aerodynamics," Stuart Russell and Peter Norvig write
in their leading textbook, Artificial Intelligence: A Modern Approach. AI started working when it ditched humans as a model,
because it ditched them. That's the thrust of the analogy: Air-
planes don't flap their wings; why should computers think?
It's a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something. Russell, a computer-science professor at Berkeley, said to me, "What's the combined market cap of all of the search companies on the Web? It's probably four hundred, five hundred billion dollars. Engines that could actually extract all that information and understand it would be worth 10 times as much."
This, then, is the trillion-dollar question: Will the ap-
proach undergirding AI today—an approach that borrows
little from the mind, that's grounded instead in big data and
big engineering—get us to where we want to go? How do you
make a search engine that understands if you don't know how
you understand? Perhaps, as Russell and Norvig politely ac-
knowledge in the last chapter of their textbook, in taking its
practical turn, AI has become too much like the man who tries
to get to the moon by climbing a tree: "One can report steady
progress, all the way to the top of the tree."
Consider that computers today still have trouble recognizing a handwritten A. In fact, the task is so difficult that it forms the basis for CAPTCHAs ("Completely Automated Public Turing tests to tell Computers and Humans Apart"), those widgets that require you to read distorted text and type the characters into a box before, say, letting you sign up for a Web site.

In Hofstadter's mind, there is nothing to be surprised about. To know what all A's have in common would be, he argued in a 1982 essay, to "understand the fluid nature of mental categories." And that, he says, is the core of human intelligence.

"Cognition is recognition," he likes to say. He describes "seeing as" as the essential cognitive act: you see some lines as "an A," you see a hunk of wood as "a table," you see a meeting as "an emperor-has-no-clothes situation" and a friend's pouting as "sour grapes" and a young man's style as "hipsterish" and on and on ceaselessly throughout your day. That's what it means to understand. But how does understanding work? For three decades, Hofstadter and his students have been trying to find out, trying to build "computer models of the fundamental mechanisms of thought."

"At every moment," Hofstadter writes in Surfaces and Essences, his latest book (written with Emmanuel Sander), "we are simultaneously faced with an indefinite number of overlapping and intermingling situations." It is our job, as organisms that want to live, to make sense of that chaos. We do it by having the right concepts come to mind. This happens automatically, all the time. Analogy is Hofstadter's go-to word. The thesis of his new book, which features a mélange of A's on its cover, is that analogy is "the fuel and fire of thinking," the bread and butter of our daily mental lives.

"Look at your conversations," he says. "You'll see over and over again, to your surprise, that this is the process of analogy-making." Someone says something, which reminds you of something else; you say something, which reminds the other person of something else—that's a conversation. It couldn't be more straightforward. But at each step, Hofstadter argues, there's an analogy, a mental leap so stunningly complex that it's a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its "skeletal essence," and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.

"Beware," he writes, "of innocent phrases like 'Oh, yeah, that's exactly what happened to me' ... behind whose nonchalance is hidden the entire mystery of the human mind."

In the years after the release of GEB, Hofstadter and AI went their separate ways. Today, if you were to pull AI: A Modern Approach off the shelf, you wouldn't find Hofstadter's name—not in more than 1,000 pages. Colleagues talk about him in the past tense. New fans of GEB, seeing when it was published, are surprised to find out its author is still alive.

Of course, in Hofstadter's telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend the philosopher Daniel Dennett wrote, "patiently, systematically, brilliantly," way out of the light of day, chipped away at the real problem. "Very few people are interested in how human intelligence works," Hofstadter says. "That's what we're interested in—what is thinking?—and we don't lose track of that question."
"I mean, who knows?" he says. "Who knows what'U happen.
Maybe someday people will say, 'Hofstadter already did this
stuffand said this stuffand we're just now discovering it.' "
Which sounds exactly like the self-soothing of the guy who
lost. But Hofstadter has the kind of mind that tempts you to ask:
What if the best ideas in artificial intelligence—"genuine artifi-
cial intelligence," as Hofstadter now calls it, with apologies for
the oxymoron—are yellowing in a drawer in Bloomington?
DOUGLAS R. HOFSTADTER was born into a life of the mind the way other kids are born into a life of crime. He grew up in 1950s Stanford, in a house on campus, just south of a neighborhood actually called
Professorville. His father, Robert, was a nuclear physicist who
would go on to share the 1961 Nobel Prize in Physics; his moth-
er, Nancy, who had a passion for politics, became an advocate
for developmentally disabled children and served on the eth-
ics committee of the Agnews Developmental Center, where
Molly lived for more than 20 years. In her free time Nancy was,
the joke went, a "professional faculty wife": she transformed
the Hofstadters' living room into a place where a tight-knit
community of friends could gather for stimulating conversa-
tion and jazz, for "the interpénétration of the sciences and the
arts," Hofstadter told me—an intellectual feast.
Dougie ate it up. He was enamored of his parents' friends,
their strange talk about "the tiniest or gigantic-est things." (At
age 8, he once said, his dream was to become "a zero-mass, spin
one-half neutrino.") He'd hang around the physics department
for 4 o'clock tea, "as if I were a little 12-year-old graduate stu-
dent." He was curious, insatiable, unboreable—"just a kid fas-
cinated by ideas"—and intense. His intellectual style was, and
is, to go on what he calls "binges": he might practice piano for
seven hours a day; he might decide to memorize 1,200 lines of
Eugene Onegin. He once spent weeks with a tape recorder
teach-
ing himself to speak backwards, so that when he played his gar-
bles in reverse they came out as regular English. For months at
a
time he'll immerse himself in idiomatic French or write comput-
er programs to generate nonsensical stories or study more than
a dozen proofs of the Pythagorean theorem until he can "see the
reason it's true." He spends "virtually every day exploring these
things," he says, "unable to not explore. Just totally possessed,
totally obsessed, by this kind of stuff."
Hofstadter is 68 years old. But there's something Peter
Pan-ish about a life lived so much on paper, in software, in a
man's own head. Can someone like that age in the usual way?
Hofstadter has untidy gray hair that juts out over his ears, a
fragile, droopy stature, and, between his nose and upper lip,
a long groove, almost like the Grinch's. But he has the self-
seriousness, the urgent earnestness, of a still very young man.
The stakes are high with him; he isn't easygoing. He's the
kind of vegetarian who implores the whole dinner party to eat
vegetarian too; the kind of sensitive speaker who corrects you
for using "sexist language" around him. "He has these rules,"
explains his friend Peter Jones, who's known Hofstadter for
59 years. "Like how he hates you guys. That's an imperative. If
you're talking to him, you better not say you guys."
For more than 30 years, Hofstadter has worked as a profes-
sor at Indiana University at Bloomington. He lives in a house a
few blocks from campus with Baofen Lin, whom he married last
September; his two children by his previous marriage, Danny
and Monica, are now grown. Although he has strong ties with
the cognitive-science program and affiliations with several
departments—including computer science, psychological and
brain sciences, comparative literature, and philosophy—he has
no official obligations. "I think I have about the cushiest job
you
could imagine," he told me. "I do exactly what I want."
He spends most of his time in his study, two rooms on the
top floor of his house, carpeted, a bit stuffy, and messier than
he
would like. His study is the center of his world. He reads there,
listens to music there, studies there, draws there, writes his
books there, writes his e-mails there. (Hofstadter spends four
hours a day writing e-mail. "To me," he has said, "an e-mail is
identical to a letter, every bit as formal, as refined, as carefully
written... I rewrite, rewrite, rewrite, rewrite all of my e-mails,
al-
ways.") He lives his mental life there, and it shows. Wall-to-
wall
there are books and drawings and notebooks and files, thoughts
fossilized and splayed all over the room. It's like a museum for
his binges, a scene out of a brainy episode of Hoarders.
"Anything that I think about becomes part of my profession-
al life," he says. Daniel Dennett, who co-edited The Mind's I
with him, has explained that "what Douglas Hofstadter is,
quite simply, is a phenomenologist, a practicing phenomenologist, and he does it better than anybody else. Ever." He studies
the phenomena—the feelings, the inside actions—of his own
mind. "And the reason he's good at it," Dennett told me, "the
reason he's better than anybody else, is that he is very actively
trying to have a theory of what's going on backstage, of how
thinking actually happens in the brain."
In his back pocket, Hofstadter carries a four-color Bic ball-
point pen and a small notebook. It's always been that way. In
what used to be a bathroom adjoined to his study but is now just
extra storage space, he has bookshelves full of these notebooks.
He pulls one down—it's from the late 1950s. It's full of speech
er-
rors. Ever since he was a teenager, he has captured some 10,000
examples of swapped syllables ("hypodeemic nerdle"), mala-
propisms ("runs the gambit"), "malaphors" ("easy-go-lucky"),
and so on, about half of them committed by Hofstadter him-
self. He makes photocopies of his notebook pages, cuts them up
with scissors, and stores the errors in filing cabinets and labeled
boxes around his study.
For Hofstadter, they're clues. "Nobody is a very reliable
guide concerning activities in their mind that are, by definition,
subconscious," he once wrote. "This is what makes vast collec-
tions of errors so important. In an isolated error, the mecha-
nisms involved yield only slight traces of themselves; however,
in a large collection, vast numbers of such slight traces exist,
col-
lectively adding up to strong evidence for (and against) particu-
lar mechanisms." Correct speech isn't very interesting; it's like
a well-executed magic trick—effective because it obscures how
it works. What Hofstadter is looking for is "a tip of the rabbit's
ear... a hint of a trap door."
In this he is the modern-day William James, whose blend of
articulate introspection (he introduced the idea of the stream
of consciousness) and crisp explanations made his 1890 text,
Principles of Psychology, a classic. "The mass of our thinking
vanishes for ever, beyond hope of recovery," James wrote,
"and psychology only gathers up a few of the crumbs that fall
from the feast." Like Hofstadter, James made his life playing
under the table, gleefully inspecting those crumbs. The dif-
ference is that where James had only his eyes, Hofstadter has
something like a microscope.
YOU CAN CREDIT the development of manned aircraft not to the Wright brothers' glider flights at Kitty Hawk but to the six-foot wind tunnel they built
for themselves in their bicycle shop using scrap metal and
recycled wheel spokes. While their competitors were testing
wing ideas at full scale, the Wrights were doing focused aero-
dynamic experiments at a fraction of the cost. Their biogra-
pher Fred Howard says that these were "the most crucial and
fruitful aeronautical experiments ever conducted in so short a
time with so few materials and at so little expense."
In an old house on North Fess Avenue in Bloomington, Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG. The yearly operating budget is $100,000. Inside, it's homey—if you wandered through, you could easily miss the filing cabinets tucked beside the pantry, the photocopier humming in the living room, the librarian's labels (NEUROSCIENCE, MATHEMATICS, PERCEPTION) on the bookshelves. But for 25 years, this place has been host to high enterprise, as the small group of scientists tries, Hofstadter has written, "first, to uncover the secrets of creativity, and second, to uncover the secrets of consciousness."
As the wind tunnel was to the Wright brothers, so the com-
puter is to FARC. The quick unconscious chaos of a mind can
be slowed down on the computer, or rewound, paused, even
edited. In Hofstadter's view, this is the great opportunity of
artificial intelligence. Parts of a program can be selectively iso-
lated to see how it functions without them; parameters can be
changed to see how performance improves or degrades. When
the computer surprises you—whether by being especially cre-
ative or especially dim-witted—you can see exactly why. "I
have always felt that the only hope of humans ever coming to
fully understand the complexity of their minds," Hofstadter
has written, "is by modeling mental processes on computers
and learning from the models' inevitable failures."
Turning a mental process caught and catalogued in Hof-
stadter's house into a running computer program, just a mile up
the road, takes a dedicated graduate student about five to nine
years. The programs all share the same basic architecture—
a set of components and an overall style that traces back to
Jumbo, a program that Hofstadter wrote in 1982 that worked
on the word jumbles you find in newspapers.
The first thought you ought to have when you hear about a
program that's tackling newspaper jumbles is: Wouldn't those
be trivial for a computer to solve? And indeed they are—I just
wrote a program that can handle any word, and it took me four
minutes. My program works like this: it takes the jumbled word
and tries every rearrangement of its letters until it finds a word
in the dictionary.
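For readers who want to see just how little such a solver involves, here is a minimal sketch of that brute-force idea in Python. The function follows the description above; the stand-in word list and the sample jumble are illustrative assumptions, not Somers's actual program.

```python
import itertools

def solve_jumble(scrambled, words):
    """Try every rearrangement of the letters until one appears in the word list."""
    for perm in itertools.permutations(scrambled.lower()):
        candidate = "".join(perm)
        if candidate in words:
            return candidate
    return None

# Usage with a tiny stand-in word list (a real run would load a full dictionary;
# any plain list of words will do):
words = {"jumble", "mumble", "bungle"}
print(solve_jumble("bmjuel", words))  # -> "jumble"
```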
Hofstadter spent two years building Jumbo: he was less
interested in solving jumbles than in finding out what was
happening when he solved them. He had been watching his
mind. "I could feel the letters shifting around in my head, by
themselves," he told me, "just kind of jumping around form-
ing little groups, coming apart, forming new groups—flickering
clusters. It wasn't me manipulating anything. It was just them
doing things. They would be trying things themselves."
The architecture Hofstadter developed to model this automatic letter-play was based on the actions inside a biological cell. Letters are combined and broken apart by different types of "enzymes," as he says, that jiggle around, glomming on to structures where they find them, kicking reactions into gear. Some enzymes are rearrangers (pang-loss becomes pan-gloss or lang-poss), others are builders (g and h become the cluster gh; jum and ble become jumble), and still others are breakers (ight is broken into it and gh). Each reaction in turn produces others, the population of enzymes at any given moment balancing itself to reflect the state of the jumble.
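Jumbo itself is far richer than anything that fits in a few lines, but a toy sketch can suggest the flavor of this enzyme-driven style: a pool of letter clusters stirred at random by builder, breaker, and rearranger operations. Everything below—the operation names, the random choices, the absence of any scoring or "temperature"—is an illustrative simplification of the idea, not Hofstadter's architecture.

```python
import random

def build(clusters):
    """Glue two neighboring clusters into one (in the spirit of g + h -> gh)."""
    if len(clusters) < 2:
        return clusters
    i = random.randrange(len(clusters) - 1)
    return clusters[:i] + [clusters[i] + clusters[i + 1]] + clusters[i + 2:]

def breaker(clusters):
    """Cut one multi-letter cluster apart at a random point (ight -> i + ght)."""
    candidates = [i for i, c in enumerate(clusters) if len(c) > 1]
    if not candidates:
        return clusters
    i = random.choice(candidates)
    cut = random.randrange(1, len(clusters[i]))
    return clusters[:i] + [clusters[i][:cut], clusters[i][cut:]] + clusters[i + 1:]

def rearrange(clusters):
    """Swap two clusters, a crude stand-in for the article's rearranger enzymes."""
    if len(clusters) < 2:
        return clusters
    i, j = random.sample(range(len(clusters)), 2)
    clusters = clusters[:]
    clusters[i], clusters[j] = clusters[j], clusters[i]
    return clusters

def stir(word, steps=20, seed=0):
    """Let the 'enzymes' jiggle the letters and print the flickering clusters."""
    random.seed(seed)
    clusters = list(word)
    for _ in range(steps):
        enzyme = random.choice([build, breaker, rearrange])
        clusters = enzyme(clusters)
        print(" ".join(clusters))

stir("elbmuj")  # groups form, come apart, and re-form; nothing here "solves" the jumble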
It's an unusual kind of computation, distinct for its fluidity. Hofstadter of course offers an analogy: a swarm of ants rambling around the forest floor, as scouts make small random forays in all directions and report their finds to the group, their feedback driving an efficient search for food. Such a swarm is robust—step on a handful of ants and the others quickly recover—and, because of that robustness, adept.
When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book. As a writer for The New York Times once put it in a 1995 review, "The reader of 'Fluid Concepts & Creative Analogies' cannot help suspecting that the group at Indiana University is on to something momentous." But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that's because FARG's programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish "microdomains." Because there is no task they perform better than a human.
THE MODERN ERA of mainstream AI—an era of steady progress and commercial success that began, roughly, in the early 1990s and continues to this day—is the long unlikely springtime after a period, known as the AI Winter, that nearly killed off the field.
It came down to a basic dilemma. On the one hand, the software we know how to write is very orderly; most computer programs are organized like a well-run army, with layers of commanders, each layer passing instructions down to the next, and routines that call subroutines that call subroutines. On the other hand, the software we want to write would be adaptable—and for that, a hierarchy of rules seems like just the wrong idea. Hofstadter once summarized the situation by writing, "The entire effort of artificial intelligence is essentially a fight against computers' rigidity." In the late '80s, mainstream AI was losing research dollars, clout, conference attendance, journal submissions, and press—because it was getting beat in that fight.
The "expert systems" that had once been the field's meal
ticket were foundering because of their brittleness. Their ap-
proach was fundamentally broken. Take machine translation
from one language to another, long a holy grail of AI. The
standard attack involved corralling linguists and translators
into a room and trying to convert their expertise into rules for
a program to follow. The standard attack failed for reasons
you might expect: no set of rules can ever wrangle a human
language; language is too big and too protean; for every rule
obeyed, there's a rule broken.
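To make that brittleness concrete, imagine the simplest possible rules-based translator: a word-for-word lookup table. The tiny lexicon and the example sentences below are invented for illustration, and real expert systems were far more elaborate, but the failure mode scales up—every idiom, inflection, and reordering demands yet another rule.

```python
# A caricature of the rules-based approach: translate English to French word by word.
# The lexicon and the sentences are illustrative assumptions, not any real system.
LEXICON = {
    "the": "le", "cat": "chat", "sees": "voit", "dog": "chien",
    "it": "il", "is": "est", "raining": "pleut", "cats": "chats",
    "and": "et", "dogs": "chiens",
}

def translate(sentence):
    return " ".join(LEXICON.get(word, f"<{word}?>") for word in sentence.lower().split())

print(translate("The cat sees the dog"))         # "le chat voit le chien" -- passable
print(translate("It is raining cats and dogs"))  # "il est pleut chats et chiens" -- nonsense
```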
If machine translation was to survive as a commercial
enterprise—if AI was to survive—it would have to find another
way. Or better yet, a shortcut.
And it did. You could say that it started in 1988, with a proj-
ect out of IBM called Candide. The idea behind Candide, a
machine-translation system, was to start by admitting that the
rules-based approach requires too deep an understanding of
how language is produced; how semantics, syntax, and mor-
phology work; and how words commingle in sentences and
combine into paragraphs—to say nothing of understanding the
ideas for which those words are merely conduits. So IBM threw
that approach out the window. What the developers did instead
was brilliant, but so straightforward, you can hardly believe it.
The technique is called "machine learning." The goal is to
make a device that takes an English sentence as input and spits
out a French sentence. One such device, of course, is the human
brain—but the whole point is to avoid grappling with the brain's
complexity. So what you do instead is start with a machine so
simple, it almost doesn't work: a machine, say, that randomly
spits out French words for the English words it's given.
Imagine a box with thousands of knobs on it. Some of these
knobs control general settings: given one English word, how
many French words, on average, should come out? And some
control specific settings: given jump, what is the probability
that shot comes next? The question is, just by tuning these
knobs, can you get your machine to convert sensible English
into sensible French?
It turns out that you can. What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.) You proceed one pair at a time. After you've entered a pair, take the English half and feed it into your machine to see what comes out in French. If that sentence is different from what you were expecting—different from the known correct translation—your machine isn't quite right. So jiggle the knobs and try again. After enough feeding and trying and jiggling, feeding and trying and jiggling again, you'll get a feel for the knobs, and you'll be able to produce the correct French equivalent of your English sentence.
By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you'll be able to enter a sentence whose translation you don't know and get a reasonable result. And the beauty is that you never needed to program the machine explicitly; you never needed to know why the knobs should be twisted this way or that.
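Here is a toy version of the knob-tuning idea in the lineage of Candide: word-translation probabilities estimated from sentence pairs alone. Instead of literal knob-jiggling, the sketch uses IBM Model 1-style expectation-maximization, the standard technique in that lineage; the three sentence pairs are invented for illustration, where real systems used millions.

```python
from collections import defaultdict

# Toy statistical translation: learn P(french word | english word) from pairs alone.
# The data is invented; the algorithm is a bare-bones IBM Model 1 EM loop.
pairs = [
    ("the house", "la maison"),
    ("the book", "le livre"),
    ("a house", "une maison"),
]
pairs = [(e.split(), f.split()) for e, f in pairs]

e_vocab = {w for e, _ in pairs for w in e}
f_vocab = {w for _, f in pairs for w in f}

# Start with a "machine so simple it almost doesn't work": uniform probabilities.
t = {f: {e: 1.0 / len(e_vocab) for e in e_vocab} for f in f_vocab}

for _ in range(20):  # each pass nudges the "knobs" toward the data
    count = defaultdict(lambda: defaultdict(float))
    total = defaultdict(float)
    for e_sent, f_sent in pairs:
        for f in f_sent:
            norm = sum(t[f][e] for e in e_sent)
            for e in e_sent:
                p = t[f][e] / norm
                count[f][e] += p
                total[e] += p
    for f in f_vocab:
        for e in e_vocab:
            if total[e] > 0:
                t[f][e] = count[f][e] / total[e]

# The table has now learned pairings such as maison <-> house, without any rules.
best = max(t["maison"], key=t["maison"].get)
print(best, round(t["maison"][best], 2))  # expected: "house"
```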
Candide didn't invent machine learning—in fact the concept had been tested plenty before, in a primitive form of machine translation in the 1960s. But up to that point, no test had been very successful. The breakthrough wasn't that Candide cracked the problem. It was that so simple a program performed adequately. Machine translation was, as Adam Berger, a member of the Candide team, writes in a summary of the project, "widely considered among the most difficult tasks in natural language processing, and in artificial intelligence in general, because accurate translation seems to be impossible without a comprehension of the text to be translated." That a program as straightforward as Candide could perform at par suggested that effective machine translation didn't require comprehension—all it required was lots of bilingual text. And for that, it became a proof of concept for the approach that conquered AI.
SIMULACRA
Before the beak of a tiny pipette
dipped through a glisten of DNA
and ewe quickened to ewe
with exactly the simulacrum
forty thousand years had worked toward,
before Muybridge's horses cantered
and a ratchet-and-pawl-cast waltzing couple
shuffled along a phasmatrope,
before dime-size engines
sparked in the torsos of toddler dolls
and little bellows let them sing
and the Unassisted Walking One—
Miss Autoperipatetikos—stepped
in her caterpillar gait
across the New World's wide-plank floor,
before motion moved the figures, and torsion
moved the motion—or steam, or sand,
or candle flame—before cogged wheels and taut springs
nudged Gustav the Climbing Miller
up his mill's retaining wall (and gravity
retrieved him), before image, like sound,
stroked through an outreach of crests and troughs,
and corresponding apertures
caught patterns in the waves,
caught, like eels beneath ancestral ponds,
radiance in the energy,
before lamposcope and zograscope,
fantascope and panorama, before lanterns
recast human hands, or a dye-drop
of beetle first fluttered across
a flicker book of papyrus leaves,
someone sketched a creature along the contours
of a cave, its stippled, monochromatic shape
tracing the vaults and hollows,
shivers of flank and shoulder
already drawing absence nearer,
as torchlight set the motion
and shadow set the rest.
— Linda Bierds
Linda Bierds's new collection, Roget's Illusion, will be published early next year. She teaches at the University of Washington.
What Candide's approach does, and with spectacular efficiency, is convert the problem of unknotting a complex process into the problem of finding lots and lots of examples of that process in action. This problem, unlike mimicking the actual processes of the brain, only got easier with time—particularly as the late '80s rolled into the early '90s and a nerdy haven for physicists exploded into the World Wide Web.
It is no coincidence that AI saw a resurgence in the '90s,
and no coincidence either that Google, the world's biggest
Web company, is "the world's biggest AI system," in the words
of Peter Norvig, a director of research there, who wrote AI: A Modern Approach with Stuart Russell. Modern AI, Norvig has
said, is about "data, data, data," and Google has more data
than anyone else.
Josh Estelle, a software engineer on Google Translate,
which is based on the same principles as Candide and is now
the world's leading machine-translation system, explains,
"you can take one of those simple machine-learning algorithms
that you learned about in the first few weeks of an AI class, an
algorithm that academia has given up on, that's not seen as
useful—but when you go from 10,000 training examples to
10 billion training examples, it all starts to work. Data trumps
everything."
The technique is so effective that the Google Translate team can be made up of people who don't speak most of the languages their application translates. "It's a bang-for-your-buck argument," Estelle says. "You probably want to hire more engineers instead" of native speakers. Engineering is what counts in a world where translation is an exercise in data-mining at a massive scale.
That's what makes the machine-learning approach such a spectacular boon: it vacuums out the first-order problem, and replaces the task of understanding with nuts-and-bolts engineering. "You saw this springing up throughout" Google, Norvig says. "If we can make this part 10 percent faster, that would save so many millions of dollars per year, so let's go ahead and do it. How are we going to do it? Well, we'll look at the data, and we'll use a machine-learning or statistical approach, and we'll come up with something better."
Google has projects that gesture toward deeper understanding: extensions of machine learning inspired by brain biology; a "knowledge graph" that tries to map words, like Obama, to people or places or things. But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don't have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn't intelligent itself. It's like an enormous Rosetta Stone, the calcified hieroglyphics of minds once at work.
"I don't want to be involved in passing offsome faney program's
behavior for intelligence when I know that it has nothing to do
with
intelligence. And I don't know why more people aren't that
way."
"DID WE SIT DOWN when we built Watson and try to model human cognition?" Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. "Absolutely not. We just tried to create a machine that could win at Jeopardy."
For Ferrucci, the definition of intelligence is simple: it's what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy. "It's artificial intelligence, right? Which is almost to say not-human intelligence. Why would you expect the science of artificial intelligence to produce human intelligence?"
Ferrucci is not blind to the difference. He likes to tell crowds that whereas Watson played using a room's worth of processors and 20 tons of air-conditioning equipment, its opponents relied on a machine that fits in a shoebox and can run for hours on a tuna sandwich. A machine, no less, that would allow them to get up when the match was over, have a conversation, enjoy a bagel, argue, dance, think—while Watson would be left humming, hot and dumb and un-alive, answering questions about presidents and potent potables.
"The features that [these systems] are ultimately looking at are just shadows—they're not even shadows—of what it is that they represent," Ferrucci says. "We constantly underestimate—we did in the '50s about AI, and we're still doing it—what is really going on in the human brain."
The question that Hofstadter wants to ask Ferrucci, and
everybody else in mainstream AI, is this: Then why don't you
come study it?
"I have mixed feelings about this," Ferrucci told me when
I put the question to him last year. "There's a limited number
of things you can do as an individual, and I think when you
dedicate your life to something, you've got to ask yourself the
question: To what end? And I think at some point I asked my-
self that question, and what it came out to was, I'm fascinated
by how the human mind works, it would be fantastic to under-
stand cognition, I love to read books on it, I love to get a grip
on it"—he called Hofstadter's work inspiring—"but where am
/ going to go with it? Really what I want to do is build computer
systems that do something. And I don't think the short path to
that is theories of cognition."
Peter Norvig, one of Google's directors of research, echoes
Ferrucci almost exactly. "I thought he was tackling a really
hard problem," he told me about Hofstadter's work. "And I
guess I wanted to do an easier problem."
In their responses, one can see the legacy of AI's failures.
Work on fundamental problems reeks of the early days. "Concern for 'respectability,'" Nils Nilsson writes in his academic history, The Quest for Artificial Intelligence, "has had, I think, a stultifying effect on some AI researchers."
Stuart Russell, Norvig's co-author of AI: A Modern Approach,
goes further. "A lot of the stuff going on is not very ambitious," he told me. "In machine learning, one of the big steps that happened in the mid-'80s was to say, 'Look, here's some real data—can I get my program to predict accurately on parts of the data that I haven't yet provided to it?' What you see now in machine learning is that people see that as the only task."
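The task Russell describes—fit on some of the data, then predict the part you held back—is easy to state in code. Here is a bare-bones illustration with a made-up dataset and a one-knob model; every number in it is invented for illustration.

```python
# Russell's "only task" in miniature: fit on part of the data, score on the rest.
# The dataset (roughly y = 3x) and the one-knob model (y ~ w * x) are invented.
data = [(x, 3.0 * x + 0.1 * ((-1) ** x)) for x in range(1, 21)]
train, held_out = data[:15], data[15:]

# "Fit": the least-squares slope through the origin on the training split.
w = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# "Predict accurately on parts of the data I haven't yet provided":
test_error = sum((y - w * x) ** 2 for x, y in held_out) / len(held_out)
print(f"learned w = {w:.3f}, held-out mean squared error = {test_error:.4f}")
```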
It's insidious, the way your own success can stifle you. As
our machines get faster and ingest more data, we allow our-
selves to be dumber. Instead of wrestling with our hardest
problems in earnest, we can just plug in billions of examples
of them. Which is a bit like using a graphing calculator to do
your high-school calculus homework—it works great until you
need to actually understand calculus.
It seems unlikely that feeding Google Translate 1 trillion documents, instead of 10 billion, will suddenly enable it to
work at the level of a human translator. The same goes for
search, or image recognition, or question-answering, or plan-
ning or reading or writing or design, or any other problem for
which you would rather have a human's intelligence than a
machine's.
This is a fact of which Norvig, just like everybody else in
commercial AI, seems to be aware, if not dimly afraid. "We
could draw this curve: as we gain more data, how much bet-
ter does our system get?" he says. "And the answer is, it's still
improving—but we are getting to the point where we get less
benefit than we did in the past."
For James Marshall, a former graduate student of Hof-
stadter's, it's simple: "In the end, the hard road is the only one
that's going to lead you all the way."
HOFSTADTER WAS 35 when he had his first long-term romantic relationship. He was born, he says, with "a narrow resonance curve," borrowing a concept from physics to describe his extreme pickiness. "There have been certain women who have had an enormous effect on me; their face has had an incredible effect on me. I can't give you a recipe for the face... but it's very rare." In 1980, after what he has described as "15 hellish, love-bleak years," he met
Carol Brush. ("She was at the dead center of the resonance
curve.") Not long after they met, they were happily married
with two kids, and not long after that, while they were on sab-
batical together in Italy in 1993, Carol died suddenly of a brain
tumor. Danny and Monica were 5 and 2 years old. "I felt that
he was pretty much lost a long time after Carol's death," says
Pentti Kanerva, a longtime friend.
Hofstadter hasn't been to an artificial-intelligence confer-
ence in 30 years. "There's no communication between me and
these people," he says of his AI peers. "None. Zero. I don't
want to talk to colleagues that I find very, very intransigent
and hard to convince of anything. You know, I call them col-
leagues, but they're almost not colleagues—we can't speak to
each other."
Hofstadter strikes me as difficult, in a quiet way. He is kind,
but he doesn't do the thing that easy conversationalists do, that
well-liked teachers do, which is to take the best of what you've
said—to work you into their thinking as an indispensable ally,
as though their point ultimately depends on your contribu-
tion. I remember sitting in on a roundtable discussion that
Hofstadter and his students were having and thinking of how
little I saw his mind change. He seemed to be seeking consen-
sus. The discussion had begun as an e-mail that he had sent
out to a large list of correspondents; he seemed keenest on the
replies that were keenest on him.
"So I don't enjoy it," he told me. "I don't enjoy going to con-
ferences and running into people who are stubborn and con-
vinced of ideas I don't think are correct, and who don't have
any understanding of my ideas. And I just like to talk to people
who are a little more sympathetic."
Ever since he was about 15, Hofstadter has read The Catcher
in the Rye once every 10 years. In the fall of 2011, he taught an
undergraduate seminar called "Why Is J. D. Salinger's The
Catcher in the Rye a Great Novel?" He feels a deep kinship with
Holden Caulfield. When I mentioned that a lot of the kids in
my high-school class didn't like Holden—they thought he was
a whiner—Hofstadter explained that "they may not recognize
his vulnerability." You imagine him standing like Holden stood
at the beginning of the novel, alone on the top of a hill, watch-
ing his classmates romp around at the football game below. "I
have too many ideas already," Hofstadter tells me. "I don't
need the stimulation of the outside world."
Of course, the folly of being above the fray is that you're
also not a part of it. "There are very few ideas in science that
are so black-and-white that people say 'Oh, good God, why
didn't we think of that?' " says Bob French, a former student
of Hofstadter's who has known him for 30 years. "Everything
from plate tectonics to evolution—all those ideas, someone
had to fight for them, because people didn't agree with those
ideas. And if you don't participate in the fight, in the rough-
and-tumble of academia, your ideas are going to end up be-
ing sidelined by ideas which are perhaps not as good, but were
more ardently defended in the arena."
Hofstadter never much wanted to fight, and the double-
edged sword of his career, if there is one, is that he never
really had to. He won the Pulitzer Prize when he was 35, and
instantly became valuable property to his university. He was
awarded tenure. He didn't have to submit articles to journals;
he didn't have to have them reviewed, or reply to reviews. He
had a publisher, Basic Books, that would underwrite anything
he sent them.
Stuart Russell puts it bluntly. "Academia is not an environ-
ment where you just sit in your bath and have ideas and expect
everyone to run around getting excited. It's possible that in 50
years' time we'll say, 'We really should have listened more to
Doug Hofstadter.' But it's incumbent on every scientist to at
least think about what is needed to get people to understand
the ideas."
"Ars longa, vita brevis," Hofstadter likes to say. "I just figure
that life is short. I work, I don't try to publicize. I don't try to
fight."
There's an analogy he made for me once. Einstein, he said,
had come up with the light-quantum hypothesis in 1905. But
nobody accepted it until 1923. "Not a soul," Hofstadter says.
"Einstein was completely alone in his belief in the existence of
light as particles—for 18 years.
"That must have been very lonely." El
James Somers is a writer and computer programmer based in
New York City.
  • 9.
    A visionary engineeraims to transform computing with technology modeled on the human brain The day he got the news that would transform his life, Dharmendra Modha, 17, was supervising a team of laborers scraping paint off iron chairs at a local Mumbai hospital. He felt happy to have the position, which promised steady pay and security -- the most a poor teen from Mumbai could realistically aspire to in 1986. Modha's mother sent word to the job site shortly after lunch: The results from the statewide university entrance exams had come in. There appeared to be some sort of mistake, because a perplexing telegram had arrived at the house. Modha's scores hadn't just placed him atop the city, the most densely inhabited in India -- he was No. 1 in math, physics and chemistry for the entire province of Maharashtra, population 100 million. Could he please proceed to the school to sort it out? Back then, Modha couldn't conceive what that telegram might mean for his future. Both his parents had ended their schooling after the 11th grade. He could count on one hand the number of relatives who went to college. But Modha's ambitions have expanded considerably in the years since those test scores paved
  • 10.
    his way toone of India's most prestigious technical academies, and a successful career in computer science at IBM's Almaden Research Center in San Jose, Calif. Listen American Accent POST UNIVERSITY LIBRARY javascript:void(0); javascript:void(0); javascript:void(0); javascript:void(0); javascript:void(0); javascript:__doPostBack('ctl00$ctl00$FindField$customerLogo' ); 7/2/2014 Mind in the MACHINE: EBSCOhost http://eds.a.ebscohost.com/ehost/detail?sid=1348e4df-91dd- 4ca4-8835- 240055f8f22d%40sessionmgr4003&vid=2&hid=4210&bdata=Jn NpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=87520824 2/10 Recently, the diminutive engineer with the bushy black eyebrows, closely cropped hair and glasses sat in his Silicon Valley office and shared a vision to do nothing less than transform the future of computing. "Our mission is clear," said Modha, now 44, holding
  • 11.
    up a rectangularcircuit board featuring a golden square. "We'd like these chips to be everywhere -- in every corner, in everything. We'd like them to become absolutely essential to the world." Traditional chips are sets of miniaturized electrical components on a small plate used by computers to perform operations. They often consist of millions of tiny circuits capable of encoding and storing information while also executing programmed commands. Modha's chips do the same thing, but at such enormous energy savings that the computers they comprise would handle far more data, by design. With the new chips as linchpin, Modha has envisioned a novel computing paradigm, one far more powerful than anything that exists today, modeled on the same magical entity that allowed an impoverished laborer from Mumbai to ascend to one of the great citadels of technological innovation: the human brain. TURNING TO NEUROSCIENCE The human brain consumes about as much energy as a 20-watt bulb -- a billion times less energy than a computer that simulates brainlike computations. It is so compact it can fit in a two-liter soda bottle. Yet this pulpy lump of organic material can do things no
  • 12.
    modern computer can.Sure, computers are far superior at performing pre-programmed computations -- crunching payroll numbers or calculating the route a lunar module needs to take to reach a specific spot on the moon. But even the most advanced computers can't come close to matching the brain's ability to make sense out of unfamiliar sights, sounds, smells and events, and quickly understand how they relate to one another. Nor can such machines equal the human brain's capacity to learn from experience and make predictions based on memory. Five years ago, Modha concluded that if the world's best engineers still hadn't figured out-how to match the brain's energy efficiency and resourcefulness after decades of trying using the old methods, perhaps they never would. So he tossed aside many of the tenets that have guided chip design and software development over the past 60 years and turned to the literature of neuroscience. Perhaps understanding the brain's disparate components and the way they fit together would help him build a smarter, more energy-efficient silicon machine. These efforts are paying off. Modha's new chips contain silicon components that crudely mimic the physical layout of, and connections between, microscopic carbon-based brain cells.
  • 13.
    (See "Inside Modha'sNeural Chip," page 55.) Modha is confident that his chips can be used to build a cognitive computing system on the scale of a human brain for only 100 times more power, making it 10 million times more energy efficient than the computers of today. 7/2/2014 Mind in the MACHINE: EBSCOhost http://eds.a.ebscohost.com/ehost/detail?sid=1348e4df-91dd- 4ca4-8835- 240055f8f22d%40sessionmgr4003&vid=2&hid=4210&bdata=Jn NpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=87520824 3/10 Already, Modha's team has demonstrated some basic capabilities. Without the help of a programmer explicitly telling them what to do, the chips they've developed can learn to play the game Pong, moving a bar along the bottom of the screen and anticipating the exact angle of a bouncing ball. They can also recognize the numbers zero through nine as a lab assistant scrawls them on a pad with an electronic pen. Of course, plenty of engineers have pulled off such feats -- and far more impressive ones. An entire subspecialty known as
  • 14.
    machine learning isdevoted to building algorithms that allow computers to develop new behaviors based on experience. Such machines have beaten the world's best minds in chess and Jeopardy! But while machine learning theorists have made progress in teaching computers to perform specific tasks within a strict set of parameters -- such as how to parallel park a car or plumb encyclopedias for answers to trivia questions -- their programs don't enable computers to generalize in an open-ended way. Modha hopes his energy-efficient chips will usher in change. "Modern computers were originally designed for three fundamental problems: business applications, such as billing; science, such as nuclear physics simulation; and government programs, such as Social Security," Modha states. The brain, on the other hand, was forged on the crucible of evolution to quickly make sense of the world around it and act upon its conclusions. "It has the ability to pick out a prowling predator in huge grasses, amid a huge amount of noise, without being told what it is looking for. It isn't programmed. It learns to escape and avoid the lion." Machines with similar capabilities could help solve one of mankind's most pressing problems: the overload of information. Between 2005 and 2012, the amount of digital information created, replicated and consumed worldwide increased over
  • 15.
    2,000 percent -- exceeding2.8 trillion gigabytes in 2012. By some estimates, that's almost as many bits of information as there are stars in the observable universe. The arduous task of writing the code that instructs today's computers to make sense of this flood of information -- how to order it, analyze it, connect it, what to do with it -- is already far outstripping the abilities of human programmers. Cognitive computers, Modha believes, could plug the gap. Like the brain, they will weave together inputs from multiple sensory streams, form associations, encode memories, recognize patterns, make predictions and then interpret, perhaps even act - - all using far less power than today's machines. Drawing on data streaming in from a multitude of sensors monitoring the world's water supply, for instance, the computer might learn to recognize changes in pressure, temperature, wave size and tides, then issue tsunami warnings, even though current science has yet to identify the constellation of variables associated with the monster waves. Brain-based computers could help emergency department doctors render elusive diagnoses even when science has yet to recognize the collection of changes in
  • 16.
    body temperature, bloodcomposition or other variables associated with an underlying disease. 7/2/2014 Mind in the MACHINE: EBSCOhost http://eds.a.ebscohost.com/ehost/detail?sid=1348e4df-91dd- 4ca4-8835- 240055f8f22d%40sessionmgr4003&vid=2&hid=4210&bdata=Jn NpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=87520824 4/10 "You will still want to store your salary, your gender, your Social Security number in today's computers," Modha says. "But cognitive computing gives us a complementary paradigm for a radically different kind of machine." LIGHTING THE NETWORK Modha is hardly the first engineer to draw inspiration from the brain. An entire field of computer science has grown out of insights derived from the way the smallest units of the brain -- cells called neurons -- perform computations. It is the firing of neurons that allows us to think, feel and move. Yet these abilities stem not from the activity of any one neuron, but from networks of interconnected neurons sending and receiving simple signals and working in concert with each other.
  • 17.
    The potential forbrainlike machines emerged as early as 1943, when neurophysiologist Warren McCulloch and mathematician Walter Pitts proposed an idealized mathematical formulation for the way networks of neurons interact to cause one another to fire, sending messages throughout the brain. In a biological brain, neurons communicate by passing electrochemical signals across junctions known as synapses. Often the process starts with external stimuli, like light or sound. If the stimulus is intense enough, voltage across the membrane of receiving neurons exceeds a given threshold, signaling neurochemicals to fly across the synapses, causing more neurons to fire and so on and so forth. When a critical mass of neurons fire in concert, the input is perceived by the cognitive regions of the brain. With enough neurons firing together, a child can learn to ride a bike and a mouse can master a maze. McCulloch and Pitts pointed out that no matter how many inputs their idealized neuron might receive, it would always be in one of only two possible states -- activated or at rest, depending upon whether the threshold for excitement had been passed. Because neurons follow this "all-or-none law," every computation the brain performs can be reduced to series of true or false
  • 18.
    expressions, where true andfalse can be represented by 1 and 0, respectively. Modern computers are also based on logic systems using Is and 0s, with information coming from electric switches instead of the outside environment. McCulloch and Pitts had captured a fundamental similarity between brains and computers. If endowed with the capacity to ask enough yes-or-no questions, either one should presumably eventually arrive at the solution to even the most complicated of questions. As an example, to draw a boundary between a group of red dots and blue dots, one might ask of each dot if it is red (yes/no) or blue (yes/no). Then one might ask if two neighboring pairs of dots are of differing colors (yes/no). With enough layers of questions and answers, one might answer almost any complex question at all. Yet this kind of logical ability seemed far removed from the capacity of brains, made of networks of neurons, to encode memories or learn. That capacity was explained in 1949 by Canadian psychologist Donald Hebb, who hypothesized that when two neurons
  • 19.
    7/2/2014 Mind inthe MACHINE: EBSCOhost http://eds.a.ebscohost.com/ehost/detail?sid=1348e4df-91dd- 4ca4-8835- 240055f8f22d%40sessionmgr4003&vid=2&hid=4210&bdata=Jn NpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=87520824 5/10 fire in close succession, connections between them strengthen. "Neurons that fire together wire together" is the catchy phrase that emerged from his pivotal work. Connections between neurons explain how narrative memory is formed. In a famous literary example, Marcel Proust s childhood flooded back when he dipped a madeleine in his cup of tea and took a bite. The ritual was one he had performed often during childhood. When he repeated it years later, neurons fired in the areas of the brain storing these taste and motor memories. As Hebb had suggested, those neurons had strong physical connections to other neurons associated with other childhood memories. Thus when Proust tasted the madeleine, the neurons encoding those memories also fired -- and Proust was flooded with so many associative memories he filled volumes of his masterwork, In Search of Lost Time. By 1960, computer researchers were trying to model Hebb's ideas about learning and memory. One effort was a crude brain mock-up called the perceptron. The perceptron contained a
  • 20.
    network of artificialneurons, which could be simulated on a computer or physically built with two layers of electrical circuits. The space between the layers was said to represent the synapse. When the layers communicated with each other by passing signals over the synapse, that was said to model (roughly) a living neural net. One could adjust the strength of signals passed between the two layers -- and thus the likelihood that the first layer would activate the second (much like one firing neuron activates another to pass a signal along). Perceptron learning occurred when the second layer was instructed to respond more powerfully to some inputs than others. Programmers trained an artificial neural network to "read," activating more strongly when shown patterns of light depicting certain letters of the alphabet and less strongly when shown others. The idea that one could train a computer to categorize data based on experience was revolutionary. But the perceptron was limited: Consisting of a mere two layers, it could only recognize a "linearly separable" pattern, such as a plot of black dots and white dots that can be separated by a single straight line (or, in more graphic terms, a cat sitting next to a chair). But show it a plot
  • 21.
    of black andwhite dots depicting something more complex, like a cat sitting on a chair, and it was utterly confused. It wasn't until the 1980s that engineers developed an algorithm capable of taking neural networks to the next level. Now programmers could adjust the weights not just between two layers of artificial neurons, but also a third, a fourth -- even a ninth layer -- in between, representing a universe where many more details could live. This expanded the complexity of questions such networks could answer. Suddenly neural networks could render squiggly lines between black and white dots, recognizing both the cat and the chair it was sitting in at the same time. OUT OF BOMBAY Just as the neural net revival was picking up steam, Modha entered India's premier engineering school, the Indian Institute of 7/2/2014 Mind in the MACHINE: EBSCOhost http://eds.a.ebscohost.com/ehost/detail?sid=1348e4df-91dd- 4ca4-8835- 240055f8f22d%40sessionmgr4003&vid=2&hid=4210&bdata=Jn NpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=87520824 6/10
  • 22.
    Technology in Bombay.He graduated with a degree in computer science and engineering in 1990. As Modha looked to continue his education, few areas seemed as hot as the reinvigorated field of neural networks. In theory, the size of neural networks was limited only by the size of computers and the ingenuity of programmers. In one powerful example of the new capabilities around that time, Carnegie Mellon graduate student Dean Pomerleau used simulated images of road conditions to teach a neural network to interpret live road images picked up by cameras attached to a car's onboard computer. Traditional programmers had been stumped because even subtle changes in angle, lighting or other variables threw off preprogrammed software coded to recognize exact visual parameters. Instead of trying to precisely code every possible image or road condition, Pomerleau simply showed a neural network different kinds of road conditions. Once it was trained to drive under specific conditions, it was able to generalize to drive under similar but not identical conditions. Using this method, a computer could recognize a road with metal dividers based on its similarities to a road without dividers, or a rainy road based on its similarity to a sunny road -- an impossibility using traditional coding
  • 23.
    techniques. After being shownimages of various left-curving and right- curving roads, it could recognize roads curving at any angle. Other programmers designed a neural network to detect credit card fraud by exposing it to purchase histories of good versus fraudulent card accounts. Based on the general spending patterns found in known fraudulent accounts, the neural network was able to recognize the behavior and flag new fraud cases. The neural networking mecca was San Diego -- in 1987, about 1,500 people met there for the first significant conference on neural networking in two decades. And in 1991, Modha arrived at the University of California, San Diego to pursue his Ph.D. He focused on applied math, constructing equations to examine how many dimensions of variables certain systems could handle, and designing configurations to handle more. By the time Modha was hired by IBM in 1997 in San Jose, another computing trend was taking center stage: the explosion of the World Wide Web. Even back then, it was apparent that the flood of new data was overwhelming programmers. The Internet offered a vast trove of information about human behavior, consumer preferences and social trends. But there was so much
  • 24.
    of it: How didone organize it? How could you begin to pick patterns out of files that could be classified based on tens of thousands of characteristics? Current computers consumed way too much energy to ever handle the data or the massive programs required to take every contingency into account. And with a growing array of sensors gathering visual, auditory and other information in homes, bridges, hospital emergency departments and everywhere else, the information deluge would only grow. A CANONICAL PATH 7/2/2014 Mind in the MACHINE: EBSCOhost http://eds.a.ebscohost.com/ehost/detail?sid=1348e4df-91dd- 4ca4-8835- 240055f8f22d%40sessionmgr4003&vid=2&hid=4210&bdata=Jn NpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=87520824 7/10 The more Modha thought about it, the more he became convinced that the solution might be found by turning back to the brain, the most effective and energy-efficient pattern recognition machine in existence. Looking to the neuro-scientific literature for inspiration, he found the writings of MIT neuroscientist
  • 25.
    Mriganka Sur. Surhad severed the neurons connecting the eyes of newborn ferrets to the brain's visual cortex; then he reconnected those same neurons to the auditory cortex. Even with eyes connected to the sound-processing areas of the brain, the rewired animals could still see as adults. To Modha, this revealed a fascinating insight: The neural circuits in Sur's ferrets were flexible -- as interchangeable, it seemed, as the back and front tires of some cars. Sur's work implied that to build an artificial cortex on a computer, you only needed one design to create the "circuit" of neurons that formed all its building blocks. If you could crack the code of that circuit -- and embody it in computation -- all you had to do was repeat it. Programmers wouldn't have to start over every time they wanted to add a new function to a computer, using pattern recognition algorithms to make sense of new streams of data. They could just add more circuits. "The beauty of this whole approach," Modha enthusiastically explains, "is that if you look at the mammalian cerebral cortex as a road map, you find that by adding more and more of these circuits, you get more and more functionality."
  • 26.
    In search ofa master neural pattern, Modha discovered that European researchers had come up with a mathematical description of what appeared to be the same as the circuit Sur investigated in ferrets, but this time in cats. If you unfolded the cat cortex and unwrinkled it, you would find the same six layers repeated again and again. When connections were drawn between different groups of neurons in the different layers, the resulting diagrams looked an awful lot like electrical circuit diagrams. Modha and his team began programming an artificial neural network that drew inspiration from these canonical circuits and could be replicated multiple times. The first step was determining how many of these virtual circuits they could they link together and run on IBM's traditional supercomputers at once. Would it be possible to reach the scale of a human cortex? At first Modha and his team hit a wall before they reached 40 percent of the number of neurons present in the mouse cerebral cortex: roughly 8 million neurons, with 6,300 synaptic connections apiece. The truncated circuitry limited the learning, memory and creative intelligence their simulation could achieve. So they turned back to neuroscience for solutions. The actual
  • 27.
    neurons in thebrain, they realized, only become a factor in the organs overall computational process when they are activated. When inactive, neurons simply sit on the sidelines, expending little energy and doing nothing. So there was no need to update the relationship between 8 million neurons 1,000 times a second. Doing so only slowed the system down. Instead, they could emulate the brain by instructing the computer to focus attention only 7/2/2014 Mind in the MACHINE: EBSCOhost http://eds.a.ebscohost.com/ehost/detail?sid=1348e4df-91dd- 4ca4-8835- 240055f8f22d%40sessionmgr4003&vid=2&hid=4210&bdata=Jn NpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=87520824 8/10 on neurons that had recently fired and were thus most likely to fire again. With this adjustment, the speed at which the supercomputer could simulate a brain-based system increased a thousandfold. By November 2007, Modha had simulated a neural network on the scale of a rat cortex, with 55 million neurons and 442 billion synapses. Two years later his team scaled it up to the size of a cat brain, simulating 1.6 billion neurons and almost 9 trillion synapses. Eventually they scaled the model up to simulate a system of 530
  • 28.
    billion neurons and100 trillion synapses, a crude approximation of the human brain. BUILDING A SILICON BRAIN The researchers had simulated hundreds of millions of repetitions of the kind of canonical circuit that might one day enable a new breed of cognitive computer. But it was just a model, running at a maddeningly slow speed on legacy machines that could never be brainlike, never step up to the cognitive plate. In 2008, the federal Defense Advanced Research Projects Agency (DARPA) announced a program aimed at building the hardware for an actual cognitive computer. The first grant was the creation of an energy-efficient chip that would serve as the heart and soul of the new machine -- a dream come true for Modha. With DARPA's funding, Modha unveiled his new, energy- efficient neural chips in summer 2011. Key to the chips' success was their processors, chip components that receive and execute instructions for the machine. Traditional computers contain a small number of very fast processors (modern laptops usually have two to four processors on a single chip) that are almost always working. Every millisecond, these processors scan millions of electrical switches, monitoring and flipping thousands of circuits
  • 29.
    between two possiblestates, 1 and 0 -- activated or not. To store the patterns of ones and zeros, today's computers use a separate memory unit. Electrical signals are conveyed between the processor and memory over a pathway known as a memory bus. Engineers have increased the speed of computing by shortening the length of the bus. Some servers can now loop from memory to processor and back around a few hundred- million times per second. But even the shortest buses consume energy and create heat, requiring lots of power to cool. The brain's architecture is fundamentally different, and a computer based on the brain would reflect that. Instead of a small number of large, powerful processors working continuously, the brain contains billions of relatively slow, small processors -- its neurons -- which consume power only when activated. And since the brain stores memories in the strength of connections between neurons, inside the neural net itself, it requires no energy-draining bus. The processors in Modha's new chip are the smallest units of a computer that works like the brain: Every chip contains 256 very slow processors, each one representing an artificial neuron (By comparison, a roundworm brain consists of about 300 neurons.)
  • 30.
    7/2/2014 Mind inthe MACHINE: EBSCOhost http://eds.a.ebscohost.com/ehost/detail?sid=1348e4df-91dd- 4ca4-8835- 240055f8f22d%40sessionmgr4003&vid=2&hid=4210&bdata=Jn NpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=87520824 9/10 Only activated processors consume significant power at any one time, making energy consumption low. But even when activated, the processors need far less power than their counterparts in traditional computers because the tasks they are designed to execute are far simpler: Whereas a traditional computer processor is responsible for carrying out all the calculations and operations that allow a computer to run, Modha's tiny units only need to sum up the number of signals received from other virtual neurons, evaluate their relative weights and determine whether there are enough of them to prompt the processor to emit a signal of its own. Modha has yet to link his new chips and their processors in a large-scale network that mimics the physical layout of a brain. But when he does, he is convinced that the benefits will be vast. Evolution has invested the brain's anatomy with remarkable
  • 31.
    energy efficiencies by positioningthose areas most likely to communicate closer together; the closer neurons are to one another, the less energy they need to push a signal through. By replicating the big-picture layout of the brain, Modha hopes to capture these and other unanticipated energy savings in his brain-inspired machines. He has spent years poring over studies of long- distance connections in the rhesus macaque monkey brain, ultimately creating a map of 383 different brain areas, connected by 6,602 individual links. (See "Mapping the Monkey Brain," page 58.) The map suggests how many cognitive computing chips should be allocated to the different regions of any artificial brain, and which other chips they should be wired to. For instance, 336 links begin at the main vision center of the brain. An impressive 1,648 links emerge from the frontal lobe, which contains the prefrontal cortex, a centrally located brain structure that is the seat of decisionmaking and cognitive thought. As with a living brain, the neural computer would have most connections converging on a central point.
  • 32.
    Of course, evenif Modha can build this brai-niac, some question whether it will have any utility at all. Geoff Hinton, a leading neural networking theorist, argues the hardware is useless without the proper "learning algorithm" spelling out which factors change the strength of the synaptic connections and by how much. Building a new kind of chip without one, he argues, is "a bit like building a car engine without first figuring out how to make an explosion and harness the energy to make the wheels go round." But Modha and his team are undeterred. They argue that they are complementing traditional computers with cognitive- computing- like abilities that offer vast savings in energy, enabling capacity to grow by leaps and bounds. The need grows more urgent by the day. By 2020, the world will generate 14 times the amount of digital information it did in 2012. Only when computers can spot patterns and make connections on their own, says Modha, will the problem be solved. Creating the computer of the future is a daunting challenge. But Modha learned long ago, halfway across the world as a teen scraping the paint off of chairs, that if you tap the power of the human brain, there is no telling what you might do.
In a biological brain, neurons communicate by passing electrochemical signals across junctions known as synapses. Often the process starts with external stimuli, like light or sound. "We'd like these chips to be everywhere -- in every corner, in everything. We'd like them to become absolutely essential to the world."

Photo captions: They simulated a system of 530 billion neurons and 100 trillion synapses, a crude rendition of the human brain. Gathering in front of the brain wall this February are the Cognitive Computing Lab team members (from left) John Arthur, Paul Merolla, Bill Risk, Dharmendra Modha, Bryan Jackson, Myron Flickner and Steve Esser. Dharmendra Modha and team member Bill Risk stand by a supercomputer at the IBM Almaden facility. Using the supercomputers at Almaden and Lawrence Livermore National Laboratory, the group simulated networks that crudely approximated the brains of mice, rats, cats and humans. Dharmendra Modha stands alongside the brain wall, used by his cognitive computing team to simulate brain activity and model neural chips at IBM. In his hand is a neurosynaptic chip, the core component of a new generation of computers based on the architecture of the brain. The neon swirl on the opposite page was inspired by the neural architecture of a rhesus macaque brain, used by Modha to help him design the chip. (Initials around the swirl's rim indicate discrete regions in the macaque brain.)

By Adam Piore. Photograph by Majed Abolfazli. Adam Piore is a contributing editor at DISCOVER. © 2013 Discover Magazine

The Man Who Would Teach Machines to Think
When he was 35, in 1980, Douglas Hofstadter won the Pulitzer Prize for his book on
how the mind works, Gödel, Escher, Bach. It became the bible of artificial intelligence, the book of the future. Then the future moved somewhere else. What if the best ideas about AI are yellowing in a drawer in Bloomington? By James Somers. Photographs by Greg Ruffing.

"IT DEPENDS ON what you mean by artificial intelligence." Douglas Hofstadter is in a grocery store in Bloomington, Indiana, picking out salad ingredients. "If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn't go this far—but they might say this is some of the only good work that's ever been done." Hofstadter says this with an easy deliberateness, and he says it that way because for him, it is an uncontroversial conviction that
the most exciting projects in modern artificial intelligence, the stuff the public maybe sees as stepping stones on the way to science fiction—like Watson, IBM's Jeopardy-playing supercomputer, or Siri, Apple's iPhone assistant—in fact have very little to do with intelligence. For the past 30 years, most of them spent in an old house just northwest of the Indiana University campus, he and his graduate students have been picking up the slack: trying to figure out how our thinking works, by writing computer programs that think. Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself. Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we'll have made intelligent machines.

THE IDEA THAT CHANGED Hofstadter's existence, as he has explained over the years, came to him on the road, on a break from graduate school in particle physics. Discouraged by the way his doctoral thesis was going at the University of Oregon, feeling "profoundly lost," he decided in the summer of 1972 to pack his things into a car he called Quicksilver and drive eastward across the continent. Each night he pitched his tent somewhere new ("sometimes in a forest, sometimes by a lake") and read by flashlight. He was free to think about whatever he wanted; he chose to think about thinking itself. Ever since he was about 14, when he found out that his youngest sister, Molly, couldn't understand language, because she "had something deeply wrong with her brain" (her neurological condition probably dated from birth,
and was never diagnosed), he had been quietly obsessed by the relation of mind to matter. The father of psychology, William James, described this in 1890 as "the most mysterious thing in the world": How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves? Roaming in his 1956 Mercury, Hofstadter thought he had found the answer—that it lived, of all places, in the kernel of a mathematical proof. In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself. Consciousness, Hofstadter wanted to say, emerged via just the same kind of "level-crossing feedback loop." He sat down one afternoon to sketch his thinking in a letter to a friend. But after 30 handwritten pages, he decided not to send it; instead he'd let the ideas germinate a while. Seven years later, they had not so much germinated as metastasized into a 2.9-pound, 777-page book called Gödel, Escher, Bach: An Eternal Golden Braid, which would earn for Hofstadter—only 35 years old, and a first-time author—the 1980 Pulitzer Prize for general nonfiction. GEB, as the book became known, was a sensation. Its success was catalyzed by Martin Gardner, a popular columnist for Scientific American, who very unusually devoted his space in the July 1979 issue to discussing one book—and wrote a glowing review. "Every few decades," Gardner began, "an unknown author brings out a book of such depth, clarity, range, wit, beauty and originality that it is recognized at once as a major literary event." The first American to earn a doctoral degree in computer science (then labeled "communication sciences"), John Holland, recalled that "the general response
amongst people I know was that it was a wonderment." Hofstadter seemed poised to become an indelible part of the culture. GEB was not just an influential book, it was a book fully of the future. People called it the bible of artificial intelligence, that nascent field at the intersection of computing, cognitive science, neuroscience, and psychology. Hofstadter's account of computer programs that weren't just capable but creative, his road map for uncovering the "secret software structures in our minds," launched an entire generation of eager young students into AI. But then AI changed, and Hofstadter didn't change with it, and for that he all but disappeared.

GEB ARRIVED ON THE SCENE at an inflection point in AI's history. In the early 1980s, the field was retrenching: funding for long-term "basic science" was drying up, and the focus was shifting to practical systems. Ambitious AI research had acquired a bad reputation. Wide-eyed overpromises were the norm, going back to the birth of the field in 1956 at the Dartmouth Summer Research Project, where the organizers—including the man who coined the term artificial intelligence, John McCarthy—declared that "if a carefully selected group of scientists work on it together for a summer," they would make significant progress toward creating machines with one or more of the following abilities:
the ability to use language; to form concepts; to solve problems now solvable only by humans; to improve themselves. McCarthy later recalled that they failed because "AI is harder than we thought."

Photo caption: The messy study on the top floor of Hofstadter's house in Bloomington is the center of his world, a museum for his intellectual "binges," a scene out of a brainy episode of Hoarders.

With wartime pressures mounting, a chief underwriter of AI research—the Defense Department's Advanced Research Projects Agency (ARPA)—tightened its leash. In 1969, Congress passed the Mansfield Amendment, requiring that Defense support only projects with "a direct and apparent relationship to a specific military function or operation." In 1972, ARPA became DARPA, the D for "Defense," to reflect its emphasis on projects with a military benefit. By the middle of the decade, the agency was asking itself: What concrete improvements in national defense did we just buy, exactly, with 10 years and $50 million worth of exploratory research? By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing's famous question, "Can machines think?," started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications. Work was increasingly done over short time horizons, often with specific
buyers in mind. For the military, favored projects included "command and control" systems, like a computerized in-flight assistant for combat pilots, and programs that would automatically spot roads, bridges, tanks, and silos in aerial photographs. In the private sector, the vogue was "expert systems," niche products like a pile-selection system, which helped designers choose materials for building foundations, and the Automated Cable Expertise program, which ingested and summarized telephone-cable maintenance reports. In GEB, Hofstadter was calling for an approach to AI concerned less with solving human problems intelligently than with understanding human intelligence—at precisely the moment that such an approach, having borne so little fruit, was being abandoned. His star faded quickly. He would increasingly find himself out of a mainstream that had embraced a new imperative: to make machines perform in any way possible, with little regard for psychological plausibility. Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force. For each legal move it could make at a given point in the game, it would consider its opponent's responses, its own responses to those responses, and so on for six or more steps down the line. With a fast evaluation function, it would calculate a score for each possible position, and then make the move that led to the best score. What allowed Deep Blue to beat the world's best humans was raw computational power. It could evaluate up to 330 million positions a second, while Kasparov could evaluate only a few dozen before having to make a decision.
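The look-ahead described here, considering each legal move, the opponent's replies, and so on for several steps, then scoring the resulting positions with a fast evaluation function, is the classic minimax search. The sketch below is a generic, minimal version of that idea, not Deep Blue's actual code; the tiny hand-made game tree and its leaf scores are invented so the example can run.

```python
# A generic minimax look-ahead of the kind described above: explore
# moves and counter-moves down the game tree, score the leaf positions
# with an evaluation function, and back the scores up. The toy tree and
# its leaf scores are invented for illustration.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: a static evaluation score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is one move's set of possible replies; numbers are evaluations.
toy_tree = [[3, 12], [2, 8], [14, 5, 2]]
print(minimax(toy_tree, maximizing=True))  # -> 3: best score assuming best replies
```

Deep Blue's edge came from doing this at enormous scale, with specialized hardware scoring hundreds of millions of positions a second rather than a three-branch toy tree.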
Hofstadter wanted to ask: Why conquer a task if there's no insight to be had from the victory? "Okay," he says, "Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?" A brand of AI that didn't try to answer such questions—however impressive it might have been—was, in Hofstadter's mind, a diversion. He distanced himself from the field almost as soon as he became a part of it. "To me, as a fledgling AI person," he says, "it was self-evident that I did not want to get involved in that trickery. It was obvious: I don't want to be involved in passing off some fancy program's behavior for intelligence when I know that it has nothing to do with intelligence. And I don't know why more people aren't that way." One answer is that the AI enterprise went from being worth a few million dollars in the early 1980s to billions by the end of the decade. (After Deep Blue won in 1997, the value of IBM's stock increased by $18 billion.) The more staid an engineering discipline AI became, the more it accomplished. Today, on the strength of techniques bearing little relation to the stuff of thought, it seems to be in a kind of golden age. AI pervades heavy industry, transportation, and finance. It powers many of Google's core functions, Netflix's movie recommendations, Watson, Siri, autonomous drones, the self-driving car.
    "The quest for'artificial flight' succeeded when the Wright brothers and others stopped imitating birds and started... learn- ing about aerodynamics," Stuart Russell and Peter Norvig write in their leading textbook. Artificial Intelligence: A Modern Ap- proach. AI started working when it ditched humans as a model, because it ditched them. That's the thrust of the analogy: Air- planes don't flap their wings; why should computers think? It's a compelling point. But it loses some bite when you cun- sider what we want: a Google that knows, in the way a human would know, what you really mean when you search for some thing. Russell, a computer-science professor at Berkeley, said to me, "What's the combined market cap of all of the search companies on the Web? It's probably four hundred, five hundred billion dollars. Engines that could actually extract all that infor- mation and understand it would be worth lo times as much." This, then, is the trillion-dollar question: Will the ap- proach undergirding AI today—an approach that borrows little from the mind, that's grounded instead in big data and big engineering—get us to where we want to go? How do you make a search engine that understands if you don't know how you understand? Perhaps, as Russell and Norvig politely ac- knowledge in the last chapter of their textbook, in taking its practical turn, AI has become too much like the man who tries to get to the moon by climbing a tree: "One can report steady progress, all the way to the top of the tree." Consider that computers today still have trouble recogniz- The thesis of his new book, which features a mélange of A's on its cover, is that analogy is "the fuel and fire of thinking," the bread and butter of our daily mental lives. "Look at your conversations," he says. "You'll see over and over again, to your surprise, that this is the process of analogy-
  • 43.
    making." Someone sayssomething, which reminds you of something else; you say something, which reminds the other person of something else—that's a conversation. It couldn't be more straightforward. But at each step, Hofstadter argues, there's an analogy, a mental leap so stunningly complex that it's a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its "skeletal essence," and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates. "Beware," he writes, "of innocent phrases like 'Oh, yeah, that's exactly what happened to me' ... behind whose non- chalance is hidden the entire mystery of the human mind." In the years after the release of GEB, Hofstadter and AI went their separate ways. Today, if you were to pull AI: A Modern Approach off the shelf, you wouldn't find Hofstadter's name—not in more than 1,000 pages. Colleagues talk about him in the past tense. New fans of G£B, seeing when it was published, are surprised to find out its author is still alive. Of course in Hofstadter's telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote. ''Very few people are interested in how human intelligence works/' Hofstadter says. "That's what we're interested in—what is thinking^" ing a handwritteni4. In fact, the task is so difficult that it forms the basis for CAPTCHAS ("Completely Automated Public Tur- ing tests to tell Computers and Humans Apart"), those widgets that require you to read distorted text and type the characters into a box before, say, letting you sign up for a Web site.
  • 44.
    In Hofstadter's mind,there is nothing to be surprised about. To know what all A's have in common would be, he argued in a 1982 essay, to "understand the fluid nature of mental cat- egories." And that, he says, is the core of human intelligence. "Cognition is recognition," he likes to say. He describes "seeing as" as the essential cognitive act: you see some lines as "an A," you see a hunk of wood as "a table," you see a meeting as "an emperor-has-no-clothes situation" and a friend's pout- ing as "sour grapes" and a young man's style as "hipsterish" and on and on ceaselessly throughout your day. That's what it means to understand. But how does understanding work? For three decades, Hofstadter and his students have been trying to find out, trying to build "computer models of the fundamental mechanisms of thought." "At every moment," Hofstadter writes in Surfaces and Essences, his latest book (written with Emmanuel Sander), "we are simultaneously faced with an indefinite number of over- lapping and intermingling situations." It is our job, as organ- isms that want to live, to make sense ofthat chaos. We do it by having the right concepts come to mind. This happens automatically, all the time. Analogy is Hofstadter's go-to word. "patiently, systematically, brilliantly," way out of the light of day, chipped away at the real problem. "Very few people are interested in how human intelligence works," Hofstadter says. "That's what we're interested in—what is thinking?—and we don't lose track ofthat question." "I mean, who knows?" he says. "Who knows what'U happen. Maybe someday people will say, 'Hofstadter already did this stuffand said this stuffand we're just now discovering it.' " Which sounds exactly like the self-soothing of the guy who
lost. But Hofstadter has the kind of mind that tempts you to ask: What if the best ideas in artificial intelligence—"genuine artificial intelligence," as Hofstadter now calls it, with apologies for the oxymoron—are yellowing in a drawer in Bloomington?

DOUGLAS R. HOFSTADTER was born into a life of the mind the way other kids are born into a life of crime. He grew up in 1950s Stanford, in a house on campus, just south of a neighborhood actually called Professorville. His father, Robert, was a nuclear physicist who would go on to share the 1961 Nobel Prize in Physics; his mother, Nancy, who had a passion for politics, became an advocate for developmentally disabled children and served on the ethics committee of the Agnews Developmental Center, where Molly lived for more than 20 years. In her free time Nancy was, the joke went, a "professional faculty wife": she transformed the Hofstadters' living room into a place where a tight-knit community of friends could gather for stimulating conversation and jazz, for "the interpenetration of the sciences and the arts," Hofstadter told me—an intellectual feast. Dougie ate it up. He was enamored of his parents' friends, their strange talk about "the tiniest or gigantic-est things." (At age 8, he once said, his dream was to become "a zero-mass, spin one-half neutrino.") He'd hang around the physics department for 4 o'clock tea, "as if I were a little 12-year-old graduate student." He was curious, insatiable, unboreable—"just a kid fascinated by ideas"—and intense. His intellectual style was, and is, to go on what he calls "binges": he might practice piano for
seven hours a day; he might decide to memorize 1,200 lines of Eugene Onegin. He once spent weeks with a tape recorder teaching himself to speak backwards, so that when he played his garbles in reverse they came out as regular English. For months at a time he'll immerse himself in idiomatic French or write computer programs to generate nonsensical stories or study more than a dozen proofs of the Pythagorean theorem until he can "see the reason it's true." He spends "virtually every day exploring these things," he says, "unable to not explore. Just totally possessed, totally obsessed, by this kind of stuff." Hofstadter is 68 years old. But there's something Peter Pan-ish about a life lived so much on paper, in software, in a man's own head. Can someone like that age in the usual way? Hofstadter has untidy gray hair that juts out over his ears, a fragile, droopy stature, and, between his nose and upper lip, a long groove, almost like the Grinch's. But he has the self-seriousness, the urgent earnestness, of a still very young man. The stakes are high with him; he isn't easygoing. He's the kind of vegetarian who implores the whole dinner party to eat vegetarian too; the kind of sensitive speaker who corrects you for using "sexist language" around him. "He has these rules," explains his friend Peter Jones, who's known Hofstadter for 59 years. "Like how he hates you guys. That's an imperative. If you're talking to him, you better not say you guys." For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington. He lives in a house a few blocks from campus with Baofen Lin, whom he married last September; his two children by his previous marriage, Danny and Monica, are now grown. Although he has strong ties with the cognitive-science program and affiliations with several
departments—including computer science, psychological and brain sciences, comparative literature, and philosophy—he has no official obligations. "I think I have about the cushiest job you could imagine," he told me. "I do exactly what I want." He spends most of his time in his study, two rooms on the top floor of his house, carpeted, a bit stuffy, and messier than he would like. His study is the center of his world. He reads there, listens to music there, studies there, draws there, writes his books there, writes his e-mails there. (Hofstadter spends four hours a day writing e-mail. "To me," he has said, "an e-mail is identical to a letter, every bit as formal, as refined, as carefully written... I rewrite, rewrite, rewrite, rewrite all of my e-mails, always.") He lives his mental life there, and it shows. Wall-to-wall there are books and drawings and notebooks and files, thoughts fossilized and splayed all over the room. It's like a museum for his binges, a scene out of a brainy episode of Hoarders. "Anything that I think about becomes part of my professional life," he says. Daniel Dennett, who co-edited The Mind's I with him, has explained that "what Douglas Hofstadter is, quite simply, is a phenomenologist, a practicing phenomenologist, and he does it better than anybody else. Ever." He studies the phenomena—the feelings, the inside actions—of his own mind. "And the reason he's good at it," Dennett told me, "the reason he's better than anybody else, is that he is very actively trying to have a theory of what's going on backstage, of how thinking actually happens in the brain." In his back pocket, Hofstadter carries a four-color Bic ballpoint pen and a small notebook. It's always been that way. In what used to be a bathroom adjoined to his study but is now just
extra storage space, he has bookshelves full of these notebooks. He pulls one down—it's from the late 1950s. It's full of speech errors. Ever since he was a teenager, he has captured some 10,000 examples of swapped syllables ("hypodeemic nerdle"), malapropisms ("runs the gambit"), "malaphors" ("easy-go-lucky"), and so on, about half of them committed by Hofstadter himself. He makes photocopies of his notebook pages, cuts them up with scissors, and stores the errors in filing cabinets and labeled boxes around his study. For Hofstadter, they're clues. "Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious," he once wrote. "This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms." Correct speech isn't very interesting; it's like a well-executed magic trick—effective because it obscures how it works. What Hofstadter is looking for is "a tip of the rabbit's ear... a hint of a trap door." In this he is the modern-day William James, whose blend of articulate introspection (he introduced the idea of the stream of consciousness) and crisp explanations made his 1890 text, Principles of Psychology, a classic. "The mass of our thinking vanishes for ever, beyond hope of recovery," James wrote, "and psychology only gathers up a few of the crumbs that fall from the feast." Like Hofstadter, James made his life playing under the table, gleefully inspecting those crumbs. The difference is that where James had only his eyes, Hofstadter has something like a microscope.
YOU CAN CREDIT the development of manned aircraft not to the Wright brothers' glider flights at Kitty Hawk but to the six-foot wind tunnel they built for themselves in their bicycle shop using scrap metal and recycled wheel spokes. While their competitors were testing wing ideas at full scale, the Wrights were doing focused aerodynamic experiments at a fraction of the cost. Their biographer Fred Howard says that these were "the most crucial and fruitful aeronautical experiments ever conducted in so short a time with so few materials and at so little expense." In an old house on North Fess Avenue in Bloomington, Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG. The yearly operating budget is $100,000. Inside, it's homey—if you wandered through, you could easily miss the filing cabinets tucked beside the pantry, the photocopier humming in the living room, the librarian's labels (NEUROSCIENCE, MATHEMATICS, PERCEPTION) on the bookshelves. But for 25 years, this place has been host to high enterprise, as the small group of scientists tries, Hofstadter has written, "first, to uncover the secrets of creativity, and second, to uncover the secrets of consciousness." As the wind tunnel was to the Wright brothers, so the computer is to FARG. The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited. In Hofstadter's view, this is the great opportunity of artificial intelligence. Parts of a program can be selectively isolated to see how it functions without them; parameters can be
changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why. "I have always felt that the only hope of humans ever coming to fully understand the complexity of their minds," Hofstadter has written, "is by modeling mental processes on computers and learning from the models' inevitable failures." Turning a mental process caught and catalogued in Hofstadter's house into a running computer program, just a mile up the road, takes a dedicated graduate student about five to nine years. The programs all share the same basic architecture—a set of components and an overall style that traces back to Jumbo, a program that Hofstadter wrote in 1982 that worked on the word jumbles you find in newspapers. The first thought you ought to have when you hear about a program that's tackling newspaper jumbles is: Wouldn't those be trivial for a computer to solve? And indeed they are—I just wrote a program that can handle any word, and it took me four minutes. My program works like this: it takes the jumbled word and tries every rearrangement of its letters until it finds a word in the dictionary.
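Somers doesn't show his four-minute program, but a brute-force jumble solver of the kind he describes really does fit in a few lines. The sketch below assumes a plain one-word-per-line word list; the path shown is just a common Unix location, not anything from the article.

```python
# A brute-force jumble solver of the kind described above: try every
# rearrangement of the letters until one turns up in the dictionary.
# The word-list path is an assumption for this example; any plain
# one-word-per-line file will do.
from itertools import permutations

def load_dictionary(path="/usr/share/dict/words"):
    with open(path) as f:
        return {line.strip().lower() for line in f}

def unjumble(jumbled, dictionary):
    for perm in permutations(jumbled.lower()):
        candidate = "".join(perm)
        if candidate in dictionary:
            return candidate
    return None

words = load_dictionary()
print(unjumble("umlbej", words))  # -> "jumble", if the word list contains it
```

For a six-letter word this tries at most 720 orderings, which is why the brute-force version is trivial; Jumbo's interest, as the next passage explains, lay elsewhere.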
    of "enzymes," ashe says, that jiggle around, glomming on to structures where they flnd them, kicking reactions into gear. Some enzymes are rearrangers {pang-loss hecomespan-gloss or lang-poss), others are builders (g and h become the duster gh; jum and ble become jumble), and still others are breakers (ight is broken into it and gh). Each reaction in turn produces oth- ers, the population of enzymes at any given moment balancing itself to reflect the state of the jumble. It's an unusual kind of computation, distinct for its fluidity. Hofstadter of course offers an analogy: a swarm of ants ram- bling around the forest floor, as scouts make small random for- ays in all directions and report their finds to the group, their feedback driving an efficient search for food. Such a swarm is robust—step on a handful of ants and the others quickly recover—and, because ofthat robustness, adept. When you read Fluid Concepts and Creative Analogies: Com- puter Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hof- stadter got famous for the wrong book. As a writer for The New York Times once put it in a 1995 review, "The reader of'Fluid Concepts & Creative Analogies' cannot help suspecting that the group at Indiana University is on to something momentous." But very few people, even admirers of G£B, know about the book or the programs it describes. And maybe that's because FARc's programs are almost ostentatiously impractical. Be- cause they operate in tiny, seemingly childish "microdomains." Because there is no task they perform better than a human. T HE MODERN ERA of mainstream AI—an era of
steady progress and commercial success that began, roughly, in the early 1990s and continues to this day—is the long unlikely springtime after a period, known as the AI Winter, that nearly killed off the field. It came down to a basic dilemma. On the one hand, the software we know how to write is very orderly; most computer programs are organized like a well-run army, with layers of commanders, each layer passing instructions down to the next, and routines that call subroutines that call subroutines. On the other hand, the software we want to write would be adaptable—and for that, a hierarchy of rules seems like just the wrong idea. Hofstadter once summarized the situation by writing, "The entire effort of artificial intelligence is essentially a fight against computers' rigidity." In the late '80s, mainstream AI was losing research dollars, clout, conference attendance, journal submissions, and press—because it was getting beat in that fight. The "expert systems" that had once been the field's meal ticket were foundering because of their brittleness. Their approach was fundamentally broken. Take machine translation from one language to another, long a holy grail of AI. The standard attack involved corralling linguists and translators into a room and trying to convert their expertise into rules for a program to follow. The standard attack failed for reasons you might expect: no set of rules can ever wrangle a human language; language is too big and too protean; for every rule obeyed, there's a rule broken. If machine translation was to survive as a commercial enterprise—if AI was to survive—it would have to find another way. Or better yet, a shortcut.
And it did. You could say that it started in 1988, with a project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits. So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward, you can hardly believe it. The technique is called "machine learning." The goal is to make a device that takes an English sentence as input and spits out a French sentence. One such device, of course, is the human brain—but the whole point is to avoid grappling with the brain's complexity. So what you do instead is start with a machine so simple, it almost doesn't work: a machine, say, that randomly spits out French words for the English words it's given. Imagine a box with thousands of knobs on it. Some of these knobs control general settings: given one English word, how many French words, on average, should come out? And some control specific settings: given jump, what is the probability that shot comes next? The question is, just by tuning these knobs, can you get your machine to convert sensible English into sensible French? It turns out that you can. What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.) You proceed one pair at a time. After you've entered a pair, take the English half and feed it into your machine to see what comes out in French. If that sentence is different
from what you were expecting—different from the known correct translation—your machine isn't quite right. So jiggle the knobs and try again. After enough feeding and trying and jiggling, feeding and trying and jiggling again, you'll get a feel for the knobs, and you'll be able to produce the correct French equivalent of your English sentence. By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you'll be able to enter a sentence whose translation you don't know and get a reasonable result. And the beauty is that you never needed to program the machine explicitly; you never needed to know why the knobs should be twisted this way or that. Candide didn't invent machine learning—in fact the concept had been tested plenty before, in a primitive form of machine translation in the 1960s. But up to that point, no test had been very successful. The breakthrough wasn't that Candide cracked the problem. It was that so simple a program performed adequately. Machine translation was, as Adam Berger, a member of the Candide team, writes in a summary of the project, "widely considered among the most difficult tasks in natural language processing, and in artificial intelligence in general, because accurate translation seems to be impossible without a comprehension of the text to be translated." That a program as straightforward as Candide could perform at par suggested that effective machine translation didn't require comprehension—all it required was lots of bilingual text. And for that, it became a proof of concept for the approach that conquered AI.
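The feed-compare-and-jiggle loop described above can be seen in miniature with a toy word-translation model. The sketch below is not Candide's actual system; the sentence pairs are invented, and the update loop is only a bare-bones version of the iterative reestimation idea used in early IBM-style translation models. Its point is simply that the "knobs" (here, word-translation probabilities) are learned from examples rather than written by hand.

```python
# Toy illustration of learning translation "knobs" from example pairs
# rather than hand-written rules. This is NOT Candide's real system;
# the sentence pairs are invented and the loop is a bare-bones version
# of the reestimation idea behind early IBM-style models.
from collections import defaultdict
from itertools import product

pairs = [
    ("the house", "la maison"),
    ("the book", "le livre"),
    ("a house", "une maison"),
]

english_vocab = {e for en, _ in pairs for e in en.split()}
french_vocab = {f for _, fr in pairs for f in fr.split()}

# Start with a machine so simple it almost doesn't work: every French
# word is an equally likely translation of every English word.
t = {(f, e): 1.0 / len(french_vocab) for f, e in product(french_vocab, english_vocab)}

# Feed, compare, jiggle, repeat: each pass shifts probability toward
# the pairings that best explain the example sentences.
for _ in range(20):
    count = defaultdict(float)
    total = defaultdict(float)
    for en, fr in pairs:
        for f in fr.split():
            norm = sum(t[(f, e)] for e in en.split())
            for e in en.split():
                share = t[(f, e)] / norm
                count[(f, e)] += share
                total[e] += share
    t = {(f, e): (count[(f, e)] / total[e] if (f, e) in count else 0.0)
         for (f, e) in t}

best = max(french_vocab, key=lambda f: t[(f, "house")])
print(best)  # -> maison: the knob settings now favor the right pairing
```

The real system estimated vastly more knobs from 2.2 million sentence pairs, but the principle is the same: the settings come from the examples, not from linguists.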
What Candide's approach does, and with spectacular efficiency, is convert the problem of unknotting a complex process into the problem of finding lots and lots of examples of that process in action. This problem, unlike mimicking the actual processes of the brain, only got easier with time—particularly as the late '80s rolled into the early '90s and a nerdy haven for physicists exploded into the World Wide Web. It is no coincidence that AI saw a resurgence in the '90s, and no coincidence either that Google, the world's biggest Web company, is "the world's biggest AI system," in the words of Peter Norvig, a director of research there, who wrote AI: A Modern Approach with Stuart Russell. Modern AI, Norvig has said, is about "data, data, data," and Google has more data than anyone else. Josh Estelle, a software engineer on Google Translate, which is based on the same principles as Candide and is now the world's leading machine-translation system, explains, "you can take one of those simple machine-learning algorithms that you learned about in the first few weeks of an AI class, an algorithm that academia has given up on, that's not seen as useful—but when you go from 10,000 training examples to
10 billion training examples, it all starts to work. Data trumps everything." The technique is so effective that the Google Translate team can be made up of people who don't speak most of the languages their application translates. "It's a bang-for-your-buck argument," Estelle says. "You probably want to hire more engineers instead" of native speakers. Engineering is what counts in a world where translation is an exercise in data-mining at a massive scale. That's what makes the machine-learning approach such a spectacular boon: it vacuums out the first-order problem, and replaces the task of understanding with nuts-and-bolts engineering. "You saw this springing up throughout" Google, Norvig says. "If we can make this part 10 percent faster, that would save so many millions of dollars per year, so let's go ahead and do it. How are we going to do it? Well, we'll look at the data, and we'll use a machine-learning or statistical approach, and we'll come up with something better." Google has projects that gesture toward deeper understanding: extensions of machine learning inspired by brain biology; a "knowledge graph" that tries to map words, like Obama, to people or places or things. But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don't have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn't intelligent itself. It's like an enormous Rosetta Stone, the calcified hieroglyphics of minds once at work.

"DID WE SIT DOWN when we built Watson and try to model human cognition?" Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. "Absolutely not. We just tried to create a machine that could win at Jeopardy." For Ferrucci, the definition of intelligence is simple: it's what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy. "It's artificial intelligence, right? Which is almost to say not-human intelligence. Why would you expect the science of artificial intelligence to produce human intelligence?" Ferrucci is not blind to the difference. He likes to tell crowds that whereas Watson played using a room's worth of processors and 20 tons of air-conditioning equipment, its opponents relied on a machine that fits in a shoebox and can run for hours on a tuna sandwich. A machine, no less, that would allow them to get up when the match was over, have a conversation, enjoy a bagel, argue, dance, think—while Watson would be left humming, hot and dumb and un-alive, answering questions about presidents and potent potables. "The features that [these systems] are ultimately looking at are just shadows—they're not even shadows—of what it is that they represent," Ferrucci says. "We constantly underestimate—we did in the '50s about AI, and we're still doing it—what is really going on in the human brain."

The question that Hofstadter wants to ask Ferrucci, and everybody else in mainstream AI, is this: Then why don't you come study it? "I have mixed feelings about this," Ferrucci told me when I put the question to him last year. "There's a limited number of things you can do as an individual, and I think when you dedicate your life to something, you've got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I'm fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it"—he called Hofstadter's work inspiring—"but where am I going to go with it? Really what I want to do is build computer systems that do something. And I don't think the short path to that is theories of cognition." Peter Norvig, one of Google's directors of research, echoes Ferrucci almost exactly. "I thought he was tackling a really hard problem," he told me about Hofstadter's work. "And I guess I wanted to do an easier problem." In their responses, one can see the legacy of AI's failures. Work on fundamental problems reeks of the early days. "Concern for 'respectability,'" Nils Nilsson writes in his academic history, The Quest for Artificial Intelligence, "has had, I think, a stultifying effect on some AI researchers." Stuart Russell, Norvig's co-author of AI: A Modern Approach,
    goes further. "Alot of the stuff going on is not very ambitious," he told me. "In machine learning, one of the big steps that hap- pened in the mid-'8os was to say, 'Look, here's some real d a t a - can I get my program to predict accurately on parts of the data that I haven't yet provided to it?' What you see now in machine learning is that people see that as the only task." It's insidious, the way your own success can stifle you. As our machines get faster and ingest more data, we allow our- selves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them. Which is a bit like using a graphing calculator to do your high-school calculus homework—it works great until you need to actually understand calculus. It seems unlikely that feeding Google Translate l trillion documents, instead of lo billion, will suddenly enable it to work at the level of a human translator. The same goes for search, or image recognition, or question-answering, or plan- ning or reading or writing or design, or any other problem for which you would rather have a human's intelligence than a machine's. This is a fact of which Norvig, just like everybody else in commercial AI, seems to be aware, if not dimly afraid. "We could draw this curve: as we gain more data, how much bet- ter does our system get?" he says. "And the answer is, it's still improving—but we are getting to the point where we get less benefit than we did in the past." For James Marshall, a former graduate student of Hof- stadter's, it's simple: "In the end, the hard road is the only one
that's going to lead you all the way."

HOFSTADTER WAS 35 when he had his first long-term romantic relationship. He was born, he says, with "a narrow resonance curve," borrowing a concept from physics to describe his extreme pickiness. "There have been certain women who have had an enormous effect on me; their face has had an incredible effect on me. I can't give you a recipe for the face... but it's very rare." In 1980, after what he has described as "15 hellish, love-bleak years," he met Carol Brush. ("She was at the dead center of the resonance curve.") Not long after they met, they were happily married with two kids, and not long after that, while they were on sabbatical together in Italy in 1993, Carol died suddenly of a brain tumor. Danny and Monica were 5 and 2 years old. "I felt that he was pretty much lost a long time after Carol's death," says Pentti Kanerva, a longtime friend. Hofstadter hasn't been to an artificial-intelligence conference in 30 years. "There's no communication between me and these people," he says of his AI peers. "None. Zero. I don't want to talk to colleagues that I find very, very intransigent and hard to convince of anything. You know, I call them colleagues, but they're almost not colleagues—we can't speak to each other." Hofstadter strikes me as difficult, in a quiet way. He is kind, but he doesn't do the thing that easy conversationalists do, that well-liked teachers do, which is to take the best of what you've said—to work you into their thinking as an indispensable ally, as though their point ultimately depends on your contribution. I remember sitting in on a roundtable discussion that Hofstadter and his students were having and thinking of how
little I saw his mind change. He seemed to be seeking consensus. The discussion had begun as an e-mail that he had sent out to a large list of correspondents; he seemed keenest on the replies that were keenest on him. "So I don't enjoy it," he told me. "I don't enjoy going to conferences and running into people who are stubborn and convinced of ideas I don't think are correct, and who don't have any understanding of my ideas. And I just like to talk to people who are a little more sympathetic." Ever since he was about 15, Hofstadter has read The Catcher in the Rye once every 10 years. In the fall of 2011, he taught an undergraduate seminar called "Why Is J. D. Salinger's The Catcher in the Rye a Great Novel?" He feels a deep kinship with Holden Caulfield. When I mentioned that a lot of the kids in my high-school class didn't like Holden—they thought he was a whiner—Hofstadter explained that "they may not recognize his vulnerability." You imagine him standing like Holden stood at the beginning of the novel, alone on the top of a hill, watching his classmates romp around at the football game below. "I have too many ideas already," Hofstadter tells me. "I don't need the stimulation of the outside world." Of course, the folly of being above the fray is that you're also not a part of it. "There are very few ideas in science that are so black-and-white that people say 'Oh, good God, why didn't we think of that?'" says Bob French, a former student of Hofstadter's who has known him for 30 years. "Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn't agree with those ideas. And if you don't participate in the fight, in the rough-and-tumble of academia, your ideas are going to end up being sidelined by ideas which are perhaps not as good, but were more ardently defended in the arena."
Hofstadter never much wanted to fight, and the double-edged sword of his career, if there is one, is that he never really had to. He won the Pulitzer Prize when he was 35, and instantly became valuable property to his university. He was awarded tenure. He didn't have to submit articles to journals; he didn't have to have them reviewed, or reply to reviews. He had a publisher, Basic Books, that would underwrite anything he sent them. Stuart Russell puts it bluntly. "Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It's possible that in 50 years' time we'll say, 'We really should have listened more to Doug Hofstadter.' But it's incumbent on every scientist to at least think about what is needed to get people to understand the ideas." "Ars longa, vita brevis," Hofstadter likes to say. "I just figure that life is short. I work, I don't try to publicize. I don't try to fight." There's an analogy he made for me once. Einstein, he said, had come up with the light-quantum hypothesis in 1905. But nobody accepted it until 1923. "Not a soul," Hofstadter says. "Einstein was completely alone in his belief in the existence of light as particles—for 18 years. "That must have been very lonely."

James Somers is a writer and computer programmer based in New York City.