Alan Turing was a pioneering mathematician and computer scientist who proposed the Turing test in 1950 to determine whether a machine could exhibit intelligent behaviour indistinguishable from a human's. In the test, a machine tries to convince a human evaluator, through natural-language conversation, that it too is human. In 2011, the chatbot Cleverbot passed a limited version of the test, though debate continues over how the capabilities and limitations of artificial intelligence compare with those of human intelligence.
2. WHO WAS ALAN TURING?
(UK 1912-1954)
Alan Turing was an English mathematician, computer scientist, logician, cryptanalyst, philosopher and theoretical biologist. Turing is widely considered to be the father of theoretical computer science and artificial intelligence. [wiki]
4. TURING TEST
The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. [wiki]
7. AIPORIA
Coined by Byron Reese, ‘aiporia’ refers to feelings of uncertainty about whether you are dealing with a human or an AI. It’s the sense of puzzlement regarding who — or, indeed, what — we are interacting with at any given time.
9. UNCANNY VALLEY
[YT 3:16]
In aesthetics, the uncanny valley is a hypothesized relationship between the degree of an object's resemblance to a human being and the emotional response to such an object. [wiki]
13. SABINE HOSSENFELDER [YT 6:46]: 10 DIFFERENCES BETWEEN ARTIFICIAL INTELLIGENCE AND HUMAN INTELLIGENCE
                        Artificial intelligence                       Human intelligence
Form:                   not physical                                  form = function
Size:                   ____ neurons                                  100 billion neurons
Connectivity:           all layers fully connected                    not all brain regions equally connected
Power consumption:      🤭                                            20 Watts
Architecture:           neatly ordered                                ______________
Activation potential:   smoothly from off to on                       ____________________
Speed:                  10 billion / second                           thousand / second
Learning technique:     producing output, then adjusting weights      __________________________
Structure:              starts from scratch every time                draws on previous models
Precision:              high                                          low
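The two rows "Activation potential: smoothly from off to on" and "Learning technique: by producing output, then adjusting weights" can be made concrete with a minimal sketch: a single artificial neuron with a smooth sigmoid activation, trained by gradient descent. This is an illustrative toy (the neuron, the AND task, and the learning rate are my own choices, not from the video), not Hossenfelder's example:

```python
import math

def sigmoid(x):
    """Activation that transitions smoothly from off (0) to on (1)."""
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, epochs=5000, lr=1.0):
    """Train one artificial neuron the way the table describes:
    produce an output, then adjust the weights toward the target.

    samples: list of ((x1, x2), target) pairs, targets in {0, 1}.
    """
    w1, w2, b = 0.0, 0.0, 0.0  # "starts from scratch every time"
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # 1. Produce an output (forward pass).
            out = sigmoid(w1 * x1 + w2 * x2 + b)
            # 2. Adjust weights by the gradient of the squared error.
            grad = (out - target) * out * (1.0 - out)
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

# Learn the logical AND function from four examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_neuron(data)
for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

After training, the rounded outputs reproduce AND. Note the contrast the table draws: this network relearns its weights from zero on every run, whereas a brain draws on previously built models.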
Though his tweets about AI often take an alarmist tone, Musk’s warnings are as plausible as they are sensational:
“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”
Musk believes that proper regulatory oversight will be crucial to safeguarding humanity’s future as AI networks become increasingly sophisticated and are entrusted with mission-critical responsibilities:
“Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA won’t make flying safer. They’re there for good reason.”
Musk has compared the destructive potential of AI networks to the risks of global nuclear conflict posed by North Korea:
“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”
He has also pointed out that AI doesn’t necessarily have to be malevolent to threaten humanity’s future. To Musk, the cold, immutable efficiency of machine logic is as dangerous as any evil science-fiction construct:
“AI doesn’t have to be evil to destroy humanity — if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.”
More here: https://www.cbinsights.com/research/ai-threatens-humanity-expert-quotes/
[Image: Nick Bostrom at the Future of Humanity Institute. (Future of Humanity Institute)]
Academic researcher and writer Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, shares Stephen Hawking’s belief that AI could rapidly outpace humanity’s ability to control it:
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.”