Can an abstraction lead to Intelligence?
Dr Janet Bastiman
Overhyping AI
By International Telecommunication Union - https://www.flickr.com/photos/itupictures/35008372172, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=64035565
One of the things that annoys me on an almost daily basis is the overhyping of AI.
From the overplaying of the robot citizen Sophia as some sort of self-aware machine, through the
pervasive Terminator imagery, to the continued comparisons to human intelligence.
While most of this is a combination of marketing and lazy click-bait journalism, it is
skewing the public perception of our industry. Furthermore, I feel that the
anthropomorphisation of AI tools is dangerous in terms of misleading the public on
how advanced these systems are.
“Our AI has the intelligence of a twelve-year-old”, “our AI learns like a human” – really?
I think it’s fine to say that these are based on neuroscience concepts, but
going further than that needs some proof.
I talk a lot to people about “machine washing” and try to educate people outside of
the community about what AI is and isn’t in straightforward ways.
• Deliberately overcomplicating AI to promote its abilities
• Removing the desire to question, as it all sounds too difficult
• Pretending something is AI when it isn’t
We cannot have transparency when the general public are deliberately misled
with statistics in the interest of sensationalism.
Overhyping AI - continued
I’m with Paul Senior on this one. And any meme with a reference to Cthulhu always gets
extra points from me.
What further confuses matters is that virtually everyone has their own definition of AI.
At pretty much every conference I go to, there is a redefinition. Is it the term coined in the
1950s? Is it self-aware machines? Is it the academic research or just the applications?
If we can’t agree amongst ourselves what these terms mean then how are we to educate
those outside of the community?
What is AI?
I stick with the original definition from the 1950s:
“Any system that makes a decision that appears to be intelligent from specific inputs” – John McCarthy
What is AI - continued
This definition includes everything from human-designed algorithms through to the latest
research in machine learning. I separate this from a term I define as “artificial sentience”
(which is what a lot of people think when they hear “AI”) and is more a human-like capacity
for memory, logic, reason and learning.
Before we go into whether we can have artificial sentience to rival human sentience, we need
to be aware that we don’t have a good definition of what it means to be sentient, or even intelligent.
What is Intelligence?
• The definition of life is controversial
• The definition of intelligence is controversial
• S. Legg and M. Hutter. "A Collection of Definitions of Intelligence". 157: 17–24
What is Intelligence - continued
We are struggling with a universal definition of life and our clumsy definitions of intelligence
are far from agreed. With artificial sentience, do you require the ability to reason, to
understand new concepts, and to question the sense of self? It’s a thorny philosophical
problem – how do you prove that anyone you interact with is actually an intelligent self-aware
being and not some construct of your own imagination?
The Turing test (see previous tweet) is useless here. I would fail a Turing test if it was in a
language I don’t speak.
Even if we accept, by Occam’s razor if not actual evidence, that you aren’t all simulations
of my mind, how do we sort the intelligent from the reactive (again ignoring the fact that we
don’t have a good definition)? Is there a test that can tell the difference?
Prior to my PhD and my years in the professional world in AI applications, my background
is molecular and cellular biochemistry, so I have a neuron in both the biological and
artificial camps here. Laying my cards on the table: I am a reductionist and believe that
everything that’s necessary to define human sentience, everything that gives us our
adaptability and creativity, our personalities, our fears, our hopes and desires, is an emergent
property of the biochemistry of our neurons, and therefore can and will be an emergent
property of algorithms. What really excites me is the balance between a complete simulation
and an abstraction model that will give the qualities of sentience in its simplest form.
Our first real understanding of biological neuron activity was the action potential…
Reductionist approach – one neuron at a time
Hodgkin AL, Huxley AF (August 1952).
"A quantitative description of membrane current and its application to conduction and excitation in nerve".
The Journal of Physiology. 117 (4): 500–44. doi:10.1113/jphysiol.1952.sp004764
Even in the Hodgkin and Huxley paper, there was the notion of multiple ionic currents
combining. The early mathematical models included all of these and required understanding
of the internal and external ion concentrations to get a precise model. Not all action
potentials are equal!
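To make the ionic-current idea concrete, here is a minimal sketch of the classic Hodgkin–Huxley membrane equation, integrated with a simple Euler step. The parameter values are the standard textbook ones for the squid giant axon (not from this talk), and a real simulation would use a proper ODE solver:

```python
import math

# Standard Hodgkin-Huxley constants (squid giant axon; mV, ms, uA/cm^2)
C_M = 1.0                               # membrane capacitance
G_NA, G_K, G_L = 120.0, 36.0, 0.3       # maximal conductances
E_NA, E_K, E_L = 50.0, -77.0, -54.387   # reversal potentials

# Voltage-dependent rate functions for the gating variables m, h, n
def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext, t_ms=50.0, dt=0.01):
    """Euler-integrate the membrane equation; return the voltage trace."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting state
    trace = []
    for _ in range(int(t_ms / dt)):
        # Three separate ionic currents combine to drive the membrane voltage
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k = G_K * n**4 * (v - E_K)
        i_l = G_L * (v - E_L)
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        trace.append(v)
    return trace

trace = simulate(i_ext=10.0)  # sustained current injection produces spiking
```

Even this "precise" version is already an abstraction: the ion concentrations are frozen into fixed reversal potentials, which is exactly why not all action potentials come out equal in real neurons.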
As we started modelling neural networks, there just wasn’t the data available to model
everything at the ionic level, so the computational models had to be far simpler. We had an
integrate-and-fire approach. The ion effects were encoded in the vectors representing
synaptic weightings for incoming and outgoing signals. This encapsulated the activation and
inhibition effects and the strengths of different connections, but we lost the effect of signal
degradation due to relative ionic concentrations and patterns of firing. Not all neurons are
affected equally by the same incoming signals.
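By contrast, the integrate-and-fire abstraction collapses all of that ionic machinery into a weighted sum and a threshold. A minimal sketch (the weights and threshold here are purely illustrative):

```python
# Minimal integrate-and-fire neuron: the ionic detail is collapsed into a
# weight vector, where positive weights excite and negative weights inhibit.
def integrate_and_fire(weights, inputs, threshold=1.0):
    """Return 1 (fire) if the weighted input sum reaches threshold, else 0."""
    potential = sum(w * x for w, x in zip(weights, inputs))
    return 1 if potential >= threshold else 0

# Two excitatory synapses and one inhibitory synapse
weights = [0.8, 0.6, -0.5]

print(integrate_and_fire(weights, [1, 1, 0]))  # 0.8 + 0.6 = 1.4 -> fires: 1
print(integrate_and_fire(weights, [1, 1, 1]))  # 1.4 - 0.5 = 0.9 -> silent: 0
```

Compare this with the Hodgkin–Huxley formulation: everything about ion concentrations, timing and signal degradation is gone, which is exactly the loss described above.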
Neuron montage from http://www.mind.ilstu.edu/curriculum/neurons_intro/neurons_intro.php
Synapse from The role of synaptic ion channels in synaptic plasticity, Giannis Voglis, Nektarios Tavernarakis
EMBO reports (2006) 7, 1104-1110 http://embor.embopress.org/content/7/11/1104
Visualizing and Understanding Convolutional Networks,
Zeiler and Fergus 2013
Those early ANNs still retained the biological analogy, individual neurons with connections,
even though the models themselves were pretty abstract. We used the same language, we
drew the connection diagrams and, on the whole, the networks served the purposes for
which we designed them.
From a biological point of view, we understand neurons at the physical level. We have
studied synapses in detail, we know the impact of different chemical concentrations on
signalling and how there are multiple routes to convey the same signal to allow adaptation
to damage. We can see the physical changes in the brain in response to aging and learning.
We can see regions of the brain active in response to thinking, dreaming and emotion, but
we don’t have a tangible concept of “thought” itself. Where is our sense of self when you
look at these neural structures?
Simple biological systems are not understood
A gate-and-switch model for head orientation behaviors in C. elegans
Marie-Hélène Ouellette, Melanie Desrochers, Ioana Gheta, Ryan Ramos, Michael Hendricks
We’ve been trying to answer these questions for years, as the tweet in the last slide shows! Even in a simple
system like C. elegans, with its connectome of 300 mapped neurons, we still don’t understand how simple
inputs convert into behaviours… And people complain about AI being a black box system! The paper I
referenced in the previous slide is an understanding of simple head movements after years of research.
What is interesting to me is that there are asymmetric responses in the neurons that are very sensitive to
other variables. You could model the default pattern of behaviour and get a simulation to trigger the motor
neurons effectively, but without also modelling ion concentrations, temperature etc., the simulation would
not respond in the same way as the real worm other than in a very narrow window of parameters.
Because we don’t have a good definition of intelligence, we don’t know if there is anything in the C. elegans
network that gives rise to intelligence (as we understand it or not). I’m not even going to open the can of
worms that is the debate around whether just having neurons equals intelligence, or whether there is a
minimum number, or any of the other arguments that we simply can’t answer right now.
While we know about biochemical synapse fatigue and reinforcement, gaseous neurotransmission to
nearby neurons, and the adaptability of pathways, we haven’t yet managed to convert this reduction to
anything approaching sentience. At best, all we have been able to do is identify critical neuron groups for
key repetitive and predictable behaviours. Going from the what to the how has been difficult.
Can computer science go further?
• Beating humans in translation
• Beating humans in both perfect-information games (Chess, Go, Jeopardy!)
and imperfect-information games (Texas Hold ‘em Poker)
• Better photoshopping skills (deep fakes)
• Using abstractions further removed from early ANN models
• Everything is a tensor…
• Adaptation is still a problem
Computer science continued
In some respects, the field of AI research has made better progress from the other side.
Starting with the simple perceptron and adding complexity as necessary has quickly
created these predictive networks for specific tasks. Then along came TensorFlow and a
different approach. Rather than a biological analogy, layers of neurons are abstracted into
tensors encoding all of the parameters of that layer and its actions. This is very powerful
and has enabled an explosion of applications, and it’s actually far easier to implement than
trying to design a biological-style network.
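The shift described above can be seen in miniature below: instead of wiring up individual neuron objects, a whole layer becomes one tensor operation. Plain Python lists stand in for real tensors here, and the sizes and values are purely illustrative:

```python
import math

def dense_layer(x, W, b):
    """One layer as a single tensor operation: y = sigmoid(W @ x + b).
    Each neuron in the layer is just one row of the weight matrix W."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + bi)))
        for row, bi in zip(W, b)
    ]

# A 3-input, 2-neuron layer: no individual neuron objects anywhere,
# just a 2x3 weight tensor and a bias vector.
W = [[0.5, -0.2, 0.1],
     [0.3, 0.8, -0.5]]
b = [0.0, 0.1]

y = dense_layer([1.0, 2.0, 3.0], W, b)  # two activations, each in (0, 1)
```

Stacking calls to `dense_layer` gives a whole network as a chain of tensor operations, which is essentially what frameworks like TensorFlow optimise and run on specialised hardware.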
Where these networks struggle is adaptation. Language is a great example here as
meanings can change quickly. A few years ago if you’d asked me to name a famous
Twitcher I’d’ve said Bill Oddie. Now my answer would be Ninja, and judging from your faces, a lot
of you don’t get that, which is the point. We don’t have an NLP AI that can cope with those context
changes by person.
You can cross-train to similar tasks pretty easily, but we’re not seeing these abstractions
update themselves through everyday use. There has to be deliberate feedback and
retraining. I hypothesise that we’re missing a key ingredient in this abstraction.
The talk continued
At this point I could take a merry digression into quantum machine learning and how that will impact
this. But the “too long; didn’t read” summary is that it will give speed through physical architecture.
Fundamentally, we need to understand the what before we rush forward with the how. There are
limited problems that currently work well with quantum machine learning.
But this isn’t all one way. Advances in computer science are also allowing neuroscience to push
forward. A recent paper jointly by DeepMind and UCL has looked at how some of the ideas behind
reinforcement learning in AI can provide hypotheses for testing within biological neurons, leading to
a new theory of learning in the brain for specific reward functions.
Computer science hypotheses for Biology
Prefrontal cortex as a meta-reinforcement learning system
Jane X Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Demis Hassabis, Matthew Botvinick
Where are we now?
AI is overhyped and we are nowhere near any sort of artificial sentience. If this is possible (and I
believe it is) then we will get there regardless of whether we “should”.
At what point do biological neurons change from being simple logic gates to an entity able to question
its own existence?
Will artificial sentience be recognisable? How do we define this? Will there be convergent evolution
of structure or will it be completely different? Because of this, we may not know it when we see it.
I don’t have the answers – if I did, I’d be a very rich woman!
So in the words of the great Philip DeFranco…
I want to hand the question back to you…