Can machines understand what they “know”? To answer this question, we need to first work out what understanding actually is, and the problem is that the notion of understanding is applied in many ways. I shall present my argument for thinking that instead of asking whether machines understand, it is perhaps more interesting to ask whether we humans do what machine learning systems do when they understand. This might shed some light on us as well as on them.
Humans as machines: understanding in terms of Machine Learning
John S. Wilkins <john@wilkins.id.au>
1 March 2020
“Since it is the understanding that sets man above all other animals and enables him to use and dominate them, it is certainly worth our while to enquire into it.” [John Locke, An Essay Concerning Human Understanding, chapter 1]
What is understanding?
Experiential (feeling): the Eureka! moment, in which you feel now that you understand the topic; a feeling of competence. An illumination experience.
Explanatory (knowing why): a cognitive model of why something is the way it is. Knowledge of causes.
Pragmatic (knowing how): tacit or explicit knowledge of how to do something practical with the understanding. Skills to do something.
Is this it?
“Against the old Diltheyan distinction, it must be accepted that understanding and explaining are one.” [Pierre Bourdieu, 1996]
The problem of understanding is three-way:
1. What kind of cognitive, mental or psychological process is understanding?
2. How does it relate to other processes like explaining, knowing, predicting,
etc.?
3. How is it acquired: directly or indirectly, individually or socially,
theoretically or practically?
It’s a topic of interest in developmental psychology, in cognitive science, in
philosophy (especially of science) and theology (relating to hermeneutics), but
recently also of machine learning [ML]: when is an ML system in a state of
understanding what it “knows”?
The problem of understanding
There are two broad traditions in philosophy on this question.
One is the subjectivist tradition
Phenomenological (how it is experienced)
Husserl, Heidegger, Merleau-Ponty, Sartre
Hermeneutic (what it means in a social or cultural context)
Dilthey, Collingwood, Gadamer, Heidegger, Wittgenstein, Kuhn
The other is the objectivist tradition
Scientific understanding
Carnap, Popper, de Regt
Common sense realism
Objectivist and subjectivist
Understanding as a knowledge of causes
Ever since Aristotle, understanding has been seen as knowledge of the causes of something. Of course, Aristotle meant several things by “cause”, including a description of the shapes of things.
So, four questions:
Can you know something and not understand it? [Me and the Krebs Cycle]
Can you understand something you do not know the causes of? [Feeling hungry]
Is knowing the cause of something understanding it? [Turbulence]
Finally, if you know and understand something, can you still be unable to explain it?
If the answer to any of these is yes, then understanding is not necessarily knowing the causes.
“Data is not information, information is not knowledge, knowledge
is not wisdom, wisdom is not truth.” [Royar (1994, 103),
paraphrasing Frank Zappa]
Is it more data? No, because more data is harder to
understand
Is it better models? No, because models
oversimplify the world
Is it knowing how? No, because one can understand
what one cannot manipulate
Is it knowing that? No, because I know the thats of
many things I do not understand
Perhaps we don’t know what understanding is, because there are so many
disparate things going by that term
So what is it to understand?
[Image: A Wise Man and Cat]
Big data: a census without meaning
The more data we have the less we understand
To make sense of a large set of data we have to simplify it
We find trends
We perform statistical operations like regression
We make it something we can hold in our heads
We have a maximum amount of information that we can usefully store and
apply, as do all finite cognitive systems (learners)
Working memory is the amount of storage and processing a learner can do
given the constraints of time, attention, and so on. If we ignore these
constraints we may have unrealistic expectations for understanding.
Lost in a sea of data
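The simplification described above, reducing a sea of data points to a trend we can hold in our heads, can be sketched in a few lines of Python. This is a minimal illustration of ordinary least-squares regression; the data set is invented for the example and is not from the talk:

```python
# Simplifying many data points to a single trend line by least squares.
# The data below is invented for illustration.

def fit_trend(xs, ys):
    """Reduce a cloud of (x, y) points to two numbers: slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# A "sea of data": 1000 observations with deterministic pseudo-noise...
xs = list(range(1000))
ys = [2.0 * x + 5.0 + ((x * 37) % 11 - 5) for x in xs]

slope, intercept = fit_trend(xs, ys)
# ...compressed to something a finite learner can store and apply:
print(f"y \u2248 {slope:.2f}x + {intercept:.2f}")
```

A thousand numbers become two, which is exactly the trade the slide describes: we lose detail but gain something working memory can actually hold.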
Biology and the big data problem
“It’s human DNA!”
That’s not how this works!
The missing link in the DIKW pyramid: Understanding?
If we approach this from the machine learning perspective, we might get a better idea of human scientific understanding.
Simplification and generalisation
[Diagram: Measurement simplifies Data to Information (kinematic: y = f(x)); Analysis simplifies Information to Knowledge and Understanding (dynamic or causal: y = f(x)), which in turn applies to Data. What counts as wisdom is left to you to decide.]
Inverting the problem
Instead of asking “Can machines understand?”, ask: does machine understanding help us understand our own understanding?
What do ML systems do when they understand?
1. They are trained on prototypes
2. They “experience” many cases
3. They react to new cases with a classification, regression or clustering
This depends on
their processing power in the available time, and
their working memory and storage
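Those three steps can be made concrete with a toy nearest-prototype classifier in pure Python. The labels, feature vectors, and function names here are all invented for illustration; real ML systems are vastly more elaborate, but the shape of the process is the same:

```python
# Toy nearest-prototype classifier illustrating the three steps:
# (1) train on prototypes, (2) "experience" many cases,
# (3) react to a new case with a classification.
# All labels and numbers are invented for illustration.

def train(cases):
    """Steps 1-2: from many labelled cases, compute one prototype (mean) per label."""
    sums, counts = {}, {}
    for features, label in cases:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(prototypes, features):
    """Step 3: answer a new case with the label of the nearest prototype."""
    def dist2(lab):
        return sum((a - b) ** 2 for a, b in zip(prototypes[lab], features))
    return min(prototypes, key=dist2)

# Many "experienced" cases: (feature vector, label)
training_cases = [
    ([1.0, 1.2], "small"), ([0.8, 1.0], "small"), ([1.1, 0.9], "small"),
    ([5.0, 5.5], "large"), ([4.8, 5.1], "large"), ([5.2, 4.9], "large"),
]
prototypes = train(training_cases)
print(classify(prototypes, [1.0, 1.1]))  # a new case near the "small" prototype
```

Note that the system's "understanding" is nothing over and above its stored prototypes and its procedure for comparing a new case against them, which is the point being inverted back onto us.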
Inverting the problem
So do humans do this as well?
Turing thought so: whatever it is we do, it is something a Turing machine can do.
Ergo, if a Turing machine (an ML system) can understand, then we can too, in the same manner.
And any mode by which we understand, a Turing machine can too.
This doesn’t mean we match an ML system precisely, but it is a good model.
And models lead to understanding 😄
Caveat: we are not, actually, ML systems
There is a common error in philosophy of mind: the fact that we can model human cognition in a system is not grounds for saying that we are instances of that model, any more than a computer model of the solar system means the solar system is a computer program, or that my computer simulation weighs 1.0014 solar masses.
This error is the fallacy of reification, or less technically, thingification: to mistake a formal description of something for the thing itself.
Everything can be described and modelled as a Turing machine (formalised computations or algorithms).
Not everything is an algorithm, or even could be [we are not in the Matrix].
“The conception always precedes the understanding; and where the one is obscure, the other is uncertain; where the one fails, the other must fail also.” [David Hume, A Treatise of Human Nature, 1896 edn, I.iii.XIV]
What we can do, an ML system can model, and vice versa.
A toy system like an ML system can shed light on our own faculty of understanding.
Nevertheless, it is very likely that a lot of what counts as understanding (outside the sciences, anyway) is psychological or social, subjective or relative.
And all of it relies upon the resources (working memory, time, available information) that the ML system has at hand, so to speak.
And simplification is the basis of understanding.
Conclusion
For discussion:
• John Collier
• Malte Ebach
• Ward Wheeler
• Adam Ford
• Marcus Hutter
• The audience at the ISHPSSB conference in Oslo, July 2019
Acknowledgements