Transcript of "Metabiology life as evolving software by g j chaitin"
METABIOLOGY:
LIFE AS EVOLVING
SOFTWARE
METABIOLOGY: a field parallel to biology, dealing with the random evolution of artificial software (computer programs) rather than natural software (DNA), and simple enough that it is possible to prove rigorous theorems or formulate heuristic arguments at the same high level of precision that is common in theoretical physics.
“The chance that higher life forms might have emerged in this way [by
Darwinian evolution] is comparable to the chance that a tornado sweeping
through a junkyard might assemble a Boeing 747 from the materials therein.”
— Fred Hoyle.
“In my opinion, if Darwin’s theory is as simple, fundamental and basic as its
adherents believe, then there ought to be an equally fundamental mathemati-
cal theory about this, that expresses these ideas with the generality, precision
and degree of abstractness that we are accustomed to demand in pure math-
ematics.” — Gregory Chaitin, Speculations on Biology, Information and
Complexity.
“Mathematics is able to deal successfully only with the simplest of situations,
more precisely, with a complex situation only to the extent that rare good
fortune makes this complex situation hinge upon a few dominant simple fac-
tors. Beyond the well-traversed path, mathematics loses its bearings in a
jungle of unnamed special functions and impenetrable combinatorial partic-
ularities. Thus, the mathematical technique can only reach far if it starts
from a point close to the simple essentials of a problem which has simple
essentials. That form of wisdom which is the opposite of single-mindedness,
the ability to keep many threads in hand, to draw for an argument from many
disparate sources, is quite foreign to mathematics.” — Jacob Schwartz,
The Pernicious Influence of Mathematics on Science.
“It may seem natural to think that, to understand a complex system, one
must construct a model incorporating everything that one knows about the
system. However sensible this procedure may seem, in biology it has repeat-
edly turned out to be a sterile exercise. There are two snags with it. The
first is that one finishes up with a model so complicated that one cannot understand it: the point of a model is to simplify, not to confuse. The second is that if one constructs a sufficiently complex model one can make it do anything one likes by fiddling with the parameters: a model that can predict anything predicts nothing.” — John Maynard Smith & Eörs Szathmáry, The Origins of Life.
Course Notes
METABIOLOGY:
LIFE AS EVOLVING
SOFTWARE
G. J. Chaitin
Draft October 1, 2010
To my wife Virginia
who played an essential role in this research
Contents
Preface
1 Introduction: Building a theory
2 The search for the perfect language
3 Is the world built out of information? Is everything software?
4 The information economy
5 How real are real numbers?
6 Speculations on biology, information and complexity
7 Metaphysics, metamathematics and metabiology
8 Algorithmic information as a fundamental concept in physics, mathematics and biology
9 To a mathematical theory of evolution and biological creativity
9.1 Introduction
9.2 History of Metabiology
9.3 Modeling Evolution
9.3.1 Software Organisms
9.3.2 The Hill-Climbing Algorithm
9.3.3 Fitness
9.3.4 What is a Mutation?
Preface
Biology and mathematics are like oil and water: they do not mix. Nevertheless this course will describe my attempt to express some basic biological principles mathematically. I’ll try to explain the raison d’être of what I call my “metabiological” approach, which studies randomly evolving computer programs rather than biological organisms.
I want to thank a number of people and organizations for inviting me to
lecture on metabiology; the interaction with audiences was extremely stimu-
lating and helped these ideas to evolve.
Firstly, I thank the IBM Watson Research Center, Yorktown Heights,
where I gave two talks on this, including the world premiere talk on metabiol-
ogy. Another talk on metabiology in the United States was at the University
of Maine.
In Argentina I thank Veronica Becher of the University of Buenos Aires
and Victor Rodriguez of the University of Cordoba for their kind invitations.
And I am most grateful to the University of Cordoba, currently celebrating
its 400th anniversary, for the honorary doctorate that they were kind enough
to bestow on me.
In Chile I spoke on metabiology several times at the Valparaiso Complex
Systems Institute, and in Brazil I included metabiology in courses I gave
at the Federal University of Rio de Janeiro and in a talk at the Federal
University in Niteroi.
Furthermore I thank Bernd-Olaf Küppers for inviting me to a very stimulating meeting at his Frege Centre for Structural Sciences at the University of Jena.
And I thank Ilias Kotsireas for organizing a Chaitin-in-Ontario lecture se-
ries in 2009 in the course of which I spoke on metabiology at the University
of Western Ontario in London, at the Institute for Quantum Computing in
Waterloo, and at the Fields Institute at the University of Toronto. The chapter of this book on Ω is based on a talk I gave at Wilfrid Laurier University in Waterloo.
Finally, I should mention that the chapter on “The Search for the Perfect Language” was first given as a talk at the Hebrew University in Jerusalem in 2008, then at the University of Campinas in Brazil, and finally at the Perimeter Institute in Waterloo, Canada.
The chapter on “Is Everything Software?” was originally a talk at the
Technion in Haifa, where I also spoke on metabiology at the University of
Haifa, one of a series of talks I gave there as the Rothschild Distinguished
Lecturer for 2010.
These were great audiences, and their questions and suggestions were
extremely valuable. —
Gregory Chaitin, August 2010
Chapter 1
Introduction: Building a theory
• This is a course on biology that will spend a lot of time discussing Kurt Gödel’s famous 1931 incompleteness theorem on the limits of formal mathematical reasoning. Why? Because in my opinion the ultimate historical perspective on the significance of incompleteness may be that Gödel opens the door from mathematics to biology.
• We will also spend a lot of time discussing computer programs and
software for doing mathematical calculations. How come? Because
DNA is presumably a universal programming language, which
is a language that is rich enough that it can express any algorithm.
The fact that DNA is such a powerful programming language is a more
fundamental characteristic of life than mere self-reproduction, which
anyway is never exact—for if it were, there would be no evolution.
• Now a few words on the kind of mathematics that we shall use in this course. Starting with Newton, mathematical physics is full of what are called ordinary differential equations, and starting with Maxwell, partial differential equations become more and more important. Mathematical physics is full of differential equations, that is, continuous mathematics. But that is not the kind of mathematics that we shall use here. The secret of life is not a differential equation. There is no differential equation for your spouse, for an organism, or for biological evolution. Instead we shall concentrate on the fact that DNA is the software, it’s the programming language for life.
• It is true that there are (ordinary) differential equations in a highly successful mathematical theory of evolution, Wright-Fisher-Haldane population genetics. But population genetics does not say where new genes come from; it assumes a fixed gene pool and discusses the change of gene frequencies in response to selective pressure, not biological creativity and the major transitions in evolution, such as the transition from unicellular to multicellular organisms, which is what interests us.
• If we aren’t going to use the differential equations that populate mathematical physics anymore, what kind of math are we going to use? It will be discrete math, new math, the math of the 20th century dealing with computation, with algorithms. It won’t be traditional continuous math, it won’t be the calculus. As Dorothy says in The Wizard of Oz, “Toto, we’re not in Kansas anymore!”
More in line with our life-as-evolving-software viewpoint are three hot
new topics in 20th century mathematics, computation, information
and complexity. These have expanded into entire theories, called com-
putability theory, information theory and complexity theory, theories
which superficially appear to have little or no connection with biology.
In particular, our basic tool in this course will be algorithmic infor-
mation theory (AIT), a mixture of Turing computability theory with
Shannon information theory, which features the concept of program-
size complexity. The author was one of the people who created this
theory, AIT, in the mid 1960’s and then further developed it in the
mid 1970’s; the theory of evolution presented in this course could have
been done then—all the necessary tools were available.
Why then the delay of 35 years? My apologies; I got distracted working on computer engineering and thinking about metamathematics. I had published notes on biology occasionally on and off since 1969, but I couldn’t find the right way of thinking about biology, I couldn’t figure out how to formulate evolution mathematically in a workable manner. Once I discovered the right way, this new theory I call metabiology went from being a gleam in my eye to a full-fledged mathematical theory in just two years.
• Also, it would be nice to be able to show that in our toy model hierar-
chical structure will evolve, since that is such a conspicuous feature of
biological organisms.
What kind of math can we use for that? Well, there are places in pure
math and in software engineering where you get hierarchical structures:
in Mandelbrot fractals, in Cantor transfinite ordinal numbers, in hierarchies of fast growing functions, and in software levels of abstraction.
Fractals are continuous math and therefore not suitable for our discrete
models, but the three others are genuine possibilities, and we shall
discuss them all. One of our models of evolution does provably exhibit
hierarchical structure.
• Here is the big challenge: Biology is extremely complicated, and every
rule has exceptions. How can mathematics possibly deal with this?
We will outline an indirect way to deal with it, by studying a toy
model I call metabiology (= life as evolving software, computer program
organisms, computer program mutations), not the real thing. We are
using Leibnizian math, not Newtonian math.
By modeling life as software, as computer programs, we get a very rich
space of possible designs for organisms, and we can discuss biological
creativity = where new genes come from (where new biological ideas
such as multicellular organization come from), not just changes in gene
frequencies in a population as in conventional evolutionary models.
• Some simulations of evolution on the computer (in silico—as contrasted
with in vivo, in the organism, and in vitro, in the test tube) such as
Tierra and Avida do in fact model organisms as software. But in
these models there is only a limited amount of evolution followed by
stagnation.¹
Furthermore I do not run my models on a computer, I prove theorems
about them. And one of these theorems is that evolution will con-
tinue indeﬁnitely, that biological creativity (or what passes for it in my
model) is endless, unceasing.
• The main theme of Darwinian evolution is competition, survival of
the fittest, “Nature red in tooth and claw.” The main theme of my
model is creativity: Instead of a population of individuals competing
¹ As for genetic algorithms, they are intended to “stagnate” when they achieve an optimal solution to an engineering design problem; such a solution is a fixed point of the process of simulated evolution used by genetic algorithms.
ferociously with each other in order to spread their individual genes (as
in Richard Dawkins’ The Selfish Gene), instead of a jungle, my model
is like an individual Buddhist trying to attain enlightenment, a monk
who is on the path to enlightenment, it is like a mystic or a kabbalist
who is trying to get closer and closer to God.
More precisely, the single mutating organism in my model attains
greater and greater mathematical knowledge by discovering more and
more of the bits of Ω, which is, as we shall see in the part of the
course on Ω, Course Topic 5, a very concentrated form of mathematical
knowledge, of mathematical creativity. My organisms strive for greater
mathematical understanding, for purely mathematical enlightenment.
I model where new mathematical knowledge is coming from, where
new biological ideas are coming from, it is this process that I model
and prove theorems about.
• But my model of a single mutating organism is indeed Darwinian: I have a single organism that is subjected to completely random mutations until a fitter organism is found, which then replaces my original organism, and this process continues indefinitely. The key point is that in my model progress comes from combining random mutations with a fitness criterion (which is my abstract encapsulation of both competition and the environment).
The key point in Darwin’s theory was to replace God by randomness; organisms are not designed, they emerge at random, and that is also the case in my highly simplified toy model.
• Does this highly abstract game have any relevance to real biology?
Probably not, and if so, only very, very indirectly. It is mathematics itself that benefits most, because we begin to have a mathematical theory inspired by Darwin, to have mathematical concepts that are inspired by biology.
The fact that I can prove that evolution occurs in my model does not in any way constitute a proof that Darwinians are correct and Intelligent Design proponents are mistaken.
But my work is suggestive and it does clarify some of the issues, by
furnishing a toy model that is much easier to analyze than the real
thing—the real thing is what is actually taking place in the biosphere,
not in my toy model, which consists of arbitrary mutation computer
programs operating on arbitrary organism computer programs.
• More on creativity, a key word in my model: Something is mechanical
if there is an algorithm for doing it; it is creative if there is no such
algorithm.
This notion of creativity is basic to our endeavor, and it comes from the work of Gödel on the incompleteness of formal axiomatic theories, and from the work of Turing on the unsolvability of the so-called halting problem.² Their work shows that there are no absolutely general methods in mathematics and theoretical computer science, that creativity is essential, a conclusion that Paul Feyerabend with his book Against Method would have loved had he been aware of it: What Feyerabend espouses for philosophical reasons is in fact a theorem, that is, is provably correct in the field of mathematics.
So before we get to Darwin, we shall spend a lot of time in this course
with Gödel and Turing and the like, preparing the groundwork for our
model of evolution. Without this historical background it is impossible
to appreciate what is going on in our model.
• My model therefore mixes mathematical creativity and biological cre-
ativity. This is both good and bad. It’s bad, because it distances my
model from biology. But it is good, because mathematical creativity
is a deep mathematical question, a fundamental mystery, a big un-
known, and therefore something important to think about, at least for
mathematicians, if not for biologists.
Further distancing my model from biology, my model combines ran-
domness, a very Darwinian feature, with Turing oracles, which have no
counterpart in biology; we will discuss this in due course.
Exploring such models of randomly evolving software may well develop
into a new field of mathematics. Hopefully this is just the beginning,
and metabiology will develop and will have more connection with biol-
ogy in the future than it has at present.
² Turing’s halting problem is the question of deciding whether or not a computer program that is self-contained, without any input, will run forever or will eventually finish.
• The main difference between our model and the DNA software in real organisms is their time complexity: the amount of time the software can run. I can prove elegant theorems because in my model the time allowed for a program to run is finite, but unlimited.
Real DNA software must run quickly: 9 months to produce a baby,
70 years in total, more or less. A theory of the evolution of programs with such limited time complexity, with such limited run time, would be more realistic, but it would not contain the neat results we have in our idealized version of biology.
This is similar to the thermodynamic arguments which are taken in the “thermodynamic limit” of large amounts of time in order to obtain more clear-cut results when discussing the ideal performance of heat engines (e.g., steam engines). Indeed, AIT is a kind of thermodynamics of computation, with program-size complexity replacing entropy. Instead of applying to heat engines and telling us their ideal efficiency, AIT does the same for computers, for computations.
You have to go far from everyday biology to find beautiful mathematical
structure.
• It should be emphasized that metabiology is Work in Progress. It
may be mistaken. And it is certainly not finished yet. We are building
a new theory. How do you create a theory?
“Beauty” is the guide. And this course will give a history of ideas
for metabiology with plenty of examples. An idea is beautiful when it
illuminates you, when it connects everything, when you ask yourself,
“Why didn’t I see that before!,” when in retrospect it seems obvious.
AIT has two such ideas: the idea of looking at the size of a com-
puter program as a complexity measure, and the idea of self-delimiting
programs. Metabiology has two more beautiful ideas: the idea of organisms as arbitrary programs with a difficult mathematical problem to solve, and the idea of mutations as arbitrary programs that operate on an organism to produce a mutated organism.
Once you have these ideas, the rest is just uninspired routine work,
lots of hard work, but that’s all. In this course we shall discuss all four
of these beautiful ideas, which were the key inspirations required for
creating AIT and metabiology.
Routine work is not enough, you need a spark from God. And mostly
you need an instinct for mathematical beauty, for sensing an idea that
can be developed, for the importance of an idea. That is, more than
anything else, a question of aesthetics, of intuition, of instinct, of judge-
ment, and it is highly subjective.
I will try my best to explain why I believe in these ideas, but just as
in artistic taste, there is no way to convince anyone. You either feel
it somewhere deep in your soul or you don’t. There is nothing more
important than experiencing beauty; it’s a glimpse of transcendence, a
glimpse of the divine, something that fewer and fewer people believe in
nowadays. But without that we are mere machines.
And I may have the beginnings of a mathematical theory of evolu-
tion and biological creativity, but a mathematical theory of beauty is
nowhere in sight.
• Incompleteness goes from being threatening to provoking creativity and
being applied in order to keep our organisms evolving indeﬁnitely. Evo-
lution stagnates in most models because the organisms achieve their
goals. In my model the organisms are asked to achieve something
that can never be fully achieved because of the incompleteness phe-
nomenon. So my organisms keep getting better and better at what
they are doing; they can never stop, because stopping would mean
that they had a complete answer to a math problem to which incompleteness applies. Indeed, the three mathematical challenges that my organisms face, naming large integers, fast-growing functions, and large transfinite ordinals, are very concrete, tangible examples of the incompleteness phenomenon, which at first seemed rather mysterious.
Incompleteness is the reason that our organisms have to keep evolving
forever, as they strive to become more and more complete, less and
less incomplete. . . Incompleteness keeps our model of evolution from
stagnating, it gives our organisms a mission, a raison d’être.
You have to go beyond incompleteness; incompleteness gives rise to
creativity and evolution. Incompleteness sounds bad, but the other
side of the coin is creativity and evolution, which are good.
Now we give an outline of the course, consisting of Course Topics 1–9:
1. This introduction.
2. The Search for the Perfect Language. (My talk at the Perimeter Insti-
tute in Waterloo.)
Umberto Eco, Lull, Leibniz, Cantor, Russell, Hilbert, Gödel, Turing, AIT, Ω. Kabbalah, Key to Universal Knowledge, God-like Power of Creation, the Golem!
Mathematical theories are all incomplete (Gödel, Turing, Ω), but programming languages are universal. Most concise programming languages, self-delimiting programs.
3. Is the world built out of information? Is everything software? (My talk
at the Technion in Haifa.)
Physics of information: Quantum Information Theory; general relativity and black holes, Bekenstein bound, holographic principle = every physical system contains a finite number of bits of information that grows as the surface area of the physical system, not as its volume (Lee Smolin, Three Roads to Quantum Gravity); derivation of Einstein’s field equations for gravity from the thermodynamics of black holes (Ted Jacobson, “Thermodynamics of Spacetime: The Einstein Equation of State”).
The first attempt to construct a truly fundamental mathematical model for biology: von Neumann self-reproducing automata in a cellular automata world, a world in which magic works, a plastic world.
See also: Edgar F. Codd, Cellular Automata. Konrad Zuse, Rechnender Raum (Calculating Space). Fred Hoyle, Ossian’s Ride. Freeman
Dyson, The Sun, the Genome, and the Internet (1999), green technol-
ogy. Craig Venter, genetic engineering, synthetic life.
Technological applications: Seeds for houses, seeds for jet planes!
Plant the seed in the earth, just add water and sunlight. Universal constructors, 3D printers = matter printers = printers for objects. Flexible manufacturing. Alchemy, plastic reality.
4. Artificial Life: Evolution Simulations.
Low Level: Thomas Ray’s Tierra, Christoph Adami’s Avida, Walter
Fontana’s ALchemy (Algorithmic Chemistry), Genetic algorithms.
High Level: Exploratory concept-formation based on examining lots
of examples in elementary number theory, experimental math with no
proofs: Douglas Lenat (1984), “Automated theory formation in math-
ematics,” AM.
After a while these stop evolving. What about proofs instead of
simulations? Seems impossible—see the frontispiece quotes facing the
title page, especially the one by Jacob Schwartz—but there is hope.
See Course Topic 5 arguing that Ω is provably a bridge from math to
biology.
5. How Real Are Real Numbers? A History of Ω. (My talk at WLU in
Waterloo.)
Course Topic 3 gives physical arguments against real numbers, and this
course topic gives mathematical arguments against real numbers. These
considerations about paradoxical real numbers will lead us straight to
the halting probability Ω. That is not how Ω was actually discovered,
but it is the best way of understanding Ω. It’s a Whig history: how it
should have been, not how it actually was.
The irreducible complexity real number Ω proves that math is more biological than biology; this is the first real bridge between math and biology. Biology is extremely complicated, and pure math is infinitely complicated.
The theme of Ω as concentrated mathematical creativity is introduced
here; this is important because Ω is the organism that emerges through
random evolution in Course Topic 8.
Now let’s get to work in earnest to build a mathematical theory of
evolution and biological creativity.
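Since Ω plays the leading role in what follows, it is worth recording its standard one-line definition here. Fix a universal computer U whose programs are self-delimiting bit strings; then the halting probability is

```latex
\Omega \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```

where |p| is the size in bits of the program p. Because the programs are self-delimiting, the Kraft inequality guarantees that this sum converges to a real number with 0 < Ω < 1.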
6. Metabiology: Life as Evolving Software.
Stephen Wolfram, NKS: the origin of life as the physical implementation of a universal programming language; the Ubiquity of Universality. François Jacob, bricolage, Nature is a cobbler, a tinkerer. Neil Shubin, Your Inner Fish. Stephen Gould, Wonderful Life, on the Cambrian explosion of body designs. Murray Gell-Mann, frozen accidents. Ernst Haeckel, ontogeny recapitulates phylogeny. Evo-devo.
Note that a small change in a computer program (one bit!) can completely wreck it. But small changes can also make substantial improvements. This is a highly nonlinear effect, like the famous butterfly effect
of chaos theory (see James Gleick’s Chaos). Over the history of this
planet, covering the entire surface of the earth, there is time to try
many small changes. But not enough time according to the Intelligent Design book Signature in the Cell. In the real world this is still controversial, but in my toy model evolution provably works.
A blog summarized one of my talks on metabiology like this: “We are
all random walks in program space!” That’s the general idea; in Course
Topics 7 and 8 we fill in the details of this new theory.
7. Creativity in Mathematics. We need to challenge our organisms into
evolving. We need to keep them from stagnating. These problems can
utilize an unlimited amount of mathematical creativity:
• Busy Beaver problem: Naming large integers: 10^10, 10^10^10, . . .
• Naming fast-growing functions: N^2, 2^N, . . .
• Naming large transfinite Cantor ordinals: ω, ω^2, ω^ω, . . .
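Each of these challenges asks the organisms to climb an endless hierarchy. As an illustration of the second one (the function names here are invented for the example, and real fast-growing hierarchies go unimaginably further), three rungs can be written down directly:

```python
# Three rungs of a toy hierarchy of fast-growing functions: each one
# eventually dominates the one before it, and incompleteness guarantees
# that no single program can name the whole hierarchy.

def square(n):
    # polynomial growth: N^2
    return n * n

def power(n):
    # exponential growth: 2^N
    return 2 ** n

def tower(n):
    # iterated exponentiation: 2^2^...^2, a tower of n twos
    result = 1
    for _ in range(n):
        result = 2 ** result
    return result
```

For n = 10 the first two rungs give 100 and 1024, while already tower(5) = 2^65536 has about twenty thousand decimal digits; each rung is a qualitatively bigger "name" for an integer than the rung before it.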
8. Creativity in Biology. Single mutating organism. Hill-climbing algorithm on a fitness landscape. Hill-climbing random walks in software space. Evolution of mutating software. What is a mutation? Exhaustive search. Intelligent design. Cumulative evolution at random. Ω as concentrated creativity, Ω as an evolving organism. Randomness yields intelligence.
We have a proof that evolution works, at least in this toy model; in fact,
surprisingly it is nearly as fast as intelligent design, as deliberately
choosing the mutations in the best possible order. But can we show
that random evolution is slower than intelligent design? Otherwise the
theory collapses onto a point, it cannot distinguish, it does not make
useful distinctions. We also get evolution of hierarchical structure in
non-universal programming languages.
So we seem to have evolution at work in these toy models. But to what
extent is this relevant to real biological systems?
9. Conclusion: On the plasticity of the world. Is the universe mental?
Speculation on where all this might possibly lead.
Chapter 2
The search for the perfect
language
I will tell how the story given in Umberto Eco’s book The Search for the Perfect Language continues with modern work on logical and programming languages. Lecture given Monday, 21 September 2009, at the Perimeter Institute for Theoretical Physics in Waterloo, Canada.¹
Today I’m not going to talk much about Ω. I will focus on that at Wilfrid
Laurier University tomorrow. And if you want to hear a little bit about my
current enthusiasm, which is what I’m optimistically calling metabiology —
it’s a field with a lovely name and almost no content at this time — that’s
on Wednesday at the Institute for Quantum Computing.
I thought it would be fun here at the Perimeter Institute to repeat a talk,
to give a version of a talk, that I gave in Jerusalem a year ago. To understand
the talk it helps to keep in mind that it was ﬁrst given in Jerusalem. I’d like
to give you a broad sweep of the history of mathematical logic. I’m a math-
ematician who likes physicists; some mathematicians don’t like physicists.
But I do. Before I became a mathematician I wanted to be a physicist.
So I’m going to talk about mathematics, and I’d like to give you a broad
overview, most definitely a non-standard view of some intellectual history. It
¹ This lecture was published in Portuguese in São Paulo, Brazil, in the magazine Dicta & Contradicta, No. 4, 2009. See http://www.dicta.com.br/.
will be a talk about the history of work on the foundations of mathematics
as seen from the perspective of the Middle Ages. So here goes. . .
This talk = Umberto Eco + Hilbert, Gödel, Turing. . .
Outline at: http://www.cs.umaine.edu/~chaitin/hu.html
There is a wonderful book by Umberto Eco called The Search for the Perfect
Language, and I recommend it highly to all of you.
In The Search for the Perfect Language you can see that Umberto Eco
likes the Middle Ages — I think he probably wishes we were still there. And
this book talks about a dream that Eco believes played a fundamental role
in European intellectual history, which is the search for the perfect language.
What is the search for the perfect language? Nowadays a physicist would
call this the search for a Theory of Everything (TOE), but in the terms in
which it was formulated originally, it was the idea of finding, shall we say, the
language of creation, the language before the Tower of Babel, the language
that God used in creating the universe, the language whose structure directly
expresses the structure of the world, the language in which concepts are
expressed in their direct, original format.
You can see that this idea is a little bit like the attempt to ﬁnd a foun-
dational Theory of Everything in physics.
The crucial point is that knowing this language would be like having a key
to universal knowledge. If you’re a theologian, it would bring you closer,
very close, to God’s thoughts, which is dangerous. If you’re a magician, it
would give you magical powers. If you’re a linguist, it would tell you the
original, pure, uncorrupted language from which all languages descend. One
can go on and on. . .
This very fascinating book is about the quest to find this language. If you find it, you’re opening a door to absolute knowledge, to God, to the
ultimate nature of reality, to whatever.
And there are a lot of interesting chapters in this intellectual history. One
of them is Raymond Lull, around 1200, a Catalan.
Raymond Lull ≈ 1200
He was a very interesting gentleman who had the idea of mechanically combining all possible concepts to get new knowledge. So you would have a wheel with different concepts on it, and another wheel with other concepts on it, and you would rotate them to get all possible combinations. This would be
a systematic way to discover new concepts and new truths. And if you re-
member Swift’s Gulliver’s Travels, there Swift makes fun of an idea like this,
in one of the parts of the book that is not for children but definitely only for
adults.
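In modern terms, Lull's rotating wheels perform an exhaustive enumeration of a Cartesian product of concept lists, something one can sketch in a couple of lines (the concept words below are placeholders invented for the example):

```python
from itertools import product

# Two "wheels" of concepts; rotating one against the other enumerates
# every possible pairing, Lull-style. The concept words are placeholders.
wheel_a = ["goodness", "greatness", "eternity"]
wheel_b = ["wisdom", "power", "truth"]

combinations = list(product(wheel_a, wheel_b))  # all 3 x 3 = 9 pairings
```

With k wheels of n concepts each the count grows as n^k, which is why the method drowns in combinations long before it discovers anything new.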
Let’s leave Lull and go on to Leibniz. In The Search for the Perfect
Language there is an entire chapter on Leibniz. Leibniz is a transitional
figure in the search for the perfect language. Leibniz is wonderful because he
is universal. He knows all about Kabbalah, Christian Kabbalah and Jewish
Kabbalah, and all kinds of hermetic and esoteric doctrines, and he knows
all about alchemy, he actually ghost-authored a book on alchemy. Leibniz
knows about all these things, and he knows about ancient philosophy, he
knows about scholastic philosophy, and he also knows about what was then
called mechanical philosophy, which was the beginning of modern science.
And Leibniz sees good in all of this.
And he formulates a version of the search for the perfect language, which
is ﬁrmly grounded in the magical, theological original idea, but which is also
ﬁt for consumption nowadays, that is, acceptable to modern ears, to contem-
porary scientists. This is a universal language he called the characteristica
universalis that was supposed to come with a crucial calculus ratiocinator.
Leibniz: characteristica universalis, calculus ratiocinator
The idea, the goal, is that you would reduce reasoning to calculation, to
computation, because the most certain thing is that 2 + 5 = 7. In other
words, the way Leibniz put it, perhaps in one of his letters, is that if two
people have an intellectual dispute, instead of dueling they could just sit
down and say, “Gentlemen, let us compute!”, and get the correct answer and
ﬁnd out who was right.
So this is Leibniz’s version of the search for the perfect language. How
far did he get with this?
Well, Leibniz is a person who gets bored easily, and ﬂies like a butterﬂy
from ﬁeld to ﬁeld, throwing out fundamental ideas, rarely taking the trouble
to develop them fully.
One case of the characteristica universalis that Leibniz did develop is
called the calculus. This is one case where Leibniz worked out his ideas for
the perfect language in beautiful detail.
Leibniz’s version of the calculus diﬀers from Newton’s precisely because
it is part of Leibniz’s project for the characteristica universalis. Christian
Huygens hated the calculus.
22.
22 Chaitin: Metabiology
Christian Huygens taught Leibniz mathematics in Paris at a relatively
late age, when Leibniz was in his twenties. Most mathematicians start very,
very young. And Christian Huygens hated Leibniz's calculus because he
said that it was mechanical, it was brainless: Any fool can just calculate the
answer by following the rules, without understanding what he or she is doing.
Huygens preferred the old, synthetic geometry proofs where you have
to be creative and come up with a diagram and some particular reason for
something to be true. Leibniz wanted a general method. He wanted to get
the formalism, the notation, right, and have a mechanical way to get the
answer.
Huygens didn’t like this, but that was precisely the point. This was
precisely what Leibniz was looking for, for everything!
The idea was that if you get absolute truth, if you have found the truth,
it should mechanically enable you to determine what’s going on, without
creativity. This is good, this is not bad.
This is also precisely how Leibniz’s version of the calculus diﬀered from
Newton’s. Leibniz saw clearly the importance of having a formalism that led
you automatically to the answer.
Let’s now take a big jump, to David Hilbert, about a century ago. . .
No, ﬁrst I want to tell you about an important attempt to ﬁnd the perfect
language: Cantor’s theory of inﬁnite sets.
Cantor: Inﬁnite Sets
This late 19th century theory is interesting because it’s ﬁrmly in the Middle
Ages and also, in a way, the inspiration for all of 20th century mathematics.
This theory of inﬁnite sets was actually theology. This is mathematical
theology. Normally you don’t mention that fact. To be a ﬁeld of mathe-
matics, the price of admission is you throw out all the philosophy, and you
just end up with something technical. So all the theology has been thrown
out.
But Cantor’s goal was to understand God. God is transcendent. The
theory of inﬁnite sets has this hierarchy of bigger and bigger inﬁnities, the
alephs, the ℵ’s. You have ℵ0, ℵ1, the inﬁnity of integers, of real numbers,
and you keep going. Each one of these is the set of all subsets of the previous
one. And very far out you get mind-boggling inﬁnities like ℵω; this is the
ﬁrst inﬁnity after
ℵ0, ℵ1, ℵ2, ℵ3, ℵ4 . . .
Then you can continue with
ω + 1, ω + 2, ω + 3 . . . 2ω + 1, 2ω + 2, 2ω + 3 . . .
These so-called ordinal numbers are subscripts for the ℵ’s, which are cardi-
nalities. Let’s go farther:
ℵ_{ω²}, ℵ_{ω^ω}, ℵ_{ω^ω^ω} . . .
And there’s an ordinal called epsilon-nought
0 = ωωωω...
which is the smallest solution of the equation
x = ωx
.
And the corresponding cardinal
ℵ_{ε₀}
is pretty big!
You know, God is very far oﬀ, since God is inﬁnite and transcendent. We
can try to go in His direction. But we’re never going to get there, because
after every cardinal, there’s a bigger one, the cardinality of the set of all
subsets. And after any inﬁnite sequence of cardinals that you get, you just
take the union of all of that, and you get a bigger cardinal than is in the
sequence. So this thing is inherently open-ended. And contradictory, by
the way!
There’s only one problem. This is absolutely wonderful, breath-taking
stuﬀ. The only problem is that it’s contradictory.
The problem is very simple. If you take the universal set, the set of
everything, and you consider the set of all its subsets, by Cantor’s diago-
nal argument this should have a bigger cardinality, but how can you have
anything bigger than the set of everything?
This is the paradox that Bertrand Russell discovered. Russell looked
at this and asked why do you get this bad result. And if you look at the
Cantor diagonal argument proof that the set of all subsets of everything is
bigger than everything, it involves the set of all sets that are not members
of themselves,
{x : x ∉ x},
which can neither be in itself nor not be in itself. This is called the Russell
paradox.
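For a finite set the diagonal argument can actually be checked exhaustively. Here is a small Python sketch: for every possible map f from a three-element set into its subsets, the diagonal set D = {x : x ∉ f(x)} is missed by f, so no map is onto the power set.

```python
from itertools import combinations, product

# Cantor's diagonal argument, checked exhaustively on a small set:
# for EVERY map f from S into the subsets of S, the diagonal set
# D = {x in S : x not in f(x)} is missed by f, so f is never onto.
S = [0, 1, 2]
P = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

for images in product(P, repeat=len(S)):   # all 8^3 = 512 maps f
    f = dict(zip(S, images))
    D = frozenset(x for x in S if x not in f[x])
    assert all(D != f[x] for x in S)       # the diagonal set is never hit

print(len(P))  # 2^3 = 8 subsets: strictly more than the 3 elements
```

Plugging in the "set of everything" for S is exactly what produces Russell's set and the paradox.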
Cantor was aware of the fact that this happens, but Cantor wasn’t both-
ered by these contradictions, because he was doing theology. We’re ﬁnite but
God is inﬁnite, and it’s paradoxical for a ﬁnite being to try to comprehend a
transcendent, inﬁnite being, so paradoxes are okay. But the math community
is not very happy with a theory which leads to contradictions.
However, these ideas are so wonderful, that what the math community
has done is forget about all this theology and philosophy and try to sweep
the contradictions under the rug. There is an expurgated version of all this
called Zermelo-Fraenkel set theory, with the axiom of choice, usually: ZFC.
This is a formal axiomatic theory which you develop using ﬁrst-order logic,
and it is an expurgated version of Cantor’s theory believed not to contain
any paradoxes.
Anyway, Bertrand Russell was inspired by all of this to attempt a general
critique of mathematical reasoning, and to ﬁnd a lot of contradictions, a lot
of mathematical arguments that lead to contradictions.
Bertrand Russell: mathematics is full of contradictions.
I already told you about his most famous one, the Russell paradox.
Russell was an atheist who was searching for the absolute, who believed
in absolute truth. And he loved mathematics and wanted mathematics to
be perfect. Russell went around telling people about these contradictions in
order to try to get them ﬁxed.
Besides the paradox that there’s no biggest cardinal and that the set of
subsets of everything is bigger than everything, there’s also a problem with
the ordinal numbers that’s called the Burali-Forti paradox, namely that the
set of all the ordinals is an ordinal that’s bigger than all the ordinals. This
works because each ordinal can be deﬁned as the set of all the ordinals that
are smaller than it is. (Then an ordinal is less than another ordinal if and
only if it is contained in it.)
Russell is going around telling people that reason leads to contradictions.
So David Hilbert about a century ago proposes a program to put mathematics
on a ﬁrm foundation. And basically what Hilbert proposes is the idea of
a completely formal axiomatic theory, which is a modern version of
Leibniz’s characteristica universalis and calculus ratiocinator:
David Hilbert: mathematics is a formal axiomatic theory.
This is the idea of making mathematics totally objective, of removing all
the subjective elements.
So in such a formal axiomatic theory you would have a ﬁnite number
of axioms, axioms that are not written in an ambiguous natural language.
Instead you use a precise artiﬁcial language with a simple, regular artiﬁcial
grammar. You use mathematical logic, not informal reasoning, and you
specify the rules of the game completely precisely. It should be mechanical
to decide whether a proof is correct.
Hilbert was a conservative. He believed that mathematics gives abso-
lute truth, which is an idea from the Middle Ages. You can see the Middle
Ages whenever you mention absolute truth. Nevertheless, modern mathe-
maticians remain enamored with absolute truth. As Gödel said, we pure
mathematicians are the last holdout of the Middle Ages. We still believe
in the Platonic world of ideas, at least mathematical ideas, when everyone
else, including philosophers, now laughs at this notion. But pure mathemati-
cians live in the Platonic world of ideas, even though everyone else stopped
believing in this a long time ago.
So math gives absolute truth, said Hilbert. Every mathematician some-
where deep inside believes this. Then there ought to exist a ﬁnite set of
axioms, and a precise set of rules for deduction, for inference, such that all of
mathematical truth is a consequence of these axioms. You see, if mathemat-
ical truth is black or white, and purely objective, then if you ﬁll in all the
steps in a proof and carefully use an artiﬁcial language to avoid ambiguity,
you should be able to have a ﬁnite set of axioms we can all agree on, that
in principle enable you to deduce all of mathematical truth. This is just the
notion that mathematics provides absolute certainty; Hilbert is analyzing
what this means.
What Hilbert says is that the traditional view that mathematics provides
absolute certainty, that in the Platonic world of pure mathematics everything
is black or white, means that there should be a single formal axiomatic theory
for all of math. That was a very important idea of his.
An important consequence of this idea goes back to the Middle Ages.
This perfect language for mathematics, which is what Hilbert was looking
for, would in fact give a key to absolute knowledge, because in principle
you could mechanically deduce all the theorems from the axioms, simply by
running through the tree of all possible proofs. You start with the axioms,
then you apply the rules of inference once, and get all the theorems that have
one-step proofs, you apply them two times, and you get all the theorems that
have two-step proofs, and like that, totally mechanically, you would get all
of mathematical truth, by systematically traversing the tree of all possible
proofs.
This would not put all mathematicians out of work, not at all. In practice
this process would take an outrageous amount of time to get to interesting
results, and all the interesting theorems would be overwhelmed by uninter-
esting theorems, such as the fact that 1 + 1 = 2 and other trivialities.
It would be hard to ﬁnd the interesting theorems and to separate the
wheat from the chaﬀ. But in principle this would give you all mathematical
truths. You wouldn’t actually do it, but it would show that math gives
absolute certainty.
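The mechanical traversal of the tree of all possible proofs can be sketched as a breadth-first search. The one-symbol axiom and the two string-rewriting rules below are hypothetical, chosen only to make the level-by-level enumeration concrete.

```python
# A minimal sketch of Hilbert's mechanical procedure: start from the
# axiom, apply the inference rules once, twice, and so on, collecting
# every theorem level by level. The axiom "M" and the two rewriting
# rules are hypothetical toy inference rules.
def theorems(axiom, rules, steps):
    level, proved = {axiom}, {axiom}
    for _ in range(steps):
        level = {rule(t) for t in level for rule in rules}
        proved |= level            # theorems provable in this many steps
    return proved

rules = [lambda t: t + "I", lambda t: t + "U"]
print(sorted(theorems("M", rules, 2)))  # everything reachable in <= 2 steps
```

Even in this toy system the levels grow exponentially, which is the "outrageous amount of time" problem in miniature.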
By the way, it was important to make all mathematicians agree on the
choice of formal axiomatic theory, and you would use metamathematics to
try to convince everyone that this formal axiomatic theory avoids all the
paradoxes that Bertrand Russell had noticed and contains no contradictions.
Okay, so this was the idea of putting mathematics on a ﬁrm foundation
and removing all doubts. This was Hilbert’s idea, about a century ago, and
metamathematics studies a formal axiomatic theory from the outside, and
notice that this is a door to absolute truth, following the notion of the perfect
language.
So what happens with this program, with this proposal of Hilbert’s? Well,
there’s some good news and some bad news. Some of the good news I already
mentioned: The thing that comes the closest to what Hilbert asked for is
Zermelo-Fraenkel set theory, and it is a beautiful axiomatic theory. I want
to mention some of the milestones in the development of this theory.
One of them is the von Neumann integers, so let me tell you about that.
Remember that Spinoza has a philosophical system in which the world is
built out of only one substance, and that substance is God, that’s all there
is. Zermelo-Fraenkel set theory is similar. Everything is sets, and every set
is built out of the empty set. That’s all there is: the empty set, and sets
built starting with the empty set.
So zero is the empty set, that’s the ﬁrst von Neumann integer, and in
general n + 1 is deﬁned to be the set of all integers less than or equal to n:
von Neumann integers: 0 = {}, n + 1 = {0, 1, 2, . . . , n}.
So if you write this out in full, removing all the abbreviations, all you have
are curly braces, you have set formation starting with no content, and the
full notation for n grows exponentially in n, if you write it all out, because
everything up to that point is repeated in the next number. In spite of this
exponential growth, this is a beautiful conceptual scheme.
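The construction, and the exponential growth of the full notation, can be sketched directly. This Python version represents sets as frozensets; the helper names are my own.

```python
def von_neumann(n):
    """The von Neumann integer n, built out of nothing but the empty set."""
    s = frozenset()                 # 0 = {}
    for _ in range(n):
        s = frozenset(s | {s})      # n + 1 = {0, 1, 2, ..., n}
    return s

def braces(s):
    """Write a von Neumann integer out in full curly-brace notation."""
    return "{" + ",".join(sorted(braces(e) for e in s)) + "}"

# Everything constructed so far is repeated inside the next number,
# so the full notation grows exponentially with n.
for n in range(4):
    print(n, "=", braces(von_neumann(n)))
```

Running it shows 0 = {}, 1 = {{}}, 2 = {{{}},{}} and so on, each line roughly doubling in length.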
Then you can deﬁne rational numbers as pairs of these integers, you
can deﬁne real numbers as limit sequences of rationals, and you get all of
mathematics, starting just with the empty set. So it’s a lovely piece of
ontology. Here’s all of mathematical creation just built out of the empty set.
And other people who worked on this are of course Fraenkel and Zermelo,
because it is called Zermelo-Fraenkel set theory, and an approximate notion
of what they did was to try to avoid sets that are too big. The universal set
is too big, it gets you into trouble. Not every property determines a set.
So this is a formal theory that most mathematicians believe enables you to
carry out all the arguments that normally appear in mathematics — maybe
if you don’t include category theory, which is very diﬃcult to formalize, and
even more paradoxical than set theory, from what I hear.
Okay, so that’s some of the positive work on Hilbert’s program. Now
some of the negative work on Hilbert’s program — I’d like to tell you about
it, you’ve all heard of it — is of course G¨odel in 1931 and Turing in 1936.
Gödel, 1931 — Turing, 1936
What they show is that you can’t have a perfect language for mathematics,
you cannot have a formal axiomatic theory like Hilbert wanted for all of
mathematics, because of incompleteness, because no such system will include
all of mathematical truth, it will always leave out truths, it will always be
incomplete.
And this is G¨odel’s incompleteness theorem of 1931, and G¨odel’s original
proof is very strange. It’s basically the paradox of “this statement is false,”
“This statement is false!”
which is a paradox of course because it can be neither true nor false. If it’s
false that it’s false, then it’s true, and if it’s true that it’s false, then it’s
false. That’s just a paradox. But what G¨odel does is say “this statement is
unprovable.”
“This statement is unprovable!”
So if the statement says of itself it’s unprovable, there are two possibilities:
it’s provable, or it isn’t.
If it’s provable, then we’re proving something that’s false, because it says
it’s unprovable. So we hope that’s not the case; by hypothesis, we’ll eliminate
that possibility. If we prove things that are false, we have a formal axiomatic
theory that we’re not interested in, because it proves false things.
The only possibility left is that it’s unprovable. But if it’s unprovable
then it’s true, because it asserts it’s unprovable, therefore there’s a hole. We
haven’t captured all of mathematical truth in our theory.
This proof of incompleteness shocks a lot of people, but my personal
reaction to it is, okay, it’s correct, but I don’t like it.
A better proof of incompleteness, a deeper proof, comes from Turing in
1936. He derives incompleteness from a more fundamental phenomenon,
which is uncomputability, the discovery that mathematics is full of stuﬀ that
can’t be calculated, of things you can deﬁne, but which you cannot calculate,
because there’s no algorithm.
Uncomputability ⇒ Incompleteness
And in particular, the uncomputable thing that he discovers is the halt-
ing problem, a very simple question: Does a computer program that’s
self-contained halt or does it go on forever? There is no algorithm to answer
this in every individual case, therefore there is no formal axiomatic theory
that enables you to always prove in individual cases what the answer is.
Why not? Because if there were a formal axiomatic theory that’s complete
for the halting problem, that would give you a mechanical procedure for
deciding, by running through the tree of all possible proofs, until you ﬁnd
a proof that an individual program you’re interested in halts, or you ﬁnd a
proof that it doesn’t. But that’s impossible because this is not a computable
function.
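Turing's contradiction can be sketched in a few lines. The oracle halts(f), which is supposed to decide whether calling f() eventually returns, is hypothetical; the whole point of the argument is that no such total function can exist.

```python
# A sketch of Turing's argument. Suppose halts(f) could decide whether
# calling f() eventually returns. The oracle passed in is hypothetical:
# the argument shows every candidate oracle must be wrong somewhere.
def paradox(halts):
    def trouble():
        if halts(trouble):   # ask the oracle about this very program...
            while True:      # ...and then do the opposite of its answer
                pass
        return None
    return halts(trouble)    # whatever the oracle answers is refuted

# An oracle that always answers True claims trouble() halts -- but then
# trouble() would loop forever, so the oracle's answer was wrong.
print(paradox(lambda f: True))
```

The same reversal refutes an oracle that always answers False, or any other candidate: trouble does the opposite of whatever the oracle predicts about it.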
So Turing’s insight in 1936 is that incompleteness, that G¨odel found in
1931, for any formal axiomatic theory, comes from a deeper phenomenon,
which is uncomputability. Incompleteness is an immediate corollary of un-
computability, a concept which does not appear in Gödel's 1931 paper.
But Turing’s paper has both good and bad aspects. There’s a negative
aspect of his 1936 paper, which I’ve just told you about, but there’s also a
positive aspect. You get another proof, a deeper proof of incompleteness,
but you also get a kind of completeness. You ﬁnd a perfect language.
There is no perfect language for mathematical reasoning. Gödel showed
that in 1931, and Turing showed it again in 1936. But what Turing also
showed in 1936 is that there are perfect languages, not for mathematical
reasoning, but for computation, for specifying algorithms.
What Turing discovers in 1936 is that there’s a kind of completeness
called universality and that there are universal Turing machines and universal
programming languages.
Universal Turing Machines / Programming Languages
What “universal” means, what a universal programming language or a uni-
versal Turing machine is, is a language in which every possible algorithm can
be written.
So on the one hand, Turing shows us in a deeper way that any language
for mathematical reasoning has to be incomplete, but on the other hand,
he shows us that languages for computation can be universal, which is just
another name, a synonym, for completeness.
There are perfect languages for computation, for writing algorithms, even
though there aren’t any perfect languages for mathematical reasoning. This
is the positive side, this is the completeness side, of Turing’s 1936 paper.
Now, what I’ve spent most of my professional life on, is a subject I call
algorithmic information theory
Algorithmic Information Theory (AIT)
that derives incompleteness from uncomputability by taking advantage of
a deeper phenomenon, by considering an extreme form of uncomputability,
which is called algorithmic randomness or algorithmic irreducibility.
AIT: algorithmic randomness, algorithmic irreducibility
There’s a perfect language again, and there’s also a negative side, the halt-
ing probability Ω, whose bits are algorithmically random, algorithmically
irreducible mathematical truths.
Ω = .010010111 . . .
This is a place in pure mathematics where there’s no structure. If you want
to know the bits of the numerical value of the halting probability, this is
a well-deﬁned mathematical question, and in the world of mathematics all
truths are necessary truths, but these look like accidental, contingent
truths. They look random, they have irreducible complexity.
This is a maximal case of uncomputability, this is a place in pure mathe-
matics where there’s absolutely no structure at all. Although it is true that
you can in a few cases actually know some of the ﬁrst bits. . .
There are actually an inﬁnite number of halting probabilities depending
on your choice of programming language. After you choose a language, then
you ask what is the probability that a program generated by coin tossing
will eventually halt. And that gives you a diﬀerent halting probability. The
numerical value will be diﬀerent; the paradoxical properties are the same.
Okay, there are cases for which you can get a few of the ﬁrst bits. For
example, if Ω starts with 1s in binary or 9s in decimal, you can know those
bits or digits, if Ω is .11111. . . base two or .99999. . . base ten. So you can get
a ﬁnite number of bits, perhaps, of the numerical value, but if you have an N-
bit formal axiomatic theory, then you can’t get more than N bits of Ω. That’s
sort of the general result. It’s irreducible logically and computationally. It’s
irreducible mathematical information. It’s a perfect simulation in pure math,
where all truths are necessary, of contingent, accidental, maximal entropy
truths.
So that’s the bad news from AIT. But just like in Turing’s 1936 work,
there is a positive side. On the one hand we have maximal uncomputabil-
ity, maximal entropy, total lack of structure, of any redundancy, in an
information-theoretic sense, but there’s also good news.
AIT, the theory of program-size complexity, the theory where Ω is the
crown jewel, goes further than Turing, and picks out from Turing’s universal
Turing machines, from Turing’s universal languages, maximally expressive
programming languages. Because those are the ones that you have to use to
develop this theory where you get to Ω.
AIT has the notion of a maximally expressive programming language in
which programs are maximally compact, and deals with a very basic complex-
ity concept which is the size of the smallest program to calculate something:
H(x) is the size in bits of the smallest program to calculate x.
And we now have a better notion of perfection. The perfect languages
that Turing found, the universal programming languages, are not all equally
good. We now concentrate on a subset, the ones that enable us to write the
most concise programs. These are the most expressive languages, the ones
with the smallest programs.
Now let me tell you, this deﬁnition of complexity is a dry, technical way
of expressing this idea in modern terms. But let me put this into Medieval
terminology, which is much more colorful. The notion of program-size com-
plexity — which by the way has many diﬀerent names: algorithmic complex-
ity, Kolmogorov complexity, algorithmic information content — in Medieval
terms, what we’re asking is, how many yes/no decisions did God have
to make to create something?, which is obviously a rather basic question
to ask. That is, if you consider that God is calculating the universe.
I’m giving you a Medieval perspective on these modern developments.
Theology is the fundamental physics, it’s the theoretical physics of the Middle
Ages.
I have a lot of time left — I’ve been racing through this material — so
maybe I should explain in more detail how AIT contributes to the quest for
the perfect language.
The notion of universal Turing machine that is used in AIT is Turing’s
very basic idea of a ﬂexible machine. It’s ﬂexible hardware, which we call soft-
ware. In a way, Turing in 1936 creates the computer industry and computer
technology. That’s a tremendous beneﬁt of a paper that mathematically
sounds at ﬁrst rather negative, since it talks about things that cannot be
calculated, that cannot be proved. But on the other hand there’s a very pos-
itive aspect — I stated it in theoretical terms — which is that programming
languages can be complete, can be universal, even though formal axiomatic
theories cannot be complete.
Okay, so you get this technology, there’s this notion of a ﬂexible machine,
this notion of software, which emerges in this paper. Von Neumann, the
same von Neumann who invented the von Neumann integers, credited all of
this to Turing. At least Turing is responsible for the concept; the hardware
implementation is another matter.
Now, AIT, where you talk about program-size complexity, the size of the
smallest program, how many yes/no decisions God has to make to calcu-
late something, to create something, picks out a particular class of universal
Turing machines U.
What are the universal computers U like that you use to deﬁne program-
size complexity and talk about Ω? Well, a universal computer U has the
property that for any other computer C and its program p, your universal
computer U will calculate the same result if you give it the original program
p for C concatenated to a preﬁx πC which depends only on the computer
C that you want to simulate. πC tells U which computer to simulate. In
symbols,
U(πC p) = C(p).
In other words, πC p is the concatenation of two pieces of information.
It’s a binary string. You take the original program p, which is also a binary
string, and in front of it you put a preﬁx that tells you which computer to
simulate.
Which means that these programs πC p for U are only a ﬁxed number of
bits larger than the programs p for any individual machine C.
These U are the universal Turing machines that you use in AIT. These
are the most expressive languages. These are the languages with maximal
expressive power. These are the languages in which programs are as concise
as possible. This is how you deﬁne program-size complexity. God will natu-
rally use the most perfect, most powerful programming languages, when he
creates the world, to build everything.
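The equation U(πC p) = C(p) can be sketched with a toy universal machine. Here the prefix 0^k 1 names which machine C_k to simulate; the three machines, and the use of Python strings as "bit strings", are hypothetical stand-ins.

```python
# A toy universal machine U in the spirit of U(pi_C p) = C(p): the
# prefix 0^k 1 names which machine C_k to simulate, and the rest of
# the string is the program p. The three machines are hypothetical
# stand-ins for "any other computer C".
machines = {
    0: lambda p: p[::-1],        # C_0 reverses its program
    1: lambda p: p.count("1"),   # C_1 counts the 1 bits
    2: lambda p: len(p),         # C_2 reports the program's length
}

def U(bits):
    k = bits.index("1")          # the leading 0s name the machine
    p = bits[k + 1:]             # everything after the first 1 is p
    return machines[k](p)

# Simulating C_1 on p = "0110" costs only the fixed 2-bit prefix "01":
print(U("01" + "0110"))          # same answer as machines[1]("0110")
```

Notice that the prefix length depends only on which machine is simulated, not on p: that is exactly the "fixed number of bits" overhead in the text.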
I should point out that Turing’s original universality concept was not
careful about counting bits; it didn’t really care about the size of programs.
All a universal machine U had to do was to be able to simulate any other
machine C, but one did not study the size of the program for U as a function
of the size of the program for C. Here we are careful not to waste bits.
AIT is concerned with particularly eﬃcient ways for U to be universal.
The original notion of universality in Turing was not this demanding.
The fact that you can just add a ﬁxed number of bits to a program for C
to get one for U is not completely trivial. Let me tell you why.
After you put πC and p together, you have to know where the preﬁx ends
and the program that is being simulated begins. There are many ways to do
this.
A very simple way to make the preﬁx πC self-delimiting is to have it be
a sequence of 0’s followed by a 1:
πC = 0^k 1.
And the number k of 0’s tells us which machine C to simulate. That’s a very
wasteful way to indicate this.
The preﬁx πC is actually an interpreter for the programming language C.
AIT’s universal languages U have the property that you give U an interpreter
plus the program p in this other language C, and U will run the interpreter
to see what p does.
If you think of this interpreter πC as an arbitrary string of bits, one way
to make it self-delimiting is to just double all the bits. 0 goes to 00, 1 goes
to 11, and you put a pair of unequal bits 01 as punctuation at the end:
Arbitrary πC: 0 → 00, 1 → 11, 01 at the end.
This is a better way to have a self-delimiting prefix that you can concatenate
with p. It only doubles the size, whereas the 0^k 1 trick increases the size
exponentially.
And there are more eﬃcient ways to make the preﬁx self-delimiting. For
example, you can put the size of the preﬁx in front of the preﬁx. But it’s
sort of like Russian dolls, because if you put the size |πC| of πC in front of
πC, |πC| also has to be self-delimiting:
U(. . . ||πC|| |πC| πC p) = C(p).
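The bit-doubling trick is easy to make concrete. Here is a minimal Python sketch of the encoder and the matching decoder; the function names are my own.

```python
# A minimal sketch of the bit-doubling trick: 0 -> 00, 1 -> 11, and
# the unequal pair "01" marks where the prefix ends.
def encode(prefix):
    return "".join(b + b for b in prefix) + "01"

def decode(bits):
    """Split a stream into its self-delimiting prefix and the rest."""
    out, i = [], 0
    while bits[i] == bits[i + 1]:      # doubled pairs belong to the prefix
        out.append(bits[i])
        i += 2
    return "".join(out), bits[i + 2:]  # skip the "01" punctuation

# The prefix is recovered with no external punctuation at all:
stream = encode("1011") + "0010111"    # prefix, then the raw program p
print(decode(stream))                  # -> ('1011', '0010111')
```

The decoder, like the universal machine U, decides entirely on its own where the prefix ends and the program p begins.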
Anyway, picking U this way is the key idea in the original 1960s version
of AIT that Solomonoﬀ, Kolmogorov and I independently proposed. But
ten years later I realized that this is not the right approach. You actually
want the whole program πC p for U to be self-delimiting, not just the preﬁx
πC. You want the whole thing to be self-delimiting to get the right theory of
program-size complexity.
Let me compare the 1960s version of AIT and the 1970s version of AIT.
Let me compare these two diﬀerent theories of program-size complexity.
In the 1960s version, an N-bit string will in general need an N-bit pro-
gram, if it’s irreducible, and most strings are algorithmically irreducible.
Most N-bit strings need an N-bit program. These are the irreducible strings,
the ones that have no pattern, no structure. Most N-bit strings need an N-
bit program, because there aren’t enough smaller programs.
But in the 1970s version of AIT, you go from N bits to N + log2 N bits,
because you want to make the programs self-delimiting. An N-bit string will
usually need an N + log2 N bit program:
Most N-bit strings
AIT1960: N bits of complexity,
AIT1970: N + log2 N bits of complexity.
Actually, in AIT1970 it’s N plus H(N), which is the size of the smallest
self-delimiting program to calculate N, that’s exactly what that logarithmic
term is. In other words, in the 1970s version of AIT, the size of the smallest
program for calculating an N-bit string is usually N bits plus the size in bits
of the smallest self-delimiting program to calculate N, which is roughly
log N + log log N + log log log N + . . .
bits long. That’s the Russian dolls aspect of this.
The 1970s version of AIT, which takes the idea of being self-delimiting
from the preﬁx and applies it to the whole program, gives us even better
perfect languages. AIT evolved in two stages. First we concentrate on those
U with
U(πC p) = C(p)
with πC self-delimiting, and then we insist that the whole thing πC p has also
got to be self-delimiting. And when you do that, you get important new
results, such as the sub-additivity of program-size complexity,
H(x, y) ≤ H(x) + H(y),
which is not the case if you don’t make everything self-delimiting. This just
says that you can concatenate the smallest program for calculating x and the
smallest program for calculating y to get a program for calculating x and y.
And you can’t even deﬁne the halting probability Ω in AIT1960. If you
allow all N-bit strings to be programs, then you cannot deﬁne the halting
probability in a natural way, because the sum for deﬁning the probability
that a program will halt
Ω = Σ_{p halts} 2^−(size in bits of p)
diverges to inﬁnity instead of being between zero and one. This is the key
technical point in AIT.
I want the halting probability to be finite. The normal way of thinking
about programs is that there are 2^N N-bit programs, and the natural way
of defining the halting probability is that every N-bit program that halts
contributes 1/2^N to the halting probability. The only problem is that for
any fixed size N there are roughly order of 2^N programs that halt, so if you
sum over all possible sizes, you get infinity, which is no good.
In order to get the halting probability to be between zero and one
0 < Ω = Σ_{p halts} 2^−(size in bits of p) < 1
you have to be sure that the total probability summed over all programs p
is less than or equal to one. This happens automatically if we force p to
be self-delimiting. How can we do this? Easy! Pretend that you are the
universal computer U. As you read the program bit by bit, you have to
be able to decide by yourself where the program ends, without any special
punctuation, such as a blank, at the end of the program.
This implies that no extension of a valid program is a valid program, and
that the set of valid programs is what’s called a preﬁx-free set. Then the fact
that the sum that deﬁnes Ω must be between zero and one, is just a special
case of what’s called the Kraft inequality in Shannon information theory.
But this technical machinery isn’t necessary. That 0 < Ω < 1 follows
immediately from the fact that as you read the program bit by bit you are
forced to decide where to stop without seeing any special punctuation. In
other words, in AIT1960 we were actually using a three-symbol alphabet for
programs: 0, 1 and blank. The blank told us where a program ends. But
that’s a symbol that you’re wasting, because you use it very little. As you
all know, if you have a three-symbol alphabet, then the right way to use it
is to use each symbol roughly one-third of the time.
So if you really use only 0s and 1s, then you have to force the Turing
machine to decide by itself where the program ends. You don’t put a blank
at the end to indicate that.
So programs go from N bits in size to N + log2 N bits, because you've got
to indicate in each program how big it is. On the other hand, you can just take
subroutines and concatenate them to make a bigger program, so program-
size complexity becomes sub-additive. You run the universal machine U to
calculate the ﬁrst object x, and then you run it again to calculate the second
object y, and then you’ve got x and y, and so
H(x, y) ≤ H(x) + H(y).
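One simple way to make programs self-delimiting, sketched below as an assumption of mine rather than Chaitin's actual construction, is to prefix each N-bit program with a header giving its length; in this particular scheme the header costs about 2 log2 N extra bits (the N + log2 N count in the text corresponds to a slightly cleverer encoding). Because the reader always knows where one program stops, two programs can be concatenated and split apart again, which is exactly the mechanism behind H(x, y) ≤ H(x) + H(y).

```python
def encode(program: str) -> str:
    """Wrap an N-bit program so it is self-delimiting: first the length of
    the length field in unary, then the length N in binary, then the program."""
    header = format(len(program), "b")        # N written in binary
    return "1" * len(header) + "0" + header + program

def decode_one(stream: str):
    """Read exactly one program off the front of a bit stream and return it
    with the untouched remainder. No blank or punctuation symbol is needed."""
    k = 0
    while stream[k] == "1":                   # unary count of header bits
        k += 1
    n = int(stream[k + 1 : k + 1 + k], 2)     # the program's length N
    body = k + 1 + k
    return stream[body : body + n], stream[body + n :]

p, q = "010110", "1111"
stream = encode(p) + encode(q)                # concatenate two programs
x, rest = decode_one(stream)
y, _ = decode_one(rest)
assert (x, y) == (p, q)                       # they split apart unambiguously
```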
These self-delimiting binary languages are the ones that the study of
program-size complexity has led us to discriminate as the ideal languages,
the most perfect languages. We got to them in two stages, AIT1960 and
AIT1970. These are languages for computation, for expressing algorithms,
not for mathematical reasoning. They are universal programming languages
that are maximally expressive, maximally concise. We already knew how to
do that in the 1960s, but in the 1970s we realized that programs should be
self-delimiting, which made it possible to deﬁne the halting probability Ω.
Okay, so that’s the story, and now maybe I should summarize all of this,
this saga of the quest for the perfect language. As I said, the search for the
perfect language has some negative conclusions and some positive conclu-
sions.
Hilbert wanted to ﬁnd a perfect language giving all of mathematical truth,
all mathematical knowledge, he wanted a formal axiomatic theory for all of
mathematics. This was supposed to be a Theory of Everything for the world
of pure math. And this cannot succeed, because we know that every formal
axiomatic theory is incomplete, as shown by Gödel, by Turing, and by my
halting probability Ω. Instead of finding a perfect language, a perfect formal
axiomatic theory, we found incompleteness, uncomputability, and even
algorithmic irreducibility and algorithmic randomness.
So that’s the negative side of this story, which is fascinating from an
epistemological point of view, because we found limits to what we can know,
we found limits of formal reasoning.
Now interestingly enough, the mathematical community couldn’t care
less. They still want absolute truth! They still believe in absolute truth, and
that mathematics gives absolute truth. And if you want a proof of this, just
go to the December 2008 issue of the Notices of the American Mathematical
Society. That’s a special issue of the Notices devoted to formal proof.
The technology has been developed to the point where they can run real
mathematics, real proofs, through proof-checkers, and get them checked. A
mathematician writes the proof out in a formal language, and ﬁlls in the
missing steps and makes corrections until the proof-checker can understand
the whole thing and verify that it is correct. And these proof-checkers are
getting smarter and smarter, so that more and more of the details can be
left out. As the technology improves, the job of formalizing a proof becomes
easier and easier.
The formal-proof extremists are saying that in the future all mathematics
will have to be written out formally and veriﬁed by proof-checkers.
The engineering has been worked out to the point that you can formally
prove real mathematical results and run them through proof-checkers for
veriﬁcation. For example, this has been done with the proof of the four-color
conjecture. It was written out as a formal proof that was run through a
proof-checker.
And the position of these extremists is that in the future all mathematics
will have to be written out in a formal language, and you will have to get
it checked before submitting a paper to a human referee, who will then only
have to decide if the proof is worth publishing, not whether the proof is
correct. And they want a repository of all mathematical knowledge, which
would be a database of checked formal proofs of theorems.
This is a substantial community, and the Notices devoted a special issue to
their work; to learn more, see the December 2008 AMS Notices, available for
free on the AMS website. The fact that such a sizeable community is pursuing
formal proof shows that mathematicians still believe in absolute truth.
I’m not disparaging this extremely interesting work, but I am saying that
there’s a wonderful intellectual tension between it and the incompleteness re-
sults that I’ve discussed in this talk. There’s a wonderful intellectual tension
between incompleteness and the fact that people still believe in formal proof
and absolute truth. People still want to go ahead and carry out Hilbert’s
program and actually formalize everything, just as if Gödel and Turing had
never happened!
I think this is an extremely interesting and, at least for me, a quite
unexpected development.
These were the negative conclusions from this saga. Now I want to wrap
this talk up by summarizing the positive conclusions.
There are perfect languages, for computing, not for reasoning. They’re
computer programming languages. And we have universal Turing machines
and universal programming languages, and although languages for reason-
ing cannot be complete, these universal programming languages are com-
plete. Furthermore, AIT has picked out the most expressive programming
languages, the ones that are particularly good to use for a theory of program-
size complexity.
So there is a substantial practical spinoﬀ. Furthermore, since I’ve worked
most of my professional career on AIT, I view AIT as a substantial contri-
bution to the search for the perfect language, because it gives us a measure
of expressive power, and of conceptual complexity and the complexity
of ideas. Remember, I said that from the perspective of the Middle Ages,
that’s how many yes/no decisions God had to make to create something,
which obviously He will do in an optimum manner.2
From the theoretical side, however, this quest was disappointing due to
Gödel incompleteness and because there is no Theory of Everything for pure
math. Provably there is no TOE for pure math. In fact, if you look at the
bits of the halting probability Ω, they show that pure mathematics contains
inﬁnite irreducible complexity, and in this precise sense is more like biology,
the domain of the complex, than like theoretical physics, where there is still
hope of finding a simple, elegant TOE.3
[Footnote 2: Note that program-size complexity = size of smallest name for something.]
So this is the negative side of the story, unless you’re a biologist. The
positive side is we get this marvelous programming technology. So this dream,
the search for the perfect language and for absolute knowledge, ended in the
bowels of a computer, it ended in a Golem.
In fact, let me end with a Medieval perspective on this. How would all
this look to someone from the Middle Ages? This quest, the search for the
perfect language, was an attempt to obtain magical, God-like powers.
Let’s bring someone from the 1200s here and show them a notebook
computer. You have this dead machine, it’s a machine, it’s a physical object,
and when you put software into it, all of a sudden it comes to life!
So from the perspective of the Middle Ages, I would say that the perfect
languages that we’ve found have given us some magical, God-like powers,
which is that we can breathe life into inanimate matter. Observe that
hardware is analogous to the body, and software is analogous to the soul,
and when you put software into a computer, this inanimate object comes to
life and creates virtual worlds.
So from the perspective of somebody from the year 1200, the search for
the perfect language has been successful and has given us some magical,
God-like abilities, except that we take them entirely for granted.
Thanks very much!4
[Footnote 3: Incompleteness can be considered good rather than bad: it shows that mathematics is creative, not mechanical.]
[Footnote 4: Twenty minutes of questions and discussion followed. These have not been transcribed, but are available via digital streaming video at http://pirsa.org/09090007/.]
Chapter 3
Is the world built out of
information? Is everything
software?
From Chaitin, Costa, Doria, After Gödel, in preparation. Lecture, the Technion, Haifa,
Thursday, 10 June 2010.
Now for some even weirder stuﬀ! Let’s return to The Thirteenth Floor and
to the ideas that we brieﬂy referred to in the introductory section of this
chapter.
Let’s now turn to ontology: What is the world built out of, made out of?
Fundamental physics is currently in the doldrums. There is no pressing
unexpected, new experimental data — or if there is, we can’t see that it
is! So we are witnessing a return to pre-Socratic philosophy with its em-
phasis on ontology rather than epistemology. We are witnessing a return
to metaphysics. Metaphysics may be dead in contemporary philosophy, but
amazingly enough it is alive and well in contemporary fundamental physics
and cosmology.
There are serious problems with the traditional view that the world is
a space-time continuum. Quantum ﬁeld theory and general relativity con-
tradict each other. The notion of space-time breaks down at very small
distances, because extremely massive quantum ﬂuctuations (virtual parti-
cle/antiparticle pairs) should provoke black holes and space-time should be
torn apart, which doesn’t actually happen.
Here are two other examples of problems with the continuum, with very
small distances:
• the inﬁnite self-energy of a point electron in classical Maxwell electro-
dynamics,
• and in quantum ﬁeld theory, renormalization, which Dirac never ac-
cepted.
And here is an example of renormalization: the inﬁnite bare charge of the
electron which is shielded by vacuum polarization via virtual pair formation
and annihilation, so that far from an electron it only seems to have ﬁnite
charge. This is analogous to the behavior of water, which is a highly polarized
molecule forming micro-clusters that shield charge, with many of the highly
positive hydrogen-ends of H2O near the highly negative oxygen-ends of these
water molecules.
In response to these problems with the continuum, some of us feel that
the traditional
Pythagorean ontology:
God is a mathematician,
the world is built out of mathematics,
should be changed to this more modern
→ Neo-Pythagorean ontology:
God is a programmer,
the world is built out of software.
In other words, all is algorithm!
There is an emerging school, a new viewpoint named digital philosophy.
Here are some key people and key works in this new school of thought: Ed-
ward Fredkin, http://www.digitalphilosophy.org, Stephen Wolfram, A New
Kind of Science, Konrad Zuse, Rechnender Raum (Calculating Space), John
von Neumann, Theory of Self-Reproducing Automata, and Chaitin, Meta
Math!.1
These may be regarded as works on metaphysics, on possible digital
worlds. However there have in fact been parallel developments in the world
of physics itself.
[Footnote 1: Lesser-known but important works on digital philosophy: Arthur Burks, Essays on Cellular Automata; Edgar Codd, Cellular Automata.]
Quantum information theory builds the world out of qubits, not matter.
And phenomenological quantum gravity and the theory of the entropy of
black holes suggest that any physical system contains only a finite number
of bits of information that grows, amazingly enough, as the surface area
of the physical system, not as its volume — hence the name holographic
principle. For more on the entropy of black holes, the Bekenstein bound, and
the holographic principle, see Lee Smolin, Three Roads to Quantum Gravity.
One of the key ideas that has emerged from this research on possible
digital worlds is to transform the universal Turing machine, a machine
capable of running any algorithm, into the universal constructor, a ma-
chine capable of building anything:
Universal Turing Machine → Universal Constructor.
And this leads to the idea of an information economy: worlds in which
everything is software, worlds in which everything is information and you can
construct anything if you have a program to calculate it. This is like magic
in the Middle Ages. You can bring something into being by invoking its true
name. Nothing is hardware, everything is software!2
A more modern version of this everything-is-information view is presented
in two green-technology books by Freeman Dyson: The Sun, the Genome and
the Internet, and A Many-Colored Glass. He envisions seeds to grow houses,
seeds to grow airplanes, seeds to grow factories, and imagines children using
genetic engineering to design and grow new kinds of ﬂowers! All you need is
water, sun and soil, plus the right seeds!
From an abstract, theoretical mathematical point of view, the key concept
here is an old friend from Chapter 2:
H(x) = the size in bits of the smallest program to compute x.
H(x) is also equal to the minimum amount of algorithmic information needed
to build/construct x; in Medieval language, it is the number of yes/no decisions
God had to make to create x; in biological terms, it is roughly the amount of
DNA needed for growing x.
It requires the self-delimiting programs of Chapter 2 for the following
intuitively necessary condition to hold:
H(x, y) ≤ H(x) + H(y) + c.
[Footnote 2: On magic in the Middle Ages, see Umberto Eco, The Search for the Perfect Language, and Allison Coudert, Leibniz and the Kabbalah.]
This says that algorithmic information is sub-additive: If it takes H(x) bits of
information to build x and H(y) bits of information to build y, then the sum
of that suﬃces to build both x and y. Furthermore, the mutual information,
the information in common, has this important property:
H(x) + H(y) − H(x, y) = H(x) − H(x|y∗) + O(1) = H(y) − H(y|x∗) + O(1).
Here
H(x|y) = the size in bits of the smallest program to compute x from y.
This triple equality tells us that the extent to which it is better to build
x and y together rather than separately (the bits of subroutines that are
shared, the amount of software that is shared) is also equal to the extent
that knowing a minimum-size program y∗ for y helps us to know x and to
the extent to which knowing a minimum-size program x∗ for x helps us to
know y. (This triple equality is an idealization; it holds only in the limit of
extremely large compute times for x and y.)
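Although H itself is uncomputable, a real-world compressor can serve as a crude, computable stand-in to see the shape of this identity (this is my own illustration; zlib is of course nothing like a minimal program). When x and y share a large "subroutine," building them together is measurably cheaper than building them separately:

```python
import zlib

def C(s: bytes) -> int:
    """Compressed size in bytes: a crude, computable stand-in for H."""
    return len(zlib.compress(s, 9))

shared = b"a long shared subroutine " * 40   # know-how common to both objects
x = shared + b"plus code specific to x"
y = shared + b"plus different code specific to y"

mutual = C(x) + C(y) - C(x + y)              # analogous to H(x) + H(y) - H(x, y)
print(C(x), C(y), C(x + y), mutual)
assert mutual > 0    # building x and y together is cheaper than separately
```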
These results about algorithmic information/complexity H are a kind
of economic meta-theory for the information economy, which is the asymp-
totic limit, perhaps, of our current economy in which material resources
(petroleum, uranium, gold) are still important, not just technological and
scientiﬁc know-how.
But as astrophysicist Fred Hoyle points out in his science ﬁction novel
Ossian’s Ride, the availability of unlimited amounts of energy, say from nu-
clear fusion reactors, would make it possible to use giant mass spectrometers
to extract gold and other chemical elements directly from sea water and soil.
Material resources would no longer be that important.
If we had unlimited energy, all that would matter would be know-how,
information, knowing how to build things. And so we ﬁnally end up with the
idea of a printer for objects, a more plebeian term for a universal constructor.
There are already commercial versions of such devices. They are called 3D
printers and are used for rapid prototyping and digital fabrication. They are
not yet universal constructors, but the trend is clear. . . 3
In Medieval terms, results about H(x) are properties of the size of spells,
they are about the complexity of magic incantations! The idea that every-
thing is software is not as new as it may seem.
[Footnote 3: One current project is to build a 3D printer that can print a copy of itself. See http://reprap.org.]
Bibliography
[1] A. Burks, Essays on Cellular Automata, University of Illinois Press
(1970).
[2] G. J. Chaitin, Meta Math!, Pantheon (2005).
[3] E. Codd, Cellular Automata, Academic Press (1968).
[4] A. Coudert, Leibniz and the Kabbalah, Kluwer (1995).
[5] F. Dyson, The Sun, the Genome and the Internet, Oxford University
Press (1999).
[6] F. Dyson, A Many-Colored Glass, University of Virginia Press (2007).
[7] U. Eco, The Search for the Perfect Language, Blackwell (1995).
[8] E. Fredkin, http://www.digitalphilosophy.org.
[9] F. Hoyle, Ossian’s Ride, Harper (1959).
[10] J. von Neumann, Theory of Self-Reproducing Automata, University of
Illinois Press (1966).
[11] L. Smolin, Three Roads to Quantum Gravity, Basic Books (2001).
[12] S. Wolfram, A New Kind of Science, Wolfram Media (2002).
[13] K. Zuse, Rechnender Raum (Calculating Space), Vieweg (1969).
Chapter 4
The information economy
S. Zambelli, Computable, Constructive and Behavioural Economic Dynamics, Routledge,
2010, pp. 73–78.
In honor of Kumaraswamy Velupillai’s 60th birthday
Abstract: One can imagine a future society in which natural resources are
irrelevant and all that counts is information. I shall discuss this possibil-
ity, plus the role that algorithmic information theory might then play as a
metatheory for the amount of information required to construct something.
Introduction
I am not an economist; I work on algorithmic information theory (AIT). This
essay, in which I present a vision of a possible future information economy,
should not be taken too seriously. I am merely playing with ideas and trying
to provide some light entertainment of a kind suitable for this festschrift
volume, given Vela’s deep appreciation of the relevance of foundational issues
in mathematics for economic theory.
In algorithmic information theory, you measure the complexity of some-
thing by counting the number of bits in the smallest program for calculating
it:
program → Universal Computer → output.
If the output of a program could be a physical or a biological system, then
this complexity measure would give us a way to measure the difficulty of
explaining how to construct or grow something; in other words, to measure
either traditional smokestack or newer green technological complexity:
software → Universal Constructor → physical system,
DNA → Development → biological system.
And it is possible to conceive of a future scenario in which technology is
not natural-resource limited, because energy and raw materials are freely
available, but is only know-how limited.
In this essay, I will outline four diﬀerent versions of this dream, in order
to explain why I take it seriously:
1. Magic, in which knowing someone’s secret name gives you power over
them,
2. Astrophysicist Fred Hoyle’s vision of a future society in his science-
ﬁction novel Ossian’s Ride,
3. Mathematician John von Neumann’s cellular automata world with its
self-reproducing automata and a universal constructor,
4. Physicist Freeman Dyson’s vision of a future green technology in which
you can, for example, grow houses from seeds.
As these four examples show, if an idea is important, it’s reinvented, it keeps
being rediscovered. In fact, I think this is an idea whose time has come.
Secret/True Names and the Esoteric Tradition
“In the beginning was the Word, and the Word was with God,
and the Word was God.” John 1:1
Information, in the form of someone's secret/true name, is very important
in the esoteric tradition [1, 2]:
• Recall the German fairy tale in which the punch line is “Rumpelstiltskin
is my name!” (the Brothers Grimm).
• You have power over someone if you know their secret name.
• You can summon a demon if you know its secret name.
• In the Garden of Eden, Adam acquired power over the animals by
naming them.
• God’s name is never mentioned by Orthodox Jews.
• The golem in Prague was animated by a piece of paper with God’s
secret name on it.
• Presumably God can summon a person or thing into existence by calling
its true name.
• Leibniz was interested in the original sacred Adamic language of cre-
ation, the perfect language in which the essence/true nature of each
substance or being is directly expressed, as a way of obtaining ultimate
knowledge. His project for a characteristica universalis evolved from
this, and the calculus evolved from that. Christian Huygens, who had
taught Leibniz mathematics in Paris, hated the calculus [3], because
it eliminated mathematical creativity and arrived at answers mechani-
cally and inelegantly.
Fred Hoyle’s Ossian’s Ride
The main features in the future economy that Hoyle imagines are:
• Cheap and unlimited hydrogen to helium fusion power,
• Therefore raw materials readily available from sea-water, soil and air
(for example, using extremely large-scale and energy intensive mass
spectrometer-like devices [Gordon Lasher, private communication]).
• And with essentially free energy and raw materials, all that counts is
technological know-how, which is just information.
Perhaps it’s best to let Hoyle explain this in his own words [4]:
[T]he older established industries of Europe and America. . .
grew up around specialized mineral deposits—coal, oil, metallic
ores. Without these deposits the older style of industrialization
was completely impossible. On the political and economic fronts,
the world became divided into “haves” and “have-nots,” depend-
ing whereabouts on the earth’s surface these specialized deposits
happened to be situated. . .
In the second phase of industrialism. . . no specialized deposits
are needed at all. The key to this second phase lies in the pos-
session of an eﬀectively unlimited source of energy. Everything
here depends on the thermonuclear reactor. . . With a thermonu-
clear reactor, a single ton of ordinary water can be made to yield
as much energy as several hundred tons of coal—and there is no
shortage of water in the sea. Indeed, the use of coal and oil as a
prime mover in industry becomes utterly ineﬃcient and archaic.
With unlimited energy the need for high-grade metallic ores
disappears. Low-grade ones can be smelted—and there is an am-
ple supply of such ores to be found everywhere. Carbon can be
taken from inorganic compounds, nitrogen from the air, a whole
vast range of chemicals from sea water.
So I arrived at the rich concept of this second phase of industri-
alization, a phase in which nothing is needed but the commonest
materials—water, air and fairly common rocks. This was a phase
that can be practiced by anybody, by any nation, provided one
condition is met: provided one knows exactly what to do. This
second phase was clearly enormously more eﬀective and powerful
than the ﬁrst.
Of course this concept wasn’t original. It must have been at
least thirty years old. It was the second concept that I was more
interested in. The concept of information as an entity in itself,
the concept of information as a violently explosive social force.
In Hoyle’s fantasy, this crucial information — including the design of ther-
monuclear reactors — that suddenly propels the world into a second phase
of industrialization comes from another world. It is a legacy bequeathed to
humanity by a nonhuman civilization desperately trying to preserve anything
it can when being destroyed by the brightening of its star.
John von Neumann's Cellular Automata World
This cellular automata world ﬁrst appeared in lectures and private working
notes by von Neumann. These ideas were advertised in an article in Scientific
American in 1955 written by John Kemeny [5]. Left unfinished
because of von Neumann’s death in 1957, his notes were edited by Arthur
Burks and ﬁnally published in 1966 [6]. Burks then presented an overview
in [7]. Key points:
• World is a discrete crystalline medium.
• Two-dimensional world, graph paper, divided into square cells.
• Each square has one of 29 possible states.
• Time is quantized as well as space.
• The state of each square is the same universal function of its own
previous state and the previous states of its 4 immediate neighbors
(up, down, left, right).
• Universal constructor can assemble any quiescent array of states.
• Then you have to start the device running.
• The universal constructor is part of von Neumann’s self-reproducing
automata.
The crucial point is that in von Neumann’s toy world, physical systems
are merely discrete information, that is all there is. And there is no dif-
ference between computing a string of bits (as in AIT) and “computing”
(constructing) an arbitrary physical system.
I should also mention that starting from scratch, Edgar Codd came up
with a simpler version of von Neumann’s cellular automata world in 1968 [8].
In Codd’s model cells have 8 states instead of 29.
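The discrete world just described is easy to express in code. The sketch below is a minimal toy of my own (the 2-state rule is made up for illustration; von Neumann's actual 29-state transition table is far too large to reproduce here): a grid of cells updated synchronously, each new state a fixed function of the cell's own previous state and the previous states of its four orthogonal neighbors.

```python
def step(grid, rule):
    """One synchronous time step: each cell's new state is the same fixed
    function of its own previous state and its 4 orthogonal neighbors."""
    h, w = len(grid), len(grid[0])
    def cell(r, c):
        return grid[r % h][c % w]            # wrap-around edges, for simplicity
    return [[rule(cell(r, c), cell(r - 1, c), cell(r + 1, c),
                  cell(r, c - 1), cell(r, c + 1))
             for c in range(w)] for r in range(h)]

# A made-up 2-state rule (nothing like von Neumann's 29-state table):
# a cell turns on iff exactly one of its four neighbors is on.
def toy_rule(me, up, down, left, right):
    return 1 if up + down + left + right == 1 else 0

grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1                               # a single seed cell
grid = step(grid, toy_rule)
print(grid[1][2], grid[3][2], grid[2][1], grid[2][3])   # prints: 1 1 1 1
```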
Freeman Dyson’s Green Technology
Instead of Hoyle’s vision of a second stage of traditional smokestack heavy
industry, Dyson [9, 10] optimistically envisions a green-technology small-is-
beautiful do-it-yourself grass-roots future.
The emerging technology that may someday lead to Dyson’s utopia is be-
coming known as “synthetic biology” and deals with deliberately engineered
organisms. This is also referred to as “artiﬁcial life,” the development of
“designer genomes.” To produce something, you just create the DNA for it.
Here are some key points in Dyson’s vision:
• Solar electrical power obtained from modiﬁed trees. (Not from ther-
monuclear reactors!)
• Other useful devices/machines grown from seeds. Even houses grown
from seeds?!
• School children able to design and grow new plants, animals.
• Mop up excessive carbon dioxide or produce fuels from sugar (actual
Craig Venter projects [11]).
On a much darker note, to show how important information is, there
presumably exists a sequence of a few-thousand DNA bases (A, C, G, T)
for the genome of a virus that would destroy the human race, indeed, most
life on this planet. With current or soon-to-be-available molecular biology
technology, genetic engineering tools, anyone who knew this sequence could
easily synthesize the corresponding pathogen. Dyson’s utopia can easily turn
into a nightmare.
AIT as an Economic Metatheory
So one can imagine scenarios in which natural resources are irrelevant and
all that counts is technological know-how, that is, information. We have just
seen four such scenarios. In such a world, I believe, AIT becomes, not an
economic theory, but perhaps an economic metatheory, since it is a theory
of information, a theory about the properties of technological know-how, as
I will now explain.
The main concept in AIT is the amount of information H(X) required to
compute (or construct) something, X. This is measured in bits of software,
the number of bits in the smallest program that calculates X. Brieﬂy, one
refers to H(X) as the complexity of X. For an introduction to AIT, please
see [12, 13].
In economic terms, H(X) is a measure of the amount of technological
know-how needed to produce X. If X is a hammer, H(X) will be small. If
X is a sophisticated military aircraft, H(X) will be quite large.
Two other concepts in AIT are the joint complexity H(X, Y ) of produc-
ing X and Y together, and the relative complexity H(X|Y ) of producing
X if we are given Y for free.
Consider now two objects, X and Y . In AIT,
H(X) + H(Y ) − H(X, Y )
is referred to as the mutual information in X and Y . This is the extent to
which it is cheaper to produce X and Y together than to produce X and Y
separately, in other words, the extent to which the technological know-how
needed to produce X and Y can be shared, or overlaps. And there is a basic
theorem in AIT that states that this is also
H(X) − H(X|Y ),
which is the extent to which being given the know-how for Y helps us to
construct X, and it’s also
H(Y ) − H(Y |X),
which is the extent to which being given the know-how for X helps us to
construct Y . This is not earth-shaking, but it’s nice to know.
(For a proof of this theorem about mutual information, please see [14].)
One of the reasons that we get these pleasing properties is that AIT is
like classical thermodynamics in that time is ignored. In thermodynamics,
heat engines operate very slowly, for example, reversibly. In AIT, the time
or eﬀort required to construct something is ignored, only the information
required is measured. This enables both thermodynamics and AIT to have
clean, simple results. They are toy models, as they must be if we wish to
prove nice theorems.
Conclusion
Clearly, we are not yet living in an information economy. Oil, uranium,
gold and other scarce, precious limited natural resources still matter. But
someday we may live in an information economy, or at least approach it
asymptotically. In such an economy, everything is, in eﬀect, software; hard-
ware is comparatively unimportant. This is a possible world, though perhaps
not yet our own world.
References
1. A. Coudert, Leibniz and the Kabbalah, Kluwer, Dordrecht, 1995.
2. U. Eco, The Search for the Perfect Language, Blackwell, Oxford, 1995.
3. J. Hofmann, Leibniz in Paris 1672–1676, Cambridge University Press,
1974, p. 299.
4. F. Hoyle, Ossian’s Ride, Harper & Brothers, New York, 1959, pp. 157–
158.
5. J. Kemeny, “Man viewed as a machine,” Scientiﬁc American, April
1955, pp. 58–67.
6. J. von Neumann, Theory of Self-Reproducing Automata, University of
Illinois Press, Urbana, 1966. (Edited and completed by Arthur W.
Burks.)
7. A. Burks (ed.), Essays on Cellular Automata, University of Illinois
Press, Urbana, 1970.
8. E. Codd, Cellular Automata, Academic Press, New York, 1968.
9. F. Dyson, The Sun, the Genome, & the Internet, Oxford University
Press, New York, 1999.
10. F. Dyson, A Many-Colored Glass, University of Virginia Press, Char-
lottesville, 2007.
11. C. Venter, A Life Decoded, Viking, New York, 2007.
12. G. Chaitin, Meta Maths, Atlantic Books, London, 2006.
13. G. Chaitin, Thinking about Gödel and Turing, World Scientific, Singa-
pore, 2007.
14. G. Chaitin, Exploring Randomness, Springer-Verlag, London, 2001, pp.
95–96.
1 July 2008
Chapter 5
How real are real numbers?
We discuss mathematical and physical arguments against continuity and in
favor of discreteness, with particular emphasis on the ideas of Émile Borel
(1871–1956). Lecture given Tuesday, 22 September 2009, at Wilfrid Laurier
University in Waterloo, Canada.
I’m not going to give a tremendously serious talk on mathematics today.
Instead I will try to entertain and stimulate you by showing you some really
weird real numbers.
I’m not trying to undermine what you may have learned in your mathe-
matics classes. I love the real numbers. I have nothing against real numbers.
There’s even a real number — Ω — that has my name on it.1
But as you
will see, there are some strange things going on with the real numbers.
Let’s start by going back to Turing’s famous 1936 paper in the Proceedings
of the London Mathematical Society; mathematicians proudly claim that it
created the computer industry, which is not quite right of course.
But it does have the idea of a general-purpose computer and of hardware
and software, and it is a wonderful paper.
This paper is called “On computable numbers, with an application to the
Entscheidungsproblem.” And what most people forget, and is the subject of
[Footnote 1: See the chapter on “Chaitin’s Constant” in Steven Finch, Mathematical Constants, Cambridge University Press, 2003.]