Caveat: this article was written in Italian in 2011, while the English translation dates from 2015. Not all
the data has been updated, and some figures may no longer be current; at that time the IoT (Internet of
Things) acronym was not yet in use, and I used the term Pervasive Computers with the same meaning. The
Cloud concept was not in use either, but the ideas expressed in the text already implied it (Computational
Exoskeleton = Cloud + IoT). RS, April 2015
Technology, progress and the emergence of collective intelligence
by Roberto Siagri
June 2011
Linear vision, exponential growth
We are living in an era of unprecedented innovation opportunities made available by
technology, but we hardly realize it because it is typical of humans to perceive the world in a
linear fashion. This is what prevents us from understanding exponential growth, and from
having a clear long-term vision. Exponential processes are counterintuitive, because humans
are fundamentally anchored to a linear way of thinking (Fig. 1). To give an example, let us
imagine taking a sheet of paper and folding it in half, and then in half again, fifty times in all.
Obviously there is a physical limit, but in theory, if I asked you to guess the thickness reached
by this imaginary folded sheet, I doubt that the majority would give a length of more than
twenty or thirty centimeters, or a few feet at best. In reality, the final thickness of the folded
sheet would be 2^50 (about 10^15) times the paper's thickness. And given that a paper sheet is
about one tenth of a millimeter thick, the final value would be roughly equivalent to the
distance between the Earth and the Sun!
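The folding arithmetic is easy to check for oneself; a minimal sketch, using the sheet thickness assumed in the text (~0.1 mm):

```python
# Thickness of a sheet of paper folded in half 50 times.
# Assumption (from the text): one sheet is ~0.1 mm thick.
thickness_mm = 0.1 * 2**50          # each fold doubles the thickness
thickness_km = thickness_mm / 1e6   # mm -> km

print(f"2^50 = {2**50:.3e}")                      # ~1.126e+15 times the original thickness
print(f"final thickness: {thickness_km:.3e} km")  # ~1.1e8 km, on the order of the
                                                  # Earth-Sun distance (~1.5e8 km)
```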
Fig. 1: Linear versus exponential growth. Fig. 2: The S-curve. Fig. 3: Exponential growth as the sum of S-curves.
This example helps us to understand how unfamiliar exponential trends are for the human
mind. In reality, all natural phenomena are exponential, but their exponential expansion is
inexorably masked by saturation, which limits its duration and effects: at some point, growth
begins to slow down, then it stops, then decay begins. Think of a pond gradually filled by
water lilies; the pond's surface is certainly covered in an exponential way, but this trend goes
unnoticed because growth is limited in time and space: once the whole pond has been filled,
water lilies stop growing and finally die, having exhausted all the available resources. This is
what normally happens in nature: exponential growth ceases due to resource depletion, or to
the rupture of a biological equilibrium or mechanism that made it possible. In time, this type of
growth follows an S-curve (Fig. 2). Contrary to nature, technology is immune to the
phenomenon of saturation (and will be for a long time ahead) because innovation processes
create, on an on-going basis, increasingly efficient technological substrates, and also
because in the world of technology failures can, or at least should, be repaired fairly easily. In
addition, each technology is the starting point for a new technology which pushes progress
even further. Each technology “learns” from the previous one, adding one S-curve upon the
other, in an overall growing trend which is in fact an exponential curve (Fig. 3).
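The stacking of S-curves can be sketched numerically: each technology follows a logistic curve that saturates, but successive curves start later and saturate at higher levels, so their sum keeps growing roughly exponentially. The parameters below are arbitrary illustrations, not data:

```python
import math

def logistic(t, midpoint, ceiling):
    """One S-curve: slow start, rapid growth, saturation at `ceiling`."""
    return ceiling / (1 + math.exp(-(t - midpoint)))

def stacked(t, n_curves=5):
    """Sum of successive S-curves; each starts later and reaches 10x higher."""
    return sum(logistic(t, midpoint=10 * k, ceiling=10**k) for k in range(n_curves))

# Each individual curve flattens out, but the total keeps growing
# by roughly a factor of 10 every 10 time units: an exponential envelope.
for t in range(0, 50, 10):
    print(t, stacked(t))
```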
Conservation and Innovation
After these brief considerations on the nature of exponential progress and on our difficulty in
understanding it, I would like to make some reflections on the social and economic
consequences of progress in human society. In order to keep innovating, we need new
technologies, new springboards from which to make another leap forward: innovation uses
technology as a new, mostly unknown, "weapon" to modify the social and economic structure
in which we live. We call this modification “progress”. One interesting thing is that if we
analyze progress in the long run, we can see that innovation processes, preparatory to
changes in the socio-economic structure, occur regardless of how much the existing structure
resists them. In other words, the process of technological innovation is somewhat unstoppable:
it does not depend on the will of human groups, and is not conditioned by local contexts,
whether of war, religious conflict, epidemics, famines or whatever. It is more of a collective
phenomenon which concerns all humanity, and which in the long run will bring great benefits
in terms of welfare. In turn, social changes occur because innovation allows the replacement
of the ruling class, and the advent of a new way of thinking and looking at the future. Let us
not forget, however, that introducing new visions of the world and of human life is never a risk-
free enterprise, as the Italian historian and philosopher Machiavelli well knew. As far back as
1513, he wrote to Lorenzo de' Medici:
"And it ought to be remembered that there is nothing more difficult to take in hand, more perilous to
conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things,
because the innovator has for enemies all those who have done well under the old conditions, and
lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the
opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily
believe in new things until they have had a long experience of them." [1]
Innovation also inevitably has a strong economic impact, because each new level that is
reached spreads its benefits to more and more people (Fig. 4). Examples include the
invention of printing, or the automobile after the advent of the Fordist factory. The fact is that
innovation allows us to do things that previously could not be done, or that no one could afford
to do.

[1] Niccolò Machiavelli, The Prince, 1513.
Fig. 4
We could say that innovation consists in doing more with much less. Lowering production
costs for a given level of performance is the basic paradigm of innovation. This does not
necessarily mean chasing the lowest price, but rather lowering the entry barrier to a
particular benefit. For a nation, being able to travel to the Moon at an affordable
cost means innovating. For all of us, having access to mobile telephony, which was initially
conceived for the military, also means innovating... and so on, up to the most recent innovations.
The essence of innovation
I would like to make some simple observations to get even closer to the essence of what
innovation really is. The nature of things depends, or rather depended until recently (before
the pervasiveness of computers), on four physical entities: matter, energy, space and time.
Innovation defines how these entities can be combined, assisted by human intelligence which
provides information on production and processes. However, until the advent of embedded
computers information remained outside of things. Now that computers have begun to enter
inside things, innovation has been accelerating at a higher and higher rate, because a fifth
entity has joined the other four, and it is information. Information has become an intrinsic
component of things, and thanks to it, the compression of matter, energy, space and time has
reached previously unthinkable levels. Just think of today's multimedia mobile phones: in
most cases, their performance is equal, if not superior, to that of the first supercomputers of the '70s.
For example, the Cray-1 supercomputer, put on the market in 1976 by Cray Research [2], was

[2] http://en.wikipedia.org/wiki/Cray-1
Fig. 4: World GDP per capita (USD) from year 0 to 2000, with milestones: Columbus lands in America, the Renaissance, the introduction of the steam engine, the introduction of electrical energy and oil fuel, the introduction of computers.
an incredibly fast computer for its time, designed specifically for scientific computing: it had a
theoretical performance of 160 MIPS (Million Instructions Per Second) and 8 MB of main
memory; it weighed 5.5 tons (including the freon refrigeration system), consumed 115 kW of
power, and cost $5.5 million, equivalent to about $29 million today. To get a
sense of the compression produced by technological innovation, consider that in 2010 a
device of about 150 grams had the same theoretical performance as the 5.5-ton computer of the
'70s: a compression factor of 35,000 times in weight, 50,000 times in power
consumption and 70,000 times in absolute cost. If instead we look at the
price per kg, we can see that things have not changed much: the Cray-1 supercomputer cost
about $1,000 per kg, roughly equivalent to $5,000 today, and a current iPhone 4 also
costs about $5,000 per kg, although it luckily weighs a mere 137 grams.
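The compression factors quoted above can be rechecked with a quick back-of-the-envelope calculation. The Cray-1 figures are from the text; the smartphone-side power and price values are back-derived from the quoted ratios, not data from the text:

```python
# Cray-1 (1976) versus a ~2010 smartphone of comparable theoretical performance.
cray_kg, phone_kg = 5500, 0.150        # 5.5 t vs ~150 g (both from the text)
cray_w, phone_w = 115_000, 2.3         # 115 kW vs ~2.3 W (implied by the 50,000x factor)
cray_usd, phone_usd = 29_000_000, 414  # inflation-adjusted price vs ~$414 (implied by 70,000x)

print(f"weight compression: {cray_kg / phone_kg:,.0f}x")    # ~36,667x (the text rounds to 35,000x)
print(f"power  compression: {cray_w / phone_w:,.0f}x")      # ~50,000x
print(f"cost   compression: {cray_usd / phone_usd:,.0f}x")  # ~70,048x
```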
Another striking example of the compression that has occurred: think of a telephone call from
Europe to America before the advent of satellites. At the time of transoceanic cables, 170,000
tons of copper were needed to connect Europe to America; now a quarter-ton satellite does it
much better and with much less energy.
Progress and dematerialization
For obvious reasons of cost containment, in order to extend the use of new products we must
necessarily use less material and less energy, take up less space, and make the product
available in less time. In so doing, the products not only reach a wider share of the population,
but also progressively become more sustainable. This trend toward sustainability is a side
benefit of technological progress, and has also been rapidly accelerating since objects have
begun to incorporate computers, thus gaining a minimal amount of intelligence: the new
information content makes up for the reduced amount of matter and energy used. This pattern
of dematerialization had already been observed in 1938 by the American inventor R.
Buckminster Fuller [3], who owing to his remarkable versatility and exceptional intuition had
been able to revolutionize traditional conceptual schemes and understand how renewable
energy, sustainable development and the survival of the human race went hand in hand. He not
only conceived research as an activity in the service of human welfare, but was also
interested in the individual and the world he lives in, adopting a multidisciplinary and
systemic approach. Buckminster Fuller coined the term "ephemeralization", postulating that in
nature "progress goes from material to abstract". Later, he reformulated this concept,
defining ephemeralization as "the principle of doing more with less and less weight, time and
energy for any given level of functional performance", that is to say, for any given S-curve.
This principle finds a perfect application in the current miniaturization of products, made
possible by new technologies.
To clarify how the compression that technology exerts on products has an impact on
sustainability, just look at the cost per kg of some products. This value gives an immediate
idea of how the GDP (Gross Domestic Product) of highly technological countries is
dematerializing, getting lighter and lighter every year. The table below illustrates how the cost
per kg of a product increases with the relative amount of information it
contains. In other words, we can have more value with less and less matter, as long as we
know how to put the right amount of information inside things.

[3] R. Buckminster Fuller, Nine Chains to the Moon: An Adventure Story of Thought, Lippincott, 1938.
Object                        Price per kg (€/kg)    Minimum purchasable quantity (kg)
Gold                          ~35,000                -
New-generation chip           ~15,000                ~0.040
Fashion glasses frame         ~10,000                ~0.02
F-35 fighter plane            ~6,000                 ~13,000
Smartphone                    ~5,000                 ~0.12
Airbus A380                   ~1,500                 ~270,000
Notebook                      ~1,000                 ~1
Supercar                      ~150                   ~1,200
M1 tank                       ~150                   ~60,000
Sedan car                     ~30                    ~1,500
Compact car                   ~15                    ~800
Stainless steel (AISI 304)    ~2                     -
The central role of information
Two concepts presented above deserve some further explanation. The first is the concept of
information. In a general sense, information is understood as an ordered sequence of symbols
used to store or transmit a message. Information is anything we did not know before; what we
already know is not information, and what we can deduce from previous data is not
information. Only what is new, although derived from past knowledge, has informative
content. If the sequence of symbols of a message has already been stored, another identical
sequence can be stored easily, since the only required information is that a second message
has been received, identical to the previous one.
The second concept I would like to clarify is the concept of intelligence, which can be defined
as the ability to compress or increase information. Going back to the previous example, an
intelligent system is capable of recognizing the two messages and compressing the second
one. Putting the two concepts together, a system is intelligent when it can activate processes
that change the information content of the system itself. In strictly physical terms, we could
say that a negative or positive variation of entropy takes place in the system. Given that in
physics entropy in some way describes the system's complexity, entropy is closely linked to
the information content of the system itself.
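The idea that a second, identical message carries almost no new information can be seen directly with a general-purpose compressor. This is a rough illustration, not a formal measure of information:

```python
import zlib

message = b"the quick brown fox jumps over the lazy dog " * 20
once = len(zlib.compress(message))             # compressed size of the message
twice = len(zlib.compress(message + message))  # compressed size of two identical copies

# The second, identical copy adds only a handful of bytes: the compressor
# just records "repeat the previous sequence", which is exactly the behavior
# attributed above to an intelligent system receiving a duplicate message.
print(once, twice)
```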
It follows from this definition that the necessary and sufficient condition for an intelligent
system is to have at least one sensor connected to an actuator. This sensor-actuator
machine, while it apparently has no need for computation or memory, is in fact able to change
the surrounding environment and therefore the information necessary to describe it [4]. In the
material world, intelligent action stems from the arrangement and combination of objects (and
from the resulting entropy change). If we take this consideration into the computer world, we
can say that intelligence stems from the way in which information changes through the
combined use of computation and memory. It is this kind of information, in the sense given to
it by information technology, that we are discussing here. Needless to say, in order to increase
[4] Note: actually, even in this elementary case the system is provided, by the very definition of feedback, with a memory cell
and a computation unit that manipulates a unit of information.

a system's intelligence we need more and more computing power and more and more
memory. The increase in intelligence, due to the improved performance of computers, will
enable new levels of miniaturization, which in turn will result in a greater and greater number
of intelligent objects, thus triggering a virtuous cycle of progress.
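The minimal sensor-actuator machine of the definition above can be sketched as a thermostat-like feedback loop; the single stored bit (the actuator state) is the "memory cell" the note refers to. Names and thresholds here are illustrative assumptions:

```python
def thermostat_step(temperature, heater_on):
    """One feedback step: the sensor reads a temperature, the actuator flips the heater.

    The single bit `heater_on` is the system's memory; the comparison is its
    computation. By acting, the system changes the environment and hence the
    information needed to describe it.
    """
    if temperature < 18.0:      # sensor below the setpoint -> actuate
        return True
    if temperature > 22.0:      # sensor above the setpoint -> release
        return False
    return heater_on            # inside the dead band: remember the last state

state = False
for reading in [15.0, 20.0, 25.0, 20.0]:
    state = thermostat_step(reading, state)
    print(reading, "heater on" if state else "heater off")
```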
The physical limitations of computation
The whole history of computer evolution can be summed up in the compression of the four
physical quantities (energy, matter, space, time) and in the expansion of a fifth non physical
quantity: information. But how far can we move forward with this miniaturization process?
According to calculations made by Eric Drexler [5], device miniaturization could, at least
theoretically, reach a level where a cube with a 1 cm side could have a computing capacity of
1,000 billion billion operations per second (1 followed by 21 zeros, i.e. 10^21). To get an idea of what this
means, just think that all the computers currently on the planet, from the smallest to the
largest, taken together do not reach a computing power of more than 10 billion billion
operations per second (1 followed by 19 zeros, i.e. 10^19). In other words, a single computer the size of
a sugar lump might contain, in terms of operations executed per second, 100 times all the
computers currently on the planet. But how much power would such a small computer use,
compared to the consumption of all the computers in use today? The estimated consumption
of all supercomputers, data servers, personal computers, notebooks, DVD and MP3 players,
game consoles and mobile phones today reaches 400 gigawatts [6]. According to the
calculations made by Drexler, the sugar-lump computer would consume only 1,000 watts
(versus the 400 billion watts of today's computers) for an equivalent performance. To get an
idea of the magnitudes involved, 1 watt roughly corresponds to the power needed to lift a cup
of coffee in one second, while 400 billion watts is almost twice the power consumption of California. Such
levels of miniaturization and consumption are still a long way ahead, but the nice thing is that
the laws of physics allow it, and all things that do not violate physical laws and are good for
humanity sooner or later become a reality.
To date, the most efficient computer is still (although not for long) a biological one: the human
brain. A human brain weighs 1.5 kg, occupies a space of about 1.5 dm^3, has a computing
power of 10 million billion operations per second and a mere 25-watt consumption. The
biggest computer in the world, recently installed in Japan, has a computing power of just over 8
million billion operations per second (note: this refers to 2011; the fastest supercomputer is now
installed in China, at approximately 34 petaflops). At the current rate of progress, in a dozen years a
computer with this performance will be found on supermarket shelves, at the size, weight
and price of today's notebooks. Around 2025, the human brain will therefore cease to be the
most efficient computing machine on the planet (Fig. 5).
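A rough way to see why "around 2025" is plausible is to count the performance doublings separating a 2011 notebook from the brain's estimated throughput. The notebook figure (~10^11 ops/s) and the one-doubling-per-year pace are our assumptions, not figures from the text:

```python
import math

brain_ops = 1e16       # ops/s attributed to the human brain in the text
notebook_ops = 1e11    # ~100 GFLOPS for a 2011 notebook (our assumption)

doublings = math.log2(brain_ops / notebook_ops)
print(f"{doublings:.1f} doublings needed")  # ~16.6
# At roughly one performance doubling per year (a common reading of the
# trend), notebook-class hardware reaches brain-scale throughput in the
# mid-to-late 2020s, broadly consistent with the article's estimate.
print(f"~{2011 + math.ceil(doublings)}")
```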
To fully understand the physical limits of computer performance, we have to forget the laws of
classical physics, as we have been doing up to now, and begin to think in terms of quantum
physics, which explains the world of the very small. Using quantum physics, Seth Lloyd has
proved that it is possible to reach computing capabilities unimaginable today. Only 1 kg of
matter would be enough to build a computer capable of executing 10^51 operations per second
with zero power consumption. Based on how we use the quantum properties of electrons and
other particles, we could theoretically equate the entire universe to a giant computer. Although
we are still far from being able to build real quantum computers, research laboratories are
[5] K. Eric Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation, Wiley Interscience, 1992.
[6] The Planetary Computer, www.wired.com/images/article/magazine/1607/st_infoporn_1607.pdf
already getting the first results, albeit at a very elementary level and with a lot of functional
limitations.
Fig. 5
There is still plenty of room to innovate: just follow the trends!
On the basis of the above considerations, we can affirm that the opportunities for innovation
are immense or, as Feynman [7] would say, there is still plenty of room at the bottom.
However, innovation is based on a fundamental raw material: ideas. Never, in the whole
history of humanity, have new ideas been so easily accessible as in recent years, albeit
scattered all over the planet. Thanks to the combination of information and ideas,
innovation is gaining momentum, growing faster than ever before. Innovating means being
able to select ideas that have the power to shape the future and to impact the social and
economic structure. Innovators must be able to imagine scenarios in which the future can be
reasonably developed.
In this regard, let us take a look at what happened in the last thirty years. We are going to
follow three trends, represented by three well-known laws. The first is Moore's Law, which
[7] Richard Feynman, "There's Plenty of Room at the Bottom", annual meeting of the American Physical Society, 29 December 1959.
Fig. 5: Exponential growth of computation (computations per second versus date, with reference levels for an insect brain, a rat brain, a human brain, and all human brains).
dates back to the early '60s and attempts to find a rule for the growth of the number of
transistors in a chip. This law states that the number of transistors doubles every 24 months
(as do processing speed and storage capacity) for the same cost. The practical consequence
of this law is summed up in the statement cheaper, smaller, faster. Over time, computers
become simultaneously cheaper, smaller and faster. This law explains the presence of many
products that we use today, but it does not explain phenomena such as the advent of the
Internet. For this, we have to look at Metcalfe's law, which dates back to the early '90s. This
law deals with network connections and their value, and states that the usefulness of a
network grows with the square of the number of connected computers. This happens
because each time a computer is added to the network, it consumes resources available in
the network but also makes new resources available to the network. It follows that the value
of a community grows according to a power law. The combination of Moore's law with Metcalfe's
law helps us explain the explosion of the Internet. Let us now introduce a third law, proposed
by Gilder at the beginning of the millennium and related to broadband growth. Gilder's law
states that the total bandwidth of communication systems triples every twelve months,
suggesting that the transmission speed of large amounts of data is increasing extremely fast.
To give an example, the replacement of copper cables with fiber optic cables has taken the
data transmission capacity almost to infinity. If we combine these three effects, we begin to
truly understand the evolution of the Internet, which is increasingly becoming a data stream
which everyone joins at will. It is the development of these three laws that has led to the
emergence of Cloud computing (also known as "the Cloud"). With a delay of about ten years
since they were conceived, the effects of the three laws have been felt respectively: in the
success of personal computers in the late '70s, in the explosion of the Internet in the late '90s
and in the early stirrings of Cloud computing in the last few years. With Cloud computing,
cyberspace has finally come of age, and the same is true for the new digital economy, where
value has shifted from atoms to bits. In the words of Negroponte [8], "Computers are not about
computing, but everyday life". Cyberspace is composed of networks of networks of
computer networks. These networks connect a multitude of intelligent devices that can
amplify our skills and perceptions, by acting as one living organism.
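The three laws can be sketched as toy growth functions; base values and units here are arbitrary illustrations, not data from the text:

```python
def moore(years, base=1.0):
    """Moore: transistor count / performance doubles every 24 months."""
    return base * 2 ** (years / 2)

def metcalfe(n_nodes):
    """Metcalfe: network value grows with the square of connected nodes."""
    return n_nodes ** 2

def gilder(years, base=1.0):
    """Gilder: total bandwidth triples every 12 months."""
    return base * 3 ** years

# Compounding the three: cheaper/faster computers (Moore) join the network,
# whose value grows quadratically (Metcalfe), over ever-fatter pipes (Gilder).
for years in (0, 5, 10):
    nodes = int(moore(years, base=1000))   # hypothetical connected-device count
    print(years, moore(years), metcalfe(nodes), gilder(years))
```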
Somewhere the future is already written
The on-going evolution of the network (the Internet) is plain for all to see and affects us all.
Despite being a very recent innovation, what we now call the Internet had already been
theorized from an anthropological point of view. As far back as 1940, we can find hints of the
network in the writings of the Jesuit philosopher Teilhard de Chardin [9], according to whom: "No
one can deny that a network (a world network) of economic and psychic affiliations is being
woven at ever increasing speed which envelops and constantly penetrates more deeply within
each one of us. With every day that passes it becomes a little more impossible for us to act
or think otherwise than collectively”. As if to say that the Internet is inescapable: it is one of
the steps of evolution and therefore of human development.
Not only is each technology layer more efficient and smarter than the previous one, it also
facilitates the interconnection of individuals by allowing the development of increasingly
extended communication networks. If we look at the great technological innovations of
mankind, from language, writing and so on, all the way to the Internet, what we are really
seeing is none other than the natural trend of knowledge globalization. This seems to be an
[8] Nicholas Negroponte, Being Digital, Vintage Publishing, 1995.
[9] Pierre Teilhard de Chardin, The Human Phenomenon, Sussex Academic Press, 1999.
unavoidable trend, not just an accelerative phenomenon. This route starts a long way back,
and has been very lucidly described by the same Teilhard de Chardin, who saw the path of
evolution as consisting of three stages: geological evolution, biological evolution and finally
memetic and technological evolution [10]. The third stage would lead to the creation of the
noosphere, the planetary thinking network, namely an interconnected system of
consciousness and information, a global network of self-awareness, instant feedback and
global communication. Behind this vision, it is easy for us to recognize the Internet,
broadband, Web 2.0 and the Cloud.
Another precursor, in this case of the technological aspects that lie behind the world wide
web, such as hypertexts and personal computers, is the American scientist and technologist
Vannevar Bush [11]. Back in the '30s, Bush realized that scientific knowledge was expanding
much faster than the human capacity to understand or control it. To overcome this problem,
and being well aware that the technology of his time was inadequate, he conceptualized the
"memex" (short for memory expansion), which can be defined as the first personal computer
making use of what we now call hypertexts. The memex was in fact meant to allow the
creation of stable connections between different documents, simply by selecting them and
pressing a key. While defining this imaginary machine, Bush also made a selection of the
technologies of the time and made hypotheses on their future development, also speculating
on the consequences that such an innovation could have on the social and economic
structure. However, while Teilhard de Chardin sensed a general value for the community,
Bush could only see benefits at the individual level.
As William Gibson said: "The future is already here, it's just not very evenly distributed”. The
future is already present somewhere, in the writings, ideas, networks where someone is
talking about the future and creating it. Building one's future means looking around for
sources and ideas that are already here but not yet evenly distributed, being curious, and
activating the right networks. With this attitude, as Alan Kay writes, we remain proactive in
the face of the future: today's forecasters predict the future because they are inventing it.
Progress and technology
Our discussion on technology has been well summarized by John Smart [12] in three interesting
laws. The first says that technology "learns" a million times faster than humans. The second
says that humans are selective catalysts of technologies, and therefore the process of
technological innovation is beyond human control. The third states that technology moves
through three stages: in the first stage it seems de-humanizing, in the second it is indifferent
to humanity, in the third technology becomes (hopefully) network-humanizing.
To confirm the veracity of these laws, let us try to analyze the effects of the development of ICT
(Information and Communication Technology) and pervasive computing: instead of
de-humanizing the world, these technologies are actually humanizing the network. Technology
therefore grows exponentially, and we have also learned, based on the laws of quantum
mechanics, that it will not saturate any time soon. These two effects suggest the consideration
that everything is accelerating, and that this acceleration involves the compression of time
intervals between one development stage and the next. This observation led Ray Kurzweil [13]
to theorize the "law of accelerating returns", which could be summarized by saying that in the
[10] Note: a new way of thinking is emerging from this massive collaboration and competition. "Memes", or units of cultural
information (as opposed to genes, or units of biological information), are gaining more and more importance.
[11] Vannevar Bush, "As We May Think", The Atlantic Monthly, July 1945.
[12] John Smart, Acceleration Studies Foundation, http://www.accelerating.org/
[13] Ray Kurzweil, The Singularity is Near, http://www.singularity.com/
(Note to Fig. 6: the chart is normalized to the area of the first Intel processor, the 4004, which held 2,300 transistors on a 12 mm^2 surface. With current production technologies the same area can hold more than 60 million transistors.)
21st century we will not witness 100 years of progress, but most likely 20,000 years of
progress. If we adopt the exponential view mentioned above, and we apply it to the current
growth trend, we should be less surprised by the thought that the current century will witness
200 times more progress than the last. Just think of Moore's Law: today more than 60 million
transistors can be packed in a 12 mm^2 chip [14] (note: "today" means 2011); in the '70s, the
same chip could contain only 2,300 (Fig. 6).
Fig. 6
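From the two transistor counts quoted above we can back out the implied doubling time; a quick sketch using only figures from the text:

```python
import math

# Transistors on ~12 mm^2 of silicon, figures as quoted in the text.
t_1971 = 2_300         # Intel 4004
t_2011 = 60_000_000    # "today" (2011)

doublings = math.log2(t_2011 / t_1971)
years_per_doubling = (2011 - 1971) / doublings

print(f"{doublings:.1f} doublings")                          # ~14.7
print(f"one doubling every {years_per_doubling:.1f} years")  # ~2.7 years, close to
                                                             # Moore's quoted 24 months
```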
Looking into the future
In the near future, the computer as we know it will disappear. Instead, a myriad of
micro-computers will be hidden in all the things that surround us, constantly connected to the
communication network. This will improve our perceptions (so-called "augmented" reality) to
such a level that we will be able to capture many more inputs than an average non-connected
person, and we will do so in a ubiquitous way, that is, without the constraint of being
physically present where the phenomenon we are interested in manifests itself. Thanks to the
pervasiveness of new computers, interconnected both at the local and the global level, reality
will not simply be visualized, nor virtualized: it will be augmented. The GPS navigation system
is a clear example of the application of these technologies: when we drive from one city to
another, what we see from our car is the reality surrounding us. But if we plan our route with
the navigator, we increase the perceived reality. Using sensors that localize our position, the
navigator will tell us how much traffic we will find on the way, and will even warn us of
obstacles well in advance, suggesting alternative routes, in short, giving us much more
information than our five senses alone would be able to perceive. Pervasive computing affects
all computing devices distributed in the physical world: wearable computers (Fig. 7) and
sensors inserted in everyday objects and in the environment around us. In this vision,
pervasive computing involves both peripheral devices and centralized high-performance
[14] Note: this refers to the Intel 10-core Xeon Westmere-EX processor, produced with a 32 nm process, which contains
2,600,000,000 transistors on a 512 mm^2 surface.
Fig. 6: Moore's Law. Number of transistors per 12 mm^2 of silicon surface (Intel 4004: 2,300 transistors).
computers (HPC), including communication infrastructures like WiFi networks and 3G cellular
networks, which support ubiquitous applications through the new Cloud computing
technology [15]. In the near future, the Cloud computing infrastructure, which I prefer to call
a "computational exoskeleton", will be extended to many human activities and will be an ideal
starting point for the creation of a future class of applications and services, which will be the
result of a new collective and collaborative approach, and whose drivers will be open source
software and crowdsourcing [16].
Fig. 7 - The ZyPad wearable computer
The pervasiveness of computers marks the historic transition from a world where the
computer was at the center of people's lives to a world where computers will connect
everything, but will be so well integrated into reality as to be imperceptible. In this context, a
strange paradox will arise: people will be surrounded by computers, but the focus will be on
human beings. This new world will bring with it the undeniable benefit of the so-called "virtual
ubiquity". Ubiquity is the ability to be in several places at the same time; technology now
gives us its virtual counterpart. The great changes that have affected computers in the last
twenty years can be summarized as follows: from one computer for many people (the
mainframe), to one computer for one person (the PC), to the state of the art, with many
interconnected
computers for each person. Yesterday's computers filled entire rooms because of their
dimensions; tomorrow's computers will fill entire rooms because of their number! Thanks to
the Cloud technology, the computational exoskeleton has already taken shape to create a
whole new set of services and applications. In the words of Ian Foster17, it “coordinates
resources that are not subject to centralized control, uses standard, open, general purpose
protocols and interfaces, and delivers nontrivial qualities of service”.
We will no longer use computers as single devices separated from each other: the new
interconnected elements will give us the tools to amplify external reality and to increase our
ubiquity in the web and throughout the computational grid. Progress will be so remarkable
that we will no longer see computers as machines, but as an integral part of our world, as an
extension of ourselves. That's what we mean when we talk about the disappearance of
computers, or the invisibility of computers: they will become an integral part of our
environment, to such an extent that they will completely escape our attention.

15. Cloud computing represents a computation and memory resource available on-demand and
pay-per-use, just like electricity. In other words, thanks to cloud computing technology,
computation and storage have become an actual utility.
16. The term crowdsourcing is a neologism for a business model in which an entity outsources
the development of a project/service/product to a virtual community of non-organized people
by making use of web tools and/or internet portals.
17. Ian Foster, Carl Kesselman, The Grid 2: Blueprint for a New Computing Infrastructure,
Morgan Kaufmann Publishers Inc.
We already know that computers are present within telephones, TVs, DVDs, DVRs,
microwave ovens, refrigerators, cash registers, motorcycles, cars and many other devices
and equipment of everyday use. But this ubiquitous presence is not enough: it is not enough
to simply make devices smarter, we must also interconnect them to the computational
exoskeleton and equip them with the ability to "feel" the world. When this gap is closed, the
exoskeleton will finally work as an extension of our five senses. We will evolve from a body
with acceptable processing capabilities and too few sensors to a super-body full of sensors
and processing capabilities, which will allow us to better understand the world around us.
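The idea of devices that "feel" the world and feed the computational exoskeleton can be sketched in a few lines of code. This is a minimal illustration, not any real product's API: the endpoint URL and field names are invented, and the sensor reading is simulated.

```python
import json
import random
import time

def read_temperature():
    # Simulated reading; a real node would query actual sensor hardware.
    return round(20.0 + random.uniform(-2.0, 2.0), 1)

def publish(endpoint, payload):
    # Stand-in for a real publish call (e.g. an HTTP POST or an MQTT message).
    message = json.dumps(payload)
    print(f"-> {endpoint}: {message}")
    return message

reading = {
    "sensor": "living-room-temperature",
    "timestamp": time.time(),
    "value": read_temperature(),
}
publish("https://exoskeleton.example/ingest", reading)
```

In a real deployment the node would, of course, authenticate, buffer readings offline and use a standard transport; the point here is only the pattern: local sensing, then publication to the shared infrastructure.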
There have been precursors even for this scenario, and Mark Weiser18 was one of them. He
worked in the late '80s and early '90s at Xerox PARC (Palo Alto Research Centre), and he
envisioned a scenario that can be summarized into three statements, taken from his writings:
1. Computers are bound to disappear
2. Computation will become ubiquitous and pervasive
3. Technologies should be calming (Weiser's "calm technology").
When the exoskeleton is complete, we will be able to explore different types of interactions: of
people with the amplified reality, of people with people, of people with machines, of machines
with machines, of machines with people and of machines with reality. With these new images
in mind, we can create a world totally different from the one we know today. Just imagine how
helpful digital technologies will be for basic care: home monitoring for people with medical
conditions (Fig. 8), lifting of heavy weights without the slightest fatigue, driving of potentially
dangerous vehicles without humans on board. Specialized and invisible computers will
become an integral part of the natural human environment, so that we can "compute without
computers".
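Home monitoring of the kind evoked here ultimately reduces to sensors plus simple decision logic. The sketch below is an illustrative toy with invented thresholds, not medical advice: a heart-rate reading outside a safe band triggers an alert that could be forwarded to a caregiver.

```python
def check_heart_rate(bpm, low=50, high=110):
    """Classify a heart-rate reading against a (hypothetical) safe band."""
    if bpm < low:
        return "ALERT: heart rate too low"
    if bpm > high:
        return "ALERT: heart rate too high"
    return "OK"

# A monitoring loop would call this for every reading and notify a caregiver:
for reading in (72, 45, 130):
    print(reading, check_heart_rate(reading))
```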
18. Mark Weiser, http://sandbox.parc.com/weiser/
Fig. 8. Pervasive Computing scenario for health care (wearable devices, embedded PC
devices, home appliances, smart sensors & actuators, high-performance computers).
Conclusion: collective intelligence and the new paradigms
We are entering a new world of smart collaborative objects. Things are intelligent because
they incorporate small, cheap and lightweight processors, and they are collaborative thanks
to wireless communications, which enable the creation of spontaneous networks. Compared
to traditional objects, smart things have completely different characteristics: they have a
memory, they can remember specific events, they adopt context sensitive behaviors, they are
aware of the location or situation, they are reactive, they communicate with their environment
and they are connected with other smart objects and with all the other devices in the Cloud.
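The characteristics listed above (memory of events, context awareness, reactivity, communication) can be made concrete with a small sketch. The class below is a hypothetical illustration of the pattern, not a real framework:

```python
import time

class SmartObject:
    """A 'smart thing': remembers events and reacts according to context."""

    def __init__(self, name):
        self.name = name
        self.memory = []       # remembered events: (timestamp, event, location)
        self.location = None   # current context

    def sense(self, event, location):
        # Awareness of location/situation: every event updates the context.
        self.location = location
        self.memory.append((time.time(), event, location))

    def react(self):
        # Context-sensitive behavior: the reaction depends on what was last sensed.
        if not self.memory:
            return f"{self.name}: nothing sensed yet"
        _, event, location = self.memory[-1]
        return f"{self.name}: reacting to '{event}' in {location}"

lamp = SmartObject("hall lamp")
lamp.sense("motion detected", "hallway")
print(lamp.react())
```

A real smart object would also publish its events to neighboring devices and to the Cloud, which is what turns isolated intelligence into collaboration.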
In the face of these changes, we might be led to believe that direct contact between people
will lose value. With the spread of virtual reality, will "digital" relationships replace "real" ones?
Of course we cannot predict the future, but we can refer to history. When Gutenberg invented
printing, some of his contemporaries prophesied: "It is abominable, people will lock
themselves up with their books and give up dialogue with others". Instead, people kept talking
to each other after the invention of the printed page! The same goes for the cinema, which
did not kill the theater, just as TV killed neither movies nor books. Furthermore, this question
might not even make sense, if virtual reality were indistinguishable from the material world.
It is also worth reflecting on the positive impact that the dematerialization of the
communication infrastructure (and the consequent computer virtualization) will have on
developing countries. In economically developed countries, dematerialization has fostered
innovations that enhance and multiply connections between individuals; without such
innovations, developing countries would face an unbridgeable gap in quality of life. A digital
economy based on immaterial assets is a way to
give a creative contribution to the economy and trade of areas such as Africa, Latin America,
Southeast Asia and India, without necessarily having to make heavy investments in factories
and machines, whose costs are becoming unsustainable (not only economically but also
environmentally). In fact, technological progress is helping to reduce the gap between
developed and developing countries, and just as Teilhard de Chardin had imagined, the world
will be more cohesive and more democratic.
From the emerging phenomenon of a global thinking network, or collective intelligence, we
can also derive some rules that apply to human activities and businesses. We have come to a
new form of globalization: the globalization of collaboration between people. In the words of
Friedman19, in a global world rule number one is “Don’t try to build walls”, and if we apply this
rule to human activities then the key word is collaboration. Even in economic activities, more
and more business will be done in collaboration rather than in competition, since it is
increasingly difficult for a single company to master on its own the complexity of the value
creation chain. Yet another paradigm stems from this reflection: knowledge globalization leads
to the superiority of "know-who" and "know-where" over the more traditional "know-how". A
paradigm which can be easily confirmed by a trivial, but enlightening, example: is it more
valuable to know where oil is or how to extract it?
With the twenty-first century, we have entered a stage that cannot be compared to any other
period of history from the point of view of the opportunities for technological development and
progress. A new exciting season is coming, a season in which we can have a new external
"body", the computational exoskeleton, which will allow us to have a more complete view of
the world and the universe. But we need a paradigm shift in order to understand and fully
appreciate the opportunities that this smart infrastructure can offer, and we will face many
challenges related to issues such as security, privacy and reliability. This is the century of
ongoing knowledge and innovation, and we are on the threshold of a great mutation in
society. But, as Alessandro Baricco20 writes, mutation does not mean being swept away. We
must simply be able to decide which part of the old world we wish to take along with us. We
will not take into the future what we have saved, nor what we have protected and hidden, but
what we have allowed to change, so that it can come back in another form.

19. Thomas Friedman, The World is Flat, Farrar, Straus and Giroux, 2005
20. Alessandro Baricco, The Barbarians, Rizzoli Intl Pubns, 2014