Shell Technology Futures 2004 - This is the summary of two sets of week-long discussions that took place in Amsterdam and Houston, each of which included around 20 experts from across multiple disciplines, all looking out 20 years at how technology may, or may not, influence society. This was the first run of the Technology Futures programme and was followed in 2007 by similar discussions in Bangalore and London.
This first 2004 programme took a very wide view and covered everything from mesh networks, natural language processing and nanotechnology to adaptive systems, automated sensing, tissue scaffolding and 3D printing.
The technologies which have had the most profound effects
on human life are usually simple. A good example of a
simple technology with profound historical consequences is
hay. Nobody knows who invented hay, the idea of cutting
grass in the autumn and storing it in large enough
quantities to keep horses and cows alive through the winter.
All we know is that the technology of hay was unknown to
the Roman Empire but was known to every village of
medieval Europe. Like many other crucially important
technologies, hay emerged anonymously during the so-
called Dark Ages. According to the Hay Theory of History,
the invention of hay was the decisive event which moved
the center of gravity of urban civilization from the
Mediterranean basin to Northern and Western Europe. The
Roman Empire did not need hay because in a Mediterranean
climate the grass grows well enough in winter for animals
to graze. North of the Alps, great cities dependent on horses
and oxen for motive power could not exist without hay. So
it was hay that allowed populations to grow and
civilizations to flourish among the forests of Northern
Europe. Hay moved the greatness of Rome to Paris and
London, and later to Berlin and Moscow and New York.
Freeman Dyson, Infinite in All Directions,
Harper and Row, New York, 1988
Technology Futures 2004
Background and acknowledgements
Scenarios and forecasting
Technology forecasting
Continuous interaction and connectedness
Small and distributed
Sensing and making sense
Cultivating chemistry
Technology development maps
The future of business
Key challenges
Technology impacts: summary
Critical technology development pathways
This report draws on work undertaken by Shell
during 2003 and 2004.
This programme was led by Shell’s Group GameChanger
team with the support of the Scenario Planning team in
PXG, Innovaro Ltd and the Eastbury Partnership.
The process relied heavily on the contributions of leading
sources of technology insight – CEO, CTO, R&D Director,
Head of Innovation, Corporate Demographer, Chief Scientist
etc – from a wide range of companies and organisations:
■ a global civil engineering firm
■ Europe’s leading media organisation
■ a leading US pharmaceutical firm
■ one of the world’s leading aircraft firms
■ a global telecommunication company
■ a major global food and grain supplier
■ a sensing technology start-up
■ a major chemical company
■ a major European internet bank
■ a leading multinational automotive company
■ the world’s leading IT company
■ the leading microprocessor supplier
■ a major US based food manufacturer
■ one of the world’s largest software firms
■ a major US food ingredients company
■ a key telecommunications manufacturer
■ a major European food and drink supplier
■ a leading Nordic telecommunications firm
■ a top 3 defence technology firm
■ a leading household products company
■ the world’s leading pharmaceutical firm
■ a top 3 automotive manufacturer
■ a global business newspaper
■ a leading technology publication
■ a top UK university
■ the leading centre of new technology
■ a major US research institute
■ a top 5 UK research university
■ a leading German research institute
■ a leading business school
■ a pre-eminent US research university
All of the contributors from outside Shell gave their time
freely and as individuals, and on an unattributable basis.
Without their contributions, the exercise would not have
been possible. However, none of the participants shares any
responsibility for the opinions and conclusions reported in
this document.
Background and acknowledgements
The world of technology is a world that is moving fast. It is
an opaque world, due to an overload of information coming
at us. It is a world that seems to grow endlessly in all
directions. It is a world that creates wave after wave of new
hype. In essence, it appears to be a huge, misty swamp
through which a network of trails exists, but only for those
who make an effort to find them. Sometimes, trails join up
and become a path, sometimes several paths join up and
become a motorway: a disruption occurs; a new technology
area is born.
At the end of 2003, GameChanger started on a long
journey to identify these technology pathways. The
intention was to identify the onset of possible new
technology waves, identify possible disruptions and indicate
possible threats or opportunities for Shell at a very early
stage. In short, develop an Early (technology) Warning
System. This Early Warning System will have several
benefits. It will warn Senior Management of possible
disruptions that may offer a threat or opportunity for Shell,
and it will inform technology managers and innovators of
trends to come and areas to focus on.
More than a hundred upcoming technologies were
identified; more than 30 technology thought-leaders from
academia and industry were interviewed; and three major
workshops were held, in Amsterdam, Sheffield and
Houston, each bringing together a broad range of
academics, technical experts, businesspeople, commentators
and thinkers. This resulted in a wealth of information that
has been summarised in a set of possible technology
futures. Possible means that they are technically possible.
To turn the possible into probable will depend on their
acceptance by society.
This booklet is to my knowledge the first attempt to create
long-term technology futures, at least in Shell. They are
designed to challenge our thinking so that we can make
better business decisions today. They can help us focus our
Research and give direction to our Innovation efforts and
provide insights into the technology world outside Shell. It is
in that spirit that we offer these to a broader audience than
the GameChanger community, as we know that by
engaging with the views of others we improve our own
understanding. We hope you find these futures thought-
provoking and look forward to hearing your views.
We live in an age where technological change is both rapid and wide-ranging. It is often difficult to understand individual developments and form considered judgements on their implications. We value new gadgets, products and processes which make life better or more fun. At the same time, we fear technologies such as genetic modification or nuclear power, and have unfocused anxieties about privacy or new diseases. The world seems dangerous: will technology make it safer? Or more dangerous still?

Our world exhibits massive disparities of health, wealth and happiness. Millions of people in the so-called developed world have greater freedom, opportunity and affluence than ever; and suffer more stress, obesity, civic dysfunction and alienation. In the less-developed world – and in many countries where Shell operates – millions starve, or lack the most rudimentary amenities such as clean water. Will technology help eliminate these disparities and improve the lives of the disadvantaged wherever they are? Or will it deepen inequality, between the West and the rest, and between the privileged and the excluded in the developed world?

Shell is one of the largest industrial groups in the world. Our products and services touch the lives of millions of people in every continent and we are directly responsible for tens of thousands of employees and their families who depend on us.

Shell is also a high-technology company. We employ thousands of scientists, engineers and technologists. Our core operations – exploration for oil and gas resources, extraction, processing, refining, production – as well as our other activities in renewable energy or hydrogen fuels depend on the application of leading edge technology.

No business can afford to take its operating environment for granted. In Shell we have a responsibility to look ahead, to try to foresee and anticipate the context in which we shall be doing business in the decades to come. We need to understand how the needs and desires of our customers, shareholders and other stakeholders will change over time; how the way businesses function may change; how changes in social attitudes will impact on our business. Crucially, we need to explore how technological change will both drive and be directed by broader social and economic trends around the globe, and what that will mean for how we do business – and for the business which we do – in 2020 or 2030 and beyond.

This report reflects a process of engagement with such issues undertaken during 2003 and 2004. This was a conscious attempt to review cutting edge developments in technology across a range of fields, from biotechnology to artificial intelligence to communications technology. How were these technologies likely to develop? Were there any core themes which could be discerned? What would be their impacts? By exploring the likely pathways of technology development over time, was it possible to identify critical nodes or points of inflexion? What were the potential implications and how should we respond?

At one level, the stimulus for this exercise was corporate self-interest. Was it possible to identify critical threats or opportunities arising from future developments in technology which should be reflected in Shell’s long-term planning? Were there any key technologies where we should be directing R&D now to ensure competitive advantage in the future? At another level, though, the impulse was more general and disinterested. What are likely to be the most significant medium-term impacts of technological developments on all of us as citizens, consumers and individuals?
This report is a response to these issues of technology,
business and society. It does not aspire to be definitive or
comprehensive. But it does attempt to map some pathways
to the future which are more likely than others, and explore
their possible implications.
One of the dangers of such an enterprise is that of reading
the future through the perspective and preoccupations of
the present; of reflecting current concerns and aspirations –
on global warming, say, or terrorism or AIDS – in
projections of technological development. A sense of
historical perspective can go some way towards guarding
against this, but it is impossible to avoid completely. This
report, then, should be read with an awareness of its
contemporary context – the first half of 2004 – and in the
understanding that it is a snapshot, intended primarily as a
stimulus to further discussion rather than a definitive set of
predictions of the future.
“Nigerian mobile phone users have been anxiously checking who is calling them before
answering them in recent days. A rumour has spread rapidly in the commercial capital,
Lagos, that if one answers calls from certain ‘killer numbers’ then one will die
immediately. Experts and mobile phone operators have been reassuring the public via
the media that death cannot result from receiving a call.”
BBC News 19 July 2004
“Any sufficiently advanced technology
is indistinguishable from magic.”
Arthur C Clarke
Scenarios and Forecasting

Shell has a deserved reputation for its scenario planning process, which has been informing the Group’s business strategy for more than 30 years. The origins of scenario planning are generally traced back to US military planning after the Second World War. Subsequently, pioneering ‘futurologists’ such as Herman Kahn developed the technique as a tool for businesses to forecast the future and reduce uncertainty. Shell’s early success in using scenario planning to forecast the OPEC oil price crisis in the early 1970s is now a classic business school case study.

Since then, Shell has published global scenarios every three years, and applied them to help decide policy in a wide range of situations. Shell has also used scenario planning in work with the World Business Council for Sustainable Development and the Inter-Governmental Panel on Climate Change.

These global scenarios are intended to provide a broad overview of, typically, two contrasting narratives of how the future may develop. They explore the forces driving change in the wider business environment, helping to challenge current perceptions of how the world operates and encouraging the Group to maintain flexibility in anticipating change. They provide a general framework for discussions of business development, and stimulate debate, but are not designed to determine medium-term planning.

In the context of the current exercise, a metaphor has been developed in which scenarios have been likened to meteorological jet-streams: powerful, gross and high-level forces which drive global weather systems. The aim of this study is to map the technological forces operating at a lower, finer level of detail. To extend the metaphor, it is to create technology/innovation weather maps, and thereby to link the global forces encapsulated in the scenarios with an analysis of likely technological developments to forecast their potential impacts on people, society and business.

Although in practice, inevitably, the process has thrown up surprises, dead-ends and the need to revisit earlier conclusions, the work of the last year or so has followed a consistent overall sequence:
■ the first task was to identify 120+ key technologies with potential for major impacts, and prepare initial overviews and assessments based on desk research
■ the next step was to identify a range of authorities in the relevant fields – academics, businesspeople, commentators – and carry out in-depth individual interviews with each of them
■ these experts were then brought together in a series of workshops, each deliberately addressing a range of technologies and potential applications, and attempting to reach a consensus on potential key themes and impacts
■ the results were formalised into likely technology pathways
■ it became apparent that there were critical enabling technologies which sat at the nodes where a number of pathways intersect: the identification of these technologies, and the exploration of the social and economic forces which may drive – or block – their acceptance, are a key output of the whole process

“Shell did not buy oil fields when the price was $30/barrel, they bought when it was $15. Scenarios gave them a huge long-term advantage.”

“Although futurologists make their living in claiming to predict coming events [this] is at best an exercise in scientific wild-ass guessing. Unless taken to heart and acted upon, most such attempts are harmless, and may even offer some minor insights. But the future is and will remain uncertain.”
US Col Harry G Summers

“People have an innate ability to build scenarios, and to foresee the future.”
The exercise has yielded important new insights into the
likely impacts and timing of technological developments on
business and society, insights which complement Shell’s
scenario planning and ground it more directly in business
strategy and future R&D and investment decisions. In so
doing, both the process and the output break new ground
in the field of technology forecasting.
Technology forecasting

Many scientists working at the forefront of technology get
carried away by their own hype, predicting fundamental
changes that in the event will not happen. However, it does
seem that technology is enabling the acceleration of
innovation. Technology is creating a virtuous circle – more
technology enables faster innovation, which creates more
technology, which enables faster innovation etc. In the
1980s, computer games were only just beginning to appear.
Now they are a multi-billion dollar global business that is
driving large swathes of innovation, especially in computer
graphics and the human-machine interface. The user
interface in computer games makes the interface with most
corporate web sites look clunky. But there will be spillover
and the games business will take other applications of
computing along with it.
One underlying conceptual problem in technology
forecasting is that while technology is a well-defined term
(the application of science and engineering knowledge to
develop tools, materials, techniques, and products) a
technology is not. The key outputs of technology are tools,
materials etc which perform various functions and meet
various needs. Every time a new piece of science or
knowledge is brought to bear on creating a tool or product
to solve a problem, a new technology emerges. So ‘a
technology’ may refer to (the manufacture of) a silicon chip,
its combination with radio technology in a mobile phone,
the combination of mobile phones and internet technology
to create 3G mobile phone technology, and so on.
This project has therefore adopted a necessarily pragmatic
approach to the identification and definition of technologies,
and has avoided spurious differentiation between technologies
and applications. The focus has been on forecasting the likely
applications and impacts of technological developments,
understood in a commonsense manner. By concentrating on
development pathways, uncovering which developments will
be necessary before others can succeed, and identifying
critical nodes within that framework, a more robust and
insightful set of projections can be made.
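The pathway-centred approach described above can be sketched in code. The example below is purely illustrative, not a tool used in the programme: it models a handful of hypothetical technology dependencies (the names are invented for the sketch, loosely echoing themes in this report) as a graph, and scores each technology by how many later developments transitively depend on it – one simple way of surfacing candidate ‘critical nodes’.

```python
# Illustrative sketch: technology development pathways as a dependency graph.
# Edges run from a prerequisite to the technologies that build on it.
# All technology names here are hypothetical examples, not report findings.
DEPENDS_ON = {
    "3G networks": ["always-on devices"],
    "battery/power management": ["always-on devices", "smart sensors"],
    "data storage": ["data mining", "personalisation"],
    "data mining": ["personalisation"],
    "smart sensors": ["personalisation"],
    "always-on devices": ["personalisation"],
}

def downstream(tech, graph, seen=None):
    """Return every technology that transitively depends on `tech`."""
    seen = set() if seen is None else seen
    for dep in graph.get(tech, []):
        if dep not in seen:
            seen.add(dep)
            downstream(dep, graph, seen)
    return seen

# A 'critical node' in this toy model is simply a technology with many
# transitive dependants: blocking it blocks everything downstream.
scores = {t: len(downstream(t, DEPENDS_ON)) for t in DEPENDS_ON}
for tech, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{tech}: {score} dependent developments")
```

In this toy graph, battery and power management sits upstream of the most pathways, which matches the report’s later observation that better batteries will be crucial to the connected world.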
All new technology is potentially disruptive of existing
taxonomies. Since the essence of technological development
is the combination of new and existing technologies and/or
their application to different fields, the field is neither
linear nor coherent. In presenting the results of this project,
we have found it helpful to explore five key themes which
emerged as it developed:
■ continuous interaction and connectedness
■ small and distributed
■ sensing and making sense
■ cultivating chemistry
We also discuss potential impacts in six areas:
■ the future of business
■ key challenges
Given the essentially integrated and interdependent pattern
of technological development under review, these discussions
are less discrete accounts of separate issues and more
complementary perspectives on a broad process of change.
Finally, we propose a number of important technology
development pathways, and identify critical points on
which they depend.
It is not of course literally possible to forecast the future. If
it were, we should in fact not know what to do – and there
would be no point in doing it. Everybody would doubtless
become an instant billionaire, which would be entirely
worthless; our notion of free will would be destroyed; life
would become unsustainable.
Despite this, we have to plan for the future, anticipate
likely trends, and take steps to promote valuable
developments and avoid or ameliorate undesirable
outcomes. For governments and policy makers, technology
forecasting helps in social and economic planning. For
businesses, it can inform strategic product planning, R&D
and capital investment. Effective forecasting of
technological disruptions or discontinuities can reduce risk
and reduce competitive threats.
Since technology has such a profound and pervasive impact
on our lives, it is no surprise that forecasting the way it will
develop is a massive industry: a Google search for
“+technology +forecast” yields 2,990,000 hits. But the
value of much of this effort is debatable: a lot is froth and
speculation; some is more akin to science fiction; and yet
more is corporate hype or PR aimed at attracting investment.
Even responsible and academic work in this field is of
variable value. A recent study of Kahn and Wiener’s well-
known “One hundred technical innovations very likely in the last
third of the twentieth century”, published in 1967, concludes
that fewer than 50% were judged accurate and timely.
However, deeper analysis reveals wide variation between
forecasting accuracy in different fields. In particular,
forecasts in the field of communications and computers
were judged to be 84% accurate. The primary reason is
that technological trends in these areas – such as Moore’s
Law for the exponential growth in the number of
transistors per integrated circuit, or trends in
semiconductor density and performance/cost – were already
well-established in 1967. The most well-founded forecasts
are those based on growth trends that will be sustained for
long periods or where the growth in enabling technologies
will have a spin-off effect:
“With careful study and analysis it is possible to find
strong under-pinning trends for many of our technology
forecasts” Richard E Albright, 2002
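Albright’s point about under-pinning trends can be made concrete with a small calculation. The sketch below is an illustration added here, not part of the original study: it infers the doubling time implied by two published Intel transistor counts (the 4004 of 1971, roughly 2,300 transistors; the Pentium of 1993, roughly 3.1 million) and extrapolates that trend forward.

```python
# Trend-based forecasting in miniature: fit an exponential growth trend to
# two historical data points, then extrapolate it.
import math

def doubling_time(year0, count0, year1, count1):
    """Years per doubling implied by exponential growth between two points."""
    return (year1 - year0) / math.log2(count1 / count0)

def extrapolate(count, years_ahead, t_double):
    """Project a count forward assuming the same doubling time."""
    return count * 2 ** (years_ahead / t_double)

# Intel 4004 (1971): ~2,300 transistors; Intel Pentium (1993): ~3.1 million.
t = doubling_time(1971, 2_300, 1993, 3_100_000)
print(f"implied doubling time: {t:.1f} years")  # roughly two years
print(f"20-year projection from 1993: {extrapolate(3_100_000, 20, t):.2e} transistors")
```

The implied doubling time of about two years is exactly the kind of well-established trend that made the communications and computing forecasts of 1967 so much more accurate than the rest.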
It is more difficult to forecast discontinuities or critical
nodes on technology pathways in isolation. In Megamistakes:
Forecasting and the Myth of Rapid Technological Change,
Stephen Schnaars argued that ‘technological wonder’ is at
the root of many failures: forecasters become enamoured
with an innovation’s underlying technology and fail to
consider its ultimate application:
“Most growth market forecasts, especially those for
technological products, are grossly optimistic. The only
industry where such dazzling predictions have consistently
come to fruition is computers.”
“Computers in the future may weigh no
more than 1.5 tons” Popular Mechanics, 1949
Continued rapid development of computing power,
communications technology and data management
software will enable a world where communication
and information devices will be ‘always on’, and
where individuals can be ‘continuously connected’ –
and continually monitored. Issues of trust and identity
will become prominent.
The rapid development of mobile computing and
communications technology is bringing about a world of
permanent connectedness and continuous interaction. The
laptop and the mobile phone were the first significant
devices to allow computing and telecommunication on the
move. Their impact has been both enormous and
widespread: 80 percent of European adults, and 60 percent
of Americans, are now believed to own a mobile phone;
there are a quarter of a billion mobile phone users in
Within a few years, these technologies have revolutionised
the worlds of business and leisure. It is already possible to
send and receive email and surf the internet virtually
anywhere. But developments over the next 20 years are
likely to be more profound again.
If Moore’s Law is sustained, we will see a massive increase
in computing power by 2025. The capabilities of a desktop
computer will be astonishing; and computing power will
be so cheap that any artefact can have significant
computing power built in. In parallel, there will be major
advances in presentation and visualisation, such as the use
of polymers to make ‘roll up’ displays. These advances will
bring very much smaller and cheaper digital video. Video
could be put everywhere and anywhere.
Two barriers stand in the way of a continuation of Moore’s
Law: limits on the way electronic circuits behave as they
get smaller and smaller; and the dramatic increase in the
capital cost of the plant needed to produce each new
generation of electronic devices. But nanotechnology may
find a way round these problems by enabling entirely new
architectures for logic and memory devices. Molecular
computing and quantum computing may be delivering
these objectives by 2025.
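As a rough illustration of what “a massive increase” means, assume the commonly quoted two-year doubling period (an assumption for this sketch, not a figure from the report) and compound it from 2004 to 2025:

```python
# Back-of-envelope Moore's Law projection, 2004 -> 2025.
DOUBLING_PERIOD_YEARS = 2.0   # assumed pace; the historical figure varies
years = 2025 - 2004
factor = 2 ** (years / DOUBLING_PERIOD_YEARS)
print(f"{years} years = {years / DOUBLING_PERIOD_YEARS:.1f} doublings "
      f"= roughly a {factor:,.0f}-fold increase in computing power")
```

On that assumption, 21 years of sustained doubling compounds to well over a thousand-fold increase, which is why even cheap artefacts could carry significant computing power by 2025.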
3G mobile phone technology (and 4G and 5G) provide for
much higher data transfer rates, catering for bandwidth-
hungry applications such as full-motion video, video-
conferencing and full internet access, deliverable to phone
or computer alike. The distinction between them will
continue to blur, and new devices will combine different
functionalities for business, entertainment or information.
These developments promise an environment in which
people will be able to access computation and information
anywhere at any time.
Wireless networks using developments of technologies such
as Bluetooth, DECT, ultra-wide band and wi-fi, combined
with ubiquitous networked devices, will provide
continuous IP based connectivity and communication.
Continuous interaction and connectedness
Smart sensors will transmit location details; someone
interested in antiquarian bookshops or fitness centres will
automatically be notified of facilities in the vicinity.
All our interactions will increasingly be monitored and the
data captured and analysed. Companies – and governments
– will know more and more about our habits and our
preferences. Businesses will be able to deliver better
customer relationship management and personalisation.
Fraud detection will be enhanced. Governments will use
data mining and knowledge discovery technologies to
identify and apprehend terrorists.
Already, call-centre software can ‘pop up’ customer details
on agents’ screens before a call is answered, recognising the
caller’s number. The same principle will allow, for example,
automatic number-plate recognition at banks and filling
stations, so that management can greet customers by name
and offer specific additional products and services.
These developments will depend on a small number of
critical technologies. Better battery devices and power
management systems will be crucial. More efficient
bandwidth management will be necessary to cater for the
massive increase in transmitting and receiving devices. In a
continuously connected world, the quantity of data and
information communicated will be enormous: more
effective technologies for storing, managing, searching and
filtering such material will be required.
A further critical issue will be data storage technology
itself. Historically, storage technology has taken second or
third place in IT behind the development of computing
power and communications. These latter two have reached
– or are reaching – the point where they are ‘good
enough’; for example, doubling the power of a desktop
computer will not now make much of a visible impact on
Natural language processing – enabling software systems
to understand, process and respond to language as it is
used by human speakers – is one of the most complex
challenges in computing. The problems range from
simple input and interpretation, such as the accurate
segmentation of words or phonemes, to complex
philosophical issues of grammar, semantics, ambiguity
and cultural interpretation. The crude performance
currently achievable by automatic translation systems
demonstrates the difficulties of handling even well-
formed written input effectively. Speech recognition
systems are a generation behind in their ability even to
understand and reproduce accurately spoken language.
Success will come through steady incremental
improvements across a range of fronts: in computing
power, in probabilistic systems for parsing and in the
creation of more effective and accessible encyclopedias of
human knowledge and culture. But qualitative
breakthroughs in theoretical linguistics will also be
necessary. Language is the most complex and creative
product of the human brain; true natural language
processing will eventually require computer systems with
Today’s computer networks require extensive – and time-
consuming – monitoring, support and maintenance. IBM
estimate that up to half of all IT expenditure is directed at
implementation and repair rather than productive operations.
Their promotion of autonomic computing as the future of
systems development is part of an industry-wide trend going
under various other names such as self-healing systems and self-
managing software. The concept is simple: to create systems
which operate largely or completely without human
intervention, monitoring their own operation, adjusting
their own processes to reflect the systems environment and
modifying themselves to cope with errors or problems.
Autonomic systems rely on continuous control loops to
monitor system activities and adjust the system to pre-
determined operational objectives:
■ they re-configure themselves, adding new features,
system resources and software updates while the system
■ they heal themselves by discovering, analyzing, and
remedying the causes of system disruptions
■ they protect themselves against unauthorised intrusion
and corruption of data
■ they optimise themselves dynamically, monitoring and
tuning systems components such as databases, networks
and servers continually
In the next five years, such systems will:
■ increase IT responsiveness
■ improve business resiliency
■ improve operational efficiency
■ help secure information and resources
Autonomic self-healing systems
People will soon enjoy seamless roaming and access to all
the information they need when and where they need it.
Wireless communication between these computer-based
devices – and between human beings – will be enhanced as
‘software defined radios’ and other technologies enable
efficient use of the radiofrequency spectrum, deliver true
international connectivity, create more opportunities for
virtual private networks and provide freedom of choice over
the applications we use. We will be able to participate in
any number of ‘virtual private networks’ depending on our
interests – everything from chess clubs to ad hoc networks
formed by the crowd attending a football match. We will
never be ‘out of touch’ with each other.
Advances in computing power, bandwidth and data
management (supported by data mining) will allow ‘mass
customisation’ of communication. Individuals will be able
to receive information directly relevant to them personally
and to their current circumstances. Databases of individual
preferences, habits and interests will allow targeted
communication of news, recommendations, advertisements. Natural language processing
the average user. By contrast, the technology to store vast
amounts of data and a wide variety of data will become
It is one thing having enormous storage capabilities, it is
another having the software and systems to manage and
access that knowledge. If we do not get the content
management, data mining and knowledge discovery tools
writing and reading ‘surface’ to a point measuring
nanometers at its tip allows massively increased storage
Many other basic enabling technologies already exist, in
terms of communications, security, cryptography etc.
Biometrics needs further development. But the key
developments will need to be in binding everyday objects to
identities, and identities to credentials. For example, most
current purchases do not depend on a verified identity.
have some way of processing all the data we have recorded.
This may mean adding even more data through automatic
indexing systems, generating the need for even more
Nor is the sheer volume of information the only issue; the
nature of the information may also pose challenges. In
particular, the degree of knowledge we will have about our
current and probable future health may be intimidating.
We will need intelligent computers to deal with the
information, computers that understand our interests and
needs, can digest the raw information and provide what we
want in a form we can readily assimilate.
The principal developments will occur with the
combination of advanced data storage capabilities and
advanced techniques for archiving, management, retrieval
and processing. A few areas of IT (eg modelling of weather
or financial markets) will still require massive raw
processing power. But in the main, the limiting factor will
increasingly be storage.
Current generations of storage technology – using magnetic
and optical devices – have experienced massive growth, but
are reaching saturation as they come up against physical
limits. New technologies, and cheaper technologies, will
replace them. Historically, every time that magnetic storage
has been challenged by new technologies, it has responded
with technological improvements. Now that the physical
limits of magnetic storage are being reached, however, this
is no longer an option.
The most promising new technologies combine ‘probe’
technology emerging from atomic force microscopy with
nanotechnology. Hitherto, all storage media (eg tape, CD)
have been written and read using heads with significant
physical dimensions. Reducing the writing and reading
‘surface’ to a point measuring nanometres at its tip allows
massively increased storage density.
Most purchases do not demand a verified identity as such;
what they require is that the holder of that identity has
verified credentials: in this case, the money in the bank to
pay for the purchase. So technology for validating and
confirming credentials will become more necessary.
The connected world will bring enormous advantages in
business efficiency and in personal experience: we shall be
able to do much more, much faster and more enjoyably.
But there are also risks, in particular to individual privacy.
If we do not get content management, data mining and
knowledge discovery tools right, people will be flooded by
information and unable to function. Information will be
stored in a variety of formats: we will need to be able to
manipulate these different formats and bring material
together easily. The issue will be how to process this
content. The human memory automatically discards
information so that we are not overloaded, but artificial
memory will grow and grow, and we shall have to find
ways of coping with everything it retains.
Universal translation (UT) encapsulates the dream that
eventually automatic real-time translation from one
speaker’s language to another will be possible. The
challenges are enormous. UT will depend on major
advances in speech recognition, natural language
processing (qv), dictionary management technology,
semantics and cultural representation. In all of these
areas, major conceptual and theoretical advances will be
necessary before acceptable performance can be achieved.
In practice, therefore, most UT developments focus on
achieving ‘good-enough’ performance, taking more of the
drudgery out of the translation task by incremental
improvements to existing technology. Existing voice-
recognition systems (eg IBM’s ViaVoice) and translation
software (eg AltaVista’s BabelFish and Google’s web page
translation tool) demonstrate both the extent and the
limits of what is currently possible.
However, the eventual prize is potentially enormous: not
simply in practical terms of everyday business and
personal interaction but in the more nebulous, but
important, area of facilitating cross-cultural
communication and understanding. Progressively more
efficient and convenient devices will appear in the short
and medium term. And in the longer term, miniature
portable translating devices could be available to all.
One of the key design principles of the internet is that
there is no central network control. Each router on the
system is independent and autonomous, and data split
into packets have many millions of possible routes from
source to destination.
Wireless mesh networks will extend the same principle to
radio communications. In such networks, each node will
operate as both transmitter and receiver, and act as a relay
point to other nodes. This promises significant benefits.
Mesh networks are an order of magnitude more reliable:
if a single node fails, many other nodes are available. And
it also opens the prospect of creating robust networks in
environments unfavourable to radio transmission, for
example where there are massive steel or concrete
structures; multiple, closely-spaced nodes can relay
communications reliably across the network.
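The relay principle can be sketched with a toy graph search. The topology and node names below are invented for illustration, but they show how a message still finds a route when a relay node fails:

```python
from collections import deque

def route(links, src, dst, down=frozenset()):
    """Breadth-first search for any relay path from src to dst,
    ignoring failed nodes listed in `down`."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no surviving route

# A small mesh: every node is both an endpoint and a relay.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

print(route(mesh, "A", "D"))              # shortest relay path
print(route(mesh, "A", "D", down={"B"}))  # B fails; C relays instead
```

A centralised network loses whole regions when its hub fails; here any single node can drop out and the search simply finds another relay chain.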
Mesh networks will be able to re-configure themselves in
response to the local environment, making new
connections with adjacent nodes. So mobile mesh
networks will allow cars to communicate with each other
on the move; home automation and entertainment
systems will configure themselves automatically; military
communications on the battlefield will be transformed.
Privacy may become a luxury available only to the affluent.
Who will control access to the information recorded about
individuals? Will it be controllable?
Increasing urban populations, continuous connectivity,
global personal tracking and high levels of advertising and
media communication will increasingly mean that solitude
for the individual is a luxury – one that can only be
bought or earned. The ability to disconnect, physically and
virtually, may itself become a commodity to be bought,
sold and traded.
A further threat is the issue of credibility and confusion. It
is no longer true that ‘the camera does not lie’. It is now
virtually impossible for the public to tell if a photograph
has been artificially created. In a digital, wired world full of
virtual communities and long-distance communications,
how will people know what is true? There could be some
very credible lies.
Technology will in principle be available to allow people to
exercise control. Communication can be encrypted. RFIDs
can be disabled when they pass outside a defined
geographical sphere. All sensing and communicating
devices can in principle be switched off. But will people be
allowed control? Where will the balance lie?
Continuous communication and connectedness will raise
new issues of identification, identity and trust. To what
extent will we be able to choose and manage our own
identities? To a limited extent we do so already by using
different storecards with their reward points. We use
different identities to participate in different communities,
both real and virtual. How far will such identity
‘partitioning’ be possible in the future? Successful
miniaturisation of RFIDs and equivalent systems will be
key, not least as the technology behind validating and
confirming the credentials that underpin the trend towards
partitioning of money and identity.
Already, mobile phones transmit and record their location
within a kilometer or two: any user’s daily movements can
in principle be tracked. Credit card transactions track date,
time, location and content of many purchases. Vehicle
number-plate recognition tracks our cars at many points
today, and future road-charging systems will be able to do
this continuously. Internet service providers can track our
surfing and purchasing habits. Retailers, financial service
companies and government agencies – both on-line and
physical – build profiles of users and target accordingly.
RFID tags on products we buy, use and carry around will
broadcast our identity and position all the time.
In 2004, Legoland in Denmark began issuing children
visiting the theme park with RFID tags. Ostensibly
intended to ensure they could not get lost, it was quickly
realized that there could be commercial benefits from the
technology. An IBM VP who works with RFIDs
commented, “Lego will now know exactly where each
customer is, how long they are spending in each area and
which products are proving to be most popular.”
The critical issues in managing this world will be control
over communication and connectedness, and trust.
How much connectedness will be dis-connectable? We may
be able to choose whether to opt in or out of information
services, advertising programmes or affinity groups. But
will we be able to insist on anonymity or privacy in our
movements, habits, preferences? Privacy is already
becoming a purchasable commodity: we have the option to
choose certain basic services which reveal personal
information and entail receiving advertising, or premium
services which shield us; perhaps privacy will become a
luxury for the affluent.
It is clear that computing, electronic devices and network
technology are becoming both more powerful and more
miniaturised. And that they are converging. These
technologies will steadily extend their penetration into all
aspects of daily life – to the point where we live in a world
of ubiquitous computing. This does not imply ‘more of the
same’, in the sense that computers as we currently
understand the term will be seen everywhere. Instead,
artificial intelligence and communications capability will
be embedded in more and more everyday items around us,
so that ubiquitous computing becomes simply part of the
environment of daily life and work.
Tiny, mobile transmitters, sensors and network devices
will connect household objects, consumer appliances,
buildings, highways and so on. New generations of smart
devices will manage themselves, respond to their
environments and communicate with people only when
necessary: to receive instructions or transmit information
when it is wanted. The potential benefits in terms of
efficiency, productivity and freeing people to concentrate
on what is important are obvious. Equally, though,
exploiting these benefits to the full will require a degree
of individual monitoring, tracking and potential control so
far never experienced.
RFID – radio frequency identification – technology is
rapidly penetrating many areas of manufacturing and
business. Until recently, RFID tags were relatively large
and expensive. But advances in miniaturisation, and
improvements in power consumption, mean that RFIDs
are set to invade many areas of everyday life.
An RFID tag (or ‘smart tag’) can be applied to – or built
into – any solid object. Consisting essentially of a
receiving coil and a microchip, when it passes through
the radio-frequency field produced by a tag reader, it will
transmit its memory contents. These data can include
basic identification details, and a potentially infinite
range of other information: for example source
information, manufacturing history, inventory tracking
information. The concept is similar to a barcode, except
that RFIDs can transmit far greater amounts of
information, at greater distances, much more rapidly, and
without requiring a direct line of sight.
Advanced RFIDs are now being developed with dynamic
read-write memory. In theory, any object containing an
RFID will soon be able to have its whole life history – and
by extension that of its owner – continuously recorded and
monitored. Unfortunately, the first software which allows
hackers to change the information held on the RFID – the
price of the item it’s attached to, for example – has already
appeared.
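As a sketch only (the tag fields and identifier below are invented, not taken from any RFID standard), a read-write smart tag can be modelled as a small key-value memory that answers a reader’s poll:

```python
from dataclasses import dataclass, field

@dataclass
class RFIDTag:
    """Toy model of a read-write smart tag: a chip holding a small
    key-value memory that it transmits when energised by a reader."""
    uid: str
    memory: dict = field(default_factory=dict)

    def respond(self) -> dict:
        # A real tag is passive: the reader's RF field powers this reply.
        return {"uid": self.uid, **self.memory}

    def write(self, key, value):
        # Dynamic read-write memory: a life history can be appended...
        self.memory[key] = value

tag = RFIDTag("urn:epc:0451", {"product": "kettle", "price": 24.99})
print(tag.respond())
tag.write("last_service", "2004-06-01")  # history accumulates
tag.write("price", 0.01)                 # ...but so can tampering
```

The last line mirrors the hacking risk noted above: without access control, the same write path that records a life history can rewrite a price.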
We already collect and spend reward points from different
physical and on-line stores, and we use different monies in
different currency zones. In future, we may use very many
more different monies for different purposes.
The implications are more radical when we consider
partitioning identities. Instead of one identity, we are
already moving to having a number of identities, each with
a different set of relationships and characteristics. We may
have different identities in on-line communities and chat-rooms.
Points to consider
■ Research shows that technology forecasting in
computing and telecommunications has a high degree of
accuracy: these developments will happen, and probably
sooner than we expect.
■ But how able will we be to strike an acceptable trade-off
between increased utility and individual privacy?
■ In virtual communities, how will trust be negotiated?
■ Who will control the ‘on-off’ switch?
Sharing and communicating information implies that both
parties trust each other. Differential trust in the source of
information and its protection of our privacy will determine
which communities we feel we belong to and want to be
part of. Governments, employers, business, social groups
and voluntary associations will have to compete for trust or
impose conventions. Given the choice, we may feel more
loyal to a particular social group, retailer or virtual
community than to a national government or local
authority. How much choice will we have?
New technologies will provide opportunities to ‘unpick’
digital identity and money, and rebuild them. The key
trends over the next 25 years will involve partition. Hitherto,
technology has been about aggregation. On the money side,
credit cards, smart cards and electronic money have allowed
us to make more and more expenditures with fewer and
fewer physical devices: one card, or a single internet
banking connection rather than a multitude of notes and
coins. On the identity side, larger and more integrated
databases tend to amass more and more information about a
single person in one place. The majority of security and civil
liberties concerns about digital money and identity derive
from these aggregative characteristics.
By contrast, partitioning will allow us to use different
moneys and different identities for different purposes. It
could be the key to resolving the concerns that derive from
aggregation. For example, to a small extent we already use
different ‘monies’ when we collect and spend reward points.
Extensible Markup Language (XML) is an application of
the Standard Generalized Markup Language (SGML)
specifically designed for electronic publishing and World
Wide Web publishing. A markup language is a
metalanguage designed to describe aspects of the
structure or content of a text in another, normally
natural, language: Hypertext Markup Language (HTML)
is probably the best-known example, allowing the
structure and formatting of World Wide Web pages to
be described by tags inserted into the text.
However, while HTML describes structure and
formatting, XML is designed to describe content. So
where the content consists of structured data – lists of
products, categories, function for example, or address
databases – XML can rigorously describe and define the
meaning of any item. This means it is ideally suited for
content-rich data communication. Although the early
hype about XML revolutionising the internet has subsided
– the great majority of web pages contain unstructured
information – XML is progressively supplanting
electronic data interchange for business data transfer.
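For illustration, the kind of structured product data described above might be marked up and read back as follows. The element names are invented for this sketch, not taken from any real schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical product record: any agreed schema works the same way,
# with the tags defining the meaning of each item of content.
doc = """
<products>
  <product sku="CH-001">
    <name>Office chair</name>
    <category>furniture</category>
    <price currency="EUR">149.00</price>
  </product>
</products>
"""

root = ET.fromstring(doc)
for p in root.findall("product"):
    name = p.findtext("name")
    price = p.find("price")
    print(p.get("sku"), name, price.get("currency"), float(price.text))
```

Unlike an HTML page, where a price is just styled text, the receiving program here can recover the price as a number and the currency as a labelled attribute, which is what makes XML suitable for business data transfer.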
For a list of all the ways technology has
failed to improve the quality of life, please
press three.
Bandwidth is the key determinant of the capacity of any
communications technology. Wireless technology offers the
greatest ease and convenience of communication, and is
obviously essential for portable devices. But the bandwidth
capacity of the radio spectrum is by definition limited. The
emerging ultra-wideband technology will provide very high
capacity communications capability for short-range
applications, typically up to some tens of metres.
As its name implies, ultra-wideband (UWB) uses an
extremely wide band of frequencies to transmit data: in
the USA, the FCC has recently licensed UWB
transmission from 3.1GHz to 10.6GHz. But to avoid
interference, transmissions are limited to very low power.
This combination of high bandwidth and low power
means that UWB is an ideal technology for highly
efficient, low-energy consuming communication between
closely-located electronic devices. Cable connections
between pieces of equipment could finally be a thing of
the past; data transmission between transmitters and
sensors will become massively simpler and more efficient.
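Shannon’s capacity formula, C = B log2(1 + S/N), shows why a very wide band can compensate for very low power. The figures below are illustrative assumptions chosen for the sketch, not measurements:

```python
import math

def shannon_capacity(bandwidth_hz, snr):
    """Shannon limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr)

# Illustrative (not measured) figures:
uwb = shannon_capacity(7.5e9, 0.05)   # 7.5 GHz of spectrum, very low power
wifi = shannon_capacity(20e6, 100.0)  # 20 MHz of spectrum, strong signal
print(f"wideband, low-power link:  {uwb / 1e6:6.0f} Mbit/s")
print(f"narrowband, high-power link: {wifi / 1e6:4.0f} Mbit/s")
```

Even with a signal far weaker than the noise floor, the sheer width of the band gives the UWB-like link the higher theoretical capacity, which is the trade the FCC licensing regime described above exploits.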
Standards are now being developed for combining wireless
UWB with universal serial bus (USB) technology and for
audio-video consumer electronics.
Stores may track our spending habits and preferences, but
we can show a different identity to each: you may shop at a
supermarket for moderately-priced Chilean wine, and be
recorded with that characteristic, while buying first-growth
claret from a wholesaler by the case.
The nature of citizenship depends on a continuing
relationship between an individual in all his or her
characteristics with a single state. In future, perhaps people
might partition different aspects of their identity between
different relationships. Their privacy, or their passport
details, might be better managed by Shell than by the
government. Voting might be partitioned: you have strong
views on certain issues, and will vote on these accordingly,
but are content to delegate other issues to the partners in
your various communities.
Small and distributed
Technological advances are driving a wide range of
applications in the same direction: miniaturisation,
portability and customisation, combined with the
dispersal and distribution of devices. Computing and
communication power will become increasingly
pervasive but – apparently paradoxically – less
visible as they are seamlessly embedded in many
everyday objects.
We have seen that computing will become ever more
pervasive. Assuming that Moore’s Law for computing
power will continue to hold, the cost of computing power
will fall dramatically, enabling ‘chips with everything’.
Developments in nanotechnology will allow computer-
based devices to be manufactured and deployed on a tiny
scale. Computers will be embedded in everyday objects
enabling our environment to adapt automatically to our
individual needs. Large-scale automation will dramatically
cut the cost of making computers and electro-mechanical
devices. Manufacturing will become a commodity; many
more businesses will be built purely on knowledge.
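A quick worked figure, assuming the common statement of Moore’s Law that the cost of a unit of computing halves roughly every two years: over the report’s 20-year horizon that compounds to about a thousand-fold fall.

```python
def cost_factor(years, doubling_period=2.0):
    """If Moore's Law holds, cost per unit of computing halves
    every `doubling_period` years."""
    return 0.5 ** (years / doubling_period)

# Looking out 20 years, the report's horizon:
f = cost_factor(20)
print(f"cost per unit of computing: x{f:.4f} (~{1 / f:.0f}-fold cheaper)")
```

Ten halvings give a factor of 2^10 = 1024, which is the arithmetic behind ‘chips with everything’: computing cheap enough to embed in disposable objects.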
Advances in bioprocessing will enable small scale
manufacture in distributed systems to be economically
competitive. No longer will countries and companies gain
competitive advantage through their ability to access large
amounts of capital.
Vast amounts of data will be available. But storage
technologies will advance so that we can carry around with
us enough capacity to contain everything we will ever
experience during the whole of our lives.
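A back-of-envelope check, with loudly assumed numbers (roughly DVD-quality video, 16 waking hours a day, an 80-year life), suggests a lifetime recording runs to about a petabyte:

```python
# Assumed, order-of-magnitude figures only:
GB_PER_HOUR = 2     # roughly DVD-quality compressed video
HOURS_PER_DAY = 16  # waking hours
YEARS = 80          # a long life

total_gb = GB_PER_HOUR * HOURS_PER_DAY * 365 * YEARS
print(f"~{total_gb / 1e6:.1f} PB to record a whole life at video quality")
```

A petabyte is large but finite, which is why the claim that personal storage could eventually hold an entire life’s experience is not as extravagant as it first sounds.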
Micro fuel cells
Fuel cell technology (qv) is well understood, and rapid
advances mean they are becoming increasingly practical
and competitive. For large power consumption devices,
for instance automobiles, the major challenges centre
round hydrogen storage and transportation: the weight
and bulk of safe hydrogen tanks, and the need for
extensive new fuel distribution networks are significant
barriers. But for low power applications, such as in
portable electronic devices, micro fuel cells could be
replacing batteries in the near future.
Such cells are already at the prototype stage. Typically,
they will be fuelled by a methanol-water mixture. In
contrast to conventional batteries for laptop computers,
which last from three to five hours before recharging is
necessary, the first generation of laptop micro fuel cells
will last up to 12 hours. And no time-consuming
recharging will be necessary: they will be refuelled by a
quick squirt from a fuel can in the same way as a
cigarette lighter. Shrinking the technology to fit in
mobile phones will take a little longer. But not much.
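The runtime claim can be sanity-checked with rough, assumed energy densities. The figures below are order-of-magnitude estimates introduced for this sketch, not taken from the text:

```python
# Rough, assumed figures (order-of-magnitude only):
METHANOL_KWH_PER_KG = 5.5  # chemical energy stored in methanol
DMFC_EFFICIENCY = 0.25     # fraction a direct-methanol cell converts
LI_ION_KWH_PER_KG = 0.2    # deliverable energy, early-2000s Li-ion

fuel = METHANOL_KWH_PER_KG * DMFC_EFFICIENCY
ratio = fuel / LI_ION_KWH_PER_KG
print(f"~{ratio:.0f}x the runtime per kilogram of energy store")
```

Even after conversion losses, liquid fuel carries several times the usable energy of a battery of the same weight, which is consistent with the jump from three-to-five hours to around twelve.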
Decentralised power supply
Decentralised energy production – useful conversion of
energy close to the consumer – has been familiar to
humankind for thousands of years, from simple burning of
fuel for heat to the exploitation of water, wind and animal
power. Over the last century or so, the advent of large,
industrial-scale energy production, especially electricity
generation, has created centralised systems distributing
power to thousands or millions of individuals. Small-scale
local power generation has continued to exist, where its
inherently poorer economics have been offset by other
benefits. Increasingly now, though, technological advances
in small-scale power production, coupled with growing
awareness of the downside of centralised production, are
driving a new generation of decentralised energy systems.
Significant advances in traditional internal-combustion
technology, alongside newly commercial technologies such
as fuel cells, offer the prospect of local or neighbourhood
systems generating economic power close to its end use,
with generators several orders of magnitude smaller than
traditional power stations. Micro-generators exploiting
renewable sources such as wind-power and photovoltaics
are increasingly competitive. The massive infrastructure
investment in centralised power production and
distribution in the industrialised world means that a
major shift to a decentralised paradigm will have to
overcome stiff resistance. However, such distributed
systems would have significant advantages in terms of
reliability, low capital cost and overall system efficiency,
including the potential for combined heat and power.
Ambient intelligence is a term coined by the Information
Society Technologies Advisory Group of the European
Commission to draw together the themes of ubiquitous
computing, pervasive communications and intelligent
devices (qq.v.) According to the ISTAG vision statement,
an ambient intelligent environment will be characterized
by intelligent interfaces supported by computing and
networking technology embedded in everyday objects such
as furniture, clothes, vehicles, roads and smart materials –
even particles of decorative substances like paint.
Such an environment will respond to the specific
requirements of human presence and personalities; adapt
to the needs of users; be capable of responding
intelligently to spoken or gestured commands; and
eventually be capable of engaging in intelligent dialogue.
Ambient intelligence will be pervasive and ubiquitous but
also unobtrusive – allowing constant and effortless
interaction and affording seamless transitions between
different environments – home, vehicle, public space, etc.
Its ambient characteristics will depend on major advances
in smart materials, sensor technology, embedded systems,
ubiquitous communication, I/O device technology and
self-managing software. Its intelligent characteristics will
include media management and handling, natural
language interaction, computational intelligence,
contextual awareness and ‘emotional computing’ that
embodies systems to express emotion and respond to the
moods of their users.
Computer software and hardware will have to operate to
currently unimaginable levels of reliability. Since
computing power will be embedded in everything, safety-
critical errors and ‘blue-screen’ crashes will have to have
been eliminated. Every new item of software and hardware
will be ‘plug and play’. Software itself will be much
simpler, built up of modules; all of us will be familiar with
customising and upgrading our software packages.
‘Software-defined radios’ (SDRs) will enable devices to use
the most efficient part of the wireless spectrum as they need
it. SDRs will accommodate the many international
standards, frequency bands and applications by offering
end-user devices that can be programmed, fixed or
enhanced by ‘over-the-air’ software. In principle, this will
enable true international connectivity; freedom of choice
over the applications you want to use with your device
(MP3, DAB, FM etc.); virtual private networks, closed user
groups, combined delivery of voicemail, messages and faxes,
and so on.
Personal area networks, in which wireless technology
enables us to hook into and out of smaller communities,
will be created. Ad hoc area networking will be common.
Data mining and knowledge discovery tools linked to
intelligent human interfaces will always enable us to access
the information from these enormous databases. The word
‘forgotten’ will lose its meaning.
We may all carry a number of network devices around with
us. The parts of a PC will no longer be all together in one
box. It will fragment into a number of small units: the
memory, the CPU, the display, the user-interface will
probably all be separate. All technology will be compatible.
So we could carry around our miniature memory and CPU
and ‘plug it into’ (although the connection will be wireless)
any display, CPU or input device at home, in the office, in
shops, the airport. We might even carry virtual reality
displays in the form of spectacles with us.
We will see fewer visible computers of the type common
today. Tomorrow’s computers will be embedded in everyday
objects – just as the telephone today enables access to a
‘computer’ (the digital phone network). The end-user’s
requirement is not for a computer as such: it is for a
responsive, automated environment, such as a house with
touch pad or voice recognition controls, computerised
safety systems in cars or automatic ticket machines.
Tiny sensors may be everywhere. Individually
simple, networks of these sensors could communicate with
each other to form sophisticated systems, taking action on
their own or reporting back to central computers. They will
be able to perform a wide variety of functions – monitoring
machinery, tracking components through the supply chain,
monitoring environmental impact, providing security, even,
perhaps, monitoring us. The word ‘lost’ would cease to have
any meaning – everything could be traced (and nothing
could be stolen).
Companies will be able to use sensors reporting to their
computer systems to track products through their entire
life cycle. They will be able to understand better how
people use their products, enabling mass customisation.
Products like washing machines will be linked to enhanced
services. For example, if sensors in a washing machine
indicate a breakdown is likely to happen, the manufacturer
could proactively send a service engineer before any
problem arises. This would build enormous trust with the
customer.
All the crowd entering a football stadium, say, might
connect to a temporary ad hoc network; networks will be
able to scale to the number of users.
Successful miniaturisation of radio-frequency identification
(RFID) tags will reduce costs to the point where they are
insignificant, paving the way for massive deployment of
tags on a wide range of products, goods and commodities.
These tags will also be ‘always on’, capable of transmitting
data about their situation, position, state and local
environment. There are obvious potential benefits in supply
chain management, transport, logistics and delivery systems.
RFIDs can bind everyday objects to identities. In the future,
if every object contains a chip, it will be impossible for
anything to be lost. Sensor technology will develop to the
extent that tiny sensors may be everywhere.
Small nuclear reactors
The high capital cost of large power reactors and public
perceptions of risk are prompting a move to develop
smaller nuclear reactors of 300MWe or less.
Small reactors potentially offer simpler designs, reduced
siting costs and economies of scale through their
production in greater numbers. Many of the designs
currently being worked on include passive safety features
that rely on physical phenomena to protect against
accidents, rather than the active operation of engineered
safety systems.
Some small reactor concepts are intended to operate in
remote locations for small loads (including floating
nuclear power plants); others are designed to operate in
clusters to compete with larger units for supply to a grid.
Small reactor designs ranging in size from 5kWe upwards
are at an advanced stage in Russia, Japan, Argentina, the
US, South Korea, France and South Africa. A wide range
of technologies are being explored using a variety of
coolants, including pressurised water, high temperature
gas, liquid metal and molten salt.
If nano-technology (qv) is to result in microscopic devices
which can operate at the molecular scale, they will need
comparable power supplies. In 2003, a team of US
researchers was awarded a patent for a method of making
nano-scale batteries, about a millionth of the size of
conventional car batteries. Although there is a long way
to go before this technology is practical and can be
commercialised, it holds out the prospect of powering
nano-technology machines to operate at molecular scale.
These micro-batteries are made by partially dissolving an
aluminium sheet in acid solution, leaving a very fine
honeycomb structure which is subsequently filled with a
polymer electrolyte. Each side of the sheet is then sealed
with a film of conducting particles to act as individual
electrodes. An atomic force microscope charges individual
cells, which produce a tiny current – about a millionth of
an amp.
Elsewhere, researchers are working on inserting copper
molecules into the crystal lattices of carbon nano-tubes.
The next challenge for both of these approaches is to
solve the problem of assembling a complete device with
wires and connections.
Nano-technology essentially deals with the field of
engineering structures smaller than 100 nanometres (one
nanometre is one billionth of a metre). Two rather
different technologies are involved: the manipulation of
matter at the atomic level to position individual molecules
precisely where they are wanted; and – what is much more
difficult – the construction of functional machines on a
nano scale.
Chemical and biological processes can already manipulate
individual atoms and molecules, of course; thin films,
super-fine fibres and sub-micron lithography all involve
manipulation at the nano-scale (a micron is one-millionth
of a metre), and are often loosely called nano-technologies.
By contrast, the original sense of the term implies the
development of nano-scale manipulators and assemblers
controlling physical and chemical reactions atom-by-atom.
If such an assembler could be created, complete with its
own instructions, power and machinery, it could form the
basis of a self-replicating system – a nano-replicator. But
individually guiding separate molecules into a specified
place in a structure may never be possible. As one
authority has claimed: “self-replicating, mechanical
nanobots are simply not possible in our world. To put
every atom in its place – the vision articulated by some
nanotechnologists – would require magic fingers.”
New social behaviours will evolve from the combination of
distributed technology networks and human networks. In
the US, taxi drivers in a couple of cities can already view
and keep updated in real-time a web site that shows where
the cheapest gasoline can be bought; this is a public web
site that helps everyone. Social groups have rapidly realised
the ability of mobile phones to coordinate social and
political gatherings. They were instrumental in organising
people to help overthrow the government in The
Philippines; in the UK the same phenomenon occurred
when farmers were co-ordinating their protests at an
increase in duty on certain types of diesel. Distributed
technology, portability, customisation, ubiquitous
computing can enable networks of people to do things that
they could not previously do.
Constant monitoring of equipment will mean maintenance
can be conducted as and when necessary rather than
happening on a fixed schedule (even when it is not
necessary). Many routine maintenance tasks will be carried
out automatically as the sensors ‘talk’ to robotic devices.
Intelligent buildings will interact continuously with their
owners and inhabitants.
Point of use purification of water
More than a billion people in the world lack a basic water
supply, and 2.4 billion – about two fifths of the world’s
population – do not have access to adequate sanitation.
On present trends, by 2025 as many as 5.5 billion people
will be suffering from water shortages.
Technology for water purification is readily available in
the industrialised world. But it tends to be too complex
and expensive for mass deployment in the third world.
Appropriate and effective technology need not be
advanced technology: in this case the need is for a
solution which is cheap, simple and easy to use.
The McGuire Water Purifier was developed by Duvon
McGuire, an Indiana inventor and missionary. It is made
of plastic tube and is powered by a car battery. The 12-
volt design was chosen deliberately because while third
world villages may not have a power supply, there are
normally at least a few people with a car, whose battery
can power the purifier. Using a few salt tablets, the
system can produce up to 55 gallons of potable
water a minute. Coupled with large tanks and pumps,
safe drinking water for up to 10,000 people can be
produced relatively quickly; smaller units are available
for locations with limited storage. McGuire purifiers are
now in use in nearly 40 countries, providing safe
drinking water to orphanages, clinics, villages and
refugee camps from Indonesia to West Africa to Ukraine.
They will sense the presence and needs of occupants,
learn behaviour and response
patterns and configure domestic and workspace
environments automatically. Control of domestic systems
such as lighting, heating, entertainment, refrigeration will
be simple and transparent from remote locations.
Other forms of miniature sensors will be used in a wide
range of applications, from traffic monitoring to weather
measurement to pollution control. The UK government has
recently announced plans to introduce within 10-15 years a
road pricing system which will track the position of 30
million vehicles at once, and tax their users variably
according to where and at what time of day they are driving.
Points to consider
■ As sensing, communication and computing become all-
pervasive, but less obtrusive, will we become less aware
of what they are doing?
■ Will we value the benefits more than the potential for
increased surveillance, social control and misuse?
‘Smart dust’ is a catchy term for self-contained,
millimeter-scale sensing and communication devices. The
aim is to combine sensors, computational ability, bi-
directional wireless communications, and a power supply
within a device about the size of a grain of sand; and to
manufacture them cheaply enough that they could be
deployed in their hundreds and thousands to build
massively distributed sensor networks. The basic
technologies are all available. Current research and development focus is on integration and miniaturization.
Each dust ‘mote’ is controlled by its own microprocessor,
which periodically receives readings from a sensor
measuring one of a number of physical or chemical stimuli
such as temperature, ambient light, vibration, acceleration
or air pressure. It also turns on its receiver at regular
intervals to check if another mote, or a base station, is
communicating with it. Distributed intelligent technology
allows a large number of motes to operate as a coherent network.
Current prototype motes are about 5mm in size, and cost
about $50. Once devices as small as 1mm, and costing less
than $1, become possible, it will be feasible to use ‘smart
dust’ for remote sensing, meteorological, geophysical or planetary research, and – inevitably – for a range of military and surveillance applications.
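How scattered motes could relay readings to a base station can be sketched as a flood through a radio-range graph. This is a toy model with idealised positions and ranges, not a real mote networking protocol:

```python
from collections import deque
import math

def reachable(motes, base, radio_range):
    """Breadth-first flood from the base station: which motes can get
    their readings to base by hopping through in-range neighbours?"""
    nodes = [base] + motes
    seen, queue = {base}, deque([base])
    while queue:
        here = queue.popleft()
        for other in nodes:
            if other not in seen and math.dist(here, other) <= radio_range:
                seen.add(other)          # within radio range: a usable hop
                queue.append(other)
    return seen - {base}

# Three motes form a chain back to base; the fourth is stranded.
motes = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (9.0, 9.0)]
linked = reachable(motes, base=(0.0, 0.0), radio_range=1.5)
```

The point of the sketch is that no individual mote needs long range: coverage comes from the network, which is why cheap, weak devices in their thousands are viable.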
Sensing and making sense

Continuous connectedness and widespread distribution of small computing and communicating devices will transform the nature of our interaction with the world and how we make sense of it. Intelligent sensors will allow new generations of intelligent devices, at the same time as new visual and oral technologies transform the way we interact with it.

As we have seen, sensor technology will mediate many of these interactions. By 2025, use of sensors will be widespread. We can expect significant miniaturisation and cost reduction to have occurred. Sensors will be cheap enough for extensive use throughout industry. Anything and everything will be able to be monitored by sensors that can talk to each other and to the central network accessing the data analysis. Each individual sensor might be simple, carrying out only one or two functions (measuring vibration, or heat etc.), but taken together they will provide comprehensive, continuous monitoring.
Small sensors will be used extensively in large structures and machines to continuously monitor and optimise safety and performance. Failures will be predicted and avoided. Actuators will be applied on bridges to provide active control of the structure and keep them stable. Overall, we are likely to see a huge increase in the safety and performance of a wide variety of machines and systems.

Micro-sensors will monitor the performance of these machines throughout their life, either communicating with each other to optimise performance, or escalating issues to some central location if external intervention is needed, for example to replace a part that has the potential to fail. As a result, no longer will aeroplanes fall from the sky through metal fatigue, or trains crash because the rails are buckled or broken.
Companies will install sensors in their machines –
including domestic white goods like washing machines –
that continuously monitor the condition of the machine
and report back to a centralised network, providing data to
the manufacturers on how their products are used, enabling
mass customisation. Faults will be predicted in advance and
an engineer sent before the machine breaks down.
Manufacturers will extend their offer into providing highly
efficient lifetime service, enormously strengthening their
customer relationships. Service providers themselves will
make use of the new technologies. For example, sensors in
cars combined with GPS technology will alert the AAA
automatically in the event of a breakdown.
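Predicting a fault before the machine breaks down can be sketched as trend extrapolation over condition readings. The vibration figures and limit below are invented for illustration:

```python
def predict_failure(readings, limit):
    """Fit a least-squares linear trend through equally spaced condition
    readings; return the projected time (in reading intervals) at which
    the safety limit is crossed, or None if there is no worsening trend."""
    n = len(readings)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(readings) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, readings)) / \
            sum((x - mx) ** 2 for x in xs)
    if slope <= 0:
        return None  # readings steady or improving: nothing to schedule
    intercept = my - slope * mx
    return (limit - intercept) / slope

# Vibration amplitude creeping upward: dispatch an engineer before hour 10.
hours_to_limit = predict_failure([1.0, 1.2, 1.4, 1.6, 1.8], limit=3.0)
```

Production systems use far richer models, but this is the essential shift the text describes: from fixed maintenance schedules to intervention scheduled by the data.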
These sensor networks could also be linked to the Internet,
meaning that in principle anything that moves, grows,
makes a noise, heats up etc., around the world could be monitored.
The combination of fairly simple biosensors, known
technologies and chemistries will create robust diagnostic
tools. The combination of advanced technology and
advanced data handling will make medical diagnosis less
invasive. Examples include using optical tools such as smart
endoscopes to look at tumours in the body; in vivo Raman
spectroscopy; and Optical Computer Tomography. All of
these will provide non-invasive diagnostic techniques.
Electronic nose technology, comprising multiple gas sensor
arrays, has been around for a long time. But in conjunction
with advanced data handling it will produce a very strong
tool for diagnosis of diseases like TB. It will be possible to build up a global picture of a disease and then readily identify which mycobacterium is causing a specific outbreak.
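The ‘advanced data handling’ behind an electronic nose amounts to matching a gas-sensor array reading against known fingerprints. A minimal sketch is nearest-centroid classification; the four-element array profiles below are invented, not real clinical data:

```python
import math

# Hypothetical response 'fingerprints' of a 4-element gas-sensor array.
PROFILES = {
    "healthy":  [0.2, 0.1, 0.3, 0.2],
    "TB-like":  [0.8, 0.6, 0.2, 0.7],
}

def classify(sample):
    """Match an array reading to the nearest known profile (Euclidean)."""
    return min(PROFILES, key=lambda name: math.dist(sample, PROFILES[name]))

print(classify([0.75, 0.55, 0.25, 0.65]))  # → TB-like
```

Real diagnostic systems train on thousands of labelled samples, but the pattern-matching principle is the same.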
Technology will both enhance our perception of the world
and enable the world to respond automatically to us, for
example in the form of car seats and houses that
automatically recognise who we are and adjust their shape,
temperature etc. accordingly.
By 2025 ultra-small ‘Smart Dust’ sensors will have been improved. Smart dust devices are tiny wireless microelectromechanical sensors (MEMS) that can detect everything from light to vibrations. These motes could eventually be the size of a grain of sand, though each would contain sensors, computing circuits, bi-directional wireless communications technology and a power supply. Motes would gather data, run computations and communicate that information using two-way radio.

Potential commercial applications are varied, ranging from catching manufacturing defects by sensing out-of-range vibrations in industrial equipment to tracking patient movements in a hospital room. Hundreds of tiny sensors can be scattered around a building to monitor temperature or humidity. Or a network of minuscule, remote sensor chips could be deployed, like dust, to track enemy movements in a military operation.

It will be many years before it is possible to develop systems for automatic ‘understanding’ of natural language text (cf natural language processing). Text mining has more limited goals – and is therefore achieving comparatively more success. Text mining is a form of data mining, the technology of finding meaningful patterns in large databases. Text mining techniques search for regularities, patterns or trends in natural language text which fit predetermined criteria and which point to useful knowledge. Where data mining typically extracts patterns from highly structured databases, text mining focuses on unstructured or semi-structured text.

Forgoing the attempt to ‘understand’ a whole piece of text, text mining concentrates on generating, for example, an automatic summary of an article by extracting the most common and prominent nouns and noun phrases. Or it might extract the author, title and date of publication of an article, the acronyms defined in a text or the articles mentioned in the bibliography. The use of statistical analysis, collocation analysis and simple phrase-structure grammar can recognise and distinguish with a high degree of accuracy the significant content of a text. Already programmes exist which can read in CVs and extract people’s names, addresses and job skills with accuracies in the high 80 percents.

Artificial intelligence (AI) has become an extensive and ill-bounded discipline embracing developments in everything from robotics to expert systems and chess-playing computer programmes; some of its important ideas and techniques have been absorbed into software engineering. Underlying all these activities, however, is the aim of understanding the mechanisms mediating human thought and intelligent behaviour and replicating these in machines. This can be interpreted equally as the study of information: how it is acquired, stored, manipulated, used and communicated, both in artificial systems and in humans and other animals.

The history of AI to date has revealed an apparent paradox: it has proved relatively easy to develop programmes which undertake superficially complex tasks, like playing chess or carrying out complex calculations. Such tasks are well-suited to a computer’s ability to operate on large numbers of precisely defined symbols very rapidly, according to precisely defined rules. But many apparently simple tasks carried out by humans with hardly a thought turn out to be immensely subtle. Sight is not passive reception of visual signals, but a complex process of interpretation, recognition, understanding and creation of meaning. Natural language processing (qv) is similarly complex. Since we do not at present understand what consciousness is or how and why it operates, it has so far proved almost impossible to create machines to mimic any but the most simple cognitive processes. Although progressively ‘cleverer’ machines are constantly being developed, the real limit on the progress of AI is the rate of our understanding of the human brain.

“There is much that we do not know about brains: including what they do and how they do it.” Aaron Sloman, School of Computer Science, Birmingham
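One of the limited but useful text-mining tasks mentioned above – pulling acronym definitions out of running text – can be sketched with a regular expression that checks the bracketed capitals against the initials of the preceding words. The sample sentence is invented:

```python
import re

def extract_acronyms(text):
    """Find patterns like 'natural language processing (NLP)' where the
    bracketed letters match the initials of the words just before them."""
    found = {}
    for m in re.finditer(r'((?:\w+[- ]){1,6})\((\w{2,})\)', text):
        words = re.split(r'[- ]', m.group(1).strip())
        acro = m.group(2)
        # Take as many preceding words as the acronym has letters.
        initials = ''.join(w[0] for w in words[-len(acro):] if w)
        if initials.lower() == acro.lower():
            found[acro] = ' '.join(words[-len(acro):])
    return found

sample = ("Smart dust devices are tiny wireless micro-electro-mechanical "
          "systems (MEMS) linked by a global positioning system (GPS).")
print(extract_acronyms(sample))
```

No ‘understanding’ is involved: a shallow pattern plus a consistency check is enough, which is exactly why such techniques succeed where full natural language understanding does not.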
As the technologies of virtual reality evolve, the
applications become unlimited. VR will reshape the
interface between people and information technology by
offering new ways for the communication of information,
the visualization of processes and the creative expression of
ideas. A virtual environment can represent any three-
dimensional world that is either real or abstract. Uses
include training, education, design evaluation (virtual prototyping), architectural walk-through, human factors and ergonomic studies, and help for the handicapped.
There will be two main modes of application: personalised
displays (on the head – as ‘sunglasses’ for example – or held
in the hand); and centralised displays (for example in
control rooms). Personal displays will obviously have
entertainment applications, but they can also be used for
such tasks as navigation – if linked to GPS – and the
provision of personal information systems (e.g. the personal display could be linked to the Web or other computer networks).
The advent of cheap digital video linked to wireless
communications is likely to result in a profusion of
monitoring cameras, reducing the level of street crime.
This ubiquitous video monitoring, together with the data
available from all the communication and electronic
transactions we conduct, will be combined with data
mining, knowledge discovery and artificial intelligence
tools to identify criminals and track their movements.
Genetic fingerprints and retinal scans will make
individual identification foolproof and help to reduce the
risk of fraud.
Our relationship with computers will change. Intelligent computerised customer support will transform the help-line as emailed questions are answered automatically in real time, and only those questions the computer does not recognise are escalated for human response. As the computers learn, the amount of time-consuming escalation will be minimised.

The electronic nose can also be used in renal dialysis, ‘sniffing’ blood in real time as it is being processed so that precisely the right amount of dialysis takes place (currently dialysis is based on blood tests that are done off-line, perhaps weeks before).

Making sense of this world will need to go hand in hand with its effective management. Much of this will be automated. But human intervention and control will depend on effective visualisation, representation and problem avoidance. The use of virtual and enhanced reality systems in control rooms and elsewhere will drastically reduce the amount of human error historically associated with the man-machine interface.

Networked users at different locations will be able to meet in the same virtual world, seeing the same environment from their respective points of view. Users will see each other, communicate with each other and interact with the virtual world as a team.

Applications of automatic sensing

Fabrics coated with conducting polymers have already been incorporated into items such as sports bras, knee braces and socks. As these materials are stretched, conductivity changes in the polymer coating allow them to operate as sensitive strain gauges, with the resulting data transmitted to a remote base-station. This can provide real-time biomechanical feedback during movement over extended periods of time. The application of such technology to investigate neuromuscular pathological conditions – such as stroke, rheumatoid arthritis and peripheral neuropathies – is already being developed. In future, such functional clothing will be able to monitor breathing and heart function, and track body movements, providing data for carers and health professionals.

Miniature instruments are being developed to monitor pollutants in water – including ammonia, phosphates and heavy metals. A chip with a network of micron-sized channels mixes water samples with reagents. Resulting colour changes are measured using low-power LEDs and photodetectors integrated into the chip platform (cf lab-on-a-chip). Inclusion of wireless communications capability will allow water quality data to be gathered from remote locations.

Complex adaptive systems

If recognizable and functional ‘intelligence’ is to be developed in man-made systems – for example massive networks of distributed sensors or teams of self-organising robots – then they will have to be designed to behave much more like natural and biological systems. In such systems, the actions of simple, locally-interacting components give rise to coordinated global behaviour. In these complex adaptive systems, whether they are galaxies, human economies or insect colonies, structure and organisation emerge from inherent qualities of the system rather than external constraint; successful systems achieve equilibrium by adapting to their environment. One of the most obvious examples of such a system is the human brain itself.

The study of complexity, and of how adaptive systems arise, operate and develop, is therefore of critical importance to fields as diverse as artificial intelligence, network configuration and management, and neuroscience. At its most theoretical, such work involves identifying and exploring properties common to all such complex adaptive systems, using cellular automata, Boolean networks, neural networks, genetic algorithms and similar techniques. More practically, the study of complex adaptive systems has led to deeper understanding of phenomena such as the self-organization of neural cell tissue and the development of altruism and cooperation in insect colonies. The deliberate creation of complex systems which can organise and adapt themselves will represent a step change in the development of goal-directed, problem-solving systems.
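The cellular automata used to study such systems are the classic demonstration of simple, local interactions producing coordinated global behaviour. A minimal sketch is one step of Conway’s Game of Life:

```python
from itertools import product

def step(live):
    """One generation of Conway's Game of Life on an unbounded grid.
    Each cell obeys only local rules, yet gliders, oscillators and other
    global structures emerge."""
    counts = {}
    for (x, y) in live:
        # Every live cell votes into its eight neighbouring squares.
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

blinker = {(1, 0), (1, 1), (1, 2)}
print(step(blinker))   # the blinker rotates to {(0, 1), (1, 1), (2, 1)}
```

Nothing in the three-line rule set mentions oscillation, yet the blinker returns to its starting shape every two steps – a small instance of the emergence described above.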
But computers find it very difficult to understand spoken
human language. By contrast with developments in
computer graphics – which have achieved astonishing
progress in just a few years – developments in using spoken
human language to interact with computers have been slow
and tortuous. So, by 2025 we will see voice applications in
certain areas – like elevators, cars and cellphones – where
specific voice commands and responses can be applied. But
we are unlikely to see the science fiction world of effectively
having a conversation with a computer until much later.
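The gap between fixed command vocabularies and open conversation can be made concrete with a toy matcher. The command table and phrases below are invented, and the transcript is assumed to come from some upstream speech recogniser:

```python
# A fixed vocabulary of the 'elevator and car' kind: specific commands
# map to specific actions; anything else is simply not understood.
COMMANDS = {
    ("floor", "three"): "GOTO_FLOOR_3",
    ("call", "home"): "DIAL_HOME",
    ("lights", "off"): "LIGHTS_OFF",
}

def interpret(transcript):
    """Map a recognised transcript onto the command vocabulary."""
    words = set(transcript.lower().split())
    for keywords, action in COMMANDS.items():
        if words.issuperset(keywords):
            return action
    return None  # outside the vocabulary: no free conversation here

print(interpret("please take me to floor three"))  # → GOTO_FLOOR_3
```

This is why constrained voice applications work today while conversational ones do not: the hard problem is pushed out of the system by shrinking what counts as a valid utterance.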
Currently, computers are very bad at recognising human
facial and other expressions (and easily fooled). Even the
monitoring of blood pressure, respiration, sweat etc. is not a
foolproof way of telling how we are feeling because
individuals vary so much. However, people cannot control their micro-expressions, the fraction-of-a-second muscle and other responses that give the game away. Computers are good at reacting on that time frame, so they can be responsive to our moods and emotions and adjust our environment accordingly.
The massive amounts of data made available by new technologies and the digital revolution pose exceptional challenges for human cognition and interpretation. The development of automatic systems for evaluation and decision-making can go some way to coping with large amounts of information. But for critical applications human analysis and judgement remain essential. The development of new techniques of data visualisation and representation will have a key role to play.

Automatic tools for visualisation can represent data to people in ways which provide information which can be acted on. Such tools need to become more intuitive and easily understood. In order to achieve this, more sophisticated analytical engines and algorithms will be required, working behind the scenes to recognise and extract significant patterns, collocations and correlations.

These are not ‘mere’ presentational matters. Poor visual representation of information has been implicated in major disasters. The Chernobyl nuclear accident occurred in part because control engineers had hung up a sign obscuring a warning lamp. The Columbia Accident Investigation Board into the US Shuttle disaster directly attacked the use of PowerPoint slides in technical analysis and briefing:

When engineering analyses and risk assessments are condensed to fit on a standard form or overhead slide, information is inevitably lost. In the process, the priority assigned to information can be easily misrepresented… The Board views the endemic use of PowerPoint briefing slides instead of technical papers as an illustration of the problematic methods of technical communication at NASA.

If we are to make sense of the data that new technology makes available, the challenges will become progressively greater.

Greater progress will be made in the area of written language. For example, computers will act as virtual secretaries, scanning incoming material, prioritising, summarising and prompting. Intelligent computerised customer support will allow computers to recognise the nature of an emailed question and provide a response (at least nine times out of 10). This will provide a huge cost saving for companies.

GPS (global positioning system) technology is widespread already, and some estimates suggest that the market for GPS devices is doubling every year. Frost & Sullivan estimate the US market alone to be worth $4 billion a year.

The technology is conceptually simple. A network of 24 satellites each contains an atomic clock, and transmits a continuous stream of time-stamped data. A GPS receiver compares the time taken for signals to reach it from four separate satellites and ‘triangulates’ the results in three dimensions to calculate a relative position. Cross-checking against an almanac of satellite time-space coordinates yields position and altitude coordinates accurate to a foot.

The key developments in GPS will not come primarily from improvements in the basic technology (power, signal accuracy, interoperability of the GPS system – developed by the USA – with the parallel European Galileo system) but from the rapid spread of applications. Already, GPS systems are driving automobiles, ships, farm vehicles, aircraft and military systems. For business, GPS technology will allow completely accurate tracking of goods in transit, and facilitate precise targeting of advertising and promotion to mobile users depending on their location. In transport, GPS systems will lead to completely automated aircraft landing systems. GPS opens the prospect of a future in which everything – and everyone – could be tracked and located with pinpoint accuracy.

The longer term will see significant advances in the brain-machine interface to mediate our perception and understanding of the world. There are two key issues. First is the representation issue. How does the brain go about representing the outside world? The brain consists of billions of neurons. Any one neuron is not that important because it is populations of neurons that do the work. For example, to represent an external object in the brain may involve a couple of billion neurons – how does the brain spread out the external data amongst these neurons for the processing to take place, and how does it then bring it all back together?
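The representation question (spreading one external quantity across a huge, noisy population and then bringing it back together) can be illustrated with a toy population code. The neuron count and noise level are arbitrary choices for the sketch:

```python
import random

random.seed(2)  # fixed seed so the sketch is repeatable

def encode(value, n_neurons=2000, noise=0.5):
    """Spread one external quantity across a large, noisy population:
    no single 'neuron' is reliable, each carries value plus noise."""
    return [value + random.gauss(0, noise) for _ in range(n_neurons)]

def decode(population):
    """Bring it back together: averaging cancels the independent noise."""
    return sum(population) / len(population)

pop = encode(7.0)
single_error = abs(pop[0] - 7.0)        # one neuron: typically well off
pooled_error = abs(decode(pop) - 7.0)   # the population: close to 7.0
```

The pooled estimate is accurate to roughly noise/√n, which is one reason a representation spread over billions of unreliable neurons can still be precise.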
Second is the computation issue. There is no single place in the brain where ‘seeing’ is done or ‘hearing’ is done. Different parts of the brain carry out different elements of seeing or hearing. One part may be responsive to curves or straight lines, another to colour, yet another may respond to movement. The information – in the form of electrical codes – flows from the external input to site A, then from site A to site B, then from site B to site C and so on. If a patient has a stroke and loses site B, we cannot just wire up site A directly to site C because the whole process requires site B to exist. We must replace site B with an artificial equivalent and wire it up between site A and site C. Understanding the information processing capability of neurons and networks of neurons will be a key step towards allowing the replacement of damaged or dead neurons with artificial equivalents.

Low cost sensing

Currently available sensors are relatively large – and hence costly – and of limited capability. They simply record or report the physical phenomenon they are aimed at: temperature, gas concentrations etc. The convergence of nanotechnology (qv) and micro-electrical-mechanical systems (MEMS) will allow the development of miniature, low-cost sensors using minute amounts of power. Although such sensors will only have a small range, the development of intelligent self-managed networks will allow them to form large, robust sensor networks of potentially thousands of nodes.

The range of potential applications for these intelligent sensor networks is enormous, from real-time monitoring over wide areas to automated control of industrial processes, automated utility meter reading and intelligent building control. Applications such as flood warning, weather forecasting and earthquake prediction will use low cost sensors in their thousands.

The very large number of data sources involved means that sensor technology will come to share many of the features of pattern-recognition and complex adaptive systems.

Points to consider
■ The potential applications of these technologies will be immensely widespread. In which directions will economic drivers direct development?
■ Which will be the ‘killer applications’?
■ Will we simply be seeing cleverer gadgets and more entertaining toys, or will ‘real-world’ applications predominate?

Leading banks are looking at software for a new generation of cash machines that personalise transactions, allowing ATMs to greet customers by name, automatically offer their usual service and even remind them about important dates or bills outstanding.

The scene is a high street. A man in a suit approaches a cash machine, inserts his card, and begins reading the screen.
ATM Good evening Dave. Your usual £150?
Dave presses yes.
ATM How was last night? Did you get completely ratted?
Dave sheepishly presses yes.
ATM I’ve had enough of this. I’m not giving you any more money if you’re just going to waste it. So: £150? What do you want it for? CDs? New shirt?
Dave presses CDs.
ATM You’ve spent enough on them this month. It’s two weeks…
Dave angrily hits return card.
ATM What are you doing Dave? Okay, okay. We said £150?
Dave presses yes.
ATM So we’re not bothering with your anniversary this year?
Dave looks a little panicked.
ATM You forgot again, didn’t you?
Dave presses yes.
ATM So how much more then?
Dave presses £20.
ATM £20! £20! Do you like sleeping in the garage?
Dave presses no. Taps in £30.
ATM Come on Dave. Don’t be a schnorrer. Seven years of marriage and all she gets to show for it is a £30 gift.
Dave taps in £50.
ATM Good. Now you weren’t thinking about underwear were you?
Dave presses yes.
ATM Dave, Dave, Dave. Shall I give you some options?
Dave presses yes.
ATM Luxury chocolates; a pashmina; perfume; romantic dinners; Sleepless in Seattle on DVD?
Dave selects romantic dinner.
ATM That’s more like it. Here’s another £50. Now have you paid the car insurance?
Dave presses no.
ATM How many times do I have to tell you? Do you want me to?
Dave presses yes.
ATM Have you read Men are From Mars, Women are From Venus?
Dave presses no.
ATM I thought not after tonight’s episode. You should. It really nails the gender gap. Do you want £10 for it?
Dave presses yes.
ATM Here you are. I’ll expect you to have read it by the next time.
Dave moves to retrieve card.
ATM Before you go, Dave. That tie… it’s so last year.
Financial Times 28 July 2004
Despite three decades of rapidly expanding global
food supplies thanks to the original Green
Revolution, there are still an estimated 840 million
undernourished people in the world. The new
biotechnology revolution has the potential to help
alleviate this suffering and bring major health
benefits to everybody on the planet.
Over the next twenty years, increases in nutritional and
energy content per grain will improve crop yield per acre,
widen the conditions under which particular crops can be
grown economically (bringing currently unproductive land
into production) and enable more intensive use of existing
land without further depleting its resources. All this will
mean greater food security and should enable some
developing countries to significantly boost their food
production. These countries will begin to see significant
economic growth as surplus food is sold and resources are
freed from the land to create wealth in other parts of the
economy. In China 20 years ago, anyone wealthy enough to
buy a car could not do so, because none was being
manufactured. Now much of the population has been freed
from the simple need to work on the land to grow food,
and the economy is booming.
In the developed world, as global competition increases,
agriculture is likely to become increasingly focused on
niche, high value products. Biotechnology will also enable
food to be tailored to a specific purpose. Food companies
will be able to offer more choice at affordable prices – e.g.
grains with more starch in them to produce better cooking oils.
Food development will enable the inclusion of specific
healthy ingredients – such as antioxidants that help protect
against cancer, heart disease, cataracts etc. – targeted to
specific groups of people.
Because of this technology, combined with advances in regenerative medicine and the use of stem cells, we will be able to lead healthier – and even longer – lives. These medical developments will be supported by advances in food made possible by combining our knowledge of genetic structure with a greater understanding of how what we eat acts on the individual cells of our body. Nutrigenomics and knowledge of each individual’s gene structure could allow optimisation of the food individuals eat to suit their particular genetic make-up. For example, people whose genetic structure suggests a predisposition to a particular disease may be given a special diet that helps protect the body against that disease or limits the impact of the disease. Even if our understanding of how food and the body interact at the cellular level fails to improve as expected, foods may be specifically tailored to treat particular groups of people, like the elderly, or those with high blood pressure (cornflakes for children, cornflakes for the elderly, cornflakes for pregnant women).

One of the key technical challenges to be overcome in meeting these objectives is the adaptability of germ plasm to the changes. Over 60-70 years of development the existing varieties of hybridised corn have become well adapted to combat viral and other disease. Further modification of corn genetics may make the corn more susceptible. Failure to understand the deep biology of the relationship between genes and proteins is a potential risk.

Outside the technological field, the main threats to agribusiness relate to the potential impact on its social and political acceptability of campaigns by activists and opponents resisting the adoption of new technology. There is great potential. But realising it will depend on the ability of science and industry to convey its message effectively, and on governments’ responses to the challenges. In general, politicians understand the issue, and in certain areas are trying to be helpful. But it is not currently making a lot of difference. For example, Monsanto’s ‘golden rice’ is not being adopted in Africa because politicians resist GMOs.

Biodiesel is produced by the transesterification of fats or oils, such as rapeseed, sunflower, soybean, palm or olive oil. In a well-proven process, a fat or oil is reacted with an alcohol, like methanol, in the presence of a catalyst to produce glycerine and methyl esters (the biodiesel). Biodiesel is used either in pure form – currently available at petrol stations all over Germany – or as a blend with mineral diesel, which is available in such countries as the US, France and Italy. For the combustion of pure biodiesel, cars need to be slightly modified. Blends of up to a certain percentage can be used in every common diesel engine.

The use of biodiesel results in a substantial reduction in unburned hydrocarbons, carbon monoxide and particulates compared to conventional diesel fuel.

Bacterial fuel cells
Researchers have found ways to generate electricity from
the energy created when bacteria feed on sugars.
In one route, the hydrogen produced when microorganisms
like E. coli metabolise carbohydrates like sugar in the
absence of air is captured and used in a fuel cell. A special
conducting polymer anode is used, which allows hydrogen
to diffuse through but blocks larger molecules. The
polymer also helps cleanse the anode of excreted
metabolites that might otherwise clog up the process.
In another route, a marine sediment bacterium has been
found to directly transfer to an electrode the electrons
generated as it feeds. This is much more efficient than the
hydrogen fuel cell approach.
Bacterial fuel cells offer the promise of renewable
electricity produced at the point of need. However, they
are currently much less powerful than conventional
hydrogen fuel cells and years of work remain to make
them a practical reality.
Nutraceuticals – also called functional foods – are
generally defined as foods marketed as having specific
health effects, other than the purely nutritional. Key
health risks being addressed include heart disease, cancer,
osteoporosis, gut health and obesity.
Spreading fats currently constitute one of the biggest
functional food sectors, certainly in the UK.
Manufacturers are incorporating mono or polyunsaturated
fatty acids, credited with reducing the risk of heart
disease by changing levels of blood cholesterol. Some
spreads provide omega-3 fatty acids from fish oils. In
sufficient quantities, these can lower triglyceride fats in the blood.
Another large functional food area is that of dairy foods
containing friendly or probiotic bacteria, which are
claimed to promote gut health. There is evidence to
support the positive effects of probiotics, which have been
found to help in fighting food-poisoning bacteria like E. coli, some forms of diarrhoea and certain allergies.
Boosting the calcium content of cereal and grain products is
also a common way to deliver a functional food. Kellogg’s,
for example, is a leader with All-Bran Plus (also containing
vitamins C and E) and calcium-fortified Nutrigrain bars.
Finally, drinks are a fast-developing functional food. Some
are fortified with the antioxidant vitamins A, C and E,
and others with herbal extracts. They claim to help
overcome problems ranging from PMS to a lack of energy.
Nutrigenomics takes nutraceuticals one step further. It is
essentially the study of how food might affect gene
expression to combat human disease. For example, extracts
from some orange peels have been found to enhance the
expression of a cancer-preventing gene. This work is
currently in the very early stages.