Sample of My Writing Style. MBA - MIS class final paper. See accompanying slides on my Slideshare account.
The University of
April 10, 2011
The Joys of Access to Any Information, Anywhere, Anytime, by Anyone
Where will Information Technology lead us over the next decade in regard to
the concept of ubiquitous computing? This paper addresses the anticipated
scientific changes as well as their impact on humanity and the world.
Table of Contents
Ubiquitous Computing
August 19th, 2023
Technology Enablers – Just the Science, Please
Wireless Internet
Wireless Electricity
Speech Recognition Software
Constantly Evolving Interfaces
Artificial Intelligence
The Transformational Effects by 2020
Health, Medicine, and Safety Improvements
Convenience and Security Improvements
Other Areas of Growth
In Closing
Take a moment to consider the boldness of this assignment: to describe access to any information, anywhere, anytime, by anyone as it will exist in the next decade or so. To break this down into its logical parts, we start at the top. Clearly, technology is not going to grant everyone omniscience by 2020 or 2030. If it were, this assignment would be better placed in a class on philosophy or religion, or even in science fiction, as we see with the Borg in Star Trek. Determining what exact meaning to ascribe to this assignment therefore requires that we consider the basic constraints of living within the finite and limited confines of space and time. We are constrained by the laws of physics and by the fluctuations of the economy, both powerful influences that humans react to and also shape, as in weaving a large tapestry through time.
The goal of this paper is to highlight the most likely and most interesting changes in technology
over the next decade (and beyond in some cases) and how they could affect us as a civilization. Furthermore,
given that this is an MIS class within the context of an MBA program, the scope of this paper is to address the
incredible growth of computing that is anticipated during the aforementioned timeframe and how we can
promote, interact with, and/or react to those anticipated changes in business and our daily lives. In some cases,
we may merely need to be aware of upcoming potential shifts in technology so that we can ready ourselves and
our organizations for that next wave of change.
However and whenever these changes come about, most of us who have lived the better part of half a century and have seen the dawn of computing in common use will "feel" that the title of this paper is fairly appropriate. In a relative sense, it is very appropriate. Yet, to be fair to the topic, I will define some terms so we can put our arms around the subject without making any preposterous claims.
Ubiquitous Computing
What is ubiquitous computing? Mark Weiser of Xerox PARC coined the term in 1991.¹ As he described it, computer designs and interfaces had, up until that time, aimed to make computers more "dramatic," such that we would be motivated by excitement and interest to use them and feel that we could not do without them. Weiser recommended an approach in which machines would seemingly disappear, their use hidden. Gracefully and unobtrusively, computing would simply be nearly ever-present, melted down and absorbed into the subconscious undercurrent of our everyday life. As we breathe without thinking consciously about it, so would computing shrink and shrink into tiny particles and literally vanish into the woodwork, becoming a part of our everyday environment and landscape. It would be there, but we wouldn't have to think about it. Things would simply work implicitly. Weiser's ideas seemed radical at the time, but he based them on Moore's Law, which is exponential. He reasoned that it would not be long before chips approached a vanishingly small size.
Indeed, we see Weiser's vision coming closer to reality. The silicon transistor can continue to shrink for a while, and other alternatives are on the table that would allow even smaller units of computing power than is possible with silicon transistors. Yet, to bring this vision to fullness, we also need to rethink all of the ways that humans interface with computing and how computers interface with other computers. We need to rethink how all of this is powered today and how that might need to change in order to power tiny pieces of intelligence that could be embedded in the very fabric of our walls, clothes, cars, and even our contact lenses, glasses, eyes, and veins.
Knowing where we are headed is important. Current technologies are being fashioned based on these
visions. How quickly we are able to achieve these goals is up for grabs. However, technology has surprised us
with rapid growth, more rapid than anticipated in many cases. So, we should not doubt the possibility that these
visions are attainable by 2020 or 2030 either fully or in part. This paper helps to identify those aspects of the
overall vision of ubiquitous computing which are likely vs. those which are not by the year 2020 or 2030.
Since having a vision and a goal in mind is helpful, I describe, in the next section, my vision of just a few
minutes of the future. Afterwards, I categorize and explain the anticipated major technology enablers of the
coming decade to fulfill that vision and other visions. I later synthesize them by describing a few anticipated
applications of these technology enablers in business. I close the paper with questions and considerations in
regard to these stunning advances that we will surely see in the next decade.
August 19th, 2023
The following describes a few minutes of my future life. It is the morning of my 60th birthday. The dialogue below is automatically heard by tiny chips embedded in the walls of my house and recorded automatically in my "diary" database, which is linked to my draft autobiography. All verbal commands to my computer are shown in blue font. My computer's name is Sally.
"Sally? PROJECT: Diary." This is a simple voice command to record my voice and save it to the diary project database in both text and verbal form. Voice recognition software is not as fully advanced as we would like it to be in its ability to interpret sentence structure. Similar to the 'graffiti' shorthand that first showed up on the Palm Pilot in the 1990s, it will be easier in 2023 to train the human to use verbal shorthand than to train the computer to dissect human sentences. Consider that the computer's capabilities are similar to those of a dog or a very young child in interpreting human speech.
“Sally? BEGIN RECORDING.”
“Dear Tiny Computers, When the coffee began perking after you awoke me, I forgot that it was my birthday. I appreciate that you
remembered that I like real cream in my coffee as a treat. I enjoyed seeing snippet videos of my children’s birthday wishes on my
kitchen wall. I am sure that you had to remind my son, am I right? Never mind, don’t tell me. My daughter, no doubt, had this on her
mind for weeks. She said she bought me something that she thinks I will really like and she will bring it to the party this evening. I
wonder what it is. My son’s rendition of the former Prince William, now King William of England, was priceless. I should tell him.”
"Sally? PAUSE PROJECT: DIARY. PROJECT: messages. UP: Richard's last message. SEND: reply. VIDEO on. AUDIO on."
“Richard, I know you are at work by now. I just wanted to let you know that was one of the funniest renditions I have ever seen you
do! Thanks for the laughs. See you later today for the dinner party. Are we meeting at Arianna’s apartment first or will you pick me
up since you are passing nearby? Let me know before 4pm, okay? Thanks. Love you!”
“Sally? END. PREVIEW.” Computer plays it back in my glasses. “SEND”
"Sally? SET reminder. TO: Richard. TIME: 4 o'clock pm. DATE: Today. SUBJECT: call Mom about dinner plans."
An ALERT enters my glasses that a traffic jam is ahead.
“Sally? SHOW: traffic map.”
I can both see the road as well as see the map super-imposed in the upper right corner of my glasses. But, every time I go to look in
my rear view mirror, I lose view of it. I so badly want to buy the new contacts that have all of this imbedded in them so that my views
move with my eyes! Engineers are working on ways to imbed the technology behind the lenses, but we’re not there yet.
“Sally? APP: car route. OPTIONS: avoid highways. OPTIONS: Voice Guidance ON.”
With this, the car begins to give me guided directions to my office and avoids all highways. Note: I do not like the computer to
automatically re-route my travels, so that is why I only received an ALERT earlier.
An ALERT enters my glasses that the supply chain at work will likely experience a slowdown in 2 weeks due to the lack of materials
coming from Australia due to their most recent storms.
"Sally? PAUSE: alerts. TIME: 10 minutes."
"Sally? SET music ON. GENRE: Classical. OPTIONS: Composer … Brahms. Volume: medium."
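The verbal shorthand above is deliberately machine-friendly: a wake word followed by KEYWORD: value segments separated by periods. As a rough illustration of why this is so much easier for a 2023-era computer than parsing free-form sentences, here is a minimal sketch of such a command parser (the grammar and command names are my own invention for illustration, not a real product's):

```python
import re

def parse_command(utterance):
    """Parse a 'Sally?'-style shorthand command into (wake_word, [(KEY, value), ...]).

    Assumed grammar (illustrative only): an optional wake word ending in '?',
    then KEYWORD: value segments separated by periods, e.g.
    'Sally? SET music ON. GENRE: Classical. Volume: medium.'
    (A real parser would also need to handle '...' and quoted text.)
    """
    utterance = utterance.strip()
    wake = None
    m = re.match(r"^(\w+)\?\s*", utterance)
    if m:
        wake = m.group(1)
        utterance = utterance[m.end():]
    pairs = []
    for segment in filter(None, (s.strip() for s in utterance.split("."))):
        if ":" in segment:
            key, _, value = segment.partition(":")
            pairs.append((key.strip().upper(), value.strip()))
        else:
            # A segment without a colon is treated as a bare command.
            pairs.append((segment.upper(), ""))
    return wake, pairs

wake, cmds = parse_command("Sally? SET music ON. GENRE: Classical. Volume: medium.")
# cmds: [('SET MUSIC ON', ''), ('GENRE', 'Classical'), ('VOLUME', 'medium')]
```

The point of the sketch is that a fixed keyword grammar reduces speech understanding to a few string operations, whereas parsing arbitrary English sentences remains an open research problem.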
Technology Enablers – Just the Science, Please
In the brief few minutes described above, we saw that almost every aspect of computing must be rethought to realize this vision. This section explains the major categories: transistors, wireless power and wireless networks, voice recognition, and all types of user interfaces. Although tempted to describe the real-world applications of these technology enablers in this section, I realized that most of them require a synthesis of all of the enablers and cannot be fully explained without knowledge of the others. Thus, the real-world applications of these technology enablers are shared in the next major section, entitled "The Transformational Effects by 2020."
Silicon transistors, which power our computers, will continue to evolve rapidly between now and 2020. Moore's Law has been so accurate and so successful because Moore based it on his understanding of quantum theory as it relates to the silicon transistor, and because the economy continues to push for smaller and cheaper transistors. Yet it is predicted that Moore's Law could slow down or crumble around the year 2020: shrinking the silicon transistor will no longer be physically possible after roughly that year (assuming Moore's Law continues at its historical pace), due to the laws of quantum theory. As you reduce the size of silicon-based transistors, quantum forces acting inside them simply become too strong for the unit to work correctly. Small amounts of energy leak and cause short-circuiting at such scales.²
Currently, we use UV light to etch smaller and smaller transistors onto silicon wafers. Even if size did not affect a silicon transistor's capabilities, the smallest transistor we could etch would be a mere 30 atoms wide, because UV light has a wavelength of about 10 nanometers. Moore's Law cannot go on forever using current technologies. Even if we find a different way of etching transistors, the law will collapse when we reach transistors the size of one atom, which, amazingly, will be rather soon. Using a three-year Moore's Law calendar instead of a two-year window, 5-nanometer chips won't reach the market until 2018 or 2019, and the last generation of integrated circuits to use a reduced-size chip would come around 2021.³
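The dates above follow from simple exponential shrinkage: each Moore's Law generation roughly doubles transistor count, so the linear feature size shrinks by about a factor of 0.7 per generation. A back-of-the-envelope sketch (the starting point and cadence are parameters, chosen here for illustration):

```python
def feature_size(start_nm, start_year, target_year, cadence_years):
    """Project transistor feature size under Moore's Law.

    Assumes transistor density doubles once per cadence period, so the
    linear dimension shrinks by sqrt(1/2) ~ 0.707x per generation.
    """
    generations = (target_year - start_year) / cadence_years
    return start_nm * (0.5 ** 0.5) ** generations

# Illustrative: starting from 22 nm in 2011 on a three-year cadence,
# two generations (2017) halve the feature size to ~11 nm.
size_2017 = feature_size(22, 2011, 2017, 3)
```

Playing with the cadence parameter shows why a three-year calendar pushes the 5-nanometer mark out toward the end of the decade, as the text describes.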
Recently, Intel built a new $9 billion plant to manufacture 22 nm wafers under the codename "Ivy Bridge"; these will be commercially available in late 2011 or early 2012.⁴ Yet, even sooner, by 2014, chips smaller than 22 nanometers will be extremely expensive to manufacture, since new chip plants cost billions of dollars to build.⁵ Still, Intel is poised to move forward with plans to manufacture a 15 nm wafer in 2013. Yet Intel knows that the implications for an economy accustomed to technology-driven growth could be large if alternatives to the silicon-based integrated circuit are not found.
Scientists have been studying nanotechnology for decades and now know that it could be a potential solution to the problems with silicon. "Nanotechnology is the engineering of functional systems at the molecular scale."⁶ The National Nanotechnology Initiative invested nearly $2 billion in research in 2009. According to Dr. M.C. Roco of the National Science Foundation and National Nanotechnology Initiative:
"Nanotechnology can profoundly affect the ways we live, how healthy we are, what we produce, how we interact and communicate with others, how we produce and utilize new forms of energy, and how we maintain our environment.⁷ We estimated a $1 trillion nanotechnology-related market of nanoproducts incorporating nanotechnology … and the demand for 2 million workers worldwide by 2015 …"⁸ "Nanotechnology developments will allow increasing the power of computers by about 100,000 times and building billion-sensor networks by 2020."⁹
Using diamond instead of silicon is one form of nanotechnology that could extend Moore's Law. Synthetic diamonds produced from carbon have replaced silicon in tests; in 2009, a diamond-based transistor only 50 nanometers long was created in the UK. Diamond can withstand much greater heat than silicon, so if synthetic diamond could be mass-produced from cheap carbon, we could potentially extend the lifecycle of transistors.¹⁰
My brother worked at a start-up company for several years and engineered new processes to speed up the conversion of carbon to synthetic diamond. He was successful in the first phase of the project. However, he was unable to find ways to speed up the process further and left in 2009. Industry would require mass production of synthetic diamond chips in order to effectively replace silicon. To date, only a small handful of groups in the world are attempting to mass-produce synthetic diamonds, and nobody has accomplished the goal yet.
An article published in Science Daily on April 5, 2011, reports progress in the development of self-cooling transistors made with graphene (a carbon-based material), which could allow for smaller transistors due both to the material's hardness and to these self-cooling properties.¹¹ In 2008, a student invented a gallium nitride (GaN) transistor which might also be promising.¹²
Nanotechnology applications in real world business have, so far, largely been of the “passive” nature in
the tracking and management of individual items using RFID. However, we will see a lot more activity in the
realm of active nanotechnology over the next decade. Wireless sensors represent the next stage. These wireless
sensors can be passive, active, or hybrid RFID tags which are coupled with motion sensors, radiation sensors,
temperature loggers, and anything else people need to monitor. Alternatively, wireless sensors can be
extremely small computers that run their own operating systems, include their own sensors, and communicate
with each other. These could also be self-recharging active RFID tags which pull energy from mechanical stress
and/or kinetic energy using piezoelectric materials.¹³
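Whether a harvested trickle of energy can sustain such a self-recharging sensor comes down to a duty-cycle energy budget: the node sleeps most of the time and wakes briefly to sense and transmit. A hedged sketch of that arithmetic, with all power figures invented for illustration:

```python
def max_duty_cycle(harvest_uw, active_uw, sleep_uw):
    """Largest fraction of time a sensor node can be active if its average
    draw must not exceed harvested power:

        harvest >= d * active + (1 - d) * sleep
        =>  d <= (harvest - sleep) / (active - sleep)

    All arguments in microwatts.
    """
    return (harvest_uw - sleep_uw) / (active_uw - sleep_uw)

# Illustrative numbers only: 60 uW harvested, 5000 uW while the radio is
# active, 10 uW asleep -- the node can be awake roughly 1% of the time.
d = max_duty_cycle(60, 5000, 10)
```

The sketch shows why wireless sensor designs lean so heavily on aggressive sleep modes: even a generous harvest supports only brief bursts of activity.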
Wireless Internet
Satellite technology coupled with broadband wireless controllers already allows anyone, anywhere, at any time to access the internet, provided they have the needed software and hardware on their lap or in their hands and are near the proper 3G or 4G transmitters. However, these technologies are rather expensive for some people in the world, although prices are dropping all the time. The killer app of today is video, according to Chris Anderson, the curator of TED.¹⁴ Yet video is eating up bandwidth at alarming rates, driven by the online demand for movies and video via Netflix and YouTube. Huge expenditures on infrastructure upgrades will continue to be necessary to keep up with the pace of demand. According to the Nemertes Research Group, infrastructure improvements have only been linear (not exponential like Moore's Law). Additionally, a real issue exists with IP address exhaustion; even IPv6, the anticipated successor to IPv4, will not be enough on its own. Nemertes claims that a slowdown could be experienced by the average user beginning in 2012 unless alternative solutions are developed in short order.¹⁵
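The scale of the address-exhaustion problem is easy to quantify: IPv4 addresses are 32 bits wide, while IPv6 addresses are 128 bits wide.

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_total = 2 ** 32          # about 4.3 billion addresses
ipv6_total = 2 ** 128         # about 3.4 x 10^38 addresses
expansion = ipv6_total // ipv4_total  # 2^96: the IPv6 space is ~7.9e28 times larger
```

In other words, IPv4's roughly four billion addresses are already stretched thin by a world of phones, sensors, and embedded chips, whereas IPv6's space is effectively inexhaustible; the bottleneck the text describes is the pace of adoption, not the size of the new space.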
In recent years, the FCC has had a goal to deliver broadband internet by passing signals through existing power lines, termed Broadband over Power Lines, or BPL. Ubiquitous internet access could therefore be made available to everyone along existing electric lines. Some technical costs would be involved in leapfrogging the internet signals over transformers on the electric grid (using devices called 'repeaters') and passing the internet down into homes and commercial buildings. In addition, it might be technically necessary to mount Wi-Fi transmitters just before the housing transformers in order to avoid bandwidth issues; in the United States, transformers tend to be numerous, with one per household or one per group of adjacent homes. These technical hurdles do not seem insurmountable. The largest obstacle to date, however, has been that repeaters interfere with existing technologies such as amateur (ham) radios and US military communication systems. Currently, there are three areas in the USA where BPL is actively being used. Feedback about technical issues and interference is ongoing, and ham radio operators are pushing back, though some technical solutions are on the horizon.
At an international level, the IEEE has been involved for several years in setting standards for BPL worldwide; the standards were published in final form in September 2010 and approved in February 2011. Meanwhile, cable companies offering internet services are rushing to extend their infrastructure into rural communities in order to remain competitive.¹⁶
Wireless Electricity
There are two methods of wireless power transmission: (1) direct induction and its extension, resonant magnetic induction, otherwise known as Resonant Inductive Coupling (RIC), and (2) electromagnetic radiation via microwave or laser. Resonant Inductive Coupling is the science behind short-range power transfer from a transceiver to a passive RFID tag. A renewed interest in wireless electricity has been prompted by the widespread use of the internet.
The Earth has a natural magnetic pulse, first described by Nikola Tesla in the late 1800s. Tesla believed he could harness this to create long-range wireless electricity (and wireless communications) using his principle of resonant inductive coupling. In the early 1900s, his research on this topic was funded by J.P. Morgan, but the project became too expensive to continue and was abandoned. More recently, companies such as WiTricity and Intel have built small, working models of short-range wireless electricity based largely on Tesla's principles.
In 2007, Prof. Marin Soljačić of MIT showed how he could wirelessly light a 60-watt bulb at short range using Tesla's principles. (Of interest is that a light bulb requires far more energy than an RFID tag to operate.) Professor Soljačić went on to found a start-up called WiTricity (from the words "wireless electricity"). "The goal now is to shrink the size of these things, go over larger distances and improve the efficiencies," according to Professor Soljačić.¹⁷
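The core of resonant inductive coupling is that the transmitting and receiving coils are tuned to the same resonant frequency, which for an LC circuit is f = 1/(2π√(LC)). A quick sketch of that textbook formula, with component values chosen purely for illustration:

```python
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C)).

    Two coils tuned to the same f exchange energy efficiently even with
    weak magnetic coupling -- the principle behind RIC.
    """
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: a 25 microhenry coil with a 100 picofarad
# capacitor resonates at roughly 3.2 MHz.
f = resonant_frequency_hz(25e-6, 100e-12)
```

Matching the two resonant frequencies is what lets the receiver draw useful power at a distance, while off-resonance objects nearby absorb very little.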
Intel demonstrated the use of "Wireless Ambient Radio Power" during the WARP project. WARP works by drawing low levels of power from the air using an RF (radio frequency) energy harvester. In a YouTube video,¹⁸ an Intel researcher demonstrates this by powering a calculator and a smartphone-sized weather station with two temperature sensors and a humidity sensor. The RF signals came from a TV transmission tower about 4 kilometers away while the TV station was broadcasting normally. Intel has also demonstrated WREL, or Wireless Resonant Energy Link, an extension of Tesla's principles of passing electricity wirelessly via resonant coupling.¹⁹ For now, this could replace the power pad that people use to recharge their cell phones by allowing them to recharge a phone left anywhere in the room within range.
Today's attempts at long-range wireless electricity involve the use of microwaves or light (laser). NASA has been successful in using infrared laser beams to re-power flying aircraft without requiring them to land.²⁰ This sort of power beaming is not a new concept, but the capability to beam laser light for long distances without significant diffusion is still under development; recent advances have come from clustered tunable lasers at Lawrence Livermore National Laboratory. One huge downside of power beaming is that a clear line of sight is required.
Many researchers are seeking renewable energy sources from which to pass energy wirelessly; these would power the tiny computers of the future. In 2009, researchers at the University of Washington in Seattle were able to power a small gadget, only 130 nanometers in size, exclusively with power drawn from a maple tree.²¹ Using solar power as a renewable source passed on wirelessly via laser light is another goal in the research community.
Speech Recognition Software
Historically, advances in speech recognition functionality have been slow. This will still be the case by
2020. However, applications will use existing functionality with some enhancements and use it far more
frequently. Speech recognition will be coupled with other software functions making it invaluable. For example,
we will use voice recognition software to:
send instant voice messages
record (and listen to) voice annotations/comments on digital text
keep real-time transcripts during conversations
instruct and answer computers in a hands-free environment, e.g. while driving
eventually, interact more regularly with computers using the Linguistic User Interface (LUI), as is
done with the iPhone's Personal Assistant app (by Siri)²²
As one computer is transcribing your voice in real time, another will index your words. Another
application will, optionally, search the internet to find out if anyone else is discussing this topic and ask you if
you would like to collaborate with them after you have read some of their transcript. There are many other
possible extensions and uses of speech recognition software.²³
Concerning the translation of languages, termed Machine Translation (MT), there will be some progress
made. Recent advances, as seen on Google translate and other similar software apps, will continue steadily and
be significantly more accurate by 2020. More languages will be added over the years to facilitate translation
allowing the Internet to break down cultural barriers further.
Constantly Evolving Interfaces
The number and variety of ways to interact with the wide, wide world of the internet today are astounding and continue to grow: iPads, smartphones, the iPod Touch, tablet PCs, laptops, desktop computers, and many other devices. In the future, we will continue to see the use of hand-held internet devices, but costs will drop such that nearly every human will own one or have access to one.
This is a listing of what is likely to be commercially available to the general public throughout the next decade:
Displays that can be projected
Touch, multi-touch (more than one finger), and multi-user touch (more than one person on the
screen) visual interfaces will become commonplace. Although these are mainly visual windows,
some also include tactile and aural interfaces.
Internet-enabled glasses and contacts will allow people to be able to see “real” life while internet data
and pictures augment their vision. Ubiquitous Augmented Reality will be possible with either of these
technology enablers to further augment what is seen. For example, a doctor would be able to see a
superimposed x-ray of the patient on top of actually viewing the patient.
Internet-enabled walls will basically act as large, brilliant flat-screen monitors offering touch-screen capabilities. Displays that can be projected onto any surface will be worn and coupled with finger-pad "air" control so that the user can easily manipulate what is seen without necessarily touching the surface holding the projection. This same arrangement can be combined with other hardware and software that is "worn," such as a webcam, so that the user can remove the items at will.
There is a movement which aims to limit ubiquitous access (in the sense of small, hidden computers everywhere) and, instead, allow people to choose when they are wired to the internet. This movement has gathered momentum in recent years, and the latter interfaces described above fall into this category of wearable computing.
Within 10 years, Intel believes that "programmable matter," or "catoms," will be possible in the lab. These are sand-sized, highly intelligent computers which stick together like Play-Doh and are easily manipulated by the human hand. Computerworld termed the resultant glob of catoms "shape-shifting robots" whose parts stick together due to magnetism. We will be able to form them into any shape we like and use them in any way that makes sense. For example, if you need a cell phone, simply use your hands to form the shape of a cell phone from your glob of catoms, then verbally command the resultant shape to tell it what you want it to do. If you want a larger keyboard, just re-shape it. If you are done talking and want to stick your "cell phone" in your pocket, flatten it to fit and put it away. Later, you could use the same catoms to create a hand-held calculator or a child's toy. The most difficult technical part of this endeavor, according to Intel, is teaching the catoms to behave as a "swarm." Commercially available versions of this enabling technology will not exist by 2020, but it is key to know where the technology is headed. It could be commercially available by 2030, and Intel feels it will be "everyday technology" sometime over the next 40 years.²⁴
Artificial Intelligence
The field of Artificial Intelligence (AI) has made strides despite many people's skepticism about it. Tasks that were formerly impossible or difficult for computers are now commonplace, such as spelling and grammar checking in word processors and speech recognition when you call a customer service hotline. Pixar films, robotics, and many games are a credit to Artificial Intelligence as well. However, because these features are commonplace, they are taken for granted, and the field of AI does not receive public credit. The original goal of artificial intelligence was, and is, lofty: to match or exceed human intelligence. This has remained largely elusive due to a computer's inability to generalize and its basic lack of common sense. AI's manifestations still largely do exactly what they are told (or programmed) to do and no more. Learning to learn and thinking outside the box are still human traits that are not easily matched.²⁵
As an undergraduate, I signed up for a senior-level Artificial Intelligence class during my sophomore year in 1984. Somehow I was admitted despite my standing (as a second-semester sophomore) and despite a long line to get into the class. In that class, we had a new, zany teacher brought in from California who taught in a very loose fashion. He chose a draft-edition textbook from MIT: we were to be the guinea pigs for Patrick Winston's 2nd edition of 'Artificial Intelligence'. Winston is well known in the field of AI and was the director of the MIT Artificial Intelligence Lab from 1972 to 1997. We also learned to program in LISP in that class. The teacher made us feel like we were at the edge of some sort of huge wildfire that was ready to ignite into something much larger. Although AI is of huge importance, its growth has been more of a slow burn than a wildfire compared to progress elsewhere in IT.
LISP is still used today as the primary language in Artificial Intelligence due to its recursive properties.
And, from what I can gather, Artificial Intelligence has not changed substantially since that time in its core,
mathematical foundation. What has changed is the processing power of computers and, therefore, the
capability to use some of the originally envisioned aspects of AI. However, I can still remember us studying
heuristics, a special way of taking “most likely” routes in decision trees, in order to program chess moves. At that
time, a computer could not outwit a human chess player very well. Also, it took a long time (a day or a week)
to respond with its next move due to the lack of computing power for the task. Today, a computerized chess
player can give nearly instant answers and almost always outwit its human opponent unless the opponent is a
world-level champion. At those levels, the score can go either way, i.e. it’s almost equal.
So, it's not that the languages or processes have changed all that much in AI; it's that computing power has finally reached a point at which Artificial Intelligence can begin to make some strides. Unfortunately, AI was over-hyped twice and was unable to deliver both times despite massive amounts of government funding. Now AI is beginning to gain momentum again, and this time researchers say that it is poised to deliver.
Based on the successful applications of AI so far, I would mostly agree. However, I think that through 2020, AI will continue to serve mostly niche needs. After that, I believe the focus will shift away from shrinking transistors and increasing raw speed (since we will likely have exhausted that option) and toward continuing Moore's Law by using Artificial Intelligence to do more than serve niche areas.
Currently, in the world of computer science, a heuristic is the closest thing to "common sense." Answers derived from heuristics are not necessarily accurate, but this problem-solving method mimics human decision making closely. Heuristics are used when it is impossible to generate every single possibility in order to find the optimal choice. They are used in search engines, for quick estimates, and for evaluating fault designs in engineering; anti-virus software also uses heuristic signatures. Because heuristics are not fail-safe, they are used only as an aid to humans.
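Taking "most likely" routes through a decision tree can be illustrated with greedy best-first search: at each step the algorithm expands whichever frontier node a heuristic scores best, rather than exhaustively exploring every branch. A toy sketch on a made-up graph (both the graph and the heuristic values are invented for illustration):

```python
import heapq

def greedy_best_first(graph, heuristic, start, goal):
    """Expand the frontier node with the lowest heuristic estimate first.

    Fast, because unpromising branches are never explored -- but, like all
    heuristics, not guaranteed to find the optimal path.
    """
    frontier = [(heuristic[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (heuristic[nbr], nbr, path + [nbr]))
    return None

# Toy search space: lower heuristic value = "looks closer to the goal."
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 1, "E": 0}
path = greedy_best_first(graph, heuristic, "A", "E")  # ["A", "C", "D", "E"]
```

A chess program works on the same principle at vastly larger scale: an evaluation function scores board positions, and the search follows the most promising lines instead of enumerating every legal continuation.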
Intelligent Agents are definitely a good return-on-investment. These AI agents are autonomous,
proactive software tools which use heuristics. They can be extremely helpful to companies who need to find
innovative ways to compete. For example, these agents help pharmaceutical companies mine data to find
trends. For eCommerce, Intelligent Agents use heuristics in Expert Systems to acquire, synthesize, and mine
competitive data from the web. Another use of computer heuristics is to beat human chess masters. Yet the
same program that beats a human at chess can do nothing else useful and has no knowledge of poker.²⁶
Machine Translation (MT), mentioned earlier in relation to language processing, is a form of AI as well. Its first large-scale use was in car manufacturing, translating manufacturing instructions into other languages.²⁷
Almost every field, person, and thing will be affected by Artificial Intelligence over the next decade.
However, the following fields especially will see continued use and improvements: language processing,
robotics, stock trading, medical diagnostics, law, scientific discovery, music and toys.
Still, something bothers me about AI that could lead to an AI bubble. At the lowest level, bits hold only 1’s and 0’s; they represent nothing more than ON and OFF. Neurons can fire or not, but they can also work in other ways. Via qEEG, neuroscientists have captured the essence of the brain’s multi-processing, which is termed “coherence”. To mimic it, a computerized shared sensor network would require intense parallel processing toward a common goal. There are further limitations still. If we remove parts of a computer, it stops working. If we remove part of a human brain, even an entire hemisphere of it as pioneer Dr. Ben Carson did, the brain amazingly re-wires to reproduce the needed functionality elsewhere. The brain is incredibly plastic. So, for computers, how do we approach such things as rewiring when portions of the machine disappear so that needed functionality is still attainable? What about abstract human thought such as symbolism, creativity, and the interpretation of sarcasm? These still seem a far reach for such base-level logical constructs as 1’s, 0’s, ANDs, ORs, NANDs, and NORs.
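As a small aside on those base-level constructs, the NAND gate alone is "universal": NOT, AND, and OR can all be composed from it. This toy sketch (my own illustration) shows how far even one simple gate can be stretched, which is part of what makes the gap to abstract thought so striking.

```python
# NAND is a universal gate: every other Boolean gate can be built from it.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    # NOT(a) = NAND(a, a)
    return nand(a, a)

def and_(a, b):
    # AND(a, b) = NOT(NAND(a, b))
    return not_(nand(a, b))

def or_(a, b):
    # OR(a, b) = NAND(NOT(a), NOT(b))  (De Morgan's law)
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND:", and_(a, b), "OR:", or_(a, b))
```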
I believe that there will need to be some fundamental changes in the basic architecture of computer
circuitry such that it is more similar to the way human brain cells operate. We may need to move to a
chemically-based DNA-style chip and compose them in a 3-D fashion. Then, we may need to encode special
functions at some level which will allow the system to re-network under almost any condition and prune those
areas where less activity is occurring. We also need to encode an “automatic restructure” feature based on a
lattice-chip network’s “learnings”, not just on where less activity is occurring. Just as neurons are pruned and shed in the human brain while others are given more priority (based on repeated usage of well-worn areas and positive reinforcement for such), I imagine that AI would succeed by doing the same. Also, marrying a bottom-up
approach (as seen in computer neural networks) with a top-down approach (heuristics) in a computing network
could possibly give a network of chips the ability to truly “learn to learn” no matter if incoming information
comes from diverse sources or not. This could more closely match what the human brain does. If we can achieve
these things, I believe that progress in the field of AI will become sharply exponential. However, I have to add a
disclaimer that this is all sheer conjecture on my part.
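To continue the conjecture one step further, the pruning idea above could be sketched in software as dropping a network's weakest connections while keeping the well-worn ones. This is purely illustrative code of my own, not any real AI system.

```python
# Toy "synaptic pruning": zero out the weakest connections in a weight
# matrix, keeping only the strongest fraction -- loosely analogous to
# the brain shedding little-used neurons while reinforcing busy ones.

def prune(weights, keep_fraction=0.5):
    """Keep the strongest keep_fraction of connections; zero the rest."""
    flat = sorted((abs(w) for row in weights for w in row), reverse=True)
    keep = max(1, int(len(flat) * keep_fraction))
    threshold = flat[keep - 1]
    return [[w if abs(w) >= threshold else 0.0 for w in row]
            for row in weights]

w = [[0.9, -0.1], [0.05, -0.7]]
print(prune(w))  # -> [[0.9, 0.0], [0.0, -0.7]]
```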
The Transformational Effects by 2020
Health, Medicine, and Safety Improvements
The smart pill was developed in 1992 by Jerome Schentag of the University at Buffalo. The information from the smart pill is tracked electronically, and the pill can be instructed to deliver medicines to the proper location. In 1997, the Affymetrix company released the first commercial DNA chip, which could rapidly analyze 50,000 DNA sequences. Today, prices have dropped to $200 for far more powerful chips capable of analyzing more than 800,000 DNA sequences, and Moore’s Law continues to push prices down toward a few dollars. With as little as a teaspoon of blood, doctors at Massachusetts General Hospital have created their own custom-made biochip that is 100 times more powerful than anything else on the market. Finding circulating tumor cells is normally like finding a needle in a haystack, yet their new biochip is sensitive enough to detect lung, prostate, pancreatic, breast, and colorectal cancer cells from that single teaspoon of blood. Clinical trials show that it is over 99% accurate in diagnosing patients with cancer. By 2020, the cost of diagnosing cancer will drop from hundreds of dollars to pennies, and the time required will shrink from weeks to minutes.
Early warning systems could be developed by 2020 for home diagnosis. Consumers could purchase the
equivalent of today’s glucose monitors, prick their finger, and wait a few minutes while the handheld monitor
analyzes the blood. The data could be sent automatically via the internet to a medical data diagnostics cloud
facility where their personal readings are analyzed and compared to national norms. A report would
automatically be sent to their smart phone and/or email and to their doctor’s patient self-reporting database.
The patient would be given feedback no matter what, but the doctor would only be notified if there was an
anomalous finding. Insurance companies would automatically be billed each time the patient performs a home
self-test and the CDC would automatically be notified if the patient had a social disease that warranted their
attention. These CDC databases would be tied into early warning systems at the World Health Organization
which would automatically track and help prevent pandemics. (Note: there is no footnote here because this paragraph is my own vision.)
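A toy sketch of that vision follows. All field names, norms, and thresholds are my own invented examples, not medical values: readings are compared against norms, and the doctor is flagged only when an anomaly appears.

```python
# Hypothetical home-diagnostics flow: compare a patient's readings to
# "national norm" ranges; always report back to the patient, but only
# notify the doctor if something falls outside the normal range.
# All numbers below are illustrative, NOT real medical reference ranges.

NORMS = {
    "glucose_mg_dl": (70, 140),
    "cholesterol_mg_dl": (0, 200),
}

def analyze(readings):
    """Return a report dict; flag the doctor only on anomalous values."""
    anomalies = {k: v for k, v in readings.items()
                 if k in NORMS and not (NORMS[k][0] <= v <= NORMS[k][1])}
    return {
        "readings": readings,
        "anomalies": anomalies,
        "notify_doctor": bool(anomalies),
    }

report = analyze({"glucose_mg_dl": 160, "cholesterol_mg_dl": 180})
print(report["notify_doctor"])  # -> True
```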
Nanotechnology has the potential to create molecular hunters which would search-and-destroy cancer cells.
“Nanocars” are human-driven computer chips that exist today in labs. A human looks into a special microscope and pushes a button; the button sends a magnetic pulse to slightly magnetic bacteria, which in turn propel a tiny computer chip the size of the dot on the letter ‘i’. In the not-so-distant future, a doctor could push a similar button to direct a microscopic robot through the veins of a patient to dissolve plaque, perform nanosurgery, or deliver medicines directly onto cancer cells. During a check-up at the doctor’s office, a doctor could ask the patient to lie down while scanning the body with these nano rovers. In a similar way, the data could be automatically compared to national norms, and even to the patient’s own norms over a lifetime, helping the doctor determine whether a real issue exists.
DNA genome “checkers” will become cheap and commonplace. If you want to know your body’s inherent weaknesses, you could run a sample of your DNA through such a device and devise a plan to delay or prevent disease onset.
There are thousands of anticipated improvements in health and medicine due to these technologies, which will allow nearly ubiquitous acquisition and storage of our personal health information. Personal data could still be reasonably secured for privacy reasons if the law continues to protect the patient in the US. However, the use of such technologies in other countries may fall into the hands of those who are less than scrupulous and cause harm. The above is only a small sample of what is currently being researched.
Convenience and Security Improvements
WiTricity and Intel both feel that their developments with magnetic resonance will eventually give rise to
wireless electrical power that fills a room or a home. This would remove the need to be tied to a desk or
powerpad to recharge batteries. In the very short term, they feel that commercial products could include a small
receiver that would continuously power a backup battery for home alarm clocks, CMOS functions/batteries in
computers (which allow the computer to retain the time), and other small adjunct electrical needs. When the power goes out, we would never again need to reset the time on any appliance or electronic gadget that had such a receiver.
I envision that food shopping could be fully or partially automated using RFIDs. When you throw a container
in the trash, the trash bin could sense the RFID tag and ask you if you want to add this item to your shopping list.
If so, a request for the quantity and due date would be made using speech recognition software. Additionally, you could set an option where the trash bin would notify you if a disposed item could have gone in the recycle bin.
Recycling plants involve a lot of human intervention to determine if received items are truly recyclable or
not. RFID tags on packages will remove nearly all of these concerns. However, RFID tags themselves will need to
be recyclable or easily removable if they are glued to the package. This is a major concern for our environment
that goes well beyond the concerns of cost or convenience, and research is being done to find ways to overcome it. If the research is successful, then it’s conceivable that all trash (including recyclables) would
remain together and trash sorting machines would sort the recyclables from the non-recyclables. Short of that, it
might require consumers to manually remove RFID tags prior to placing items in their home or office recycle bins. Sensors placed in the consumer’s local recycle bin would audibly inform them whether the item was truly recyclable, and then remind them to remove the RFID tag.
Wireless sensors represent the next stage beyond RFID, using MEMS (Micro-Electro-Mechanical Systems).
These can be passive or active RFID tags that are functionally packaged with temperature loggers, motion
sensors, radiation sensors and so on. When the technology is cheap enough, asset tag management in most
organizations will become nearly automatic and corporate internal theft could be reduced or eliminated. The
U.S. military is funding research into simple RFID sensors that could detect pathogens in food. These could be
used to protect the public against food-borne illnesses or even deliberate acts of terrorism.29
As stated earlier, wireless sensors can also be composed of tiny computers that run their own operating
system, have onboard sensors and communicate data to one another. The NASA Jet Propulsion Laboratory in
Pasadena, Calif., is working on a new generation of wireless sensor networks. In early pilots, these have been
used to measure soil and air temperatures, humidity and light in the Mac Alpine Hills region of Antarctica, to
gauge the movement of water across a water recharge basin just west of Tucson, Ariz., and to automatically turn
on sprinklers in dry areas of the Huntington Botanical Gardens in San Marino, Calif. NASA’s interest in sensors is
for monitoring other planets such as Mars. However, NASA also believes that Sensor Web technology could
significantly assist the U.S. government with national security by reacting to activity in monitored areas.30
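A rule like the Huntington Gardens sprinkler pilot could be sketched as follows; the zone names, readings, and threshold are illustrative, not NASA's actual system.

```python
# Hypothetical sensor-web rule: each node reports soil moisture for its
# zone, and sprinklers are turned on only in zones that read "dry".

DRY_THRESHOLD = 0.20  # fraction of saturation; an assumed value

def zones_to_water(readings):
    """readings: {zone_name: soil_moisture_fraction} -> sorted dry zones."""
    return sorted(zone for zone, moisture in readings.items()
                  if moisture < DRY_THRESHOLD)

readings = {"rose_garden": 0.12, "lawn": 0.35, "desert_bed": 0.08}
print(zones_to_water(readings))  # -> ['desert_bed', 'rose_garden']
```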
Other Areas of Growth
The business areas discussed in the section entitled “Artificial Intelligence” will continue to improve and
gain momentum. Robotics, in particular, could move into common use in Japan, where an aging population has a limited youth population to serve its needs. As successes occur, humans will invent new ways of
combining and restructuring those ideas in ways that we can hardly imagine.
The future looks extremely interesting. Yet, there are many questions and concerns that need to be
seriously considered before we, as a civilization, buy into ubiquitous computing such that we have access to any information, anywhere, anytime, by anyone. Safety, privacy, and personal rights obviously must be protected. I cannot easily imagine that most people would agree to having a “Big Brother” sort of
environment around. Then, again, another generation of children who are raised in a nearly ubiquitous
environment could decide that the loss of their privacy is worth the benefits gained. Perhaps “Big Brother” will
be the only police officer in town and some will simply have to accept it due to lack of viable alternatives.
Plenty of business data will simply remain private due to basic competition, so access to “any” data over the next decade is unrealistic. However, businesses will be tightly tied to the services they supply to each of their customers in an attempt to ensure repeat business. How much information businesses are willing to share will
still boil down to the basic cultural need to be an individual in this day and age. Collectivism will not take over
business in the coming decade, in my view.
Loss of free will and loss of autonomy must be avoided. People need to be in charge of their own lives so
that they don’t allow the “bots”, the government or any other potential oppressor to rule. In my earlier example
in 2023, I simply chose to switch off all alerts and listen to classical music. Humans will still want to maintain a
good degree of free will and autonomy, so I believe that any future app hoping for popularity will need to respect that need, or people will vote with their wallets and the app simply will not be purchased.
What about information overload? It would be fantastic if all of my personal information was organized
and prioritized so that it could all be truly useful. I firmly believe that a personal secretary robot or application
should be built to unify all of the reminders and priorities that come from diverse sources such as email, phone, text messaging, calendar reminders, fax, the US mail, news, and the web. Whoever writes that
Intelligent Agent will make a fortune.
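A bare-bones sketch of such an agent follows; the sources, priority numbers, and fields are hypothetical. It simply merges reminders from several inboxes into one list ordered by urgency.

```python
# Hypothetical "personal secretary" agent: merge reminders from diverse
# sources (email, calendar, etc.) into a single list sorted by priority
# (lower number = more urgent) and then by due time.

from datetime import datetime

def unify(inboxes):
    """inboxes: {source: [(priority, due, text), ...]} -> one sorted list."""
    items = [(priority, due, source, text)
             for source, entries in inboxes.items()
             for priority, due, text in entries]
    return sorted(items)

inboxes = {
    "email": [(2, datetime(2011, 4, 12, 9, 0), "Reply to professor")],
    "calendar": [(1, datetime(2011, 4, 11, 8, 0), "MIS final paper due")],
}
for _, _, source, text in unify(inboxes):
    print(f"[{source}] {text}")
```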
Finally, the early portion of this paper dealt with Moore’s Law. I believe that Moore’s Law will hold up
beyond 2020, but not necessarily in regard to transistors. I believe another driver will emerge, possibly artificial
intelligence. It makes sense that, after a physical foundation is built, software would be the next wave
of advancement to layer on top of it. Highly sophisticated artificial intelligence, powered by intense physical
network computing, will, at minimum, continue to help companies to achieve and maintain a strategic edge.
Possibly, though, artificial intelligence will be the next high growth area which carries the torch for Moore’s Law.
Weiser, Mark. "Ubiquitous Computing". www.ubiq.com 16 August 1993. 1 April 2011
Kaku, Michio. Visions: How Science Will Revolutionize the 21st Century. New York, NY: Anchor, 1998. 14-15. Print.
Kanellos, Michael. "Intel scientists find wall for Moore's Law". www.CNET.com 1 December 2003. 1 April 2011
Protalinski, Emil. "Intel investing $9 billion in 22nm manufacturing process". www.techspot.com 19 January
2011. 5 April 2011 <http://www.techspot.com/news/42049-intel-investing-9-billion-in-22nm-manufacturing-
Crothers, Brooke. "IBM Looks To DNA To Sustain Moore's Law". CNET.com 17 August 2009. 6 April 2011
Wikipedia. "Nanotechnology". http://en.wikipedia.org/wiki/Nanotechnology
p. 1, http://www.nsf.gov/crssprgm/nano/reports/nano2/chapter00-1a.pdf
p. xii, http://www.nsf.gov/crssprgm/nano/reports/nano2/chapter00-1a.pdf
University of Illinois at Urbana-Champaign. "Self-cooling observed in graphene electronics." ScienceDaily
5 Apr. 2011. Web. 8 Apr. 2011. <http://www.sciencedaily.com/releases/2011/04/110403141333.htm>
Rensselaer Polytechnic Institute. "Alternative To Silicon Chip Invented By Student." ScienceDaily 13 May 2008.
Web. 8 Apr. 2011. <http://www.sciencedaily.com/releases/2008/05/080513112341.htm>
Moore, Bert. "RFID: Nanotechnology Watch". Association for Automatic Identification and Mobility 7 July 2010. 8 April 2011
Wikipedia. http://en.wikipedia.org/wiki/Power_line_communication