The document outlines a 5-stage model of the human-technology merge from 1950 to 2050. Stage III (Extracting from 2010-2025) sees machine learning used to extract information from big data. Nine areas are highlighted as important developments in this stage, including ubiquitous chatbots, conversational assistants, next-gen virtual assistants, ambient AI, next wave wearables, hearables, intelligent layers across video/images, mixed reality, and the VR web. Stage IV is anticipated to use deep learning for anticipating needs, while Stage V involves a complete merge of humanity and technology through advances like artificial general intelligence.
1. Gemma Vallet
Innovation Director at PHD Media
@GemmaVallet
Óscar Dorda
CEO at PHD Media
@odorda
“The merge era: the gap between
technology and us”
#CongresoDEC
"It is not the case that we will experience 100 years of progress in the 21st century; rather, we will witness on the order of twenty thousand years of progress."
– Ray Kurzweil, Director of Engineering at Google
In the grand scheme of
things, we do have many
risks and challenges, but
nothing else matters more
to the human race than this
merger. This is going to
define how we evolve.
– Bryan Johnson
American entrepreneur and venture capitalist
February 2017
STAGE II: Organising. 1990-2015.
The organisation of information through search engines, browsers, operating systems and apps makes it possible to get to what we want.
STAGE III: Extracting. 2010-2025.
The introduction of machine learning leads to dramatic improvements in the extraction of information – from advanced operating systems and the semantic web through to virtual assistants.
YOU ARE HERE: The Nine Fundamental Developments To Come
STAGE I: Surfacing. 1950-1995
STAGE II: Organising. 1990-2015
STAGE III: Extracting. 2010-2025
0.5% of Data is Used
A study by the International Data Corporation (IDC) found that, of the 2.8 trillion gigabytes of data that had been created by 2012, only 0.5 percent was actually being used.
Big Data is like teenage
sex: everyone talks about
it, nobody really knows
how to do it, everyone
thinks everyone else is
doing it, so everyone
claims they are doing it.
– Dan Ariely, Behavioral Economist
Big data was actually the food for:
Machine Learning (ML)
The Nine Areas to Watch: as we see out the remaining years of Extracting together
1. Research from Oracle found that 80% of business leaders planned to implement chatbots by 2020. Microsoft’s Dave Coplin also thinks they’ll be important: “Where we’re heading in the very short term is, if you don't have a chatbot, then you're not open for business.”
1. BOT-LIFE¹
Ubiquitous cognitive layer across website, app and out-of-home experiences | Linked to payment | NLU (Natural Language Understanding)
1. Facebook’s VPA (called ‘M’) can make recommendations based on conversations that are happening between two or more people in Messenger. So,
if someone types “Where are you?” then the assistant might ask if you’d like to share your current location with them.
2. MESSENGER CONCIERGE
Conversation augmentation¹ | Diary integration | Linked to purchase
2. An existing service that enables companies to create apps within the FB Messenger platform.
3. An existing service that suggests actions based on conversations within the FB Messenger platform.
AN EXAMPLE: FACEBOOK – A COMING TOGETHER: CHAT EXTENSIONS² + M SUGGESTIONS³
1. Apple announced at its Worldwide Developer Conference in 2016 that it was finally allowing other companies to integrate its assistant into
its products. So you can now ask Siri to send a message on WhatsApp, make a payment on Square, call someone on Skype and map your exercise on
Runtastic. In December 2016, Google and Microsoft also announced that they’d do the same with their VPAs.
3. NEXT-GEN VPA (VIRTUAL PERSONAL ASSISTANT)
98%+ accuracy level with true natural language | Extensive open API network¹ | Organic personality development
1. Consumer Intelligence Research Partners also estimates that over 7 million of the Amazon Echo devices have been sold in the US since launch.
VoiceLabs, a voice technology consultancy, estimate that there will be 24.5 million voice-first devices shipped this year, which will lead to a
total device footprint of 33 million devices in circulation.
4. AMBIENT AI
Significant increase in penetration¹ of AI in homes, cars and other public places | Spontaneous retail | New gateway
1. Echo Labs is pioneering deep-view technology – blood cells, hydration and glucose levels to be monitored – with links to suggestions and retail expected.
2. Lots of start-ups – such as Motiv, NFC and Ringly. Apple is likely to launch in this space. Will fuel usage of VPAs.
5. NEXT-WAVE WEARABLES
Stand-alone | Deep-view technology¹ | Smart rings²
1. WiFore Consulting believes that this is such a rich area for innovation that, by 2018, the ‘Hearables’ market could be worth $5bn.
6. HEARABLES¹
Doppler Labs’ “Here One” | Ambient AI carrier | Translation | Location messaging | Experience augmentation
1. Computer vision for images already achieving 90%+ accuracy levels: for example, Google’s trainable 'Show and Tell' algorithm, which has just
been made open source, is now capable of describing the contents of an image with an impressive 93.9% accuracy.
7. INTELLIGENT LAYERS
A data layer across camera/video content (leveraging computer vision¹) | More information | Social and retail plug-ins
1. Apple is building a team of AR/MR experts – hiring from Magic Leap and Facebook's Oculus.
8. FIRST-GEN MR
Microsoft vs. Magic Leap vs. Apple¹ | Commercial then consumer | Intelligent layer across the world
AN EXAMPLE: MAGIC LEAP – THE ‘CHIP’
Magic Leap founder: Rony Abovitz
AN EXAMPLE: MAGIC LEAP – THE EXPERIENCE
1. The technology layer is being created now – one example: the SpatialOS Games Innovation Program (by Improbable / Google Cloud Platform).
2. Websites will give way to ‘Worlds’
9. THE VR WEB
Universal environment¹ | Construction of parallel worlds²
1. BOT LIFE
Chatbots will include NLUI (Natural Language User Interface) to enable voice conversation. They will appear as a layer across websites, display ads, apps and out-of-home experiences. The technology will enable people to intercept video ads and seamlessly engage with the characters in the ads.
2. MESSENGER CONCIERGE
Conversation augmentation within messenger and video-chat environments that adds to the conversation with information and can organise and book for you.
3. NEXT-GEN VPA
The achievement of a 98%+ accuracy level with true natural language and organic personality development will be a game-changer. As will the result of a full, extensive open API network that will allow you to ask for or buy anything.
4. AMBIENT AI
An always-on listening AI in homes, hotels, cars and, increasingly, public places. Will be a significant gateway for retail.
5. NEXT-WAVE WEARABLES
The next wave of smartwatches will be independent of smartphones and have deep-view technology, so they can read blood, assess hydration etc. The addition of smart rings with embedded microphones will lead to a significant increase in engagement with VPAs.
6. HEARABLES
Companies such as Doppler Labs with its “Here One” product will create a new kind of device. Will be a carrier for the VPA and allow for location messaging and event-experience augmentation.
7. INTELLIGENT LAYERS
An intelligence layer across all video content (leveraging machine vision) will enable people to know about any object, place and, yes, brand. Social plug-ins will enable purchases. Video will emerge as a new retail channel.
8. FIRST-GEN MR
Magic Leap and Microsoft are likely to lead the way. Early products are likely to have commercial adoption. As the price drops and portability increases, it will become a consumer device. This will enable an information layer to be overlaid onto the world.
9. THE VR WEB
Advancements in cloud computing will enable the creation of a universal environment that we all share. This will logically lead to the construction of parallel worlds – advertisers will create branded worlds, just as they currently create websites.
STAGE V: Elevating. 2030-2050.
Artificial General Intelligence, nano-tech, bio-tech and quantum computing lead to humanity and technology becoming indistinguishable from one another, both virtually and biologically. The Merge is complete. Humanity evolves…
…to merge with us.
We will see this merge in how we access the web. Phones are likely to turn into smart glasses, glasses into contact lenses, contact lenses into biological implants.
Whatever form it takes, it is generally accepted that machines will become even more embedded in our lives. And with this the gap between us and our devices will continue to close. Logically we’ll reach a point where we effectively become the same entity. Technology and humanity will – both symbolically and literally – fuse together.
As the gap closes, our relationship with technology will change irreversibly. People will stop regarding machines as a separate entity – the ‘us’ versus ‘them’ mentality popularised by Hollywood and science fiction. Instead, the conversation will slowly shift. Technology will be regarded as an additional lobe of our brain – an essential and constant element in our lives on earth. The idea of ‘logging on’ or ‘accessing the internet’ will disappear – replaced by a constant connection in a world where the web flows like electricity.
However, technological development is the liberator of advancement, not the driver. The driver is us.
Any technological development that has ever been successful has served to unshackle us from the three-dimensional and time-based constraints of daily life, and allow us to move one step further on a journey to abundance.
To have, within our own control, an abundance of information, experience and connection.
To be able to know what our loved ones are doing right at this moment. To be able to be in any given place, now. To know any piece of information the moment that we have the impulse. To be able to experience another reality.
This technology-driven journey to liberate our minds is now starting to take a discernible path, and it is heading directly towards us.
We are now at a stage where technology, ostensibly an entity with its own mind, has turned around to face us head on and is now extending itself out towards us, at an alarming rate. Seeking to close the gap between it and our minds. With its end goal now becoming more apparent…
Ray Kurzweil, Director of Engineering at Google, recently wrote that “It is not the case that we will experience 100 years of progress in the 21st century, rather, we will witness on the order of twenty thousand years of progress.”
In other words, because of this exponential growth, advancements made in the next couple of decades will multiply so quickly that they will dwarf the developments of the entire 20th century – a period that brought us the motor car, the aeroplane, the television, antibiotics, the PC, the internet and nuclear power.
Bryan Johnson (born August 22, 1977) is an American entrepreneur and venture capitalist. He is founder and CEO of Kernel, a company developing a neuroprosthetic device to improve brain function, and of the OS Fund, a $100 million fund that invests in science and technology startups that promise to radically improve quality of life.
He was also founder, chairman and CEO of Braintree, an online payment system, which was acquired by eBay for $800 million in 2013.
This human-technology merge started with the first basic computer interfaces. But that was only stage one of five logically definable stages, within which we are currently moving towards the end of Stage Three.
So, here are all five stages – explored in depth in the book. Some of the predictions from exclusive interviews with leaders in technology – such as Sheryl Sandberg, COO of Facebook; David Coplin, Chief Envisioning Officer at Microsoft; and Greg Corrado, who leads Google Brain – will astound you.
We hope you can find time to read it…
Stage I: Surfacing. 1950-1995. The introduction and early spread of screens and the world-wide-web surfaces up information for us
We look at the early parts of the journey. We assess how specific innovations led to the commercialisation of personal computers, and how the dawn of the internet enabled us to surface information in a revolutionary way. We also investigate how this has spawned a new era of marketing that laid the foundations for one of the most significant shifts the industry has ever experienced.
Stage II: Organising. 1990-2015. The organisation of information through search engines, browsers, operating systems and apps makes it possible to get to what we want
We highlight the three key inventions that led to the spread of the modern-day web. These giant leaps forward helped organise information, making it globally accessible and universally valuable. They also helped create a portable device that brought us closer to our technology than ever before: the smartphone.
Stage III: Extracting. 2010-2025. Introduction of machine learning leads to dramatic improvements in the extraction of information – from advanced O.S. and semantic web through to virtual assistants.
This period – the one we currently find ourselves in – plays a pivotal role in The Merge. On the one hand, this era represents a maturation of the modern-day web. Search engines are smarter than ever, mobile penetration is widespread and connectivity is fast and reliable in many parts of the world. But, on the other hand, advances in machine learning are starting to unlock new possibilities by extracting value from an ocean of unstructured data. This breakthrough is giving rise to a new generation of machines that promise to change the way we live our lives and conduct our business.
Let’s just pause for a minute and explore Stage III a little bit.
At the beginning of Stage III, many companies didn’t know how to take advantage of the data flood. A study by the International Data Corporation (IDC) found that, of the 2.8 trillion gigabytes of data that had been created by 2012, only 0.5 percent was actually being used.
The behavioral economist Dan Ariely perhaps summed it up best when he tweeted: “Big Data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.”
The reason: big data was never intended for humans.
This convergence – of available data and affordable computational power – has created a machine learning gold rush. ML has rapidly morphed from a discipline that scientists studied in universities to something that companies are investing in heavily. In particular, seven companies are leading the charge: Google, Microsoft, Facebook, Apple, Amazon, IBM and Chinese search company Baidu. These firms have all opened secretive research laboratories and are frenetically mopping up top AI talent from around the world – the Google Brain Team, for example, has expanded from three members in 2011 to over 100 members in 2017.
Driven by ML, these are the nine fundamental developments expected as we see out the end of the Extracting stage.
Chatbots with natural language user interface integration – to enable seamless interaction with any touchpoint.
Facebook’s VP of messaging products, David Marcus, admitted late last year that the first bots on Messenger were “really bad”. But, considering the rapidity of progress that ML has made in fields such as translation and image recognition, it would be foolish to bet against them quickly becoming usable on a mass scale.
Research from Oracle found that 80% of business leaders planned to implement chatbots by 2020. Microsoft’s Dave Coplin also thinks they’ll be important: “Where we’re heading in the very short term is, if you don't have a chatbot, then you're not open for business.”
It is likely that they will be embedded in websites, apps and out-of-home experiences.
It is also likely that they will recognise your voice and therefore allow you to have a conversation that carries on from where you left off – understanding not just who you are but what you and the chatbot have already discussed, so that the conversation is sequentially relevant.
On the most basic level, ML-powered chatbots can be integrated into messenger platforms to automate conversations between brands and customers. The technology became popular when Facebook announced its Messenger Platform in April 2016, which essentially made it simple for any developer to build a chatbot for the service. By November that year, over 30,000 bots had been created for Facebook Messenger, including the 1-800-Flowers bot (which automated flower delivery); the Hello Hipmunk bot (which offered travel advice) and the Domino’s bot (which let people order pizza without speaking to a human). Other platforms, such as Kik, Slack and Skype, launched similar stores the same year.
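The mechanics behind such a bot are simple enough to sketch: the platform POSTs a JSON payload to the bot's endpoint, and the bot pulls out sender IDs and message text before replying. A minimal, illustrative parser for that payload shape (the field names follow the public Messenger webhook format; the payload itself is an invented example):

```python
# Minimal sketch of parsing a Messenger-style webhook payload.
# The shape (object / entry / messaging / sender / message) follows the
# public Messenger Platform webhook format; values are invented examples.

def extract_messages(payload):
    """Return a list of (sender_id, text) pairs from a webhook payload."""
    results = []
    if payload.get("object") != "page":
        return results
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            sender_id = event.get("sender", {}).get("id")
            text = event.get("message", {}).get("text")
            if sender_id and text:
                results.append((sender_id, text))
    return results

example = {
    "object": "page",
    "entry": [{
        "messaging": [{
            "sender": {"id": "12345"},
            "message": {"text": "I want to order flowers"},
        }]
    }],
}
print(extract_messages(example))  # [('12345', 'I want to order flowers')]
```

A real bot would then answer each extracted message via the platform's send API; the parsing step above is the part every bot shares.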
But more than that, it is likely that a kind of always-listening concierge service will appear. Already, Facebook’s VPA (called ‘M’) can make recommendations based on conversations that are happening between two or more people in Messenger. So, if someone types “Where are you?” then the assistant might ask if you’d like to share your current location with them.
‘M Suggestions’ have only been rolled out to a select few users so far, but it’s easy to see how valuable this idea could be to brands in the near future. Could M, or indeed any other assistant, start to listen out for potential purchase moments in conversations? If someone asked their friend about how to remove a stain, for example, then surely that would be the perfect time for Vanish to serve up an ad? Or if a group of friends were chatting about what to do over the weekend, perhaps an advert for a local theme park or cinema could appear with a group booking discount code. This model would ensure that all ad suggestions made perfect sense within the context of the conversation, and could operate on a pay-per-click or even pay-per-conversion model – thus ensuring maximum return on investment.
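The pay-per-conversion idea above can be illustrated with a toy intent matcher. A real system would use a trained ML intent classifier, but a keyword sketch shows the mechanics; the trigger phrases and ad categories below are invented for illustration:

```python
# Toy purchase-moment detector: maps phrases in a conversation to ad
# categories. Illustrative only - a production system would use a trained
# intent classifier, not keyword matching. Triggers/categories invented.

TRIGGERS = {
    "remove a stain": "laundry",
    "what to do over the weekend": "local-entertainment",
    "book a holiday": "travel",
}

def purchase_moments(conversation):
    """Return ad categories triggered by any message in the conversation."""
    moments = []
    for message in conversation:
        lowered = message.lower()
        for phrase, category in TRIGGERS.items():
            if phrase in lowered:
                moments.append(category)
    return moments

chat = ["Any idea how to remove a stain from a shirt?",
        "Also, what to do over the weekend?"]
print(purchase_moments(chat))  # ['laundry', 'local-entertainment']
```

Each detected category would then be auctioned to advertisers, charged per click or per conversion as the text describes.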
Google are planning to bring together these two elements…
The next-generation VPA will be defined as a product with a 98%-plus accuracy level. Baidu has a 96% accuracy rate; Apple and Google are not far behind, with 95% and 92% accuracy rates respectively.
Every additional percentage increase above 96% makes a remarkable difference to the experience.
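The reason each extra point matters is easier to see in error-rate terms: what the user experiences is not the accuracy but the residual errors, and those shrink disproportionately. A quick calculation:

```python
# Why small accuracy gains matter: compare residual error rates.

def errors_per_n(accuracy, n=100):
    """Expected number of misrecognised words per n words."""
    return round((1 - accuracy) * n, 1)

for acc in (0.92, 0.95, 0.96, 0.98):
    print(f"{acc:.0%} accuracy -> {errors_per_n(acc)} errors per 100 words")
# Moving from 96% to 98% halves the error rate (4 -> 2 per 100 words),
# even though the headline accuracy rises by only two points.
```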
Building an extensive connection to external companies and data is the big focus for development in this space, so that users of the device can cancel, book and amend seamlessly through one voice-mediated access point.
Then the focus turns towards personality development: creating a virtual personal assistant that has its own personality, which evolves as you interact with it. Microsoft are exploring this. Amazon is so laser-focused on providing a natural experience that it has recently updated Alexa with Speechcons – slang words that add emotion and nuance to the device’s lexicon. Ask Alexa a question now and she may reply with ‘Bada bing’, ‘Good Grief’ or ‘Boom’.
A report from Consumer Intelligence Research Partners also estimates that over 7 million Amazon Echo devices have been sold in the US since launch. VoiceLabs, a voice technology consultancy, estimates that there will be 24.5 million voice-first devices shipped this year, which will lead to a total device footprint of 33 million devices in circulation.
But whatever the official figures are, few would claim that the Echo has been anything other than an astounding success. And part of the reason for this is that it does something for everyone.
For the consumer, it puts an assistant in the place where they’re spending a big chunk of their time – the home. It also doesn’t rely on a screen, so if you have your hands full cooking, you don’t need to wash your hands, dry them, unlock your phone and then speak your command. And, perhaps more importantly, it’s easy to interact with. The immediate success of Amazon Echo has created a frenzy of activity from other tech giants, who are all eager to take advantage of consumer interest. Google launched its Echo competitor – Google Home – in October 2016 and Apple is rumoured to also be working on a similar product.
And ambient AI is likely to take flight – to emerge in the world around us: in homes, cars, taxis, office receptions. As with chatbot development, it will recognise our voice, so – if it is Amazon, for example – we can continue a conversation with ‘our Alexa’ anywhere.
Because it is so quick, it is most likely to become the gateway between us and the world, and this will only solidify as the number of skills increases. The number of tasks Alexa can perform has skyrocketed: in January 2017, the device had over 7,000 registered ‘Skills’ from third parties and was adding 1,000 more every month.
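Those third-party ‘Skills’ are, at heart, small web services that receive a JSON request from Alexa and return a JSON response telling the device what to say. A minimal sketch of building such a response (the envelope shape follows the public Alexa Skills Kit response format; the speech text is an invented example):

```python
# Minimal sketch of an Alexa Skills Kit response body. The envelope
# (version / response / outputSpeech / shouldEndSession) follows the
# public ASK JSON format; the speech text is an invented example.
import json

def build_response(speech_text, end_session=True):
    """Wrap plain-text speech in an ASK-style response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

body = build_response("Your pizza order has been placed.")
print(json.dumps(body, indent=2))
```

A skill's handler would inspect the incoming intent and slots, do its work (place the order, call the API) and return an envelope like this for Alexa to speak aloud.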
An increasing amount of this will have embedded retail built in, so you can order with voice – this is already the case with Uber, fast-food delivery and Amazon products. As the penetration of ambient AI increases, a large share of retail purchases is expected to be made via ambient AI. This has significant implications for branding and distribution.
Google recently opened up its Assistant SDK to allow all manner of devices to embed it…
Many new entrants are coming. The new iPhone-compatible smart ring, Motiv, turned heads at CES, and it's not hard to see why. It's sleek, just 8mm wide, and comes in rose gold or slate grey finishes of titanium. Plus, there are seven sizes to fit both male and female fingers. Inside, it's a fitness tracker that will monitor steps, distance, active minutes and so on, as well as heart rate thanks to an optical heart-rate monitor. The battery will last five days and it's even waterproof to 50m.
The NFC Ring can be used to unlock phones and doors, transfer information and link people. The ring packs two NFC tag inlays – one for public information and one for more sensitive stuff. The private tag – for things such as your smart door lock and payment information – sits on the inner part of the ring closest to the palm, so that it requires a deliberate gesture to use. The public portion, for stuff you want to give out, like your email address, sits on the top side of your finger.
Embedded microphones are expected. This will enable an increase in communication with VPAs.
Smart-watches are set to peer deeper into us. A small startup company, Echo Labs, is working to integrate a new level of health monitoring into wearable technology.
Echo Labs provides health care organizations with analytics to allow for better care of their patients, decrease hospital admissions, and reduce spending. Its first generation wearable offers health information by creating continuous vital sign tracking.
The company is now working on its newest device. The company states that the new tracker will be able to determine what’s going on inside the bloodstream, which is a first for wrist-based wearables. The tracker utilizes optical sensors and spectrometry to measure and analyze blood composition and flow. It also monitors heart rate, blood pressure, respiratory rate, and full blood gas panels.
There are rumours that the Apple Watch Series 3 will have similar functions – it is slated for a September 2017 release.
In fact, some people are touting ‘hearables’ to be one of the most important areas of growth for virtual assistants over the next couple of years. “Since the early 1980s, human computer interaction has primarily been facilitated through Graphical User Interfaces (GUIs),” says Christine Todorovich, principal director at design firm frog. “But the combination of screen fatigue and technology embedded in everything is exposing a need for new types of interfaces that extend beyond the visual. I therefore believe that 2017 and 2018 will be the years of the AUI – the Audio User Interface.”
Indeed, we’re already starting to see early examples of what Todorovich means. Sony has developed an AirPod-style device that lets users interact with a virtual assistant without looking at a screen. The Xperia Ear helps people check diary appointments, send messages (via dictation) and listen to social media updates on the move. Other products, like Doppler Labs’ Here One earbuds, are smart enough to change the ambient sounds that the user hears around them – so if they’re on public transport and want to drown out a conversation happening nearby, they can do so easily. In fact, WiFore Consulting believes that this is such a rich area for innovation that, by 2018, the ‘Hearables’ market could be worth $5bn.
The first generation products will be used episodically. For all-day usage, the products will need to be much smaller so you forget you are wearing them, the battery will need to last a day and sufficient advancement will have had to happen with the previous voice based AI technologies to justify the price.
This is the creation of a hidden intelligence layer that sits on top of video content and understands exactly what is being featured within the video. Leveraging machine-vision artificial intelligence, it recognises all objects, activities and so on.
The updated version of Google’s trainable 'Show and Tell' algorithm, which has just been made open source, is now capable of describing the contents of an image with an impressive 93.9% accuracy.
AI software developed by the University of Oxford, for example, can now watch videos of people speaking and correctly identify what they are saying 93% of the time. LipNet, which was backed by Alphabet, is significantly more accurate than human lip-reading experts, who tend to know what a person is saying between 20 and 60% of the time.
This intelligence layer will enable any tap or voice interaction on any part of video to provide additional deep information about what is happening and/or what brands are being featured.
With this will come social plug-ins, of course, but more importantly retail plug-ins – with brands being served purchase requests from people watching content.
To date, mixed reality (MR) has had mixed reviews. Magic Leap is rumoured to be struggling to create a product that is suitable for any type of launch, whilst Microsoft's HoloLens is also far from being a consumer-ready product, with cumbersome equipment required. The main player is now Facebook – using powerful AI to understand the physical world so that it can overlay graphics – with a platform opened to developers in early 2017. Exciting, but limited to a smartphone camera for now. MR eyewear is still some way off.
However, this space is one of the most exciting areas for tech development, with Apple rumoured to be focusing on it, having hired developers from Magic Leap. Notable developments are likely to take a few more years, but when they come, they will come fast. And they will change everything. We won't need to buy physical screens to hang on our walls, as we would just download them onto our mixed-reality OS. The world around us will have a layer of information that tells us about things that are relevant to us. The transformation that this technology will provide will be as dramatic a leap forward as the development of the smartphone.
Here is founder Rony Abovitz, showing off the chip that is actually a lens. It transmits photons directly into the retina via a nano-projector at the edge of the lens; the photons are directed into your eyes via ‘nano-ridges’.
This is what to expect….
Improbable, the London-based startup which enables developers to build virtual worlds offering permanent, persistent and engaging gaming experiences, today announced a joint game developer program, the SpatialOS Games Innovation Program, with Google Cloud Platform. SpatialOS gives any developer the ability to define and build simulated worlds which can accommodate thousands of simultaneous players in a single world at the same time, exceeding the usual limits of what a conventional game server can do. These simulations are persistent and support the kind of complex computation needed to bring new game ideas to life, while enabling a development methodology that supports extremely rapid iteration.
A summary of all stages – you can read more about each of these in Merge
Stage IV: Anticipating. 2020-2035. Deep learning AI leads to technologies that anticipate our needs and interests, and start to make decisions for us.
At this level technology understands our context, knows our routine and starts to run our lives for us. Our assistants are by our sides constantly, helping us tackle all kinds of general tasks, not just specific things. They can even anticipate our needs and desires, which in turn impacts how brands go about attracting our attention.
Stage V: Elevating. 2030-2050. Artificial General Intelligence, nano-tech, bio-tech and quantum computing lead to humanity and technology becoming indistinguishable from one another, both virtually and biologically. The Merge is complete. Humanity evolves.
In the final phase of The Merge, we have grown so dependent on technology that the boundaries between the two have blurred. Every human on the planet has 24/7 access to a high-speed connection. Devices overlay our virtual lives onto the real world, making the two indistinguishable from one another. Biological breakthroughs also bring us closer together – nanobots travel through our bloodstream, neural lace uploads our thoughts to the cloud and brain-to-brain communication has become commonplace: elevating the human experience beyond what’s biologically possible.