- The Streaming Challenge, or Bringing the Mountain to Mohammed
- Edge Computing for Network Operators: The Content-Delivery Business Case
- Put the Cash back into Caching
- Unleash Mobile Video with LTE Broadcast (eMBMS)
- Optimising Streaming Media in Real Time
- Next-Gen Killer Apps with Edge Video Orchestration & Analytics
- The Internet of Caches
Beat the Content Crunch
Enhancing Video Delivery with (Mobile) Edge Computing
Alexander Cherry
Contributors
Darko Ratkaj, Senior Project Manager for
Technology and Innovation at the European
Broadcasting Union (EBU)
Mahadev Satyanarayanan, Carnegie Group
Professor of Computer Science, Carnegie
Mellon University (CMU)
Dr Rolf Schuster, Director, Open Edge
Computing (OEC) Consortium
30 March 2016
The Streaming Challenge, or Bringing the Mountain to Mohammed
We are currently witnessing an explosion in online media traffic, driven firstly by the
rise of OTT media platforms and secondly by the proliferation of consumer access
devices. Developments in cloud computing have made media-serving infrastructure
relatively easy to scale, and giant datacentres have sprung up on every continent like
so many production lines, each ready to assemble and expedite digital packages to
addresses all over the world.
On the other side of the equation, laptops, tablets, smartphones and internet
televisions are becoming more and more affordable to consumers at large – indeed,
TechCrunch estimates there will be 6.1 billion smartphones on the planet by the year
2020.i
So the necessary preconditions for mass-producing, as well as mass-consuming, this mountain of content are already well in place. But, just as with mass-produced physical goods, it still needs to be shipped to its intended recipients, and
this is where the fun begins.
This distribution problem would disappear in a world where consumers could live
next door to the clouds that originate their media content. This though is hardly a
possibility, so we are left with the problem of delivering the myriad digital packages
to their intended recipients, over the road system we call the network: the fixed-line
and cellular connections that link subscribers to the cloud. As with the road system,
some paths through the network are relatively plain sailing, with multiple lanes; some are circuitous and congested; and sometimes couriers simply get lost and never turn up.
The bad news from the perspective of the roadbuilders is that none of this traffic is
going anywhere any time soon and is in fact about to get a lot worse. Server-side
clouds and client-side devices are ready for the streaming video revolution but what
continues to hold the industry back are the telecoms networks in between.
Scaling network infrastructure – like upgrading a road network with additional lanes
and highways – is naturally a costly endeavour, particularly when we consider that
telcos find themselves largely cut out of the media value chain to the benefit of OTT
players, who, despite being wholly dependent on operator networks to function, still
take home the majority of the money. It is unlikely that, within the current telecoms paradigm – both financial and technological – telecom operators will be able to meet consumer demand for streaming content. And if demand goes unmet, it will not just be the end consumers and the OTT service providers that lose out, but the telcos too. For
insofar as operators can play a proactive role enabling the next generation of media
delivery, they can carve themselves a bigger slice of the overall online media-services
cake currently being divvied up by the OTTs. While there is room for advances in everything from compression technologies to encoding, everyone agrees that, for the future demand for online content to be exploited to the full, the network itself needs to become more intelligent and multifunctional.

White paper produced in association with Network Edge Europe 2016 (6-7 June, London). Find out more here: http://openmobilemedia.com/networkedgeeurope/
Edge Computing for Network Operators: The Content-Delivery Business Case
With the network increasingly under the spotlight as a key enabler and differentiator,
the race is on to develop network architectures fit for the challenges of the next
decade rather than for those of the last. This has given rise to edge-computing
initiatives such as ETSI’s Mobile Edge Computing (MEC), Open Edge Computing (OEC)
and Fog Computing, all of which conceive the network as a flexible fabric in which
content and applications can be hosted at whichever location makes the most sense
from a network-economics and service-delivery perspective. This can be particularly
beneficial for applications that are bandwidth-heavy, latency-dependent or both,
giving us three key application families:
Edge Computing Application Type     Bandwidth-Heavy   Latency-Dependent
Content and Video Delivery          High              Low
Tactile Internet I (AR, VR)         High              High
Tactile Internet II (IoT and M2M)   Low               High
While most consumers are happy to wait a second or two for a stream to load, online
video, especially higher-resolution video, does consume enormous amounts of
bandwidth. This whitepaper explores the application of (mobile) edge computing to
content and video delivery primarily as a bandwidth play (though, as we will see,
there are also some video use cases that are latency-critical). We will be exploring
the tactile-internet applications separately in two follow-up pieces. To find out more
about the revenue opportunities that (mobile) edge computing opens up for network
operators in the video and content-delivery space, we spoke to
- Mahadev Satyanarayanan (Satya), Carnegie Group Professor of Computer Science at Carnegie Mellon University (CMU),
- Dr Rolf Schuster, newly appointed Director of the Open Edge Computing (OEC) Consortium, and
- Darko Ratkaj, Senior Project Manager for Technology and Innovation at the European Broadcasting Union (EBU)
Source: PeerApp Video and Mobile Video Stats (2015)
Before we go further, let's take a look at a few stats as to the size and nature of the
‘content crunch’. The total amount of OTT video in the pipes is massively increasing,
with TechCrunch reckoning back in 2015 that the overall number of streaming video
service subscriptions – the likes of Netflix and Amazon Prime – would increase from
92 million to 332 million between 2015 and 2019.ii
The bitrate of online video streams is also increasing as people move towards higher resolutions, with PeerApp estimating that by 2018 22% of online video traffic will be 4K (see above). Again according to PeerApp, by 2019 nearly three quarters of online video will be viewed on mobile devices. Estimates vary, but between 10% and 20% of this mobile video will be carried over cellular networks (as opposed to Wi-Fi), which are particularly ill-equipped to deal with it due to backhaul constraints.
While on one hand this does mean more data traffic, operators cannot monetise this
as effectively as they could other services in the past, such as voice and SMS. Rolf,
who spent nearly a decade at Vodafone before his appointment to Director of the
OEC Consortium, points to the stagnating data revenues in the telco business:
‘The problem is that there is rapidly growing demand for data but the prices are not
growing dramatically, they stay flat, you have more traffic but you don’t get more
pay.’
Satya, who is also involved with OEC and whose research at Carnegie Mellon has
been central to the growth of Mobile Edge Computing (MEC), is of a similar opinion.
‘Today it is the case that operators, much to their disgust unfortunately, are
essentially bit pipes,’ he comments, ‘All of the value lies in the cloud service, in
Facebook, in Google, in other cloud service providers.’
Operators’ content-delivery business model is basically unsustainable, faced as they
are with declining Average Revenue per User (ARPU) on the one hand and the
Sisyphean rock of infrastructure investments on the other. In many respects it is in
operators’ interests – perversely enough – to impede, rather than to enable, the
streaming revolution.
This conflict was borne out in the recent tussles over paid peering in the US, which
saw Comcast and Verizon accused of deliberately throttling back Netflix traffic, and
in which the video giant was ultimately forced to pay for peering rights and thereby
(in theory at least) subsidise the operators’ network build-out. Network operators
cannot simply turn off the spigot though, at least in non-monopoly markets. As consumer expectations for quality online video increase, buffering and quality degradation – the things that undermine viewing experience the most – will come to be judged more and more harshly, ultimately leading to subscriber churn. If
the operators cannot beat the OTTs, they can certainly find ways to join them, and,
with the entire market set to grow, there is no reason why more collaborative models
between OTTs and network operators cannot be developed to the benefit of all
players in the value chain.
This is already beginning to happen in fixed networks, but the same opportunity also exists in mobile networks, which are already coming under pressure from video traffic, even if this still represents only a relatively small percentage of overall video viewing. Darko, who as part of his work at the European Broadcasting Union
(EBU) is exploring new technologies for the delivery of audio-visual media services in
mobile, notes that broadcasters are looking for opportunities and cooperation with
the mobile industry. He believes there is a substantial if largely untapped market for
video services over cellular networks.
‘Broadcasters are interested in any solution that potentially would extend their reach
without significantly bringing their costs up,’ he says of the mobile distribution
channel. He goes on: ‘If you will accept the constraints of the current technology and
the current network coverage, there could still be scope for experimenting and
finding a business model that would help us and help them deliver more content,
more television, onto mobile devices.’
Massively distributed approaches to content delivery, such as (mobile) edge
computing, create an opportunity to take the operator-OTT partnership beyond a
few acrimonious paid-peering arrangements. We will now look at a range of edge-enabled content-delivery use cases, from (Mobile) Edge Caching and LTE Broadcast
(eMBMS) to Real-Time Optimisation of Streaming Content and Next-Generation
Video Apps.
Put the Cash back into Caching
Part of the problem with online media delivery to date has been the lack of end-to-end oversight and ownership. Large content owners and CDNs have generally
accessed the internet by establishing Points of Presence (POPs) at major peering
points, and these have remained the only parts of the sprawling mass that is the
internet that they have any direct control over. Once the content leaves the content
cache, like a letter tossed into the letter box, its delivery is entirely in the hands of
third-parties, who often operate opaquely: fixed-line and cellular networks in this
case.
(Mobile) edge computing involves the establishment of small clouds, often termed
‘cloudlets’, at strategic nodes towards the edge of operator networks. Among other
things, cloudlets can be used to cache content in exactly the same way as a CDN, with
the key advantage that they are much more local and much more distributed. They
are effectively a means to extend CDNs and content owners deeper into operator
networks, establishing a potentially unbounded number of jointly operated super-local POPs across the network. What this means in practical terms is that potentially
any node in the network, in addition to its legacy role of aggregating and routing
traffic, also has the potential to host caching functionality. This takes into account
locations on a spectrum from core to edge, in both fixed and mobile networks,
including Central Offices, enterprise premises, C-RAN base station ‘hotels’, base
station aggregation points, LTE base stations and small cells.
By offloading content requests to local caches operators can take pressure off their
backhaul networks, thereby lessening the need for ongoing infrastructure build-out
and saving capex investment. In addition to helping operators get more bang for their
infrastructure bucks, (mobile) edge caching also facilitates the creation of new
revenue-generating partnerships and business lines around content. By eliminating
the vagaries of some or all of the core network, edge caching can reduce the
buffering and jitter that routinely affects video streams, allowing an operator to
strike a direct deal with a CDN or large OTT to make the latter’s content more reliably
available to their mutual subscribers.
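The backhaul-offload mechanism described above can be sketched with a minimal least-recently-used cache standing in for the cloudlet's caching function. This is an illustrative sketch only: the `EdgeCache` class, the `fetch_from_origin` callback and all sizes are hypothetical, not part of any MEC specification.

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU cache standing in for a cloudlet-hosted content cache.

    On a hit the segment is served locally and no backhaul is used; on a
    miss it is fetched from the origin and cached for later requests.
    """

    def __init__(self, capacity_segments):
        self.capacity = capacity_segments
        self.store = OrderedDict()   # segment_id -> bytes
        self.hits = 0
        self.misses = 0

    def get(self, segment_id, fetch_from_origin):
        if segment_id in self.store:
            self.hits += 1
            self.store.move_to_end(segment_id)   # mark most recently used
            return self.store[segment_id]
        self.misses += 1
        data = fetch_from_origin(segment_id)     # backhaul traffic incurred
        self.store[segment_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)       # evict least recently used
        return data

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Every hit here is a request the backhaul never sees, which is precisely where the capex saving in the paragraph above comes from.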
Source: Distributed Content and DNS Caching (ETSI MEC Introductory Technical Whitepaper)
The exact business models around edge caching have not yet been fully explored. It
is possible that operators could allow content owners in for free, much as currently
happens with Google’s in-network Global Caching Service, with a view to charging
their subscribers more for a superior media experience. Rolf for one believes this will
be a hard sell and that Google’s Global Caching Service is an example of the sort of
service that operators should really be seeking to monetise directly:
‘There is a significant improvement in customer experience,’ he points out, ‘but
operators don’t really monetise that’.
What is more likely is that CDNs and OTTs will pay operators for the privilege of
getting closer to the network edge and then recoup this in the form of a ‘premium’
offering to their subscribers. If major content owners like Netflix, Amazon and
Pandora are prepared to pay operators for a POP on the trunk of their networks
(traditional peering, now paid), then there is ample potential to charge them for POPs
on the network’s branches, that is, in the fixed and mobile access networks closer to
where subscribers are actually doing their viewing.
That said, (mobile) edge caches represent a capital expenditure of their own and also present a number of management and security challenges, so there will be a sweet spot as far as the number of caches and their location in the network is concerned.
In many cases it might be better to pull the cache back from the absolute edge of the
network, to an intermediate aggregation point for instance, in order to strike the
right balance between the cost/management of the caches and the achievable
cache-hit ratios at different locations. Satya tends to agree:
‘One of the areas that operators will be sorting out as they pilot edge computing is
precisely where to locate the edge that is going to host all these compute capabilities.
My suspicion is that there will be some applications that are so extremely latency
sensitive, sub-millisecond, where you need to be really at the cell tower. However,
my expectation is that it’s mostly going to be deeper, so that, in the core network,
maybe over a multi-cell-tower coverage area, you have a single cloudlet.’
As Satya’s comment makes clear, any given operator’s considerations as to caching
economics will also form part of their overall edge-computing strategy. It may be, for
example, that the business case for introducing cloudlets into the network cannot be
made with the caching benefits alone but rather with caching in conjunction with
other revenue-generating edge-computing use cases that can leverage the same
infrastructure, such as AR and connected-car applications.
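The placement trade-off can be sketched numerically. Every figure in the model below (site counts, capex, hit ratios, per-segment transport costs, amortisation period) is an invented assumption chosen purely for illustration; the point is the shape of the trade-off, in which amortised cache capex and residual transport cost pull in opposite directions across placement tiers.

```python
# All figures are invented for illustration, not operator data.
MONTHLY_TRAFFIC_GB = 50_000_000        # assumed total video demand
AMORTISATION_MONTHS = 36               # assumed capex write-off period

# Assumed per-GB transport cost on each network segment a stream crosses.
SEGMENT_COST = {"access": 0.004, "metro": 0.006, "core": 0.008}
MISS_COST = sum(SEGMENT_COST.values())  # a cache miss crosses every segment

# tier: (sites, capex per site, expected hit ratio,
#        segments a HIT still has to cross to reach the subscriber)
TIERS = {
    "base_station":      (1000,  5_000, 0.25, []),
    "aggregation_point": (100,  20_000, 0.45, ["access"]),
    "central_office":    (10,   80_000, 0.60, ["access", "metro"]),
}

def monthly_cost(sites, capex_per_site, hit_ratio, hit_segments):
    amortised_capex = sites * capex_per_site / AMORTISATION_MONTHS
    hit_cost = sum(SEGMENT_COST[s] for s in hit_segments)
    transport = MONTHLY_TRAFFIC_GB * (
        hit_ratio * hit_cost + (1 - hit_ratio) * MISS_COST)
    return amortised_capex + transport

costs = {tier: monthly_cost(*params) for tier, params in TIERS.items()}
best = min(costs, key=costs.get)   # the intermediate tier wins here
```

With these assumed numbers the aggregation point comes out cheapest: going deeper towards the cell tower multiplies capex across a thousand sites, while pulling all the way back to the central office leaves more transport cost unrelieved.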
Unleash Mobile Video with LTE Broadcast (eMBMS)
Transparently caching a particular provider’s content within, or at the edge of, the
network can generally increase Quality of Service (QoS) for viewers on that network.
However, this is not a particularly marketable proposition: what if you want to ensure
a select offering is always available to subscribers at a given quality, regardless of the
decisions of the caching algorithm? Fixed network operators have in some cases
successfully diversified into IPTV services but this sort of innovation is absent from
mobile networks. The infrastructure build-out here is even more daunting than for
fixed networks due to the even greater distribution of access points and the limited
existing backhaul, making this at first an unappealing business proposition for MNOs.
As Darko puts it:
‘The mobile technology of today does not scale up for wide-area coverage for
television in an economically viable way, especially for popular TV programmes that
attract large concurrent audiences.’
With edge-caching infrastructure in place, linear content could be unicast through
the network to each cache, multicast from the caches to subscriber devices and
stored there for catch-up viewing. ‘There is a case for systematically storing 48-72
hours’ worth of content locally simply because the most popular programmes for
catch-up viewing are those that have been on-air in the last 48-72 hours,’ comments
Darko.
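A quick back-of-the-envelope calculation suggests the storage involved is modest. Only the 48-72-hour retention window comes from Darko's comment; the channel count and bitrate below are assumptions for illustration.

```python
# Assumed sizing for a per-site catch-up store.
channels = 20          # assumed number of linear channels recorded locally
hours = 72             # upper end of the retention window from the quote
bitrate_mbps = 5       # assumed average stream bitrate

# channels x seconds x bits/second -> bytes -> gigabytes
storage_gb = channels * hours * 3600 * bitrate_mbps * 1e6 / 8 / 1e9
# roughly 3.2 TB per site under these assumptions
```

A few terabytes per site is well within the range of commodity storage, which is what makes the business case plausible.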
This approach, though, has the disadvantage of subjecting the network to the burden of large video streams and the streams to the vagaries of the network. A more economical alternative
could be to use traditional broadcast technologies to populate the distributed
caches, leaving the mobile core network intact for best-effort services. Darko
believes that next-generation LTE Broadcast (eMBMS) has a considerable role to play,
and the European Broadcasting Union (EBU) is actively driving the standardisation
process within 3GPP.
As Darko explains, ‘The main objective is to create a technology that could be
deployed on mobile networks in order to facilitate large-scale distribution of media
services so that it is economically viable. It has to fly in the business sense.’ He goes
on to say: ‘Under the bonnet these technologies use the very same set of tools, the
same core technologies, OFDM, time frequency slicing etc. They just use them in
different ways, they’re optimised for different purposes. There is a relatively small
step that would need to be taken in order to make them interoperable.’
As it matures, LTE Broadcast presents us with the prospect that, with minimal
technical rework, LTE base stations could transpose broadcast TV signals – terrestrial
or even satellite – into a dedicated LTE communication channel, allowing an
uncapped number of subscribers to watch live TV over the mobile network without
risk of congestion either in the backhaul network or over the air.
As an extension of this scenario, caches on the base stations could record the
broadcast content for catch-up viewing, once again leaving the mobile core totally
unscathed by video streams.
As Darko puts it, ‘By recording content as it comes over the air you don’t have to have
additional overlay infrastructures such as CDNs or to take the content over a core
internet network in order to populate the cache.’
This combination of Mobile Edge Computing (MEC) and LTE Broadcast is a marriage
made in heaven and can enable true mobile video/TV services with relatively little
infrastructure investment and minimal risk to existing network services. Successful
trials of these technologies are taking place as we speak, and the theory is largely in
place. Ultimately though, if the market is to reach its full potential, more direct
cooperation needs to be fostered between broadcasters and MNOs.
Optimising Streaming Media in Real Time
As we mentioned earlier, (mobile) edge caches come with their own problems both
from a cost and a management perspective. In light of this, it is important to note
that this is not the only way in which the network edge can be leveraged to offer
subscribers a superior content experience: simply having real-time data about
network conditions, everything from link quality to service awareness, allows
operators to gain maximum utilisation from their existing network assets, and
content providers to optimise the delivery of their content. This is particularly
significant in the Radio Access Network (RAN) of mobile operators, where cell
conditions can fluctuate wildly as devices enter and leave the cell.
Source: Intelligent Video Acceleration Service Scenario (ETSI Whitepaper No. 11)
With Mobile Edge Computing, RAN APIs can be opened up to third-party CDNs as a
form of Radio Network Information Service (RNIS), allowing them to optimise the
delivery of their content over mobile networks for the best overall user experience.
Out of the four Proofs of Concept (POCs) so far approved by ETSI’s MEC Industry
Specification Group, which is overseeing the standardisation of Mobile Edge
Computing, two have related to the optimisation of mobile video streams based on
real-time insight into radio conditions.
In the intelligent video acceleration POC (‘service-aware video optimisation in a fully
virtualised network’), developed by Telecom Italia in conjunction with Intel UK,
Eurecom and the Politecnico di Torino, an edge-based application sends throughput-guidance information to the video streamer, enabling it to toggle stream speed and
quality dynamically in accordance with real-time radio conditions (see above). A
similar POC, this time looking at service-aware experience optimisation, was
developed by China Mobile, Intel and iQiYi.
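The mechanism can be illustrated with a toy adaptive-bitrate selector. This is not the actual POC logic, just a sketch of how a streamer might act on a throughput-guidance figure supplied by an edge application; the bitrate ladder and safety margin are assumed values.

```python
# Assumed ABR bitrate ladder (kbps), lowest to highest rendition.
BITRATE_LADDER_KBPS = [400, 1_000, 2_500, 5_000, 8_000]

def pick_rendition(guided_throughput_kbps, safety_margin=0.8):
    """Return the highest ladder rung that fits inside the guided
    throughput, leaving some headroom against radio fluctuations."""
    budget = guided_throughput_kbps * safety_margin
    fitting = [rate for rate in BITRATE_LADDER_KBPS if rate <= budget]
    # Fall back to the lowest rendition when even that exceeds the budget.
    return fitting[-1] if fitting else BITRATE_LADDER_KBPS[0]
```

With a guidance figure of 4,000 kbps and the 80% margin, the streamer would settle on the 2,500 kbps rendition immediately, rather than probing upwards blind and triggering buffering when the cell degrades.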
Next-Gen Killer Apps with Edge Video Orchestration & Analytics
Some video-delivery applications enabled by Mobile Edge Computing (MEC) are
fundamentally new and take us off the well-trodden path of OTT video we have
discussed so far. Recent work by UK operator EE at Wembley Stadium, carried out in
collaboration with Nokia and Smart Mobile Labs, can give us a peek at what our video-augmented future might hold.
Wembley Stadium’s ultimate aim is to provide spectators with real-time feeds and replays of the match on their mobile devices. Latency is critical here,
unlike in the other video use cases examined in this whitepaper, as spectators
switching to a ‘goal camera’ or a pitch-side camera do not want to watch events
unfolding with a lag, they want to watch them in real time. In addition to this, any
hint of buffering or poor quality would be almost certain to kill an engagement app
like this. In order to overcome this problem, EE moved the video orchestration
workflow to the edge of the network, so that video could be uplinked, processed and
downlinked in as close to real-time as possible, with everything kept locally.
Wembley Stadium has a peak capacity just shy of 100,000 people, which is potentially
a lot of video streams if everybody wants to watch the same sequence – imagine a
penalty for instance, or the replay of an equalising goal. Rather than sending down
100,000 identical streams, EE leverages LTE Broadcast to send a single stream that
can be enjoyed by as many people as necessary without any impact upon quality.
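The arithmetic behind this is stark. Assuming a hypothetical 4 Mbps per stream (an invented figure for illustration):

```python
# Assumed figures: a Wembley-scale audience all watching the same replay.
viewers = 100_000
stream_mbps = 4                                # assumed per-stream bitrate

unicast_gbps = viewers * stream_mbps / 1000    # one copy per viewer
broadcast_gbps = stream_mbps / 1000            # one shared eMBMS stream
```

Serving every viewer a unicast copy would demand hundreds of Gbps of radio and backhaul capacity; the single broadcast stream needs the bandwidth of exactly one viewer.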
This is another example of how video technologies hosted at the mobile edge (as
with caching) can combine with LTE broadcast to make a killer subscriber experience.
Similar use cases can easily be imagined for other recreational purposes, as well as
for public safety – imagine the benefits to public-safety operations of being able to
share video streams in real time with as many relevant people as required, wherever
they are, without any dependency on unreliable and potentially congested backhaul
networks.
Source: Example of Video Analytics (ETSI MEC Introductory Technical Whitepaper)
An extension of video orchestration at the edge is mobile video analytics at the edge.
At Wembley Stadium locally generated video is shared with local subscribers, but in
other cases video may have to go a considerable distance over cellular networks to
reach its recipients (CCTV cameras are often remote and therefore not always easy
to bring onto fixed networks). In this case it is uploaded video, as opposed to
downloaded video, that risks overwhelming the mobile network.
This would be doubly lamentable given what a high proportion of CCTV streams are valueless. A smart way around this is to host analytics (incorporating image-recognition algorithms) at the edge, for instance at a base station – these can then discard the majority of ingress streams before they ever reach the network, or reduce them to mere metadata. The minute a significant ‘event’ – as defined through pre-set rules – occurs, the stream can be cranked up to HD.
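A minimal sketch of such event gating might look as follows. The scoring function stands in for whatever image-recognition model runs at the base station, and the threshold is an assumed pre-set rule; none of this is from a real analytics product.

```python
EVENT_THRESHOLD = 0.8  # assumed pre-set rule for what counts as an 'event'

def gate_frames(frames, score_fn, threshold=EVENT_THRESHOLD):
    """Yield ('forward', (ts, frame)) for significant events and
    ('metadata', record) for everything else.

    frames: iterable of (timestamp, frame) pairs.
    score_fn: stand-in for the edge-hosted image-recognition model,
    returning a significance score in [0, 1].
    """
    for ts, frame in frames:
        score = score_fn(frame)
        if score >= threshold:
            yield ("forward", (ts, frame))   # event: ship the video upstream
        else:
            # Non-event: only a small metadata record crosses the network.
            yield ("metadata", {"ts": ts, "score": round(score, 2)})
```

The gating decision is made before any video leaves the site, which is what keeps valueless streams off the uplink entirely.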
The Internet of Caches
We suggested at the start that delivering online media to consumers was like trying
to bring the mountain to Mohammed, and in some sense this problem never goes
away. In the future though we will not so much have a mountain as a rolling
landscape featuring everything from hills and escarpments to tussocks and mounds,
fashioned out of the bedrock of the internet by the elemental forces of network
economics. The key thing is to create more delivery options within networks
themselves – either in terms of caching possibilities or in terms of data and
transparency that can be put to use.
Once networks are set up to work for us rather than against us, the next step is to
ensure more in-depth collaboration between the players active at different points in
the journey of a piece of content, all the way from media owners at the cloud level,
through the networks and edges of the fixed and cellular access-providers, all the
way down to the devices upon which subscribers actually consume their content. No
one player can claim to have end-to-end oversight or ownership of quality of
experience, so it is necessary for everyone in the chain to work together to ensure
that they are pulling in the same direction.
With edge-computing approaches we are moving towards a situation where the
network is more than just a relay to the cloud: increasingly, it will represent a
distributed footprint of programmable resources designed to meet, on demand, our
requirements for bandwidth and for low latency. What this means from a content-
delivery perspective is that the internet is rapidly becoming an Internet of Caches.
Darko ultimately envisages caching extending all the way down onto subscriber
devices themselves, where a portion of the memory typically remains free or can be
set aside, and this is something that a number of players – legal grey areas aside –
are exploring already using predictive analytics based on consumer viewing habits:
‘If you were able, one way or another, to identify the preferences of a particular user, then you can load the storage on the user device either systematically or in an ad-hoc fashion simply on the basis of these preferences.’
Bits of content could then be pre-cached onto a user’s device either at locations with
high throughput or during non-peak times for network traffic, for example when the
user is passing through a metro station on their commute, in a coffee shop or
overnight on their home Wi-Fi. Darko continues:
‘In order to make that possible, you obviously need to have that content somewhere
stored locally, because you don’t want to pull it from the core. And that local storage
doesn’t necessarily mean the local point of presence of a mobile network, it can also
be integrated with Wi-Fi or with any other network. So, from the content point of
view, the networks become more or less transparent.’
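One simple way to sketch this preference-driven pre-caching is a greedy planner that fills the storage a user has set aside with their highest-scoring items during an off-peak window. All item names, scores and sizes below are hypothetical.

```python
def plan_prefetch(candidates, budget_gb):
    """Greedily pick items to pre-cache onto a device.

    candidates: list of (item_id, preference_score, size_gb), where the
    score would come from predictive analytics on viewing habits.
    budget_gb: storage the user has set aside for pre-cached content.
    """
    plan, used = [], 0.0
    # Highest preference first; take each item that still fits.
    for item_id, score, size_gb in sorted(candidates, key=lambda c: -c[1]):
        if used + size_gb <= budget_gb:
            plan.append(item_id)
            used += size_gb
    return plan
```

A scheduler on the device could run this plan whenever it sees high throughput or off-peak conditions, such as the metro station or overnight Wi-Fi mentioned above.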
As the scenarios above suggest, operators are not the only ones with the potential
to get involved in the distributed-caching game. ‘I think over time we will see the
price point going down to a level where anybody could be the provider of a caching
capacity,’ says Darko. ‘This would require the network operators to see themselves
as part of the broader value chain, not as the only ones that have access to, or
relationships with, the customer.’
Rolf, who through the work of the Open Edge Computing (OEC) Consortium is seeking
to standardise network-agnostic APIs for cloudlets, has a similarly broad vision as to
the distributed future of caching:
‘The guy who runs the coffee shop with a hotspot should be able to run cloudlets
very easily and earn some money with it, and there will be thousands of those guys,’
Rolf says. ‘So I think operators have to open up to that and they have to allow others
to play, because basically Wi-Fi is more and more important, even for operators…
The time where Wi-Fi was an evil thing for operators is long-gone.’
Fundamentally, the bigger the overall edge-computing pie – whether the technology is serving enhanced content/video delivery or some other edge-computing application like AR or autonomous driving – the bigger operators’ slice will be.
The future of online content delivery looks by all indications to be a story of massive
federation powered by predictive analytics and open APIs. In addition to large caches
at the customary peering points around the world, we will be seeing more finely
distributed caches in fixed and mobile access networks, as well as independently
operated storage in everything from enterprises, retail outlets, hotels and
restaurants to homes, connected cars and smartphones.
This white paper has been produced in association with Network Edge Europe, which takes place on 6-7 June at the Royal Garden Hotel, London. The event is free to attend for senior-level telco executives and features over 20 speakers from across the application developer, telco and enterprise sectors.
For more information visit http://openmobilemedia.com/networkedgeeurope/
i. http://techcrunch.com/2015/06/02/6-1b-smartphone-users-globally-by-2020-overtaking-basic-fixed-phone-subscriptions/
ii. http://techcrunch.com/2015/05/18/over-the-top-streaming-video-services-to-surge-to-330-million-subscribers-by-2019/