
Conecta LATAM 2019: Network Challenges and Business Modeling for New Low Latency Services - Alberto Boaventura (Oi)


This presentation analyzes the challenges of low latency services and the new data opportunities they open. It describes the waves of Internet data services: the Internet, the Mobile Internet, the Internet of Things and now the Tactile Internet, and how low latency is becoming the defining attribute of new data services. The Tactile Internet will enable immersive experiences beyond today's audiovisual services, but exploiting it requires understanding the idiosyncrasies of human sense reaction times. For IoT, latency will likewise affect entire production processes, from robotics control in Industry 4.0 to remote surgery. Edge Computing is introduced as one of the most important tools for reducing latency, because it facilitates data processing at or near the source of data generation; it avoids application data having to travel from the source to distant application and processing hosts. Latency reduction, however, has no single strategy: a properly dimensioned and configured transport network is needed to lower network latency, and new transport architectures with simplification (collapsing network layers), network-aware multi-domain control, automation and similar techniques will be imperative for optimizing latency for new critical applications. Improving processor performance with technologies beyond regular x86 servers, such as GPUs, FPGAs and DPDK, shortens workload execution and returns data to the user application more quickly.
In terms of low latency opportunities, according to Chetan Sharma Consulting, the Edge Internet economy will be worth over $4.1 trillion worldwide by 2030. The initial growth will come from the Edge serving existing use cases and will gradually be accelerated by new use cases as deployment becomes more widespread and developers learn to take advantage of the Edge Internet architecture for applications across industry domains in all major markets. For MNOs, Edge Computing will open several opportunities, such as service integration, managed services, IaaS/PaaS/SaaS and new colocation business models. Beyond latency, Edge Computing also brings an immediate benefit through local traffic offloading, which reduces overall CapEx; higher traffic density can rapidly justify an Edge Computing deployment.
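As a rough illustration of the point about data travelling to far-away hosts, the sketch below (not from the deck; the per-km figure and distances are assumptions) compares one-way fiber propagation delay to a distant cloud region and to a metro edge site.

```python
# Back-of-envelope sketch (assumption: ~5 microseconds of one-way
# propagation per km of fiber; distances below are hypothetical).
FIBER_US_PER_KM = 5.0

def propagation_us(distance_km: float) -> float:
    """One-way fiber propagation delay in microseconds."""
    return distance_km * FIBER_US_PER_KM

for name, km in [("distant cloud region (2,000 km)", 2000),
                 ("metro edge site (20 km)", 20)]:
    print(f"{name}: ~{propagation_us(km) / 1000:.2f} ms one-way")
```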



  1. Network Challenges and New Business Modeling for Low Latency Services. September 18 & 19, 2019. Alberto Boaventura, Manager of Network Strategy and Architecture, Directorate of Network Strategy, Technology and Architecture. Conecta LATAM - TELCO DIGITAL TRANSFORMATION 2019
  2. Mid-2000s: "Every 100 ms of delay costs 1% of sales." Mid-2000s: "An extra 0.5 s in search page generation time dropped traffic by 20%."
  3. Telecommunication Industry Scenarios: Evolution Waves (Internet, Mobile Internet, Internet of Things, Tactile Internet; roughly 1990, 2000, 2010, 2020; key attributes: connectivity, broadband, mobility, latency).
     Transformation - Internet: enabled global communications; a new form of entertainment; cheaper communication. Mobile Internet: communications everywhere; reachability of users and services; the smartphone becomes the main device for accessing content and marks the beginning of Digital Transformation; boosted the use of social networks. Internet of Things: will revolutionize all industrial segments through electronic integration and transactions, improving and optimizing production processes; a new innovation platform inaugurating a new economic moment. Tactile Internet: experience that is not only audiovisual but fully immersive; will help users complement their perception of the world by bringing more information through sophisticated Augmented Reality and Virtual Reality applications.
     Market - Internet: voice traffic exceeds data and voice is the most important service; non-real-time applications; fixed broadband grows at 42% CAGR against 22% for mobile access; fixed broadband > 64 kbps; typical ADSL1 = 1-8 Mbps. Mobile Internet: fixed broadband grows at 8% CAGR against 33% for mobile broadband; the number of MBB accesses exceeds FBB in 2013; data traffic already surpasses voice and data becomes the most important service. Internet of Things: in the next decade there will be some tens of billions of connected objects; an explosion of connected objects; services with low latency and security take on a new relevance. Tactile Internet: 5G is promising, with an expectation of 1 billion subscribers in less than 5 years of operation and above 10 trillion in revenue over the same period; latency becomes more relevant than broadband for Tactile Internet services.
     Technology - Internet: World Wide Web; dial-up; xDSL; packet switching and TCP/IP. Mobile Internet: WAP; EDGE, HSPA/HSPA+ and LTE/LTE-A; Android; video streaming and sharing; social networks; global roaming access. Internet of Things: LPWA, SigFox, LoRa, LTE-M; cloud computing; cognitive computing; Big Data, analytics, machine learning, AI; blockchain. Tactile Internet: low latency systems (SDN/NFV, Edge/Fog Computing); quantum computing; quantum Internet.
  4. Why is latency so important for upcoming services? Latency bands: above 500 ms; 100 ms to 500 ms; 50 to 100 ms; 5 to 50 ms; below 5 ms, mapped against human experience and IoT/business requirements. Sample applications, from the most tolerant to the most latency-critical: non-real-time services; web browsing; email; non-real-time smart metering (utility sensors); retail (better than 500 ms); satellite voice services; video on demand streaming; cognitive assistance (20-40 ms); virtual reality; UAV control (10-50 ms); automotive pre-crash sensing warning; vehicle safety apps (mutual awareness of vehicles for warning/alerting); assisted driving, where cars make cooperative decisions but the driver stays in control; real-time smart metering (eHealth sensors); telepresence with real-time synchronous haptic feedback; industrial robots and closed-loop industrial control systems; smart grid synchronous phasing of power supplies; shared haptic virtual environments; tele-medical applications such as tele-diagnosis and tele-rehabilitation; augmented reality; cooperative driving (20 ms); serious gaming (20 ms); industrial automation; remote robots/surgery; Industry 4.0; negotiated automatic cooperative driving. (Chart: human reaction times - body, audio, visual, cortical neuron communication - set against application classes ranging from web access and smart metering to retail, UAV control, industrial robots, and remote surgery and robotics.)
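To make the tiers tangible, here is a minimal sketch that classifies a measured latency against bands like those on the slide; the band labels and the example applications per band are illustrative assumptions, not the slide's exact column mapping.

```python
# Illustrative sketch only: band boundaries follow the slide's tiers;
# the example applications per band are assumptions for demonstration.
LATENCY_TIERS_MS = [
    (5,    "tactile/haptic class (remote surgery, industrial robots)"),
    (50,   "real-time control class (UAV control, cooperative driving, AR/VR)"),
    (100,  "interactive class (serious gaming, cognitive assistance)"),
    (500,  "streaming/conversational class (VoD, satellite voice)"),
    (float("inf"), "non-real-time class (web, email, utility smart metering)"),
]

def classify(latency_ms: float) -> str:
    """Return the service class whose budget still tolerates this latency."""
    for upper_bound, label in LATENCY_TIERS_MS:
        if latency_ms < upper_bound:
            return label
    return "unclassified"

print(classify(3))    # tactile/haptic class ...
print(classify(80))   # interactive class ...
```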
  5. 5G Latency Dependence. Latency and synchronization requirements tighten progressively down the stack: legacy networks (2G, 3G, 4G) and billing tolerate 500 ms to 1 s; the 5G URLLC requirement is below 1 ms; asynchronous dual connectivity needs about 100 µs; TDD band alignment about 5 µs; CoMP and ICIC about 1 µs (X2 interface); intra-band carrier aggregation about 150 ns; and massive MIMO about 65 ns. Frequency accuracy targets are ±50 ppb (wide area), ±100 ppb (medium/local area) and ±250 ppb (home area), which pushes the master clock or GPS reference closer to the access network. Related aspects illustrated on the slide: TDD band interference (BS-to-BS and UE-to-UE aggressor/victim scenarios), 5G URLLC with the UPF at the network border, ICIC and CoMP, and massive MIMO. Latency and network synchronization are key enablers for 5G networks and services: from delivering the new Tactile Internet experience to enabling the resource and interference management algorithms that optimize and improve network capacity.
  6. Latency & Network Architecture/Topology Changes. Current LTE network: the distance between the UE and the Internet, and the large number of hops, produce a huge impact on latency, while interference between cell sites reduces overall network capacity. C-RAN (capacity improvement): the RAN moves toward the center with BBU pooling, mitigating interference and increasing capacity. UP and CP segregation (user experience): the core moves toward the edge, bringing low latency and traffic optimization, with the 5G control functions (AMF, SMF, PCF, UDM/UDR and others) centralized and the UPF distributed. Typical transport segments: fronthaul (RU to DU) 0-10 km, 50-200 µs, 100G+ links, L2 P2P and P2MP; midhaul (DU to CU) 20-40 km, < 2 ms, 100G+ links, L2/L3; backhaul 40-200 km, < 10 ms, 400G+ links, L3.
  7. Reducing the Latency: From the Core to the Border. Approximate delay budget along the path: cloud processing 10 to 2000 ms; Internet > 50 ms; backhaul plus backbone > 10 ms; midhaul < 2 ms; fronthaul < 0.20 ms; radio access 1 to 10 ms. Cloud Computing versus Edge Computing. Usage characteristics - cloud: control plane, core applications and general Internet users, global information collocated from worldwide, centralized IT-based services far away from users and reached through IP networks; edge: user plane and mobile users, limited localized information services tied to specific deployment locations, latency-sensitive telecom services, physical proximity and usually a single-hop wireless connection. Working and deployment environment - cloud: warehouse-size buildings with cooling and precision air conditioning, ample and scalable storage and compute power, centralized in very large facilities; edge: outdoor (streets, parklands) or indoor (restaurants, shopping malls), limited storage, compute power and wireless interface, centralized or distributed in regional areas close to local business (DSLAM/OLT cabinets, RNC/BSC/BTS sites, local telecom operator or vendor facilities, shopping mall retailers). Competition - cloud: high competition with big players such as Google, Microsoft and Amazon; edge: low competition, a competitive advantage for operators.
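A minimal sketch that totals the slide's per-segment delay budget for a cloud-hosted versus an edge-hosted application; single representative values are assumed where the slide quotes a range.

```python
# Segment delay budgets in milliseconds, taken from the slide
# (single representative values assumed within the quoted ranges).
SEGMENTS_MS = {
    "radio_access": 5.0,       # 1-10 ms
    "fronthaul": 0.20,         # < 0.20 ms
    "midhaul": 2.0,            # < 2 ms
    "backhaul_backbone": 10.0, # > 10 ms (lower bound)
    "internet": 50.0,          # > 50 ms (lower bound)
    "cloud_processing": 10.0,  # 10-2000 ms (best case)
}

def end_to_end_ms(path: list[str]) -> float:
    """Sum the per-segment budgets along the chosen path."""
    return sum(SEGMENTS_MS[s] for s in path)

cloud_path = ["radio_access", "fronthaul", "midhaul",
              "backhaul_backbone", "internet", "cloud_processing"]
edge_path = ["radio_access", "fronthaul", "midhaul"]  # app hosted at the edge

print(f"via central cloud: ~{end_to_end_ms(cloud_path):.1f} ms (best case)")
print(f"via edge site:     ~{end_to_end_ms(edge_path):.1f} ms")
```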
  8. Simple Edging Analysis. A population-weighted dispersion index estimates how far, on average, users are from a candidate edge site: Index = (Σ_i Pop_i × GeoDist_i) / (Σ_i Pop_i). (Chart: dispersion index in km for the most populous Brazilian states - PA 371, BA 334, PR 265, MG 247, SC 196, CE 193, RS 188, PE 177, GO 137, SP 121, RJ 43.) (Chart: impact of increasing edging in Bahia - with edge clusters at Salvador, Itabuna, Teixeira de Freitas and Vitória da Conquista the per-cluster dispersion indices are 81, 7, 4 and 54 km; adding two new clusters at Barreiras and Juazeiro reduces them to 33, 7, 4, 18, 12 and 25 km.)
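A small sketch of the dispersion-index calculation; the city names, populations and distances below are hypothetical placeholders, not the deck's data.

```python
# Population-weighted dispersion index from the slide:
# Index = sum_i(Pop_i * GeoDist_i) / sum_i(Pop_i)
# The (city, population, distance-to-candidate-edge-site) tuples are
# hypothetical placeholders, not the deck's data.
def dispersion_index_km(cities: list[tuple[str, int, float]]) -> float:
    total_pop = sum(pop for _, pop, _ in cities)
    weighted = sum(pop * dist_km for _, pop, dist_km in cities)
    return weighted / total_pop

sample = [
    ("City A", 2_900_000, 0.0),    # candidate edge site itself
    ("City B", 600_000, 430.0),
    ("City C", 340_000, 510.0),
]
print(f"dispersion index: {dispersion_index_km(sample):.0f} km")
```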
  9. Reducing the Latency: Transport Delay. Queueing delay (M/M/1 router model): incoming random traffic is shaped at the outgoing port and packets are delayed as the queue grows; the average waiting time is W = 1 / (µ - λ), where λ is the average incoming packet throughput and µ the outgoing packet throughput. Propagation delay through the fiber: P ≈ l × 3.34 µs, with l the fiber length in km (speed-of-light propagation). Output (serialization) delay at the outgoing port: O = B / C, where B is the packet length and C the port throughput. Processing (routing/switching) delay across memory, bus and crossbar: R < 10 µs per network element. Total transport delay over N hops: Transport Delay = N × (R + O + W) + P.
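A runnable sketch of the slide's transport-delay model, Transport Delay = N × (R + O + W) + P; the traffic figures plugged in at the end are assumptions for illustration.

```python
# Transport Delay = N * (R + O + W) + P, per the slide's model.
# All delays computed in microseconds; the example inputs are assumptions.

def queueing_delay_us(lambda_pps: float, mu_pps: float) -> float:
    """M/M/1 average waiting time W = 1/(mu - lambda), in microseconds."""
    if lambda_pps >= mu_pps:
        raise ValueError("offered load must be below service rate")
    return 1e6 / (mu_pps - lambda_pps)

def serialization_delay_us(packet_bytes: int, port_gbps: float) -> float:
    """Output delay O = B / C for one packet on the outgoing port."""
    return packet_bytes * 8 / (port_gbps * 1e3)  # bits / (bits per us)

def propagation_delay_us(fiber_km: float, us_per_km: float = 3.34) -> float:
    """Propagation P = l * 3.34 us/km (slide's constant, a lower bound)."""
    return fiber_km * us_per_km

def transport_delay_us(hops: int, r_us: float, o_us: float,
                       w_us: float, p_us: float) -> float:
    return hops * (r_us + o_us + w_us) + p_us

# Hypothetical example: 6 hops, 8 us processing per element, 1500-byte
# packets on 100G ports, a router serving 1.2M pps against 1.5M pps
# capacity, and 80 km of fiber.
W = queueing_delay_us(1_200_000, 1_500_000)
O = serialization_delay_us(1500, 100)
P = propagation_delay_us(80)
print(f"total transport delay ~ {transport_delay_us(6, 8.0, O, W, P):.0f} us")
```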
  10. Reducing the Latency: Transport Optimization & Evolution. Current transport network: multiple hierarchies (access and distribution DWDM, OTN access and OTN core, inner layers), independent optical and IP domains, distributed control and low scalability, connecting the BTS and the mobile network (MME, S-GW, P-GW, PCRF, HLR/HSS, OCS/OFCS, IMS) to the Internet. Optimized transport network for 5G: simplification; end-to-end centralized and multi-domain control under an SDN controller; virtualization; elastic and scalable; open and flexible architecture; automation. It uses spine-leaf fabrics in the TDC access, distribution and core, IP/SR in access and distribution, a TSN network with IEEE 1588 synchronization on the fronthaul and midhaul, and metro DWDM with an OTN core on the backhaul and backbone, placing Edge Computing close to the RU/DU/CU and Cloud Computing behind the core (AMF, SMF, PCF, UPF, UDM, NSSF, NEF, NRF).
  11. Reducing the Latency: Improve HW Architecture. An ordinary x86 CPU server handles control plane, storage, data-flow processing and IO in the CPU alone; a server architecture with accelerators offloads these to dedicated devices: an FPGA or GPU for compute acceleration, an FPGA for storage acceleration (cryptography, compression, indexing), an FPGA for data-flow processing (inline processing, pre-processing, pre-filtering, exception processing) and an FPGA for IO (cryptography, compression, IO expansion, protocol bridging), with the CPU keeping the control plane (management, security, protocol bridging). (Chart: inference speed-ups relative to a CPU server at 1X - speech, video and recommender inference reach roughly 4-10X on Tesla P4 and 21-36X on Tesla T4.) Flexibility decreases and cost efficiency increases along the spectrum CPU, GPU, FPGA, ASIC. CPU: low compute density, complex control logic, large caches, optimized for serial operations, shallow pipelines, low latency tolerance. GPU: high compute density, high computations per memory access, built for parallel operations, deep pipelines, high latency tolerance. FPGA: Field Programmable Gate Arrays are based on a matrix of configurable logic blocks (CLBs) connected via programmable interconnects; they can be reprogrammed for the desired application or functionality; FPGAs use less power, but new NVIDIA chips use as few as 10-15 watts for a teraflop; verification of complex FPGA designs is harder than writing CUDA code.
  12. Reducing the Latency: Choose the Right SW Architecture. At the cloud, the ETSI NFV architecture: the NFVI (hardware, virtualization layer and virtual resources, host OS and container engine, with the Data Plane Development Kit, DPDK) hosts VNF and CNF instances (VNF1...VNFn, CNF1...CNFn) managed by the EMS, an end-to-end lifecycle service orchestrator and MANO (NFVO, VNFM/CNFM, VIM/CIM), integrated with OSS/BSS. At the edge, the ETSI MEC architecture: the Mobile Edge Host (NFVI with DPDK, the Mobile Edge Platform and mobile applications) plus host-level management (Mobile Edge Platform Manager, VIM/CIM) and system-level management (Mobile Edge Orchestrator, User App LCM Proxy, OSS/BSS).
  13. Unlocking New Opportunities with Low Latency Services. Industry 4.0: industrial automation; digital twin; AR and VR applications; smart factory/industrial automation; industrial control; robot control; process control; manufacturing industry; motion control; remote control. Healthcare: remote diagnosis; emergency response; remote surgery; digital twin. Entertainment: AR, VR and immersive entertainment; virtual tourism; sports VR and 360 video streaming; online gaming; game streaming; sports betting. Transport: driver assistance applications; automated and autonomous driving; enhanced safety; cooperative collision avoidance (CoCA) of connected automated vehicles; video data sharing for assisted and improved automated driving (VaD); changing driving mode; traffic management. Other industries: airlines/airports; utilities (smart energy, smart grid, critical metering); financial industry (stock transactions, electronic funds transfer); insurance; smart cities (security, urban mobility); e-commerce. According to Chetan Sharma Consulting, August 2019: "By 2030, the Edge Internet economy (Low Latency Services) will be over $4.1 Trillion worldwide. The initial growth will come from Edge serving existing use cases and will gradually be accelerated by the new use cases as the deployment becomes more widespread and developers learn to take advantage of the Edge Internet architecture for applications across industry domains in all major markets".
  14. Beyond Low Latency Services: Edge Computing Opportunities for MNOs.
     E2E Customer Applications - Description: the operator offers latency-dependent services directly to end customers, for instance VR 360 content streaming. Revenues: one-time initial upfront payment per subscriber; recurring monthly fees per subscriber. Costs: one-time initial service development and VR headsets; recurring content rights and ongoing feature development.
     B2B2x Solutions - Description: the operator offers specific vertical services, for instance an edge-enabled caching/analytics solution for CCTV cameras. Revenues: one-time configuration fees (charge per camera/device connected); recurring service fees (charge per camera/device connected). Costs: one-time server hardware (MEC) plus infrastructure and installation; recurring basic operations (power, cooling, colocation), monitoring, field maintenance and troubleshooting, software licensing, ongoing feature development.
     System Integration - Description: the operator offers system integration and managed services to large enterprise customers. Revenues: one-time plan & build fees; recurring managed service and planned ongoing support, plus additional (unplanned) feature development. Costs: one-time plan & build costs (hardware and software components, development costs, professional services fees); recurring operating costs (e.g. MEC infrastructure, planned support) and additional feature development.
     Edge IaaS/PaaS/NaaS - Description: the operator provides MEC-enabled IaaS/PaaS/NaaS to a large number of developers and enterprise customers; customers buy MEC-enabled compute, storage and services through a self-service cloud portal and deploy (parts of) their applications. Revenues: recurring pay-as-you-go fees based on compute/storage/connectivity resources and tier. Costs: one-time server hardware (MEC) plus infrastructure and installation; recurring basic operations, monitoring, field maintenance and troubleshooting, software licensing.
     Dedicated Edge Hosting - Description: the operator acts as a colocation provider; edge servers are deployed on an as-needed basis depending on the geographical coverage requested by the public cloud provider. Revenues: one-time initial preparation fees for configuration and installation of hardware; recurring compute/storage fees, connectivity and API usage-based fees. Costs: one-time server hardware (MEC) plus infrastructure and installation; recurring basic operations (power, cooling, colocation), monitoring, field maintenance and troubleshooting; IaaS, SaaS, PaaS. Source: HP & Intel - STL Partners: "EDGE COMPUTING: 5 VIABLE TELCO BUSINESS MODELS 2017"
  15. Edge Data Center Infrastructure (Cloud Data Center versus Edge Data Center). Power - Cloud DC: purpose-built utility/site electrical feeds; multiple grid feeds for redundancy; diesel generators provide a long-term energy source during utility outages. Edge DC: may leverage existing power infrastructure repurposed for edge computing; may lack a redundant power grid feed; diesel generator backup may not be feasible due to space, noise or pollution restrictions. Cooling - Cloud DC: purpose-built, with cooling capability and redundancy built in by design. Edge DC: cooling design options may be limited by the size or location of the data center. Connectivity - Cloud DC: redundant connectivity is typical; designed for performance. Edge DC: redundant connectivity is desirable but depends on applications and site restrictions; designed for low latency. Facility - Cloud DC: purpose-built facility or room; dedicated or MTDC; flexibility in site location to reduce hazards; typically manned, with varying levels of monitoring and automation. Edge DC: purpose-built facility, existing facility or purpose-built enclosure; dedicated or MTDC; latency and the location of compute needs drive EDC location; more likely to be unmanned, with remote monitoring and automation. Source: TIA 2019
  16. Low Latency Brief CapEx Analysis for a Single Application. In the cloud model, each of the N nodes carries a transport cost C_T to reach one central data centre of cost C_DC; in the edge model, each of the N nodes carries its own edge data centre cost C_EDC. CloudCost = N × C_T + C_DC = N × C_T + M × C_S + C_InfraDC, while EdgeCost = N × C_EDC ≈ M × C_S + C_InfraEDC + σ_EDC. The transport cost grows with the serving radius R and the carried traffic: C_T = c_T × R × Traff_R = c_T × R × Dens × K × R² = K × Dens × R³, so CloudCost = K × Dens × R³ + M × C_S + C_InfraDC. Expressing the radius through the latency budget L (R = L / 3.34) gives CloudCost = K × Dens × (L / 3.34)³ + M × C_S + C_InfraDC, and the difference is CloudCost - EdgeCost = K × Dens × (L / 3.34)³ + C_InfraDC - C_InfraEDC - σ_EDC. (Chart: infrastructure cost versus latency L for two traffic densities, Dens_1 > Dens_2, with CloudCost_1 and CloudCost_2 crossing the flat EdgeCost line.) Besides latency, Edge Computing in general brings an immediate benefit through local traffic offloading, which reduces overall CapEx; higher traffic density can rapidly justify the Edge Computing deployment.
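A sketch of the slide's CapEx comparison as code; only the functional form comes from the slide, and every coefficient value below is a made-up placeholder.

```python
# CloudCost(L) = K * Dens * (L / 3.34)**3 + M * C_S + C_InfraDC
# EdgeCost     ~ M * C_S + C_InfraEDC + sigma_EDC
# Coefficients below are arbitrary placeholders for illustration only.
K = 0.002            # transport cost coefficient
C_S = 10.0           # cost per server (arbitrary units)
M = 50               # number of servers for the application
C_INFRA_DC = 200.0   # central data-centre infrastructure cost
C_INFRA_EDC = 400.0  # aggregate edge-site infrastructure cost
SIGMA_EDC = 100.0    # edge overhead (dispersion penalty)

def cloud_cost(latency_budget_us: float, traffic_density: float) -> float:
    radius = latency_budget_us / 3.34          # reachable radius for the budget
    return K * traffic_density * radius**3 + M * C_S + C_INFRA_DC

def edge_cost() -> float:
    return M * C_S + C_INFRA_EDC + SIGMA_EDC

# Higher traffic density makes the cloud option cross the flat edge cost sooner.
for dens in (0.5, 2.0):
    for budget in (50, 200, 500):              # latency budget in microseconds
        cheaper = "edge" if edge_cost() < cloud_cost(budget, dens) else "cloud"
        print(f"density={dens}, budget={budget} us -> {cheaper} is cheaper")
```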
  17. Low Latency OpEx Analysis. Per-tier characteristics, from core-network-based OpEx toward cell-site-based OpEx: Hyperscale DC - 1 big place, more than 5,000 servers, > 100 MW and > 10,000 m² per site; around two sites in total, roughly 1 GW and > 20,000 m² overall. TDC Core - 1 to 20 places, 100-500 servers each, 400 kW to 2 MW and 200-1,000 m² per site; 1.6-40 MW and 400-20,000 m² in total. TDC Distribution - 10 to 50 places, 50-100 servers each, 100-500 kW and 100-200 m² per site; 1-25 MW and 1,000-40,000 m² in total. TDC Access - 50 to 100 places, 1-10 servers each, 20-50 kW and 20-100 m² per site; 1-5 MW and 1,000-5,000 m² in total. PODs - more than 1,000 sites (serving up to millions of devices), 1 server each, 1-5 kW and about 4 m² per site; 1,000-10,000 sites, 5-50 MW and 4,000-40,000 m² in total. (Chart, based on Disruptive Analysis 2018: power consumption per platform on a logarithmic scale, from sub-watt processing chips and a Raspberry Pi, through an edge platform around 30 W, an NVIDIA Pegasus around 500 W, a standard server around 5 kW, a TDC Access/EdgeMicro site around 50 kW, a TDC Distribution site around 500 kW and a TDC Core around 2 MW, up to a hyperscale DC around 1 GW.)
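A small sketch that aggregates per-site figures into tier totals in the spirit of the slide's table; the representative per-site values are assumptions picked inside the quoted ranges.

```python
# Aggregate OpEx drivers per tier: total = number_of_sites * per-site value.
# Representative per-site figures are assumptions within the slide's ranges.
TIERS = {
    # name: (number of sites, power per site in kW, area per site in m^2)
    "TDC Core": (20, 1_000, 600),
    "TDC Distribution": (50, 300, 150),
    "TDC Access": (100, 35, 40),
    "PODs": (5_000, 3, 4),
}

for name, (sites, kw_per_site, m2_per_site) in TIERS.items():
    total_mw = sites * kw_per_site / 1_000
    total_m2 = sites * m2_per_site
    print(f"{name}: ~{total_mw:.1f} MW, ~{total_m2:,} m2 in total")
```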
  18. How Oi Can Fit the New Low Latency Business. Oi is the operator with the largest fiber optic network in the country, with over 375,000 km of fiber cables for local and long-distance access. The company has a large long-distance fiber optic network serving more than 2,000 Brazilian municipalities, which is being upgraded to support 100 Gbps speeds. Oi has extensive real estate facilities to support fixed and mobile networks, such as thousands of DSLAM and OLT cabinets, remote stages, cell sites and buildings. All TDC installation sites follow selection criteria with a dual-fiber approach: OTN network sites with reliable transmission; preference for sites with InnerCore and OuterCore routers; lower latency between sites; better traffic distribution among regions, yielding TDC Core sites with high transport availability (Source: GTV Virtualization Plan 2017/2018). (Map of the optical transport topology across the concession regions: Group A North, Group B South, Group C the DF/MG/RJ/SP/PR polygon, Group D SE/NE.) (Chart: fixed broadband offering by operator - Oi, Vivo, TIM and Claro - as a share of GDP, population and municipalities covered; Oi reaches 99% of GDP, 98% of the population and 93% of municipalities. Source: Anatel/IBGE 2018.)
  19. Summary ● Low latency is becoming the king: the next wave of data services will have latency as its main attribute. Low latency will allow unprecedented immersive, real-time services and open a new Internet era: the Tactile Internet. Many technologies enable low latency and catalyze the Tactile Internet, such as 5G and Edge Computing. ● Tactile Internet requirements: the immersive service experience depends on which human sense reaction is involved, and visual reaction is a critical one, requiring latency below 5 ms. New robotics control for Industry 4.0 will likewise require very low latency for proper production integration. ● Low latency is a key enabler for 5G: from providing Ultra Reliable Low Latency Communication services to enabling the resource and interference management algorithms that optimize and improve network capacity. ● Edge Computing: the most important tool to reduce latency; it consists of facilitating data processing at or near the source of data generation (Gartner), so application data does not have to travel from the source to distant application and processing hosts. ● Reducing the latency: a properly dimensioned and configured transport network can certainly improve network latency. In addition, new transport architectures with simplification (collapsing network layers), network-aware multi-domain control, automation and similar techniques will definitely be key for optimizing latency for new critical applications. Improving processor performance with technologies beyond regular x86 servers, such as GPUs, FPGAs and DPDK, shortens workload execution and returns data to the user application promptly. ● Upcoming low latency opportunities: the Edge Internet economy (low latency services) will be worth over $4.1 trillion worldwide. The initial growth will come from the Edge serving existing use cases and will gradually be accelerated by new use cases as deployment becomes more widespread and developers learn to take advantage of the Edge Internet architecture for applications across industry domains in all major markets. ● Edge Computing opportunities: the opportunities for MNOs go beyond directly supporting services and use cases for end customers. Edge Computing can open several opportunities, such as service integration, managed services, IaaS/PaaS/SaaS and new colocation business models. ● CapEx optimization: besides latency, Edge Computing in general brings an immediate benefit through local traffic offloading, which reduces overall CapEx; higher traffic density can rapidly justify the Edge Computing deployment. ● How Oi can help: Oi is the operator with the largest fiber optic network in the country, with over 375,000 km of fiber cables for local and long-distance access; it has a large long-distance fiber optic network serving more than 2,000 Brazilian municipalities that is being upgraded to support 100 Gbps speeds; and it has extensive facilities to support fixed and mobile networks, such as thousands of DSLAM and OLT cabinets, remote stages, cell sites and buildings, served by an abundant optical transport network.
  20. ¡Gracias! Thanks! Obrigado! Q&A alberto@oi.net.br
