OIF experts presented project updates and discussed overcoming implementation challenges through interoperability, open optical networking and disaggregation at NGON & DCI World 2022, held in Barcelona, Spain, June 21–23, 2022.
Speakers gave an overview of OIF’s 400ZR work, including results from a recent interoperability demonstration, co-packaging, Common Management Interface Specification (CMIS), common electrical interfaces (112G and 224G) and Transport Software Defined Networking (SDN) Application Program Interface (API).
Behavioral Modeling of Clock/Data Recovery (Arrow Devices)
Clock/data recovery (CDR) logic is tricky to implement correctly. To verify the CDR logic implemented in a design, the corresponding verification infrastructure must model the same behavior correctly.
This presentation covers the various issues faced when modeling CDR behaviorally, along with their solutions.
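As a sketch of what such a behavioral model can look like, the loop below mimics a bang-bang (Alexander-style) phase detector: each data edge yields only a one-bit early/late decision, and the recovered sampling phase is nudged by a fixed step in that direction. The constants and structure are illustrative, not taken from the presentation.

```python
def bang_bang_cdr(true_phase, step=0.005, n_edges=1000):
    """Behavioral model of a bang-bang CDR phase tracker.

    Each data edge gives only an early/late decision; the loop nudges
    its phase estimate by a fixed step, so after lock the estimate
    dithers within one step of the true edge phase.
    """
    phase = 0.0  # recovered sampling phase estimate, in unit intervals (UI)
    for _ in range(n_edges):
        err = true_phase - phase
        phase += step * (1 if err > 0 else -1)  # bang-bang: sign only, no magnitude
    return phase

recovered = bang_bang_cdr(true_phase=0.37)
assert abs(recovered - 0.37) <= 0.005 + 1e-9  # locked to within one step
```

A real behavioral model would add jitter injection, frequency offset, and lock-detect logic; this shows only the core tracking loop that the verification infrastructure has to reproduce.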
Join Teledyne LeCroy for a discussion of what S-parameters are and why we should care about them. As serial data rates move into the multi-gigabit domain, S-parameters play an important role in understanding system performance. We will uncover the four main patterns found in S-parameters and learn what they can tell us about our interconnects.
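As a taste of the math involved, the snippet below converts hypothetical single-frequency S11/S21 values into the return-loss and insertion-loss figures an engineer would read off a plot; the sample values are invented for illustration and are not from the webinar.

```python
import cmath
import math

def db(mag):
    """Magnitude to decibels for voltage-wave ratios."""
    return 20 * math.log10(mag)

# Hypothetical 2-port measurement at one frequency point.
s11 = 0.05 * cmath.exp(1j * math.radians(30))    # reflection at port 1
s21 = 0.9 * cmath.exp(-1j * math.radians(120))   # transmission, port 1 -> port 2

insertion_loss_db = -db(abs(s21))  # loss is the negative of |S21| in dB
return_loss_db = -db(abs(s11))

assert 0.91 < insertion_loss_db < 0.93   # ~0.92 dB through loss
assert 26.0 < return_loss_db < 26.05     # ~26 dB return loss (well matched)
```

Across a swept frequency band, the shapes these quantities trace out (ripple, roll-off, resonant suckouts, crosstalk) are the kinds of patterns the talk refers to.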
An Overview of the ATSC 3.0 Physical Layer Specification (Alwin Poulose)
ATSC 3.0 Physical Layer Specification. Luke Fay, Lachlan Michael, David Gómez-Barquero, Nejib Ammar, and M. Winston Caldwell. IEEE Transactions on Broadcasting, vol. 62, no. 1, March 2016.
PowerArtist™ includes production-proven RTL power analysis with interactive visual debug, analysis-driven automatic RTL power reduction, and a Tcl interface to the database, enabling custom reports and tracking of power through regressions. PowerArtist-generated models bridge the RTL-to-layout gap, delivering physically aware RTL power accuracy and RTL-power-driven early power grid integrity. This presentation provides an overview of PowerArtist and covers RTL design-for-power best practices using real-life examples. Learn more on our website: https://bit.ly/10Rpcxu
This presentation covers the basics of realizing logic functions using static CMOS logic: how to realize a Boolean expression by drawing a pull-up network and a pull-down network. It also briefly covers pass-transistor logic and the concepts of weak and strong outputs.
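The pull-up/pull-down duality can be checked mechanically. The sketch below models a 2-input static CMOS NAND gate: the NMOS pull-down network is the series connection of the inputs, the PMOS pull-up network is its dual (parallel), and exactly one network conducts for every input combination, which is what gives static CMOS its strong 0 and strong 1 outputs. This is an illustrative model, not from the presentation.

```python
def static_cmos_nand(a, b):
    """Static CMOS 2-input NAND as complementary pull networks.

    Pull-down network (NMOS): a and b in series -> conducts when both are high.
    Pull-up network (PMOS): a and b in parallel -> conducts when either is low.
    """
    pdn_conducts = a and b                  # series NMOS pair
    pun_conducts = (not a) or (not b)       # parallel PMOS pair (dual network)
    # Complementary by construction: never both conducting (short), never
    # neither (floating output) -- hence always a strong output.
    assert bool(pdn_conducts) != bool(pun_conducts)
    return 0 if pdn_conducts else 1

truth_table = [static_cmos_nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
assert truth_table == [1, 1, 1, 0]  # NAND
```

Pass-transistor logic breaks this complementary guarantee, which is where the weak-output concept the slides mention comes in.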
I have prepared it to create an understanding of delay modeling in VLSI.
Regards,
Vishal Sharma
Doctoral Research Scholar,
IIT Indore
vishalfzd@gmail.com
Design and build a Private Cloud for your Enterprise using a Scalable Architecture.
- Bridge IT and the Public Cloud
- Reduce Cost
- On-Demand Services
- Run Scalable Applications
- Handle Traffic Growth
- Meet Compliance Objectives
- Offer Operational Flexibility and Efficiency
Get ready to dive into the exciting world of IoT data processing! 🌐📊
Join us for a thought-provoking webinar on "Processing: Turning IoT Data into Intelligence" hosted by industry visionary Deepak Shankar, founder of Mirabilis Design. Discover how to harness the potential of IoT devices by strategically choosing processors that optimize power, performance, and space.
In this engaging session, you'll explore key insights:
✅ Impact of processor architecture on Power-Performance-Area optimization
✅ Enabling AI and ML algorithms through precise compute and storage requirements
✅ Future trends in IoT hardware innovation
✅ Strategies for extending battery life and cost prediction through system design
Don't miss the chance to learn how to leverage a single IoT Edge processor for multiple applications and much more. This is your opportunity to gain a competitive edge in the evolving IoT landscape.
According to a new Gartner report, “Around 10% of enterprise-generated data is created and processed outside a traditional centralized data center or cloud. By 2022, Gartner predicts this figure will reach 75%”. In addition to hosting new 5G-era services, the other major network operator driver for edge compute and edge clouds is deploying virtualized network infrastructure, replacing many dedicated hardware-based elements with virtual network functions (VNFs) running on general-purpose edge compute. Even portions of access networks are being virtualized, and many of these functions need to be deployed close to end users. The combination of these infrastructure and application drivers is a major reason that so much of 5G-era network transformation revolves around edge cloud distribution.
A revolution is going on at the Edge of the Network.
Why is the Edge important?
How Edge Computing is shaping the way we do IoT, AR/VR, Big Data, Machine Learning and Analytics applications.
What are the important problems, and whose problem is this?
What solutions is the industry looking into right now?
This review of the "Industry report by SDxCentral" summarizes what is going on in the industry.
Hai Tao at AI Frontiers: Deep Learning for Embedded Vision Systems (AI Frontiers)
This presentation will demonstrate our recent progress in developing advanced computer vision algorithms using embedded platforms for video-based face recognition, vehicle attribute analysis, urban management event detection, and high-density crowd counting. These algorithms combine the traditional CV approach with recent advances in deep learning to make high-performance computer vision systems practical and enable products in several vertical markets including intelligent transportation systems (ITS), business intelligence (BI), and smart video surveillance. We will demonstrate algorithm design and optimization schemes for several recently available processors from Movidius, Nvidia, and ARM.
Cloud-Based Datacenter Network Acceleration Using FPGA for Data Offloading (Onyebuchi Nosiri)
Currently, the high-performance processors in the Spine-Leaf, Mesh, and Router layer-3 (SLMR-3) backend server domain have multiple cores, but data offloading from processor to peripheral is not keeping pace with the Quality of Service (QoS) required to balance the workload on a Warehouse Scaled Computer (WSC) running a developed Enterprise Energy Tracking Analytic Cloud Portal (EETACP) data center network. High-speed, low-latency interconnects between the processors and the Field Programmable Gate Array (FPGA) are critical for achieving performance benefits in an EETACP deployment. Most of the servers in WSC architectures run at average utilization rates and perform well under peak processing power. These servers are good candidates for FPGA processors in cloud-based data centers owing to their acceleration coherency. This paper makes a strong case for cloud-based support for EETACP. An FPGA-based Spine-Leaf model is proposed as an alternative to traditional network models for EETACP provisioning. The paper analyzes reconfigurable FPGAs and characterizes a simplified process model for a hyperscale FPGA cloud design description. To validate the performance, comparisons were made with two similar networks, namely DCell and BCube, for enterprise application deployments. It was concluded that FPGA-based DCN acceleration for EETACP offers acceptable QoS expectations.
Next Generation Inter-Data Center Networking (Infinera)
Presented by Chris Liou, Vice President, Network Strategy, at ECOC 2013 in London, UK (ECOC Special Symposia 2: Next Generation Data Centres - Paving the Way for the Zettabyte Era).
High Scalability Network Monitoring for Communications Service Providers (CA Technologies)
CA Performance Management is a big data collection, warehousing and analytics solution that helps communications service providers maximize return on their network infrastructure investments and lower the cost of network operations.
Learn more about CA Performance Management here: http://bit.ly/1vrQPJB
Delivering Carrier Grade OCP for Virtualized Data Centers (Radisys Corporation)
This webinar explores the requirements for carrier grade Open Compute Project (OCP) infrastructure for virtualized telecom data centers delivering SDN and NFV for digital services.
Madhu Rangarajan will provide an overview of networking trends they are seeing in the cloud, various network topologies and tradeoffs, and trends in the acceleration of packet-processing workloads. They will also talk about some of the work going on at Intel to address these trends, including FPGAs in the datacenter.
A Dataflow Processing Chip for Training Deep Neural Networks (inside-BigData.com)
In this deck from the Hot Chips conference, Chris Nicol from Wave Computing presents: A Dataflow Processing Chip for Training Deep Neural Networks.
Watch the video: https://wp.me/p3RLHQ-k6W
Learn more: https://wavecomp.ai/
and
http://www.hotchips.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
2. Intent
• Highlight a few trends affecting telecom infrastructure architecture:
• Digital Society
• Big Data and Machine Learning
• List some enablers for evolving demand:
• Edge Computing
• Graphics Processing Unit (GPU)
• Field Programmable Gate Array (FPGA)
• Discuss some supporting architecture options:
• Customized Nodes
• Network Slicing
20161004 CC 4.0 SA, NC 2
5. Trends
• Digitalization
• Virtualization
• Physical -> digital -> data driven
• IoE -> data 4V (volume, variety, velocity, veracity) -> analytics, optimization, ML
• Mobilization
• Connectivity
• Unconnected -> mobile -> mobile optimized
• Increase system scope/domain/complexity/interactions
• Place -> People -> Devices
• Automation -> self-directed/managing/healing
• Mobility
• Time => anytime
• space => anywhere
• data => more available information/inference
10. 5G Main Enablers
• Dynamic RAN provides a RAN that can adapt to rapid changes in user needs and the mix of generic 5G services:
• Ultra-Dense Networks,
• Moving Networks,
• Devices acting as temporary access nodes,
• D2D communication for both access and backhaul.
• Lean System Control Plane (LSCP) provides new lean signaling/control information, allows separation of data and control information, and supports a large variety of devices with very different capabilities.
• Localized Contents and Traffic Flows allow offloading, aggregation and distribution of real-time and cached content.
• Spectrum Toolbox contains a set of enablers (tools) to allow 5G systems to operate under different regulatory frameworks and share enablers.
11. Data Process at Edge
from: http://ubiquity.acm.org/article.cfm?id=2822875
12. Edge Computing
• Pushes applications, data and computing power (services) to the logical extremes of a network.
• Replicates fragments of information across distributed networks.
• Places customized nodes (SDR, SDN, NFV, …) closer to the client whenever possible.
15. Advantages of Edge Computing
• Decreases the data volume that must be moved, the consequent traffic, and the distance the data must travel, thereby reducing transmission costs and shrinking latency.
• Reduces or eliminates network bottlenecks.
• Fast response times.
• Reduces security risk by keeping non-essential data out of the network core; only necessary data is forwarded for analysis.
• Deeper insights, with privacy control: analyze sensitive data locally instead of sending it to the cloud for analysis.
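The "forward only what matters" idea in the advantages above can be sketched in a few lines: the edge node analyzes readings locally and ships only out-of-range values toward the core. Thresholds and readings below are invented for illustration.

```python
def edge_filter(readings, low=10.0, high=30.0):
    """Edge-side pre-filtering sketch: keep normal telemetry local,
    forward only out-of-range readings to the network core for analysis.
    The [low, high] band is a hypothetical 'normal' range."""
    forwarded = [r for r in readings if not (low <= r <= high)]
    reduction = 1 - len(forwarded) / len(readings)  # fraction of traffic avoided
    return forwarded, reduction

readings = [21.5, 22.0, 35.2, 21.8, 9.1, 22.3, 22.1, 21.9]
forwarded, reduction = edge_filter(readings)
assert forwarded == [35.2, 9.1]   # only the anomalies cross the network
assert reduction == 0.75          # 75% less data sent to the core
```

This is the same pattern whether the "reading" is a temperature sample or a video analytics event: the decision logic lives at the edge, the core sees only exceptions.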
17. KT – Network Architecture Evolution 4G to 5G
from http://www.netmanias.com/en/?m=attach&no=13956
18. KT - Network Architecture Evolution 4G to 5G
from http://www.netmanias.com/en/?m=attach&no=13955
19. Edge Node Architecture
(need design improvement, this is only a place holder)
from: http://ubiquity.acm.org/article.cfm?id=2822875
21. Graphics Processing Unit (GPU)
• A GPU is a computer chip that performs rapid mathematical calculations, primarily for the purpose of rendering images.
• A GPU generally has a large number of slow and weak processors (lower operating frequency, fewer registers, simpler ALUs, etc.).
• GPUs come with lots of memory and generally have high memory bandwidth to support the hundreds of small processors that make up the GPU.
• GPUs are special-purpose and can compute vector math, matrix math, pixel transforms and rendering jobs about 10-100x faster than the equivalent CPU, as all these tasks are embarrassingly parallel.
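A minimal illustration of that embarrassing parallelism: SAXPY (a*x + y) touches every element independently, so it maps onto hundreds of simple cores with no coordination. Here a Python thread pool stands in for GPU stream processors; a real kernel would be written in CUDA or OpenCL.

```python
from concurrent.futures import ThreadPoolExecutor

def saxpy(a, x, y):
    """Data-parallel SAXPY: each output element depends only on its own
    inputs, so every 'worker' can run independently -- the shape of
    workload GPUs accelerate. Thread workers here are just stand-ins
    for GPU stream processors."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda xy: a * xy[0] + xy[1], zip(x, y)))

out = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
assert out == [12.0, 24.0, 36.0]
```

The contrast with a CPU workload is that nothing in the per-element lambda reads a neighbor's result; any dependency between elements would force serialization and erase the 10-100x advantage the slide cites.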
22. CPU vs GPU
from http://www.electronicspecifier.com/communications/vivante-es-design-magazine-gpus-the-next-must-have
23. Parallel Computing and Streaming
The Landscape of Parallel Computing Research: A View from Berkeley
The 13 application areas where OpenCL can be used
1. Dense Linear Algebra
2. Sparse Linear Algebra
3. Spectral Methods
4. N-Body Methods
5. Structured Grids
6. Unstructured Grids
7. Monte Carlo
8. Combinational Logic
9. Graph traversal
10. Dynamic Programming
11. Backtrack and Branch + Bound
12. Construct Graphical Models
13. Finite State Machine
14. …
24. Current VPP Hardware Acceleration
from https://fd.io/technology
25. Field Programmable Gate Arrays (FPGAs)
• Field Programmable Gate Arrays (FPGAs) are semiconductor devices that are based around a matrix of configurable logic blocks (CLBs) connected via programmable interconnects.
• FPGAs can be reprogrammed to desired application or functionality requirements after manufacturing.
26. FPGA Applications Partial List
from: http://www.xilinx.com/training/fpga/fpga-field-programmable-gate-array.htm
• Aerospace & Defense - Radiation-tolerant FPGAs along with intellectual property for image processing, waveform generation, and partial reconfiguration for SDRs.
• ASIC Prototyping - ASIC prototyping with FPGAs enables fast and accurate SoC system modeling and verification of embedded software.
• Audio - Xilinx FPGAs and targeted design platforms enable higher degrees of flexibility, faster time-to-market, and lower overall non-recurring engineering costs (NRE) for a wide range of audio, communications, and multimedia applications.
• Automotive - Automotive silicon and IP solutions for gateway and driver assistance systems, comfort, convenience, and in-vehicle infotainment. Learn how Xilinx FPGAs enable automotive systems.
• Broadcast - Adapt to changing requirements faster and lengthen product life cycles with Broadcast Targeted Design Platforms and solutions for high-end professional broadcast systems.
• Consumer Electronics - Cost-effective solutions enabling next generation, full-featured consumer applications, such as converged handsets, digital flat panel displays, information appliances, home networking, and residential set top boxes.
• Data Center - Designed for high-bandwidth, low-latency servers, networking, and storage applications to bring higher value into cloud deployments.
• High Performance Computing and Data Storage - Solutions for Network Attached Storage (NAS), Storage Area Network (SAN), servers, and storage appliances.
• Medical - For diagnostic, monitoring, and therapy applications, the Virtex FPGA and Spartan® FPGA families can be used to meet a range of processing, display, and I/O interface requirements.
• Video & Image Processing - Xilinx FPGAs and targeted design platforms enable higher degrees of flexibility, faster time-to-market, and lower overall non-recurring engineering costs (NRE) for a wide range of video and imaging applications.
• Wired Communications - End-to-end solutions for reprogrammable networking linecard packet processing, Framer/MAC, serial backplanes, and more.
27. GPU and FPGA Considerations
• GPUs have good penetration in the ML community; there is big inertia to overcome before people move away from GPUs and CUDA.
• FPGAs use less power, but new NVIDIA chips use as little as 10-15 watts per teraflop.
• Verification of complex designs implemented on an FPGA is a big challenge; in contrast, testing and validating CUDA code is relatively easy.
• Fast digital design: no waiting to obtain a target chip.
• The design can be implemented on the FPGA and tested at once.
• FPGAs are good for prototyping; design changes can be absorbed in the field.
28. Observations and Interpretations
• Add flexible Edge Computing Nodes to the 5G network.
• Use high-computing-power, customized SoC+FPGA for BTS/eNodeB, and form a cluster/cloud to share resources.
• Accelerate performance with Field Programmable Gate Arrays.
• Adopt Software Defined Radio and dynamic RAN.
• Leverage GPUs for distributed data analytics and ML.
• Consider a Micro-HPC (High Performance Computing) data center for deployment.
31. 5G Requirements
Business Requirements
• Massive broadband (xMBB) that delivers gigabytes of bandwidth on demand <- velocity
• Massive machine-type communication (mMTC) that connects billions of sensors and machines <- variety + volume
• Critical machine-type communication (uMTC) that allows immediate feedback with high reliability and enables, for example, remote control over robots and autonomous driving. <- velocity + veracity
Technology Requirements
• 1-10 Gbps connections to end points in the field (i.e. not theoretical maximum)
• 1 millisecond end-to-end round trip delay (latency)
• 1000x bandwidth per unit area
• 10-100x number of connected devices
• (Perception of) 99.999% availability
• (Perception of) 100% coverage
• 90% reduction in network energy usage
• Up to ten-year battery life for low-power, machine-type devices
35. 5G mMBB Requirements and Technology Enablers
• Software Defined Radio (SDR) -> multiple radio technologies.
• Massive multiple-input and multiple-output (MIMO) antennas -> data throughput and capacity.
• Dynamic RAN
International Mobile Telecommunications for the year 2000 (IMT-2000) is a worldwide set of requirements for a family of standards for the 3rd generation of mobile communications.
International Mobile Telecommunications-Advanced (IMT-Advanced Standard) are requirements issued by the ITU-R of the International Telecommunication Union (ITU) in 2008 for what is marketed as 4G (or sometimes as 4.5G) mobile phone and Internet access service.
When data analysis is done at the edge of a network, that's known as "edge analytics."
Edge computing does not replace cloud computing, however. In reality, an analytic model or rules might be created in a cloud then pushed out to edge devices.
Some edge devices are also incapable of doing analysis. Edge computing is also closely related to "fog computing," which also entails data processing from the edge to the cloud.
(Let's also not forget the data warehouse, which is used for massive storage of data and slower analytic queries).
Solution accelerators: The cornerstone of the system is the use of semantic data models to represent and visualize real-world edge devices, such as smart garbage cans or street lamps. “You can have a rule at the edge that says, when a vehicle comes along, turn the street lamp on,” Bates said. “So you might be saving power.”
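The street-lamp rule quoted above is a one-liner once the edge node receives device events. The sketch below is purely illustrative; the event fields and command format are hypothetical, not from any Plat.One API.

```python
def street_lamp_rule(event):
    """Minimal edge-rule sketch for the quoted example: when a vehicle
    is detected, turn the street lamp on; otherwise keep it off to save
    power. The event/command shapes are invented for illustration."""
    if event.get("type") == "vehicle_detected":
        return {"device": event["lamp_id"], "command": "on"}
    return {"device": event.get("lamp_id"), "command": "off"}

action = street_lamp_rule({"type": "vehicle_detected", "lamp_id": "lamp-42"})
assert action == {"device": "lamp-42", "command": "on"}
```

The point of distributed fog intelligence is that this rule executes on the lamp's local gateway, with no cloud round-trip; the cloud is only involved when the rule itself is created or updated.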
IoT Solution Engine: A cloud-ready, scalable system that supports stream management.
Distributed Fog Intelligence: Allows business rules to be dynamically distributed across a device network and supports edge analytics as well as machine protocols. According to Plat. One, an edge agent also allows monitoring and control of devices, including configuration, software updates, and lifecycle management.
This is only a placeholder; apply SDN/NFV and unikernels here; it may be combined with an eNodeB for a possible high-performance micro-datacenter.
GPGPU - General-purpose computing on graphics processing units
From: https://www.quora.com/Whats-the-difference-between-a-CPU-and-a-GPU
http://superuser.com/questions/308771/why-are-we-still-using-cpus-instead-of-gpus
TL;DR answer: GPUs have far more processor cores than CPUs, but because each GPU core runs significantly slower than a CPU core and lacks the features needed for modern operating systems, GPUs are not appropriate for performing most of the processing in everyday computing. They are most suited to compute-intensive operations such as video processing and physics simulations.
GPGPU is still a relatively new concept. GPUs were initially used for rendering graphics only; as technology advanced, the large number of cores in GPUs relative to CPUs was exploited by developing computational capabilities for GPUs so that they can process many parallel streams of data simultaneously, no matter what that data may be. While GPUs can have hundreds or even thousands of stream processors, they each run slower than a CPU core and have fewer features (even if they are Turing complete and can be programmed to run any program a CPU can run). Features missing from GPUs include interrupts and virtual memory, which are required to implement a modern operating system.
In other words, CPUs and GPUs have significantly different architectures that make them better suited to different tasks. A GPU can handle large amounts of data in many streams, performing relatively simple operations on them, but is ill-suited to heavy or complex processing on a single or few streams of data. A CPU is much faster on a per-core basis (in terms of instructions per second) and can perform complex operations on a single or few streams of data more easily, but cannot efficiently handle many streams simultaneously.
As a result, GPUs are not suited to handle tasks that do not significantly benefit from or cannot be parallelized, including many common consumer applications such as word processors. Furthermore, GPUs use a fundamentally different architecture; one would have to program an application specifically for a GPU for it to work, and significantly different techniques are required to program GPUs. These different techniques include new programming languages, modifications to existing languages, and new programming paradigms that are better suited to expressing a computation as a parallel operation to be performed by many stream processors. For more information on the techniques needed to program GPUs, see the Wikipedia articles on stream processing and parallel computing.
Modern GPUs are capable of performing vector operations and floating-point arithmetic, with the latest cards capable of manipulating double-precision floating-point numbers. Frameworks such as CUDA and OpenCL enable programs to be written for GPUs, and the nature of GPUs makes them most suited to highly parallelizable operations, such as in scientific computing, where a series of specialized GPU compute cards can be a viable replacement for a small compute cluster, as in NVIDIA Tesla Personal Supercomputers. Consumers with modern GPUs who are experienced with Folding@home can use them to contribute with GPU clients, which can perform protein folding simulations at very high speeds and contribute more work to the project (be sure to read the FAQs first, especially those related to GPUs). GPUs can also enable better physics simulation in video games using PhysX, accelerate video encoding and decoding, and perform other compute-intensive tasks. It is these types of tasks that GPUs are most suited to performing.
AMD is pioneering a processor design called the Accelerated Processing Unit (APU) which combines conventional x86 CPU cores with GPUs. This approach enables graphical performance vastly superior to motherboard-integrated graphics solutions (though no match for more expensive discrete GPUs), and allows for a compact, low-cost system with good multimedia performance without the need for a separate GPU. The latest Intel processors also offer on-chip integrated graphics, although competitive integrated GPU performance is currently limited to the few chips with Intel Iris Pro Graphics. As technology continues to advance, we will see an increasing degree of convergence of these once-separate parts. AMD envisions a future where the CPU and GPU are one, capable of seamlessly working together on the same task.
Nonetheless, many tasks performed by PC operating systems and applications are still better suited to CPUs, and much work is needed to accelerate a program using a GPU. Since so much existing software uses the x86 architecture, and because GPUs require different programming techniques and are missing several important features needed for operating systems, a general transition from CPU to GPU for everyday computing is very difficult.
Contrary to regular computer programs, which are sequential, VHDL statements are inherently concurrent (parallel).
For that reason, VHDL is usually referred to as a code rather than a program.
In VHDL, only statements placed inside a PROCESS, FUNCTION, or PROCEDURE are executed sequentially.
### C-RAN
From: http://www.fujitsu.com/downloads/TEL/fnc/whitepapers/CloudRANwp.pdf
Operators who choose wireless architectures have several options to choose from: small cells (SC-RAN); carrier Wi-Fi (CW-F); and Distributed Antenna System (DAS). These and a host of other solutions are being introduced by network operators as methods of expanding their network to accommodate data growth.
A Centralized-RAN, Cloud-RAN, or C-RAN architecture addresses capacity and coverage issues, while supporting mobile xHaul (Fronthaul and Backhaul) solutions as well as network self-optimization, self-configuration, self-adaptation with software control and management through SDN and NFV. Cloud RAN also provides great benefits in controlling ongoing operational costs, improving network security, network controllability, network agility and flexibility.
Growth in data traffic also severely impacts power consumption, with consequent cost burdens. Most of the power consumption is in the radio access networks, specifically at base stations. These consume more than 80% of the total power drawn by a typical mobile network system [4]. Reducing energy cost and shrinking the carbon footprint to transform to an efficient power management paradigm are increasingly urgent imperatives, especially in combination with demand for increased capacity, better coverage, and all-time-high throughput. New and alternative techniques and architectures that favour efficient operation, low power consumption, agile traffic management, and high reliability are not just nice-to-haves; they are business essentials.
Remote Radio Heads (RRHs) connect to the baseband unit (BBU) using CPRI (Common Public Radio Interface) or OBSAI (Open Base Station Architecture Initiative) interfaces. The RRHs include the radio, the associated amplification/filtering, and the antenna. The BBU is implemented separately and performs the centralized signal processing functionality of the RAN. The centralized BBU enables agility, faster service delivery, cost savings, and improved coordination of radio capabilities across a set of RRHs.
Check https://www.crowdsupply.com/lime-micro/limesdr/updates/remote-radio-head
Air interface https://www.crowdsupply.com/lime-micro/limesdr/updates/oai-lte-demo
Octave https://www.crowdsupply.com/lime-micro/limesdr/updates/lab-gnu-ocatve
### V-RAN
The Virtualized-RAN (V-RAN) architecture virtualizes the BBU functionality and services in a centralized BBU pool (V-BBU) in the Central Office (CO) that can effectively manage on-demand resource allocation, mobility, and interference control for a large number of interfaces using programmable software layers. V-RAN architecture enjoys software-defined capacity and scaling limits.
### CPRI (Common Public Radio Interface)
CPRI (which is more widely adopted in the industry than OBSAI) is a digital interface standard for encapsulating radio samples between the RRH and the Baseband Unit (BBU). The interface is not packet-based; signals are multiplexed in a low-latency, timeslot-like fashion. CPRI specifies a maximum latency, near-zero jitter, and a near-zero bit error rate. In practice, a value of 0.4 milliseconds for transport leaves an acceptable delay budget for processing requirements and propagation delay.
The CPRI capacity required is up to 10 GbE, with distances of up to 40 km between the RRH and the BBU.
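A quick sanity check that the 0.4 ms transport budget and the 40 km reach are mutually consistent: one-way propagation over 40 km of fiber consumes roughly half the budget. The group-index figure below is a typical value for standard single-mode fiber, assumed here rather than taken from the CPRI specification.

```python
# Typical constants for standard single-mode fiber (assumed, illustrative).
C_VACUUM_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s
GROUP_INDEX = 1.468              # light travels ~1.468x slower in the fiber core

def one_way_fiber_delay_ms(km):
    """One-way propagation delay over `km` of fiber, in milliseconds."""
    return km * GROUP_INDEX / C_VACUUM_KM_PER_S * 1000

delay = one_way_fiber_delay_ms(40)   # RRH-to-BBU fronthaul span
assert 0.19 < delay < 0.20           # ~0.196 ms of propagation
assert delay < 0.4                   # fits inside the 0.4 ms transport budget
```

The remaining ~0.2 ms of the budget is what is left for CPRI framing and BBU processing, which is why fronthaul distance and latency budgets are quoted together.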