Designing a Fault-Tolerant Channel Extension Network for Internal Recovery | icu812
Originally published in the July 2006 issue of Technical Support magazine, this article by Mike Smith provides both an overview and best practices to be considered when designing a data replication network to support mainframe and data center recoverability.
HIGH PERFORMANCE ETHERNET PACKET PROCESSOR CORE FOR NEXT GENERATION NETWORKS | ijngnjournal
As the demand for high-speed Internet increases significantly to meet the requirements of large data transfers, real-time communication, and High Definition (HD) multimedia transfer over IP, the architecture of IP-based network products must evolve and change. The design of application-specific processors that require high performance, low power, and a high degree of programmability is the limitation in many general-purpose processor based applications. This paper describes the design of an Ethernet packet processor for system-on-chip (SoC) which performs all core packet processing functions, including segmentation and reassembly, packetization, classification, and route and queue management, which will speed up switching/routing performance, making it more suitable for Next Generation Networks (NGN). The Ethernet packet processor design can be configured for use with multiple projects targeted to an FPGA device; the system is designed to support 1/10/20/40/100 Gigabit links with a speed and performance advantage. VHDL has been used to implement and simulate the required functions on an FPGA.
DESIGNED DYNAMIC SEGMENTED LRU AND MODIFIED MOESI PROTOCOL FOR RING CONNECTED... | Ilango Jeyasubramanian
• Improved speedup by 3.2% and CPI by 7.68% over the regular LRU policy with a new Dynamic SLRU policy implemented on the Gem5 simulator.
• Dynamically segmented each cache set using a cache-line access-probability-based LRU insertion policy, and performed block replacement with the normal SLRU policy.
• Achieved a 2.97% hit-rate improvement and an 11% power reduction with a modified Gem5 Ruby memory system, by adding instructions to the x86 ISA and additional states to the MOESI protocol for coherence in round-robin-ordered, ring-connected filter caches on an Intel x86 processor.
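The segmented-LRU idea behind these bullets can be sketched as a behavioral Python model. This is illustrative only: the fixed segment sizes and promotion rule below are the classic SLRU scheme, not the paper's dynamic-segmentation policy.

```python
from collections import OrderedDict

class SegmentedLRU:
    """Segmented LRU: each set has a probationary and a protected segment.
    New blocks enter probationary; a hit promotes a block to protected."""
    def __init__(self, prob_size, prot_size):
        self.prob_size, self.prot_size = prob_size, prot_size
        self.probationary = OrderedDict()   # key -> None, kept in LRU order
        self.protected = OrderedDict()

    def access(self, key):
        """Return True on a cache hit, False on a miss."""
        if key in self.protected:           # hit in protected: refresh recency
            self.protected.move_to_end(key)
            return True
        if key in self.probationary:        # hit in probationary: promote
            del self.probationary[key]
            self.protected[key] = None
            if len(self.protected) > self.prot_size:
                demoted, _ = self.protected.popitem(last=False)
                self._insert_probationary(demoted)
            return True
        self._insert_probationary(key)      # miss: insert as probationary
        return False

    def _insert_probationary(self, key):
        self.probationary[key] = None
        if len(self.probationary) > self.prob_size:
            self.probationary.popitem(last=False)   # evict LRU block
```

A one-touch block thus only ever occupies the probationary segment, so scan-like access patterns cannot flush frequently reused blocks out of the protected segment.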
Dynamic classification in silicon-based forwarding engine environments | Tal Lavian, Ph.D.
Current network devices enable connectivity between end systems with support for routing with a defined set of protocol software bundled with the hardware. These devices do not support user customization or the introduction of new software applications. Programmable network devices allow for the dynamic downloading of customized programs into network devices allowing for the introduction of new protocols and network services. The Oplet Runtime Environment (ORE) is a programmable network architecture built on a Gigabit Ethernet L3 Routing Switch to support downloadable services. Complementing the ORE, we introduce the JFWD API, a uniform, platform-independent portal through which application programmers control the forwarding engines of heterogeneous network nodes (e.g., switches and routers). Using the JFWD API, an ORE service has been implemented to classify and dynamically adjust packet handling on silicon-based network devices.
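A first-match packet classifier of the kind such a downloadable service might install can be sketched as follows. The rule table, field names, and actions are hypothetical illustrations; this is not the JFWD API.

```python
import ipaddress

# Hypothetical ordered rule table: (src prefix, dst prefix, proto, action).
RULES = [
    (ipaddress.ip_network("10.0.0.0/8"),
     ipaddress.ip_network("0.0.0.0/0"),      "tcp", "high-priority"),
    (ipaddress.ip_network("0.0.0.0/0"),
     ipaddress.ip_network("192.168.1.0/24"), "udp", "rate-limit"),
    (ipaddress.ip_network("0.0.0.0/0"),
     ipaddress.ip_network("0.0.0.0/0"),      None,  "best-effort"),
]

def classify(src, dst, proto):
    """First-match classification over the ordered rule table.
    A proto of None in a rule acts as a wildcard."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for src_net, dst_net, p, action in RULES:
        if s in src_net and d in dst_net and (p is None or p == proto):
            return action
    return "drop"
```

On real silicon the same ordered-match semantics would be compiled into the forwarding engine's hardware filters rather than evaluated per packet in software.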
International Journal of Engineering and Science Invention (IJESI) | inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Network-aware Data Management for Large Scale Distributed Applications, IBM R... | balmanme
IBM Research – Talk – June 24, 2015
Title:
Network-aware Data Management for Large Scale Distributed Applications
Abstract:
As current technology enables faster storage devices and larger interconnect bandwidth, there is a substantial need for novel system design and middleware architecture to address increasing latency, scalability, and throughput requirements. In this talk, I will outline network-aware data management and present solutions based on my past experience in large-scale data migration between remote repositories.
I will first describe my experience in the initial evaluation of a 100Gbps network as part of the Advanced Networking Initiative project. We needed intense fine-tuning in the network, storage, and application layers to take advantage of the higher network capacity. End-system bottlenecks and system performance play an important role, especially on many-core platforms. I will introduce a special data movement prototype, successfully tested in one of the first 100Gbps demonstrations, in which applications map memory blocks for remote data, in contrast to send/receive semantics. This prototype was used to stream climate data over the wide area for in-memory application processing and visualization.
Within this scope, I will introduce a flexible network reservation algorithm for on-demand bandwidth-guaranteed virtual circuit services. Flexible reservations find the best path in a time-dependent dynamic network topology to support predictable application performance. I will then present a data-scheduling model with advance provisioning, in which data movement operations are defined with earliest start and latest completion times.
I will conclude my talk with a very brief overview of my other related projects on performance engineering, hyper-converged virtual storage, and optimization in control and data path for virtualized environments.
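The flexible-reservation idea, finding a start time within the allowed window where the requested bandwidth is free along a candidate path, can be sketched as follows. The time-slot granularity, the names, and the single-path simplification are illustrative assumptions, not the talk's actual algorithm.

```python
def earliest_feasible_start(residual, demand, duration, latest_completion):
    """residual[t] = bandwidth still free on the candidate path in slot t.
    Return the earliest start slot s such that a transfer of `duration`
    slots at `demand` bandwidth finishes by `latest_completion`,
    or None if no feasible window exists."""
    for s in range(0, latest_completion - duration + 1):
        if all(residual[t] >= demand for t in range(s, s + duration)):
            return s
    return None
```

A full flexible-reservation system would run a search like this over every candidate path in the time-dependent topology and commit the reservation on the best feasible one.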
FPGA Implementation of Multiplier-less CDF-5/3 Wavelet Transform for Image Pr... | IOSRJVSP
Most digital image processing applications use some domain transformation technique to convert time-domain information to a transform domain, which helps simplify the mathematical modeling. The Discrete Wavelet Transform is one of the best transformation techniques; its time-frequency resolution makes the transform sensitive to both time and frequency, giving very good compression and decompression. In this paper, we propose an FPGA implementation of the multiplier-less CDF-5/3 wavelet transform for image processing applications using the System Generator tool. To maintain low area and high frequency, we use a multiplier-less architecture for the CDF-5/3 DWT in our implementation. The VHDL code for the multiplier-less structure is fed to the System Generator tool using the standard procedure, and the structure is synthesized to obtain the area and frequency.
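The multiplier-less property comes from the lifting form of the CDF-5/3 (LeGall) transform, which needs only integer adds and shifts. The paper's VHDL/System Generator structure is not shown here; this is a behavioral Python model of one decomposition level, assuming even-length input and the usual symmetric boundary extension.

```python
def dwt53_forward(x):
    """One level of the CDF-5/3 lifting transform (adds and shifts only)."""
    n = len(x)
    n2 = n // 2
    gx = lambda i: x[min(abs(i), 2 * (n - 1) - i)]   # symmetric extension
    # predict step (high-pass): d[k] = x_odd - floor((left + right) / 2)
    d = [x[2*k + 1] - ((gx(2*k) + gx(2*k + 2)) >> 1) for k in range(n2)]
    # update step (low-pass): s[k] = x_even + floor((d[k-1] + d[k] + 2) / 4)
    s = [x[2*k] + (((d[k-1] if k > 0 else d[0]) + d[k] + 2) >> 2)
         for k in range(n2)]
    return s, d

def dwt53_inverse(s, d):
    """Exact integer reconstruction: undo update, then undo predict."""
    n2 = len(s)
    even = [s[k] - (((d[k-1] if k > 0 else d[0]) + d[k] + 2) >> 2)
            for k in range(n2)]
    odd = [d[k] + ((even[k] + (even[k+1] if k + 1 < n2 else even[-1])) >> 1)
           for k in range(n2)]
    return [v for pair in zip(even, odd) for v in pair]
```

Because every lifting step is exactly invertible in integer arithmetic, the transform is lossless, which is why this filter pair is also the reversible transform in JPEG 2000.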
Greenplum Analytics Workbench - What Can a Private Hadoop Cloud Do For You? | EMC
This session discusses the rationale behind the Greenplum Analytics Workbench initiative - its goals, present status, and the roadmap for this first-of-a-kind initiative. Enterprises will learn how a Hadoop cloud can help unlock revenue opportunities from the data within the cluster.
Today at Hot Chips 2019, Intel engineers presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory and chiplet technology for optical I/O.
"To get to a future state of 'AI everywhere,' we'll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it's collected when it makes sense and making smarter use of their upstream resources," said Naveen Rao, Intel vice president and GM, Artificial Intelligence Products Group. "Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed: from hardware to software to applications."
Learn more: https://www.intel.ai/accelerating-for-ai/?elq_cid=1192980
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
ETHERNET PACKET PROCESSOR FOR SOC APPLICATION | cscpconf
As the demand for the Internet expands significantly in numbers of users, servers, IP addresses, switches, and routers, the IP-based network architecture must evolve and change. The design of domain-specific processors that require high performance, low power, and a high degree of programmability is the bottleneck in many processor-based applications. This paper describes the design of an Ethernet packet processor for system-on-chip (SoC) that performs all core packet processing functions, including segmentation and reassembly, packetization, classification, and route and queue management, which speeds up switching/routing performance. Our design has been configured for use with multiple projects targeted to a commercial configurable logic device; the system is designed to support 10/100/1000 links with a speed advantage. VHDL has been used to implement and simulate the required functions on an FPGA.
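One of the queue-management functions named in the abstract can be illustrated with a small weighted round-robin scheduler. This is a behavioral sketch only; the weights and the packet representation are illustrative, not the processor's hardware scheduler.

```python
from collections import deque

def round_robin_dequeue(queues, weights, budget):
    """Serve per-port queues in weighted round-robin order, a common
    queue-management discipline in packet processors. `queues` is a list
    of deques of packets; up to weights[i] packets are drained from
    queue i per round, and at most `budget` packets in total."""
    out = []
    while len(out) < budget and any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if not q or len(out) >= budget:
                    break
                out.append(q.popleft())
    return out
```

Hardware versions implement the same arbitration with per-queue deficit or weight counters so that no port can starve the others.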
The Apache Spark config behind the industry's first 100TB Spark SQL benchmark | Lenovo Data Center
Some configurations deserve their own SlideShare entry: this is one of them. When the industry's first 100TB Spark SQL benchmark was reached, the media took notice. For good reason.
Intel, Mellanox, Lenovo, and IBM came together to investigate a topology that leveraged advances in CPU, memory, storage, and networking to assess the readiness of Spark SQL to harness new capabilities -- and speeds.
Run-Time Adaptive Processor Allocation of Self-Configurable Intel IXP2400 Net... | CSCJournals
An ideal Network Processor, that is, a programmable multi-processor device, must be capable of offering both the flexibility and the speed required for packet processing. But current Network Processor systems generally fall short of these benchmarks due to the traffic fluctuations inherent in packet networks; the resulting workload variation on individual pipeline stages over time ultimately affects the overall performance of an otherwise sound system. One potential solution is to change the code running at these stages so as to adapt to the fluctuations; a near-robust system withstanding traffic fluctuations is the dynamically adaptive processor, reconfiguring the entire system, which we introduce and study in this paper. We achieve this by using a crucial decision-making model, transferring the binary code to the processor through the SOAP protocol.
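The decision-making step, reallocating processing engines to pipeline stages as workloads shift, can be sketched with a toy proportional policy. This is only an illustration of the idea; the paper's actual model and the IXP2400 micro-engine mechanics are not shown.

```python
def rebalance(stage_queues, total_engines):
    """Proportionally reassign engines to pipeline stages based on current
    queue occupancy; each stage keeps at least one engine, and the
    assignment always sums to total_engines."""
    n = len(stage_queues)
    total = sum(stage_queues)
    if total == 0:
        base = [total_engines // n] * n          # idle: split evenly
    else:
        base = [1 + (total_engines - n) * q // total for q in stage_queues]
    # hand out engines lost to integer truncation, busiest stages first
    spare = total_engines - sum(base)
    for i in sorted(range(n), key=lambda i: -stage_queues[i])[:spare]:
        base[i] += 1
    return base
```

In a real system the trigger for running such a policy (sampling interval, hysteresis to avoid thrashing) matters as much as the allocation formula itself.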
Building efficient 5G NR base stations with Intel® Xeon® Scalable Processors | Michelle Holley
Speaker: Daniel Towner, System Architect for Wireless Access, Intel Corporation
5G brings many new capabilities over 4G, including higher bandwidths, lower latencies, and more efficient use of radio spectrum. However, these improvements require a large increase in computing power in the base station. Fortunately, the Xeon Scalable Processor series (Skylake-SP) recently introduced by Intel has a new high-performance instruction set called Intel® Advanced Vector Extensions 512 (Intel® AVX-512), which is capable of delivering the compute needed to support the exciting new world of 5G.
In his talk, Daniel will give an overview of the new capabilities of the Intel AVX-512 instruction set and show why they are so beneficial to supporting 5G efficiently. The most obvious difference is that Intel AVX-512 has double the compute performance of previous generations of instruction sets. Perhaps surprisingly, though, it is the addition of brand-new instructions that can make the biggest improvements. The new instructions mean that software algorithms can become more efficient, thereby enabling even more effective use of the improvements in computing performance and leading to very high performance 5G NR software implementations.
I understand that physics and hardware emmaded on the use of finete .pdf | anil0878
I understand that physics and hardware have depended on the use of finite element methods to predict fluid flow over airplane wings, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. Currently, performance is measured on a dense matrix-matrix multiplication test, which has questionable relevance to real applications, rather than reflecting the incredible advances in processor technology and all the accompanying aspects of computer system design, such as the memory subsystem and networking.
Embedded systems combine both hardware and software into a single unit performing a combined function; in doing so, applications must be developed to achieve the full potential of systems built on advanced processor technology.
Hardware
(1) Memory
Advances in memory technology have struggled to keep pace with the phenomenal advances in
processors. This difficulty in improving the main memory bandwidth led to the development of a
cache hierarchy with data being held in different cache levels within the processor. The idea is
that instead of fetching the required data multiple times from the main memory, it is instead
brought into the cache once and re-used multiple times. Intel allocates about half of the chip to
cache, with the largest LLC (last-level cache) being 30MB in size. IBM's new Power8 CPU has an even larger L3 cache of up to 96MB [4]. By contrast, the largest L2 cache in NVIDIA's GPUs is only 1.5MB. These different hardware design choices are motivated by careful
consideration of the range of applications being run by typical users.
One complication which has become more common and more important in the past few years is
non-uniform memory access. Ten years ago, most shared-memory multiprocessors would have
several CPUs sharing a memory bus to access a single main memory. A final comment on the
memory subsystem concerns the energy cost of moving data compared to performing a single
floating point computation.
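The reuse argument above can be made concrete with a tiny direct-mapped cache model. This is illustrative only: real hierarchies are set-associative and multi-level, but the hit/miss accounting is the same idea.

```python
def simulate_cache(addresses, num_lines, line_size):
    """Direct-mapped cache model: each byte address maps to one cache line
    via its block index modulo the number of lines. Returns (hits, misses),
    counting the temporal and spatial reuse that caching exploits."""
    tags = [None] * num_lines
    hits = 0
    for a in addresses:
        block = a // line_size        # which memory block this byte is in
        idx = block % num_lines       # direct-mapped placement
        if tags[idx] == block:
            hits += 1                 # reuse: data is already in the cache
        else:
            tags[idx] = block         # miss: fetch whole block from memory
    return hits, len(addresses) - hits
```

Running it on a sequential scan shows spatial locality (one miss per line, then hits), while re-touching an evicted address shows how conflict misses arise.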
(2) Processors
Earlier CPUs had a single processing core, and the increase in performance came partly from an increase in the number of computational pipelines, but mainly through an increase in clock frequency. Unfortunately, power consumption is approximately proportional to the cube of the frequency, and this led to CPUs with a power consumption of up to 250W. CPUs address memory bandwidth limitations by devoting half or more of the chip to LLC, so that small applications can be held entirely within the cache. They address the 200-cycle latency issue by using very complex cores which are capable of out-of-order execution. By contrast, GPUs adopt a very different design philosophy because of the different needs of the graphical applications they target. A GPU usually has a number of functional units.
An octa core processor with shared memory and message-passing | eSAT Journals
Abstract: This being the era of fast, high-performance computing, there is a need for efficient optimizations in the processor architecture and, at the same time, in the memory hierarchy too. Every day, advances in communication and multimedia applications compel an increase in the number of cores in the main processor, viz., dual-core, quad-core, octa-core, and so on. But to enhance the overall performance of a multi-processor chip, there are stringent requirements to improve inter-core synchronization. Thus, an MPSoC with 8 cores supporting both message-passing and shared-memory inter-core communication mechanisms is implemented on a Virtex 5 LX110T FPGA. Each core is based on the MIPS III (Microprocessor without Interlocked Pipelined Stages) ISA, handling only integer-type instructions and having a six-stage pipeline with a data hazard detection unit and forwarding logic. The eight processing cores and one central shared-memory core are interconnected using a 3x3 2-D mesh topology based Network-on-Chip (NoC) with a virtual channel router. The router is four-stage pipelined, supporting the DOR X-Y routing algorithm with round-robin arbitration. For verification and functionality testing of the fully synthesized multi-core processor, a matrix multiplication operation is mapped onto it. Partitioning and scheduling of the multiple multiplications and additions for each element of the resultant matrix has been done among the eight cores to get maximum throughput. All the code for the processor design is written in Verilog HDL. Keywords: MPSoC, message-passing, shared memory, MIPS, ISA, wormhole router, network-on-chip, SIMD, data level parallelism, 2-D Mesh, virtual channel
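The DOR X-Y routing algorithm mentioned in the abstract is simple to state: a flit traverses the X dimension completely before turning into the Y dimension, which makes routing on a mesh deadlock-free. A sketch in Python (coordinates only; virtual channels and arbitration are omitted):

```python
def xy_route(src, dst):
    """Dimension-ordered (X-Y) routing in a 2-D mesh NoC.
    Returns the list of (x, y) hops from src to dst, inclusive."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                        # route along X first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                        # then along Y
        y += 1 if dy > y else -1
        path.append((x, y))
    return path
```

Because every packet makes at most one X-to-Y turn and never the reverse, the channel-dependency graph has no cycles, which is the deadlock-freedom argument for DOR on meshes.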
Instruction Set Extension of a Low-End Reconfigurable Microcontroller in Bit-... | IJECEIAES
Microcontroller-based systems are currently receiving a tremendous boost with the emergence of platforms such as the Internet of Things. Low-end families of microcontroller architecture are still in demand, albeit less technologically advanced, due to their better I/O handling and control. However, there is clearly a lack of computational capability in the low-end architecture that affects the pre-processing stage of received data. The purpose of this research is to combine the best features of an 8-bit microcontroller architecture with computationally complex operations without incurring extra resources. The modules' integration is implemented using an instruction set architecture (ISA) extension technique and is developed on a Field Programmable Gate Array (FPGA). Extensive simulations were performed and a comprehensive methodology is proposed. It was found that the ISA extension from 12-bit to 16-bit produced faster execution times with lower resource utilization when implementing the bit-sorting algorithm. The overall development process used in this research is flexible enough for further investigation, either by extending the module to more complex algorithms or by evaluating other designs of its components.
Hyper-threading, or Hyper-Threading Technology (HTT), is actually Intel's trademark for their implementation of multi-threading, but it has become a common name for all processors of this type. It is essentially a cut-down version of dual core: execution units on a hyper-threaded CPU share certain elements, such as cache and pipelines.
Similar to Network Processing on an SPE Core in Cell Broadband Engine™ (20)
UiPath Test Automation using UiPath Test Suite series, part 5 | DianaGray10
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
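As a small concrete example of the XPath-driven XML processing discussed above, using only Python's standard library (which supports a limited XPath subset; full XPath as used in XSLT or Schematron needs a dedicated processor):

```python
import xml.etree.ElementTree as ET

# A tiny structured document of the kind AI tooling might generate or enrich.
doc = ET.fromstring(
    "<chapter>"
    "<section><title>Intro</title><para>First.</para></section>"
    "<section><title>Usage</title><para>Second.</para></section>"
    "</chapter>"
)

# Select every section title with an XPath-style location path.
titles = [t.text for t in doc.findall(".//section/title")]
```

The same selection written for a Schematron rule or an XSLT template would use full XPath syntax, but the node-selection mental model carries over directly.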
Climate Impact of Software Testing at Nordic Testing Days | Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Communications Mining Series - Zero to Hero - Session 1 | DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! | SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
The goal of this work is to use SPEs to support system
infrastructure with elements essential to computer sys-
tems, such as high-speed networks and storage devices.
The following characteristics make SPEs potentially su-
perior to existing solutions for high-speed device support.
• The allocation of dedicated resources to individual
devices is effective in achieving short and stable response
times [7]. Because SPEs occupy a smaller chip area than
general-purpose processor cores of equivalent performance,
allocating one SPE as a dedicated processor has less impact
than allocating a general-purpose core.
• The use of DMA enables the scheduling of a large
number of memory access requests that are asyn-
chronous to the processor core. SPEs can access the
system memory in a more flexible and tightly sched-
uled way than general cache-based systems during
computation.
• The communication between the SPE and the device is
possible with a low overhead because the devices can
access the high-speed LS directly.
• An SPE differs from a special-purpose device such as
a network processor in that it is a general-purpose re-
source that can be used for a variety of computational
tasks. Thus SPEs can be used for different purposes
depending on the status of system usage.
However, it is necessary to split execution codes and data
for the 256-KB LS, and codes and data must be swapped
as required via explicit DMA requests. This necessitates a
different programming technique from that used in conven-
tional processors, and increases the level of technical diffi-
culty in particular when porting existing large-scale appli-
cations.
This paper provides an overview of the implementation
of network protocol stack processing in an SPE and an eval-
uation of its performance.
The standard protocol stacks incorporated in operating
systems such as Linux have large memory footprints, and
require extensive modification when they are ported to an
SPE. We prioritized design time and selected for a base a
protocol stack with a small footprint designed for an em-
bedded device.
The protocol stack NetX [2], provided by Express Logic,
and the microkernel ThreadX which is required for its oper-
ation were ported to an SPE. Porting was enabled by placing a
small number of data structures (connection tables, packet
buffers, etc.) in the main memory and using DMA access.
Generally, protocol stacks use scalar code and it is diffi-
cult to use SIMD instructions to tune performance. In our
experiment, speed was increased through the adjustment of
data alignment and optimization of branching. A transfer
performance of 8.5 Gbps using TCP-IP was achieved when
a single SPE operating at 3.2 GHz was allocated exclusively
to support a 10 Gbps optical Ethernet NIC.
As indicated by Foong et al. [8], standard TCP-IP network
protocol stacks require a processing power of 1 Hz/bps on
a Pentium 4. At a time when 1 Gbps is standard for Ethernet
NICs and the use of 10-Gbps Ethernet is becoming more
widespread, network protocol stack processing consumes
more CPU cycles than any other service supporting a single
device.
In recent years, HPC cluster systems that use multi-core
processors have become common. On these systems the
load on the CPU assigned to protocol stack processing may
increase, and problems may arise from load imbalance.
To prevent this, a single core at each node is frequently
assigned to dedicated network processing [6]. In the Cell
Broadband Engine™, the use of one of the eight SPEs for
this purpose makes it possible to employ 10-Gbps Ethernet.
This represents a considerable advantage over a system
using conventional high-performance processor cores.
Section 2 below considers related research in order to
clarify the positioning of the project discussed here. Sec-
tion 3 discusses the characteristics of implementation of
protocol stack processing using SPEs and the methods of
optimization, while Section 4 presents the results of perfor-
mance evaluations. Section 5 presents the conclusions of
the project, and Section 6 considers problems and future di-
rections.
2. Related Work
An SPE has a unique instruction set architecture in
which all the computational instructions are SIMD instruc-
tions. This gives SPEs a high level of floating-point
computation performance per chip area, and they are em-
ployed in various areas of numerical computation [5] [14].
However, system services such as operating system kernel
services and IO device processing are generally scalar tasks
with complicated data structures, and we are aware of no
publication addressing the porting of these types of
processing to an SPE.
We ported a network protocol stack to an SPE. One SPE
was separated from processor scheduling in the Cell Linux
kernel and assigned to network protocol stack processing.
As a result, this SPE behaves like a network processor [7].
As network interfaces began to handle high packet rates,
interrupts reduced system performance and became a
problem [13]. To avoid the performance degradation caused
by frequent interrupts, Aron et al. [4] proposed software-
timer-based network processing, which eliminates high-
frequency NIC interrupts through timer-based
polling. This approach’s main objectives are to limit network
processing cycles and to eliminate context switching.

Table 1. RC-101 10-Gigabit Ethernet Adapter
Hardware
  Vendor                Sony Corporation (engineering samples available)
  MAC                   original
Technical features
  IEEE standard         IEEE 802.3ae 10GBASE-SR
  Wiring                multi-mode fiber (300 m)
  Host bus              PCIe (x1, 4, 8)
  Transceiver           original
  Power consumption     7.5 W
Software features
  Checksum offload      IP, UDP, TCP
  Jumbo frame           9 KB
  Interrupt moderation  original

Our approach also eliminates interrupts from the NIC,
because all interactions with the NIC are handled by
polling on the dedicated SPE.
Assigning dedicated resources for network processing
has multiple benefits [15] [19]. Wu et al. [18] show that
process scheduling by a Linux kernel may largely affect
the performance of a protocol stack. Our approach elim-
inates any interference between protocol stack processing
and other kernel tasks.
Another approach controls system-wide performance by
restricting the CPU cycles spent on network processing
[16]. Our approach, in contrast, can completely offload
network processing from the PPE on which the operating
system is running.
From a security perspective, separating the protocol
stack from a monolithic kernel is preferable [17]. This
suggests that our scheme, which utilizes separate resources
for the protocol stack, can potentially improve system security.
3. Implementing the Network Protocol Stack in
an SPE
This section discusses the system constructed in this
project: the hardware and software configurations and the
functions of the implemented system. It also addresses the pro-
cesses and performance improvements required to imple-
ment a protocol stack on an SPE.
3.1. System Configuration
Figure 1 shows the configuration of the hardware sys-
tem constructed for this work. We used the Sony RC-101
10-Gigabit Ethernet Adapter. This hardware is not yet com-
mercially available but it is available for evaluation. Table 1
shows the main specifications of the RC-101 10-Gigabit
Ethernet Adapter. The Cell Broadband Engine™ is con-
nected to RC-101 using a PCIe x8 interface through a pro-
totype high-performance FlexIO-PCIe bridge.
Figure 1. Hardware System
Figure 2. Connection between RC-101 and
SPE
3.2. Basic Strategies for Supporting
TCP/IP on SPE
3.2.1 Protocol stack
The standard protocol stacks incorporated in operating sys-
tems such as Linux have large memory footprints that ex-
ceed the LS size of 256-KB, and extensive modifications
are therefore necessary when they are ported to an SPE. We
prioritized design time and used as a base a small-footprint
protocol stack designed for embedded applications.
The protocol stack NetX, manufactured by Express
Logic, and the microkernel ThreadX that is required for its
operation were ported to an SPE.
Figure 3. Software System Overview
3.2.2 Pinning
The Linux distribution for the Cell Broadband Engine™ treats
the SPEs as virtualized resources, which means that user
applications can generate more SPE threads than available
SPEs, and context switching of the SPE thread may occur
while user applications are running. However, as Figure 2
shows, we stored descriptors in the LS and attempted to op-
erate the NIC directly from the SPE, and it was therefore
necessary to pin an SPE. One SPE was pinned and used ex-
clusively for network processing.
We considered using multiple SPEs to fix the code foot-
print problem, but did not apply this method, in order to
leave as many SPEs as possible free for other uses.
3.3. Software Module
Figure 3 shows the configuration of the software system.
The Linux 2.6.20 kernel with a patch for our evaluation
board is used. The kernel is running on the PPE.
One SPE was pinned for the SPE-side components which
are shown on the left. In the SPE, the ThreadX real-time
OS, RC-101 NIC driver, NetX TCP/IP protocol stack and
a service application are running. The service application
communicates with a user space application and executes
requests from the user space application by calling NetX
API. This SPE runs in the Linux kernel space.
The PPE-side component shown in the center (bottom) is
in charge of initializing, managing and terminating the SPE-
side components. Because there is no standard kernel API
call method from SPE to PPE in Linux, the SPE-side com-
ponents cannot directly call kernel API existing in the PPE
core. The PPE-side component is implemented as a ker-
nel module and calls various kernel APIs as requested by the
SPE-side components, including the allocation/deallocation
of memory, obtaining an IO space address and probing the
Table 2. Implemented functions
Function                   Supported             Enabled
ARP                        yes                   yes
RARP                       yes                   no
ICMP                       yes                   no
IGMP                       yes                   no
IPv4                       yes                   yes
IPv4 checksum              yes                   no
IPv6                       no                    -
TCP                        yes                   yes
TCP checksum               yes                   no
TCP selective ack          yes (receive only)    yes
TCP slow start algorithm   no                    -
TCP fast retransmit        no                    -
TCP window scaling         no                    -
UDP                        yes                   yes
UDP checksum               yes                   no
API                        NetX-specific socket-like API
network adapter. The memory region shown in the center
box in Figure 3 is a kernel space memory region that is al-
located and mapped to the user space by the PPE-side com-
ponent. Communication with the user space is conducted
via ring buffers that are placed in this memory region.
3.4. Functions of Protocol Stack
NetX is used to support TCP and UDP. The TCP
protocol stack implemented in Linux has a variety of ad-
vanced functions. However, NetX is designed for embed-
ded systems where code size is important. Thus, advanced
functions such as window scaling are omitted from NetX.
Table 2 shows the functions of NetX and whether they were
activated in this project. Of the functions supported by
NetX, only the essential functions were selected, in order
to prevent the code size from becoming too large.
3.5. Solving the Code Footprint Problem
We implemented the real-time OS and the protocol stack
onto an SPE, as well as a simple test program in the same
SPE. This exhausted local storage, and no margin remained
for additional functions or packet memory.
Because the registers and operands in an SPE are 128-bit,
more instructions are required to access a structure without
alignment restrictions than when a standard 32-bit RISC
processor is employed. Therefore, code sizes can easily
become large in programs like protocol stacks that access
structures frequently. In order to control the code size, we
added alignment restrictions to the main structures in the
NetX. Because we employed gcc as the compiler, we used
the __attribute__((aligned(N))) directive to add the
alignment restrictions. As a result, while
the data size increased, the code size was reduced. The
size of the .text segment was reduced from 145-KB to 113-
KB after the alignment, representing a reduction of approx-
imately 21%. However, the .bss segment increased
approximately 18% in size following the alignment, from
39-KB to 46-KB.

Table 3. Local store usage (bytes)
Segment         Before    After     Ratio
.text           144592    113168    0.78
packet pool      49920     57600    1.15
.bss             39328     45728    1.16
stacks           17408     17408    1.00
.rodata           4656      4720    1.01
.data             1264      1264    1.00
tcp                480      1856    3.87
thread             352      1312    3.73
udp                336      1072    3.19
.debug ranges      128       128    1.00
other             3680     17888    4.86
total           262144    262144    1.00

Table 3 shows a comparison of the segment sizes in the LS
before and after alignment. "packet pool" represents the
packet pool, comprising the section that maintains the
packet payload at an MTU of 1500 bytes and the structure
that controls the packets. The term, “stacks” refers to the
stack area, which also holds a thread context, while “tcp” is
the TCP control block, and “udp” is the UDP control block.
In the table, “total” represents the size of the LS. Overall,
the addition of alignment restrictions to the structures re-
sulted in a size reduction of approximately 5%.
Additionally, the alignment restrictions resulted in a 27%
improvement in TCP loopback performance and a 22%
improvement in UDP.
3.6. Insertion of Branch Prediction to Increase Speed
Because SPEs do not have branch prediction hardware,
static branch hint instructions are used to indicate the fetch
direction. However, counting only the "if" statements in the
C source code gave more than 1200 branch instructions,
making it impractical to insert gcc __builtin_expect
annotations manually. Instead, we used
the gcc profile-guided optimization (PGO) to enable branch
prediction. Communication was via loopbacks from the test
program, and statistics on the direction of branching were
gathered and fed back during compiling. Using PGO, we
were able to improve TCP loopback performance by ap-
proximately 22%.
3.7. Making the System Non-preemptive to
Improve Performance
The protocol stack that we employed in this project,
NetX, is a commercial stack, and is designed for use upon
the embedded real-time OS ThreadX, which is also man-
ufactured by Express Logic. We ran NetX by porting
ThreadX to the SPE, and prevented ThreadX from being
triggered by any type of interrupt other than thread timer
processing driven by SPE decrementer interrupts. The NIC
could potentially raise SPE core interrupts through the MFC
Memory Mapped I/O (MMIO)
register, but the SPE processing speed is fast enough, and
at low load, tasks can be rescheduled in round-robin cycles
of approximately 400 nanoseconds. Given this high-speed
response, polling was employed. In addition, the exclusive
use of an SPE made it unnecessary to run in coordination
with unknown non-network processes (unlike CPU cores
running a normal OS) and therefore the maximum delay
could be controlled.
ThreadX supports preemption, but we did not enable pre-
emptive processing. As indicated above, the system is in
control of the entire task processing in the SPE. Therefore
non-preemptive task processing can be conducted without
delays, and exclusion processing such as mutexes can be
eliminated, resulting in higher speeds. Eliminating mutex
processing enabled us to increase the bandwidth of loop-
back processing by approximately 13%.
3.8. Use of the SPE to Drive NIC
Because SPEs have a DMAC in every core, they are
also capable of efficient NIC register accesses. On a
conventional processor core (like the PPE), the pipeline stalls
for an extremely long period in units of instruction cycles
during MMIO register access. In contrast, the DMAC in
an SPE can be explicitly controlled by software in an SPE,
enabling instructions to be issued without waiting for time-
consuming MMIO accesses to complete, and programs can
therefore continue to be executed during MMIO accesses.
For example, when sending data, the NIC is notified of the
update in the transmission descriptor, and this notification is
normally implemented by writing the NIC MMIO register.
With SPEs, a DMA transfer instruction can be issued with-
out waiting for this write to complete, which prevents the
pipeline from stalling.
RC-101 uses 32-byte descriptors placed in the ring buffer
for sending and receiving. We placed the descriptors in the
LS to control RC-101. When a send descriptor is updated,
the corresponding RC-101 register is written to notify the
NIC, which then reads the descriptor. Packet payloads are placed either in the
LS or on main memory. When placed in the LS, the data
flow is as shown by the solid line in Figure 2. The data flow
when the packet payloads were placed on main memory is
shown by the dotted line in the same figure. In the latter
case, the payload has to be retrieved from the main memory,
doubling the load on the memory system when compared to
LS. However, because it is difficult to place multiple 9KB
payloads in the LS in order to increase the MTU, it is neces-
sary to place payloads on main memory and reduce the data
resident in the LS for high bandwidth.
3.9. Increasing the Number of Supported
Sessions
When all functions were handled in the LS without using
the main memory, the maximum number of connected ses-
sions that could be supported simultaneously was 16. We
increased the number of sessions in order to demonstrate
that the SPE can be used for more realistic applications. It
is difficult to increase the number of sessions because of the
packet pool capacity and the sizes of the TCP control block
(TCB) and the UDP control block (UCB). The TCB and
UCB occupy approximately 1-KB and 512-B of the LS, re-
spectively. Therefore if the TCP supports 128 sessions, for
example, half the LS capacity is used. We were able to sup-
port 128 simultaneous TCP sessions by using the techniques
described below.
• Software cache for TCB and UCB
We placed the TCB and UCB on main memory. The
protocol stack retrieves these into a small area in the
LS when required. If a TCB or UCB is modified, it is
written back to main memory. These operations are
handled by software in the SPE.
The TCB and UCB include the port number field,
which is frequently searched. We extracted such data
into the LS and made it resident for speed-up.
• Placing queues in the main memory
Because the packet pool capacity is essential to guar-
anteeing TCP retransmission and sustaining data re-
ception, the data placed in the TCP queues was saved
to main memory and the packet pool in the LS was
used as a cache area.
3.10. API for User Space Application
We implemented the following application interfaces to
enable the use of the protocol stack from Linux user space
application processes.
• Proprietary socket interface API (SOCK API)
The SOCK API provides the NetX API without requir-
ing changes in the user space application programs.
The service programs that provide these API are lo-
cated in the service application program section of the
SPE-side components shown in Figure 3. These ser-
vice programs use resources guaranteed by the PPE
kernel module, and employ a memory area shared be-
tween the user space and the kernel space to communi-
cate with user processes. In addition, by calling NetX
API, the service programs use the NetX functions to
realize TCP or UDP communication.
Figure 4. Test Environment for End to End
Communications
Figure 5. Test Environment for Loopback
Communications
• Remote Direct Memory Access protocol API (RDMA
API)
The RDMA API follows the instructions issued by
user applications, and copies data between remote
nodes using TCP. A proprietary protocol was used
on top of TCP.
• SDP on RDMA API (SDP)
To enable the use of Infiniband SDP functions [3], we
used the library provided in the SDP through emula-
tion of the HCA driver in kernel space. This is to sup-
port applications using standard sockets.
The SOCK API and the RDMA API cannot coexist at the
same time.
4. Performance Evaluation
This section discusses the measurement environment
used in performance measurements, and the measurement
results.
4.1. Measurement Environment
The measurement environment of end-to-end communi-
cations is shown in Figure 4. During measurements, RC-
101 units were connected directly with an optical cable with
no switches in between. During end-to-end communica-
tions, the sender side only transmitted data and the receiver
side received data.
The measurement environment of loopback communica-
tion is shown in Figure 5. In loopback, packets are copied
and returned within the protocol stack by means of
memory copies in NetX. During loopback, send and re-
ceive transmissions were executed on a single SPE using
two threads.
Performance data is measured by running the test pro-
gram shown in Figures 4 and 5 on the SPE where the SPE-
side components are running. The NetX socket API is
called directly from the test program. 10 gigabits of data
are transferred, and the throughput, which includes the IP
and MAC header lengths, is measured. We measured
UDP throughput by varying UDP payload size. Also, we
measured TCP throughput by varying TCP Max Segment
Size (MSS) and TCP window size.
4.2. Loopback Communications Performance
Loopback performance measures only the performance
of the protocol stack. However, because the packets are
copied and returned when NetX is used, results are largely
affected by the additional memory copies.
4.2.1 UDP loopback
Figure 6 shows the results of the UDP loopback test. The
bandwidth reaches 7.5 Gbps when the UDP payload size is
1472 bytes.
4.2.2 TCP loopback
Figure 7 shows the results of TCP loopback communica-
tions. The bandwidth reaches 4.6 Gbps when MSS is 1460
and window size is approximately 26 KB (MSS × 18).
4.3. End-to-End Communications Performance
Two types of data structures are used for communica-
tion between the device driver and the protocol stack. The
packet descriptor carries control information and is placed
on LS to enable low overhead polling by SPE. The other
data structure is a packet buffer. The large data structures
Figure 6. UDP Loopback Performance (throughput [Mbps] vs. packet size [bytes])
Figure 7. TCP Loopback Performance (throughput [Mbps] vs. TCP receive window size [bytes]; MSS 128, 512, 1460)
Figure 8. UDP End-to-End Performance (Using LS Only; throughput [Mbps] vs. packet size [bytes])
Figure 9. UDP End-to-End Performance (Using the Software Cache; throughput [Mbps] vs. packet size [bytes])
Figure 10. TCP End-to-End Performance (Using LS Only; throughput [Mbps] vs. TCP receive window size [bytes]; MSS 128, 512, 1460)
Figure 11. TCP End-to-End Performance (Using the Software Cache; throughput [Mbps] vs. TCP receive window size [bytes]; MSS 128 to 8960)
that are accessed by the protocol stack are UCB and TCB.
We compared two ways of arranging these data structures.
• LS configuration
This configuration puts the entire packet buffer, the
UCB and the TCB on LS. This is good for data ac-
cess latency and main memory loads. And thus attains
the better packet handling rate. However, because of
the LS size limitation, we limited the MTU size to
1500 bytes. With this configuration, eight descriptors
are assigned for sending and four descriptors are as-
signed for receiving.
• Main memory configuration
This configuration uses LS as a cache memory. The
data structures that can be cached are the UCB, TCB, and
packet queues. The total size of these data structures
can be larger than the LS size. Thus the MTU size and
packet pool size can be configured to be large enough.
64 descriptors for RC-101 NIC are assigned for send-
ing and 32 descriptors are for receiving.
4.3.1 UDP
This section discusses the results for UDP end-to-end com-
munications. Figures 8 and 9 show the results for UDP end-
to-end communications. The figures represent the LS con-
figuration and the main memory configuration, respectively.
As shown in Figure 9, wire speeds were almost attained for
UDP. The result of Figure 8 is not up to wire speed, because
of the limitation on MTU size and the number of descrip-
tors. In the main memory configuration, there is only one
session, so there are no cache misses for the UCB. But
incoming packets that arrive faster than they can be
processed are put into the UDP receive queue. This
operation causes main memory accesses and lowers the
throughput.
4.3.2 TCP
This section discusses the results for TCP end-to-end com-
munications. Figures 10 and 11 show the results for TCP
end-to-end communications. The figures represent the LS
configuration and the main memory configuration, respec-
tively. The throughput in Figure 10 is much smoother than
in Figure 11, because the LS configuration involves no main
memory accesses and has constant packet processing times. In
Figure 10, the maximum throughput of 5.9 Gbps is reached
when the MSS is 1460 bytes and the TCP window size is
23-KB (MSS × 16).
In Figure 11, a higher maximum throughput of 8.5 Gbps
is reached at a 5000 bytes MSS and a 60-KB TCP win-
dow size (MSS × 12). This is because the main memory
configuration can utilize a larger MTU size and more
descriptors than the LS configuration. But in the main mem-
ory configuration, both the sender and the receiver must
access main memory for each packet processed, to maintain
the transmit and receive queues. The overhead of these
main memory accesses limits the throughput of the main
memory configuration.
4.3.3 Maximum Packet Rate
During transfers from the LS, packet processing performance
is approximately 870K packets/sec. During transfers from
main memory, it is approximately 530K packets/sec. These
were measured in UDP end-to-end communications.
5. Conclusion
The Cell Broadband Engine™ features eight processors
with a unique design, called SPEs, on a single chip. We im-
plemented a network protocol stack on one of these SPEs.
On this single SPE, we operated the protocol stack NetX,
developed for embedded applications, and the microkernel
ThreadX required for its operation. Both were manufac-
tured by Express Logic. In addition, there are communica-
tion interfaces enabling the protocol-stack-driven SPE to be
accessed from device drivers and other processors.
The most significant characteristic of SPE architecture is
its omission of a cache hierarchy; the SPE has an LS that
is controlled via DMA. This necessitates a different form of
programming than that used for a conventional processor.
In implementing the protocol stacks, we employed software
caches for processing a number of data structures.
We optimized the implementation of the NIC device
drivers and the protocol stack in the following ways.
• Communication between the network hardware and
the processor was realized with a low overhead by
means of direct access to the LS from the NIC.
• Data alignment was adjusted to prevent performance
from declining and the code size from increasing. (The
SPE uses 128-bit load/store instructions, rather than
scalar data load/store instructions.)
• Profile-based static branching prediction was used to
support code scheduling and branch hint instruction
scheduling by the compiler.
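The branch-prediction point in the last item can be illustrated with the familiar likely/unlikely idiom built on GCC's `__builtin_expect`. The receive handler below is a hypothetical example, not code from the stack described here:

```c
#include <stddef.h>

/* likely()/unlikely() wrappers around GCC's __builtin_expect. The
 * compiler uses the hint to keep the common case on the fall-through
 * path; on the SPE the same information lets it schedule branch hint
 * instructions ahead of the branch. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Hypothetical fast path of a receive handler: profiling would show
 * that almost every packet is well-formed, so the error check is
 * annotated as the unlikely case. */
int handle_packet(const unsigned char *pkt, size_t len)
{
    if (unlikely(pkt == NULL || len < 14))   /* 14-byte Ethernet header */
        return -1;                           /* rare error path         */
    /* ... common-case header parsing and demultiplexing ... */
    return 0;
}
```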
As a result, we achieved a TCP performance of 8.5 Gbps
for 3-KB packet sizes using a single SPE operating at
3.2 GHz. This result indicates a competitive network proto-
col processing performance, considering that we employed
a processor designed for a variety of computational applica-
tions rather than a dedicated network processor, and demon-
strates the potential of the application of SPEs in this field.
6. Future Directions
Building on the results achieved in this work, the following directions can be indicated for the future development of network processing using SPEs.
• Further optimization for SPE use
We used 32-bit scalar code, and therefore did not make effective use of the SPE's 128-bit SIMD data path. We believe there is potential for optimization in numerous areas through significant modification of code structures: for example, conditional branches can be replaced by select-bits instructions, and shuffle-byte operations can be used for packet header rearrangement.
Further optimization can be expected to result in im-
proved performance.
• Seamless integration in Linux kernels
Because NetX is designed for embedded applications,
it is not compatible with standard Linux protocol
stacks, and therefore cannot replace Linux protocol
stacks. Adding the required functions and integrat-
ing the protocol stack in a Linux kernel would make
it accessible from a Linux application in the same way
as a standard protocol stack. We believe that a Linux
kernel patch shared with a full TCP-IP offload using
a network processor [7] could be used to enable this
integration.
• Implementation of various types of network packet
processing
In this work, we focused on basic protocol stack pro-
cessing, but further network packet processing can be
considered for the future. Encryption-related processing such as IPsec and DTCP over IP, packet filtering, and other applications are natural next targets. SPEs have shown excellent performance in
encryption and decryption processing, as typified by
AES [5]. SIMD processing can also be expected to in-
crease the speed of pattern matching processing, and
may be applicable for accelerating packet filtering.
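The select-bits replacement mentioned in the first item above can be sketched in scalar C. `select_u32` is an illustrative helper: the condition is turned into an all-ones or all-zeros mask, and the result is merged without a conditional branch. On the SPE the same pattern corresponds to the selb (select bits) instruction, applied lane-wise across a full 128-bit SIMD register:

```c
#include <stdint.h>

/* Branchless select: build an all-ones or all-zeros mask from the
 * condition, then merge the two candidates with bitwise operations,
 * avoiding a conditional branch entirely. */
uint32_t select_u32(int cond, uint32_t a, uint32_t b)
{
    uint32_t mask = (uint32_t)-(int32_t)!!cond;  /* 0xFFFFFFFF or 0x0 */
    return (a & mask) | (b & ~mask);             /* cond ? a : b      */
}
```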
As the next step in our work, we are developing a test
application that embeds the protocol stack discussed here.
To avoid the time required for rewriting a large-scale soft-
ware for the SPE, and to generate usable results in a short
period, we are attempting to apply the concept of web ser-
vices that employ combinations of highly independent ser-
vices. Specifically, this combines a group of major element
127
10. technologies and the Cell Broadband EngineTM
to realize
an independent service package that can be accessed via a
network.
In addition to the network interface driven by the pro-
tocol stack discussed here, we intend to port and install a
storage device, a file system and a database engine onto the
SPE. By combining a single Cell Broadband Engine™ with commodity parts, we will implement a media data service package that includes media streaming at a bandwidth of several hundred MB/s, and meta-data processing.
7. Acknowledgements
The authors would like to thank Bill Lamie and David
Lamie of Express Logic for their cooperation and for dis-
closing the NetX and ThreadX source code.
The authors also thank Hisashi Tomita, Youichi Mizutani, Masahiro Kajimoto, Yuichi Machida, Kazuyoshi Yamada, Tsuyoshi Kano, and Mitsuki Hinosugi of Sony Corporation for providing the resources for evaluation with the RC-101 10-Gigabit Ethernet Adapter and for helping with performance tuning.
The authors also thank Mariko Ozawa, Ken Kurihara, Takuhito Tsujimoto, and Daniel Toshio Harrel for reviewing this paper.
Finally, the author would like to thank the children of the Awafune Football Club for their exuberant energy.
References
[1] Cell Broadband Engine™ home page, Sony Computer Entertainment Inc. http://cell.scei.co.jp/.
[2] Express Logic home page. http://www.rtos.com/.
[3] OpenFabrics Alliance home page. http://www.openfabrics.org/.
[4] M. Aron and P. Druschel. Soft timers: efficient microsec-
ond software timer support for network processing. ACM
Transactions on Computer Systems (TOCS), 18(3):197–228,
2000.
[5] T. Chen, R. Raghavan, J. Dale, and E. Iwata. Cell Broadband
Engine Architecture and its first implementation- A perfor-
mance view. IBM Journal of Research and Development,
51(5):559–572, 2007.
[6] P. Crowley, M. Fluczynski, J. Baer, and B. Bershad. Charac-
terizing processor architectures for programmable network
interfaces. Proceedings of the 14th international conference
on Supercomputing, pages 54–65, 2000.
[7] W. Feng, P. Balaji, C. Baron, L. Bhuyan, and D. Panda. Per-
formance Characterization of a 10-Gigabit Ethernet TOE.
Proceedings of the IEEE International Symposium on High-
Performance Interconnects (HotI), 2005.
[8] A. Foong, T. Huff, H. Hum, J. Patwardhan, and G. Regnier.
TCP performance re-visited. Performance Analysis of Sys-
tems and Software, 2003. ISPASS. 2003 IEEE International
Symposium on, pages 70–79, 2003.
[9] M. Gschwind. Chip multiprocessing and the cell broadband
engine. Proceedings of the 3rd conference on Computing
frontiers, pages 1–8, 2006.
[10] M. Gschwind, P. Hofstee, B. Flachs, M. Hopkins, Y. Watan-
abe, and T. Yamazaki. A novel simd architecture for the cell
heterogeneous chip-multiprocessor. Hot Chips 17, August
2005.
[11] H. Hofstee. Power efficient processor architecture and the
cell processor. High-Performance Computer Architecture,
2005. HPCA-11. 11th International Symposium on, pages
258–262, 2005.
[12] J. Kahle et al. Introduction to the Cell multiprocessor.
IBM Journal of Research and Development, 49(4):589–604,
2005.
[13] I. Kim, J. Moon, and H. Yeom. Timer-Based Interrupt Mit-
igation for High Performance Packet Processing. Proceed-
ings of 5th International Conference on High-Performance
Computing in the Asia-Pacific Region, 2001.
[14] J. Kurzak and J. Dongarra. Implementation of the Mixed-Precision High Performance LINPACK Benchmark on the Cell Processor. Technical Report UT-CS-06-580.
[15] G. Regnier, D. Minturn, G. McAlpine, V. Saletore, and
A. Foong. ETA: Experience with an Intel Xeon Processor
as a Packet Processing Engine. 2004.
[16] K. Ryu, J. Hollingsworth, and P. Keleher. Efficient network
and I/O throttling for fine-grain cycle stealing. Proceedings
of Supercomputing01, 2001.
[17] A. Sinha, S. Sarat, and J. Shapiro. Network subsystems reloaded: a high-performance, defensible network subsystem. Proceedings of the USENIX Annual Technical Conference, 2004.
[18] W. Wu and M. Crawford. Potential performance bottleneck in Linux TCP. Int. J. Commun. Syst., 20(11):1263–1283, 2007.
[19] B. Wun and P. Crowley. Network I/O Acceleration in
Heterogeneous Multicore Processors. Proceedings of the
14th IEEE Symposium on High-Performance Interconnects,
pages 9–14, 2006.