An Adaptive Load Balancing Middleware for Distributed Simulation (Gabriele D'Angelo)
Simulation is useful for supporting the design and performance evaluation of complex systems, possibly composed of a massive number of interacting entities. For this reason, simulating such systems may require the aggregate computation and memory resources of clusters of parallel and distributed execution units. Shared computer clusters built from Commercial-Off-the-Shelf hardware are preferable to dedicated systems, mainly for cost reasons. The performance of distributed simulations is influenced by the heterogeneity of the execution units and by their respective background CPU load. Adaptive load balancing mechanisms can improve resource utilization and the execution of the simulation process by dynamically tuning the simulation load while also reducing synchronization and communication overheads. This work presents the GAIA+ framework, a new load balancing mechanism for distributed simulation. The framework has been evaluated through testbed simulations of a wireless ad hoc network model, and the results confirm the effectiveness of the proposed solutions.
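The abstract's core idea, dynamically shifting simulated entities toward the execution units with the most spare capacity, can be illustrated with a toy rebalancing heuristic. This is not GAIA+'s actual algorithm: the function name and the proportional-to-spare-capacity rule are assumptions for illustration, and the real framework also weighs communication patterns between entities.

```python
def rebalance(entities_per_host, background_load):
    """Redistribute simulated entities proportionally to each host's
    spare CPU capacity (1 - background load). Illustrative sketch only:
    an adaptive middleware would re-run this as loads change."""
    total_entities = sum(entities_per_host.values())
    spare = {h: max(0.0, 1.0 - load) for h, load in background_load.items()}
    total_spare = sum(spare.values()) or 1.0
    target = {h: round(total_entities * s / total_spare)
              for h, s in spare.items()}
    # Fix rounding drift so the total entity count is preserved.
    drift = total_entities - sum(target.values())
    target[max(target, key=target.get)] += drift
    return target
```

For example, with two hosts holding 50 entities each, where host "a" has 50% background load and host "b" is idle, the heuristic moves entities toward "b".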
ORCHESTRATING BULK DATA TRANSFERS ACROSS GEO-DISTRIBUTED DATACENTERS (Nexgen Technology)
#PR12 #PR366
Hello, this is the 366th paper review from the paper-reading group PR-12.
This year marks the 10th anniversary of AlexNet.
After AlexNet burst onto the scene like a comet in 2012, the 2010s, when "solve a computer vision problem = use a CNN" was treated as a formula, came to an end,
and in the 2020s, starting with the arrival of ViT, Transformer-based networks have been threatening the position of CNNs and have already taken over a large part of it.
Where should CNNs go in the 2020s?
Is it really true that Transformers, with their weaker inductive bias, always beat CNNs when trained on large-scale data?
Under the title of a CNN for the 2020s, this paper proposes a (not exactly) new architecture called ConvNeXt.
In fact, nothing is genuinely new: it copies existing techniques and design choices adopted in Transformers and applies them to a CNN,
and reports results that are both more accurate and faster than Transformers.
Some controversy about these results has arisen on Twitter; the details, including that discussion, are covered in the video.
Thank you, as always, to everyone who watches, likes, comments, and subscribes :)
Paper link: https://arxiv.org/abs/2201.03545
Video link: https://youtu.be/Mw7IhO2uBGc
PERFORMANCE ANALYSIS OF OLSR PROTOCOL IN MANET CONSIDERING DIFFERENT MOBILITY... (ijwmn)
A Mobile Ad Hoc Network (MANET) is created when independent mobile nodes are connected dynamically via wireless links. A MANET is a self-organizing network that does not rely on pre-existing infrastructure such as wired or wireless network routers. Mobile nodes in this network move randomly, so the topology changes continuously. Routing protocols are critical in ensuring dependable and consistent connectivity between the mobile nodes: they reason over the interactions between nodes and guide them in choosing the optimum path between source and destination. Routing protocols are classified as proactive, reactive, or hybrid. The focus of this project is the Optimized Link State Routing (OLSR) protocol, a proactive routing technique. OLSR is the optimized variant of link state routing, in which packets are disseminated throughout the network using the multipoint relay (MPR) mechanism. This article evaluates the performance of the OLSR routing protocol under varying mobility speed and network density. The study's performance indicators are average packet throughput, packet delivery ratio (PDR), and average packet latency. Network Simulator 2 (NS-2) and the external patch UM-OLSR are used to simulate and evaluate the protocol's performance. The results show that the MPR mechanism is able to minimise redundant data transmission during normal message broadcasts: by selecting the right MPRs, it improves on the traditional diffusion mechanism of link state protocols, so the number of undesired broadcasts can be reduced and limited. Further research will focus on different scenarios and environments using different mobility models.
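The MPR mechanism the abstract refers to can be sketched as a greedy cover of the 2-hop neighbourhood: a node repeatedly picks the 1-hop neighbour that covers the most still-uncovered 2-hop neighbours. This is a simplification of the RFC 3626 heuristic (no willingness values, no special-casing of 2-hop nodes reachable through only one neighbour), and the function and argument names are illustrative.

```python
def select_mprs(one_hop, two_hop_of):
    """Greedy multipoint relay (MPR) selection.
    one_hop: set of 1-hop neighbour ids.
    two_hop_of: dict mapping each 1-hop neighbour to the set of
    nodes it can reach (i.e. our candidate 2-hop neighbours)."""
    # Strict 2-hop neighbours: reachable via a 1-hop neighbour,
    # excluding nodes that are already 1-hop neighbours.
    uncovered = set()
    for n in one_hop:
        uncovered |= two_hop_of[n] - one_hop
    mprs = set()
    while uncovered:
        best = max(one_hop - mprs,
                   key=lambda n: len(two_hop_of[n] & uncovered))
        gained = two_hop_of[best] & uncovered
        if not gained:  # defensive: remaining nodes are unreachable
            break
        mprs.add(best)
        uncovered -= gained
    return mprs
```

Only the selected MPRs retransmit broadcast messages, which is exactly where the reduction in redundant broadcasts comes from.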
IMT Colloquium - 04/04/2019 - AI at the heart of industrial transformation - AI for... (IMT)
IMT Colloquium - AI at the heart of industrial transformation - Optimisation session: AI for network performance. Presented by Léonardo Linguaglossa, post-doctoral researcher (Télécom ParisTech).
EFFECTS OF MAC PARAMETERS ON THE PERFORMANCE OF IEEE 802.11 DCF IN NS-3 (ijwmn)
This paper presents the design procedure of an NS-3 script for a WLAN, organized according to the hierarchical layering of the TCP/IP model. We configure all layers using NS-3 model objects, and set and modify the values used by those objects to investigate the effects of the MAC parameters (access mechanism, CWmin, CWmax, and retry limit) on the performance metrics: packet delivery ratio, packet loss ratio, aggregated throughput, and average delay. The simulation results show that the RTS/CTS access mechanism outperforms the basic access mechanism in the saturated state, whereas the MAC parameters have no significant impact on network performance in the non-saturated state. A higher value of CWmin improves the aggregated throughput at the expense of average delay. The results also reveal trade-off relationships among the performance metrics at the optimal values of the MAC parameters. Our design procedure is a good guideline for new NS-3 users designing and modifying scripts, and the results can greatly benefit network design and management.
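The role CWmin and CWmax play in DCF can be seen in the binary exponential backoff rule they bound: the contention window doubles after each failed attempt up to CWmax, and a station waits a uniformly random number of slots within it. A larger CWmin spreads stations out (fewer collisions, more throughput under saturation) at the cost of longer average waits. This is a minimal sketch of the standard rule, not the paper's NS-3 script; function names are illustrative.

```python
import random

def next_cw(cw, cw_max):
    """After a failed transmission, DCF doubles the contention
    window (CW = 2*(CW+1)-1), capped at CWmax."""
    return min(2 * (cw + 1) - 1, cw_max)

def backoff_slots(retries, cw_min, cw_max, rng=None):
    """Idle slots a station waits before its (retries+1)-th
    transmission attempt: uniform in [0, CW]."""
    rng = rng or random.Random()
    cw = cw_min
    for _ in range(retries):
        cw = next_cw(cw, cw_max)
    return rng.randint(0, cw)
```

With the common defaults CWmin=15 and CWmax=1023, the window grows 15, 31, 63, ... and saturates at 1023.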
Fast switching of threads between cores - Advanced Operating Systems (Ruhaim Izmeth)
"Fast switching of threads between cores" is a published research paper on operating systems; this is our attempt to decode the research and present it to the class.
COVERAGE DRIVEN FUNCTIONAL TESTING ARCHITECTURE FOR PROTOTYPING SYSTEM USING ... (VLSICS Design)
The time and effort spent on functional testing of digital logic is a big chunk of the overall project cycle in the VLSI industry. The progress of functional testing is measured by functional coverage: the test plan defines what needs to be covered, and the test results indicate the quality of the stimulus. Claiming closure of functional testing requires that functional coverage hits 100% of the original test plan. Depending on the complexity of the design and the availability of resources and budget, various methods are used for functional testing. Software simulation using logic simulators, available from Electronic Design Automation (EDA) companies, is the primary method. The next level is pre-silicon verification using Field Programmable Gate Array (FPGA) prototypes and/or emulation platforms for stress testing the Design Under Test (DUT). In all cases, the purpose is to gain confidence in the maturity of the DUT, to ensure first-time silicon success that meets the time-to-market needs of the industry. For any test environment, the bottleneck in achieving verification closure is controllability and observability, that is, the quality of the stimulus for unearthing issues at an early stage, and coverage calculation. Software simulation, FPGA prototyping, and emulation each have their own limitations, be it test time, ease of use, or the cost of software, tools, and hardware platforms. Compared to software simulation, FPGA prototyping and emulation pose greater challenges in quality stimulus generation and coverage calculation. Many researchers have addressed the problems of bug detection and localization, but very few have touched on quality stimulus generation, which leads to better functional coverage and thereby uncovers hidden bugs in an FPGA prototype verification setup. This paper presents a novel approach that addresses these issues by embedding a synthesizable active agent and coverage collector into the FPGA prototype. The proposed architecture has been applied to functional and stress testing of a Universal Serial Bus (USB) Link Training and Status State Machine (LTSSM) logic module as the DUT in an FPGA prototype. The proposed solution is fully synthesizable and hence can be used both in software simulation and in the prototype system. The biggest advantage is the plug-and-play nature of the active-agent component, which allows its reuse in any USB 3.0 LTSSM digital core.
TEST-COST-SENSITIVE CONVOLUTIONAL NEURAL NETWORKS WITH EXPERT BRANCHES (sipij)
It has been shown that deeper convolutional neural networks (CNNs) achieve better accuracy on many problems, but this accuracy comes at a high computational cost. Moreover, input instances do not all have the same difficulty. As a solution to the accuracy vs. computational cost dilemma, we introduce a new test-cost-sensitive method for convolutional neural networks. The method trains a CNN with a set of auxiliary outputs and expert branches at some middle layers of the network. Based on the difficulty of the input instance, the expert branches decide whether to use a shallower part of the network or to go deeper to the end. The expert branches learn to determine whether the current network prediction is wrong and whether passing the instance to deeper layers would produce the right output; if not, the expert branches stop the computation. Experimental results on the standard CIFAR-10 dataset show that the proposed method can train models with lower test cost and competitive accuracy compared with the basic models.
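The early-exit control flow described in the abstract can be sketched as follows. The real expert branches are trained classifiers attached to middle layers; here they are stand-in callables returning a prediction and a confidence score, and the names and threshold are illustrative.

```python
def predict_with_branches(x, stages, branches, threshold=0.9):
    """Run the network stage by stage; after each stage an 'expert
    branch' scores how confident it is that the shallow prediction
    is already correct. Stop early when confident, saving the cost
    of the deeper layers. Returns (prediction, stages_executed)."""
    pred = None
    cost = 0
    for stage, branch in zip(stages, branches):
        x = stage(x)          # forward pass through this stage
        cost += 1
        pred, confidence = branch(x)
        if confidence >= threshold:
            return pred, cost  # easy instance: exit early
    return pred, cost          # hard instance: used the full depth
```

Easy instances thus pay for one stage while hard ones pay for all of them, which is exactly the accuracy/test-cost trade the abstract describes.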
NETWORK-AWARE DATA PREFETCHING OPTIMIZATION OF COMPUTATIONS IN A HETEROGENEOU... (IJCNCJournal)
The rapid development of diverse computer architectures and hardware accelerators means that the design of parallel systems faces new problems stemming from their heterogeneity. Our implementation of a parallel system called KernelHive makes it possible to efficiently run applications in a heterogeneous environment consisting of multiple collections of nodes with different types of computing devices. The system's execution engine is open to optimizer implementations focusing on various criteria. In this paper, we propose a new optimizer for KernelHive that utilizes distributed databases and performs data prefetching to optimize the execution time of applications that process large input data. Employing a versatile data management scheme that allows various distributed data providers to be combined, we propose using NoSQL databases for this purpose. We support our solution with experimental results from real executions of our OpenCL implementation of a regular expression matching application in various hardware configurations. Additionally, we propose a network-aware scheduling scheme for selecting hardware for the proposed optimizer and present simulations that demonstrate its advantages.
Vancouver Career College Students Participate in Different Halloween Activiti... (Vancouver Career College)
On the 31st of October, 2013 we had lots of Halloween activities on campus. Vancouver Career College students were carving pumpkins in very competitive ways. It was an excellent teambuilding exercise. Also, students, faculty and staff participated in different fun games like ball throwing and more. In addition to that everybody was very well prepared for the Halloween day by dressing up in creative and innovative costumes. Check out our photos and let us know what you think or come over to the Coquitlam campus and meet our people in person!
Here is what the Social Services Worker Foundations program student says about Vancouver Career College: “My instructor meets us when we are at, and tailors our lessons specifically to our learning style. She is patient, kind and gives specific REAL life examples of scenarios pertaining to our own course work. She adds humor and heavy reality. A good balance. What is innovative about our labs and classrooms is the flexibility and intimacy. Also, tailoring to my needs and other related personal touches. All of these factors and more make it different than any other campus. Overall, Vancouver Career College is a warm, welcoming and helpful place. All staff have gone above and beyond to be helpful with helping set me up.”
Subscribe to Vancouver Career College:
http://www.youtube.com/subscription_center?add_user=VCCollege
Vancouver Career College, Coquitlam Campus
5 - 1180 Pinetree Way
Coquitlam, BC
V3B 7L2
Social Media & Journalism, a small experiment (Bart Brouwers)
Question put to the 92 journalists present at a debate in Tilburg, 12 November 2013: is Twitter a worry or a blessing for journalism? - NOTE: this is not scientific. The target group was the audience that happened to be present at a debate on journalism & social media.
Techno-economic analyses of specific Ammonium Nitrate production processes, presenting capital investment breakdown, raw materials consumed and operating costs. Know more at www.intratec.us/products/ammonium-nitrate-production-processes
Techno-economic analyses of specific Aluminium Chloride production processes, presenting capital investment breakdown, raw materials consumed and operating costs. Know more at www.intratec.us/products/aluminium-chloride-production-processes
Network Function Modeling and Performance Estimation (IJECEIAES)
This work introduces a methodology for modeling network functions that focuses on identifying recurring execution patterns as basic building blocks and aims to provide a platform-independent representation. By mapping each modeling building block onto specific hardware, the performance of the network function can be estimated in terms of the maximum throughput it can achieve on that execution platform. The approach is such that once the basic building blocks have been mapped, the estimate can be computed automatically for any modeled network function. Experimental results on several sample network functions show that, although the approach cannot be very accurate without taking traffic characteristics into consideration, it is very valuable for applications where even loose estimates are key. One such example is orchestration in network functions virtualization (NFV) platforms, as well as in general virtualization platforms where virtual machine placement is also based on the performance of the network services offered to the machines. Being able to automatically estimate the performance of a virtualized network function (VNF) on different execution hardware enables optimal placement of the VNFs themselves, as well as of the virtual hosts they serve, while efficiently utilizing the available resources.
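The estimation step described above can be sketched as a simple additive model: the per-packet cost of a network function is the sum of the measured costs of its building blocks on the target platform, and the throughput bound follows from the CPU clock rate. The function name and the cycle figures are illustrative; the paper's actual model and calibration are not shown, and (as the abstract itself notes) traffic characteristics are deliberately ignored.

```python
def estimate_throughput(blocks, cycles_per_block, cpu_hz):
    """Upper-bound packet rate of a network function modeled as a
    chain of building blocks, each with a per-packet cycle cost
    measured once on the target hardware."""
    cycles_per_packet = sum(cycles_per_block[b] for b in blocks)
    return cpu_hz / cycles_per_packet  # packets per second
```

For example, a hypothetical parse-then-lookup function costing 100 + 300 cycles per packet on a 2 GHz core would be bounded at 5 million packets per second.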
FLEXIBLE VIRTUAL ROUTING FUNCTION DEPLOYMENT IN NFV-BASED NETWORK WITH MINIMU... (IJCNCJournal)
In a conventional network, most network devices, such as routers, are dedicated devices that do not vary much in capacity. In recent years, the new concept of Network Functions Virtualisation (NFV) has come into use. The intention is to implement a variety of network functions in software on general-purpose servers, which allows the network operator to select the capabilities and locations of network functions without physical constraints. This paper focuses on the deployment of NFV-based routing functions, which are among the critical virtual network functions, and presents an algorithm for virtual routing function allocation that minimizes the total network cost. In addition, this paper presents a useful allocation policy for virtual routing functions, based on an evaluation with a ladder-shaped network model. This policy takes into consideration the ratio of the cost of a routing function to that of a circuit, as well as the traffic distribution in the network. Furthermore, this paper shows that there are cases where NFV-based routing functions make it possible to reduce the total network cost dramatically, compared to a conventional network in which it is not economically viable to distribute small-capacity routing functions.
ENERGY CONSUMPTION REDUCTION IN WIRELESS SENSOR NETWORK BASED ON CLUSTERING (IJCNCJournal)
One of the important issues in routing protocol design for Wireless Sensor Networks (WSNs) is minimizing energy consumption and maximizing network lifetime. Nowadays, networks and information systems are among the essential parts of modern life, and the failure of these networks leads to great and incalculable costs. In this paper, a new clustering-based method is presented that addresses the problem of energy consumption. The proposed algorithm uses energy-based clustering to create clusters with similar energy levels and to distribute energy efficiently across the WSN nodes. The proposed clustering protocol classifies network nodes based on energy and neighbourhood criteria, attempts to better balance energy within clusters, and ultimately increases network lifetime while maintaining network coverage. The results show that the proposed algorithm is on average 40% better than the LEACH algorithm and 14% better than the IBLEACH algorithm.
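The energy criterion described above can be illustrated with a toy cluster formation step: pick the nodes with the most residual energy as cluster heads, then attach every other node to its nearest head. This is an assumption-laden sketch, not the paper's protocol (its neighbourhood criterion and rotation are omitted); LEACH, by contrast, rotates heads probabilistically, which is the baseline the abstract compares against.

```python
import math

def pick_cluster_heads(nodes, energy, k):
    """Choose the k nodes with the most residual energy as cluster
    heads (illustrative energy-based criterion)."""
    return sorted(nodes, key=lambda n: energy[n], reverse=True)[:k]

def assign_clusters(positions, heads):
    """Attach each non-head node to its nearest head, so members
    spend transmit energy only on the short intra-cluster hop."""
    clusters = {h: [] for h in heads}
    for node, pos in positions.items():
        if node in heads:
            continue
        nearest = min(heads, key=lambda h: math.dist(pos, positions[h]))
        clusters[nearest].append(node)
    return clusters
```

Energy-aware head selection avoids draining low-energy nodes with the expensive long-range hop to the sink, which is the intuition behind the lifetime gains over LEACH.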
Reprinted with permission of NCTA, from the 2014 Cable Connection Spring Technical Forum Conference Proceedings. For more information on Cisco cloud solutions, visit: http://www.cisco.com/c/en/us/products/cloud-systems-management/index.html
A VNF modeling approach for verification purposes (IJECEIAES)
Network Function Virtualization (NFV) architectures are emerging to increase network flexibility. However, this new scenario poses new challenges, because virtualized networks need to be carefully verified before being deployed in production environments, in order to preserve network coherency (e.g., absence of forwarding loops, preservation of security on network traffic, etc.). Model checking tools, SAT solvers, and theorem provers are available for the formal verification of such properties in virtualized networks. Unfortunately, most of these verification tools accept input descriptions written in specification languages that are difficult to use for people not experienced in formal methods. Also, to enable the use of formal verification tools in real scenarios, vendors of Virtual Network Functions (VNFs) should provide abstract mathematical models of their functions, coded in the specific input languages of the verification tools. This process is error-prone, time-consuming, and often outside the VNF developers' expertise. This paper presents a framework that we designed for automatically extracting verification models from a Java-based representation of a given VNF. It comprises a Java library of classes for defining VNFs in a more developer-friendly way, and a tool that translates VNF definitions into formal verification models for different verification tools.
This paper is written to give basic knowledge of Network Functions Virtualisation (NFV) in network systems, collating the work done on NFV so far. It describes how the challenges faced by the industry led to NFV, what NFV means, and the NFV architecture model. It also explains how the NFV infrastructure is managed and the forwarding path that packets traverse in NFV. The relationship of NFV with SDN and ongoing research on NFV policies are also discussed.
Comparative Study of Orchestration using gRPC API and REST API in Server Crea... (IJCNCJournal)
Cloud computing is the quick, simple, and economical distribution of computing services. To offer quality services on cloud infrastructure, the NFV Management and Orchestration (MANO) function performs the critical task of managing both infrastructure resources and network functions. Since the demand for cloud services is rapidly increasing, cloud service providers are constantly trying to reduce operational costs, so the elements of the MANO functions have gained critical importance in cloud infrastructure. To use MANO efficiently, platforms such as OpenStack act as a middle layer between hardware and software, providing Infrastructure-as-a-Service solutions through a set of interrelated services. These services are managed through an application programming interface (API). The existing orchestration method uses Representational State Transfer (REST) APIs to communicate with the platform's core components; these APIs are an integral part of creating and deploying a server. To improve performance, this paper proposes a novel approach to orchestration based on the open-source remote procedure call framework gRPC. To analyse the proposed solution, both REST- and gRPC-based orchestration methods are implemented for creating and launching servers on the OpenStack cloud computing platform. The server creation time is measured for different scenarios and SFC use cases. The results show that with gRPC, orchestration performance in terms of server creation time improves by up to 27% compared to traditional REST-based orchestration.
This paper focuses on the evolutionary stages of cloudification, then covers the key software building blocks needed to enable NFV and, ultimately, ICT transformation to 5G. It describes how the Intel® Open Networking Platform (Intel® ONP) Server, running on innovative new networking platforms based on Intel® silicon, can help reduce the cost and effort required for service providers and vendors alike to adopt and deploy SDN and NFV architectures.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
Control of Communication and Energy Networks Final Project - Service Function...Biagio Botticelli
Final Project of the Control of Communication and Energy Networks course of the Master Degree in Engineering in Computer Science at University of Rome "La Sapienza".
The technical report introduces the concepts of Service Function Chaining (SFC) and Network Function Virtualization (NFV), analyzing an approach to merging the two technologies.
CONTAINERIZED SERVICES ORCHESTRATION FOR EDGE COMPUTING IN SOFTWARE-DEFINED W...IJCNCJournal
As SD-WAN disrupts legacy WAN technologies and becomes the preferred WAN technology adopted by corporations, and Kubernetes becomes the de-facto container orchestration tool, the opportunities for deploying edge-computing containerized applications running over SD-WAN are vast. Service orchestration in SD-WAN has received little attention, resulting in a lack of research on service discovery in these scenarios. In this article, an in-house service discovery solution that works alongside Kubernetes’ master node is developed, allowing improved traffic handling and a better user experience when running micro-services. The solution was conceived following a design science research approach. Our research includes the implementation of a proof-of-concept SD-WAN topology alongside a Kubernetes cluster, which allows us to deploy custom services and delimit the necessary characteristics of our in-house solution. The implementation's performance is also tested based on the time required to update the discovery solution in response to service updates. Finally, conclusions and modifications are pointed out based on the results, along with possible enhancements.
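The core of such a discovery solution is a cache of service-to-endpoint mappings that is kept current from cluster update events. The toy sketch below models this; the event shape is loosely inspired by Kubernetes Endpoints objects but is an assumption of this sketch, not the real `kubernetes` client API.

```python
# Toy service-discovery cache: it mirrors service -> endpoint mappings from
# update events, so an SD-WAN edge can resolve a micro-service locally
# instead of querying the cluster on every lookup. The event format is an
# illustrative stand-in for Kubernetes Endpoints watch events.

class ServiceRegistry:
    def __init__(self):
        self._endpoints = {}

    def apply_event(self, event):
        name = event["service"]
        if event["type"] == "DELETED":
            self._endpoints.pop(name, None)
        else:  # ADDED / MODIFIED
            self._endpoints[name] = list(event["addresses"])

    def resolve(self, name):
        return self._endpoints.get(name, [])

reg = ServiceRegistry()
reg.apply_event({"type": "ADDED", "service": "api",
                 "addresses": ["10.0.1.4:8080"]})
reg.apply_event({"type": "MODIFIED", "service": "api",
                 "addresses": ["10.0.1.4:8080", "10.0.1.5:8080"]})
print(reg.resolve("api"))
```

The update latency the article measures corresponds to the delay between a service change in the cluster and the matching `apply_event` call reaching every edge cache.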
Network functions virtualization (NFV) is a network architecture concept that uses IT virtualization technologies to virtualize entire classes of network node functions into building blocks that may connect, or chain together, to create communication services.
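Conceptually, chaining such building blocks amounts to composing packet-processing functions in sequence. A minimal sketch, with purely illustrative VNFs (the function names and packet fields are assumptions of this example, not part of any NFV standard):

```python
# Each "virtual network function" is a callable transforming a packet
# (a dict here); a service chain applies them in order. A VNF may drop
# the packet by returning None, which short-circuits the chain.

def firewall(pkt):
    # Drop anything aimed at a blocked port (telnet here, as an example).
    return None if pkt["dst_port"] == 23 else pkt

def nat(pkt):
    # Rewrite the private source address to a public one.
    return {**pkt, "src": "203.0.113.7"}

def chain(functions, pkt):
    for vnf in functions:
        pkt = vnf(pkt)
        if pkt is None:  # packet dropped by a VNF
            return None
    return pkt

service_chain = [firewall, nat]
out = chain(service_chain, {"src": "10.0.0.5", "dst_port": 80})
print(out)  # forwarded, with translated source address
print(chain(service_chain, {"src": "10.0.0.5", "dst_port": 23}))  # dropped
```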
Towards achieving-high-performance-in-5g-mobile-packet-cores-user-plane-functionEiko Seidel
White Paper Intel SK Telekom
This paper presents the architecture for a user plane function (UPF) in the mobile packet core (MPC) targeting 5G deployments.
EDGE CONTROLLER PLACEMENT FOR NEXT GENERATION WIRELESS SENSOR NETWORKSijcsit
Nowadays, Fog architecture or Edge architecture is becoming a popular research trend to distribute a
substantial amount of computing resources, data processing and resource management at the extreme edge
of the wireless sensor networks (WSNs). Industrial communication is a research track in next generation
wireless sensor networks for the fourth revolution in the industrial process. Adopting fog architecture into
Industrial communication systems is a promising technology within sensor networks architecture. With
Software Defined Network (SDN) architecture, in this paper, we address edge controller placement as an
optimization problem with the objective of more robustness while minimizing the delay of network
management and the associated synchronization overhead. The optimization problem is provided and
modelled as submodular function. Two algorithms are provided to find the optimal solution using a real
wireless network to get more realistic results. Greedy Algorithm and Connectivity Ranking Algorithm are
provided. Greedy algorithm outperforms connectivity ranking algorithm to find the optimum balance
between the different metrics. Also, based on the network operator preference, the number of edge
controllers to be placed will be provided. This research paper plays a great role in standardization of
softwarization into Industrial communication systems for next generation wireless sensor networks.
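When the placement objective is a monotone submodular function, the greedy approach the abstract mentions takes a familiar form: repeatedly pick the site with the largest marginal gain. The sketch below uses a toy coverage objective as a stand-in for the paper's robustness/delay formulation; the topology and reach sets are invented for illustration.

```python
# Greedy placement of k edge controllers maximizing a coverage objective.
# Coverage of a set of sites is monotone submodular, so greedy selection
# enjoys the classic (1 - 1/e) approximation guarantee.

candidates = {
    # site -> sensor nodes it can serve within the delay bound (toy data)
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {1, 6},
}

def coverage(sites):
    covered = set()
    for s in sites:
        covered |= candidates[s]
    return len(covered)

def greedy_place(k):
    chosen = []
    for _ in range(k):
        # Pick the candidate with the largest marginal coverage gain.
        best = max((s for s in candidates if s not in chosen),
                   key=lambda s: coverage(chosen + [s]))
        chosen.append(best)
    return chosen

print(greedy_place(2))  # two controllers covering all six sensor nodes
```

The operator-preference knob from the abstract corresponds to the choice of `k` (and, in the full formulation, to weights inside the objective).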
Similar to Conference Paper: Cross-platform estimation of Network Function Performance (20)
Ericsson Technology Review: Versatile Video Coding explained – the future of ...Ericsson
Continuous innovation in 5G networks is creating new opportunities for video-enabled services for both consumers and industries, particularly in areas such as the Internet of Things and the automotive sector. These new services are expected to rely on continued video evolution toward 8K resolutions and beyond, and on new strict requirements such as low end-to-end latency for video delivery.
The latest Ericsson Technology Review article explores recent developments in video compression technology and introduces Versatile Video Coding (VVC) – a significant improvement on existing video codecs that we think deserves to be widely deployed in the market. VVC has the potential both to enhance the user experience for existing video services and offer an appropriate performance level for new media services over 5G networks.
BRIDGING THE GAP BETWEEN PHYSICAL AND DIGITAL REALITIES
The key role that connectivity plays in our personal and professional lives has never been more obvious than it is today. Thankfully, despite the sudden, dramatic changes in our behavior earlier this year, networks all around the world have proven to be highly resilient. At Ericsson, we’re committed to ensuring that the network platform continues to improve its ability to meet the full range of societal needs as well as supporting enterprises to stay competitive in the long term. We know that greater agility and speed will be essential.
This issue of our magazine includes several articles that explain Ericsson’s approach to future network development, including my annual technology trends article. The seven trends on this year’s list serve as a critical cornerstone in the development of a common Ericsson vision of what future networks will provide, and what sort of technology evolution will be required to get there.
ERIK EKUDDEN
Senior Vice President, Chief Technology Officer and Head of Group Function Technology
Ericsson Technology Review: Integrated access and backhaul – a new type of wi...Ericsson
Today millimeter wave (mmWave) spectrum is valued mainly because it can be used to achieve high speeds and capacities when combined with spectrum assets below 6GHz. But it can provide other benefits as well. For example, mmWave spectrum makes it possible to use a promising new wireless backhaul solution for 5G New Radio – integrated access and backhaul (IAB) – to densify networks with multi-band radio sites at street level.
This Ericsson Technology Review article explains the IAB concept at a high level, presenting its architecture and key characteristics, as well as examining its advantages and disadvantages compared with other backhaul technologies. It concludes with a presentation of the promising results of several simulations that tested IAB as a backhaul option for street sites in both urban and suburban areas.
Ericsson Technology Review: Critical IoT connectivity: Ideal for time-critica...Ericsson
Critical Internet of Things (IoT) connectivity is an emerging concept in IoT development that enables more efficient and innovative services across a wide range of industries by reliably meeting time-critical communication needs. Mobile network operators (MNOs) are in the perfect position to enable these types of time-critical services due to their ability to leverage advanced 5G networks in a systematic and cost-effective way.
This Ericsson Technology Review article explores the benefits of Critical IoT connectivity in areas such as industrial control, mobility automation, remote control and real-time media. It also provides an overview of key network technologies and architectures. It concludes with several case studies based on two deployment scenarios – wide area and local area – that illustrate how well suited 5G spectrum assets are for Critical IoT use cases.
5G New Radio has already evolved in important ways since the 3GPP standardized Release 15 in late 2018. The significant enhancements in Releases 16 and 17 are certain to play a critical role in expanding both the availability and the applicability of 5G NR in both industry and public services in the near future.
This Ericsson Technology Review article summarizes the most notable new developments in releases 16 and 17, grouped into two categories: enhancements to existing features and features that address new verticals and deployment scenarios. This analysis and our insights about the future beyond Release 17 is an important component of our work to help mobile network operators and other stakeholders better understand and plan for the many new 5G NR opportunities that are on the horizon.
Ericsson Technology Review: The future of cloud computing: Highly distributed...Ericsson
The growing interest in cloud computing scenarios that incorporate both distributed computing capabilities and heterogeneous hardware presents a significant opportunity for network operators. With a vast distributed system (the telco network) already in place, the telecom industry has a significant advantage in the transition toward distributed cloud computing.
This Ericsson Technology Review article explores the future of cloud computing from the perspective of network operators, examining how they can best manage the complexity of future cloud deployments and overcome the technical challenges. Redefining cloud to expose and optimize the use of heterogeneous resources is not straightforward, but we are confident that our use cases and proof points validate our approach and will gain traction both in the telecommunications community and beyond.
Ericsson Technology Review: Optimizing UICC modules for IoT applicationsEricsson
Commonly referred to as SIM cards, the universal integrated circuit cards (UICCs) used in all cellular devices today are in fact complex and powerful minicomputers capable of much more than most Internet of Things (IoT) applications require. Until a simpler and less costly alternative becomes available, action must be taken to ensure that the relatively high price of UICC modules does not hamper IoT growth.
This Ericsson Technology Review article presents two mid-term approaches. The first is to make use of techniques that reduce the complexity of using UICCs in IoT applications, while the second is to use the UICCs’ excess capacity for additional value generation. Those who wish to exploit the potential of the UICCs to better support IoT applications have the opportunity to use them as cryptographic storage, to run higher-layer protocol stacks and/or as supervisory entities, for example.
Mobile data traffic volumes are expected to increase by a factor of four by 2025, and 45 percent of that traffic will be carried by 5G networks. To deliver on customer expectations in this rapidly changing environment, communication service providers must overcome challenges in three key areas: building sufficient capacity, resolving operational inefficiencies through automation and artificial intelligence, and improving service differentiation. This issue of ETR magazine provides insights about how to tackle all three.
Ericsson Technology Review: 5G BSS: Evolving BSS to fit the 5G economyEricsson
The 5G network evolution has opened up an abundance of new business opportunities for communication service providers (CSPs) in verticals such as industrial automation, security, health care and automotive. In order to successfully capitalize on them, CSPs must have business support systems (BSS) that are evolved to manage complex value chains and support new business models. Optimized information models and a high degree of automation are required to handle huge numbers of devices through open interfaces.
This Ericsson Technology Review article explains how 5G-evolved BSS can help CSPs transform themselves from traditional network developers to service enablers for 5G and the Internet of Things, and ultimately to service creators with the ability to collaborate beyond telecoms and establish lucrative digital value systems.
Ericsson Technology Review: 5G migration strategy from EPS to 5G systemEricsson
For many operators, the introduction of the 5G System (5GS) to provide wide-area services in existing Evolved Packet System (EPS) deployments is a necessary step toward creating a full-service, future-proof 5GS in the longer term. The creation of a combined 4G-5G network requires careful planning and a holistic strategy, as the introduction of 5GS has significant impacts across all network domains, including the RAN, packet core, user data and policies, and services, as well as affecting devices and backend systems.
This Ericsson Technology Review article provides an overview of all the aspects that operators need to consider when putting together a robust EPS-to-5GS migration strategy and provides guidance about how they can adapt the transition to address their particular needs per domain.
Ericsson Technology Review: Creating the next-generation edge-cloud ecosystemEricsson
The surge in data volume that will come from the massive number of devices enabled by 5G has made edge computing more important than ever before. Beyond its abilities to reduce network traffic and improve user experience, edge computing will also play a critical role in enabling use cases for ultra-reliable low-latency communication in industrial manufacturing and a variety of other sectors.
This Ericsson Technology Review article explores the topic of how to deliver distributed edge computing solutions that can host different kinds of platforms and applications and provide a high level of flexibility for application developers. Rather than building a new application ecosystem and platform, we strongly recommend reusing industrialized and proven capabilities, utilizing the momentum created with Cloud Native Computing Foundation, and ensuring backward compatibility.
The rise of the innovation platform
Society and industry are transforming at an unprecedented rate. At the same time, the network platform is emerging as an innovation platform with the potential to offer all the connectivity, processing, storage and security needed by current and future applications. In my 2019 trends article, featured in this issue of Ericsson Technology Review, I share my view of the future network platform in relation to six key technology trends.
This issue of the magazine also addresses critical topics such as trust enablement, the extension of computing resources all the way to the edge of the mobile network, the growing impact of the cloud in the telco domain, overcoming latency and battery consumption challenges, and the need for end-to-end connectivity. I hope it provides you with valuable insights about how to overcome the challenges ahead and take full advantage of new opportunities.
Ericsson Technology Review: Spotlight on the Internet of ThingsEricsson
The Internet of Things (IoT) has emerged as a fundamental cornerstone in the digitalization of both industry and society as a whole. It represents a huge opportunity not only in economic terms, but also from a global challenges perspective – making it easier for governments, non-governmental organizations and the private sector to address pressing food, energy, water and climate related issues.
5G and the IoT are closely intertwined. One of the biggest innovations within 5G is support for the IoT in all its forms, both by addressing mission criticality as well as making it possible to connect low-cost, long-battery-life sensors.
With this in mind, we decided to create a special issue of Ericsson Technology Review solely focused on IoT opportunities and challenges. I hope it provides you with valuable insights about the IoT-related opportunities available to your organization, along with ideas about how we can overcome the challenges ahead.
Ericsson Technology Review: Driving transformation in the automotive and road...Ericsson
A variety of automotive and transport services that require cellular connectivity are already in commercial operation today, and many more are yet to come. Among other things, these services will improve road safety and traffic efficiency, saving lives and helping to reduce the emissions that contribute to climate change. At Ericsson, we believe that the best way to address the growing connectivity needs of this industry sector is through a common network solution, as opposed to taking a single-segment silo approach.
The latest Ericsson Technology Review article explains how the ongoing rollout of 5G provides a cost-efficient and feature-rich foundation for a horizontal multiservice network that can meet the connectivity needs of the automotive and transport ecosystem. It also outlines the key challenges and presents potential solutions.
This presentation explains the importance of SD-WAN technology as part of the Enterprise digital transformation strategy. It goes over the first wave of SD-WAN in a single vendor deployment, with Do-it-yourself (DIY) as the preferred model. Then continues with the importance of orchestration in the second wave of SD-WAN deployments in a multi-vendor ecosystem, turning to SD-WAN Managed Services as the preferred model. It ends up with some examples of use cases and the Verizon customer case. More information on Ericsson Dynamic orchestration - http://m.eric.sn/6rsZ30psKLu
Ericsson Technology Review: 5G-TSN integration meets networking requirements ...Ericsson
Time-Sensitive Networking (TSN) is becoming the standard Ethernet-based technology for converged networks of Industry 4.0. Understanding the importance and relevance of TSN features, as well as the capabilities that allow 5G to achieve wireless deterministic and time-sensitive communication, is essential to industrial automation in the future.
The latest Ericsson Technology Review article explains how TSN is an enabler of Industry 4.0, and that together with 5G URLLC capabilities, the two key technologies can be combined and integrated to provide deterministic connectivity end to end. It also discusses TSN standards and the value of the TSN toolbox for next generation industrial automation networks.
Ericsson Technology Review: Meeting 5G latency requirements with inactive stateEricsson
Low latency communication and minimal battery consumption are key requirements of many 5G and IoT use cases, including smart transport and critical control of remote devices. Thanks to Ericsson’s 4G/5G research activities and lessons learned from legacy networks, we have identified solutions that address both of these requirements by reducing the amount of signaling required during state transitions, and shared our discoveries with the 3GPP.
This Ericsson Technology Review article explains the why and how behind the new Radio Resource Control (RRC) state model in the standalone version of the 5G New Radio standard, which features a new, Ericsson-developed state called inactive. On top of overcoming latency and battery consumption challenges, the new state also increases overall system capacity by decreasing the processing effort in the network.
Ericsson Technology Review: Cloud-native application design in the telecom do...Ericsson
Cloud-native application design is set to become standard practice in the telecom industry in the near future due to the major efficiency gains it can provide, particularly in terms of speeding up software upgrades and releases. At Ericsson, we have been actively exploring the potential of cloud-native computing in the telecom industry since we joined the Cloud Native Computing Foundation (CNCF) a few years ago.
This Ericsson Technology Review article explains the opportunities that CNCF technology has enabled, as well as unveiling key aspects of our application development framework, which is designed to help navigate the transition to a cloud-native approach. It also discusses the challenges that the large-scale reuse of open-source technology can raise, along with key strategies for how to mitigate them.
Ericsson Technology Review: Service exposure: a critical capability in a 5G w...Ericsson
To meet the requirements of use cases in areas such as the Internet of Things, AR/VR, Industry 4.0 and the automotive sector, operators need to be able to provide computing resources across the whole telco domain – all the way to the edge of the mobile network. Service exposure and APIs will play a key role in creating solutions that are both effective and cost efficient.
The latest Ericsson Technology Review article explores recent advances in the service exposure area that have resulted from the move toward 5G and the adoption of cloud-native principles, as well as the combination of Service-based Architecture, microservices and container technologies. It includes examples that illustrate how service exposure can be deployed in a multitude of locations, each with a different set of requirements that drive modularity and configurability needs.
Conference Paper: Cross-Platform Estimation of Network Function Performance

Amedeo Sapio, Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy (amedeo.sapio@polito.it)
Mario Baldi, Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy (mario.baldi@polito.it)
Gergely Pongrácz, TrafficLab, Ericsson Research, Budapest, Hungary (gergely.pongracz@ericsson.com)
Abstract—This work shows how the performance of a network function can be estimated with an error margin that is small enough to properly support orchestration of network functions virtualization (NFV) platforms. Being able to estimate the performance of a virtualized network function (VNF) on execution hardware of various types enables its optimal placement, while efficiently utilizing available resources. Network functions are modeled using a methodology focused on the identification of recurring execution patterns and aimed at providing a platform independent representation. By mapping the model on specific hardware, the performance of the network function can be estimated in terms of maximum throughput that the network function can achieve on the specific execution platform. The approach is such that once the basic modeling building blocks have been mapped, the estimate can be computed automatically. This work presents the model of an Ethernet switch and evaluates its accuracy by comparing the performance estimation it provides with experimental results.
Keywords—Network Functions Virtualization; Virtual Network Function; modeling; orchestration; performance estimation
I. INTRODUCTION
For a few years now, software network appliances have been increasingly deployed. Initially, their appeal stemmed from their lower cost, shorter time-to-market, and ease of upgrade when compared to purposely designed hardware devices. These features are particularly advantageous in the case of appliances, a.k.a. middleboxes, operating on relatively recent higher layer protocols that are usually more complex and possibly still evolving. Then, with the overwhelming success and diffusion of cloud computing and virtualization, software appliances became a natural means to ensure that network functionalities had the same flexibility and mobility as the virtual machines (VMs) they offer services to. Hence, value started to be seen in the software implementation also of less complex, more stable network functionalities. This trend led to embracing Software Defined Networking (SDN) and Network Functions Virtualization (NFV). The former is a hybrid hardware/software approach to ensure high performance for lower layer packet forwarding, while retaining a high degree of flexibility and programmability. The latter is a virtualization solution targeting the execution of software network functions in isolated VMs sharing a pool of hosts, rather than on dedicated hardware (i.e., appliances). Such a solution enables virtual network appliances (i.e., VMs executing network functions) to be provisioned, allocated a different amount of resources, and possibly moved across data centers in little time, which is key in ensuring that the network can keep up with the flexibility in the provisioning and deployment of virtual hosts in today's virtualized data centers. Additional flexibility is offered when coupling NFV with SDN, as network traffic can be steered through a chain of Virtualized Network Functions (VNFs) in order to provide aggregated services. With inputs from the industry, the NFV approach was standardized by the European Telecommunications Standards Institute (ETSI) in 2013 [1].
The flexibility provided by NFV requires the ability to effectively assign compute nodes to VNFs and to allocate the most appropriate amount of resources, such as CPU quota, RAM, and virtual interfaces. In the ETSI standard, the component in charge of taking such decisions is called the orchestrator; it can also dynamically modify the amount of resources assigned to a running VNF when needed, and request the migration of a VNF when the current compute node executing it is no longer capable of fulfilling the VNF performance requirements. These tasks require the orchestrator to be able to estimate the performance of VNFs according to the amount of resources they can use. Such an estimation must take into account the nature of the traffic manipulation performed by the VNF at hand, some specifics of its implementation, and the expected amount of traffic it operates on. A good estimation is key to ensuring higher resource usage efficiency and avoiding adjustments at runtime.
This work presents and evaluates the model of an Ethernet switch based on a unified modeling approach [2] applicable to any VNF, independently of the platform it is running on. By mapping the VNF model to a specific hardware platform, it is possible to predict the maximum amount of traffic that the VNF can sustain. In this work, the model is mapped to a sample hardware platform and the predicted performance is compared with actual measurements.
The deployed modeling approach [2] is particularly valuable because it relies on a description of VNFs in terms of basic operations, which results in a hardware independent notation that ensures that the model is valid for any execution platform. In addition, the mapping of the model on a target hardware architecture (required in order to determine the actual performance) can be automated, hence making it easy to apply the approach to each available hardware platform and choose the most suitable one for the execution.
After discussing related work in Section II, the modeling approach is described in Section III. Section IV presents the model of an Ethernet switch and the mapping of the model to a general purpose hardware architecture. In order to validate the accuracy of the approach, Section V compares the performance estimated through the model with actual measurements obtained by running targeted experiments with a software implementation of the Ethernet switch on the considered hardware platform.
II. RELATED WORK
This work applies the network function modeling approach proposed in [2] to an Ethernet switch, providing experimental measurements to validate the obtained model. The modeling approach was inspired by [3], which aims to demonstrate that the Software Defined Networking approach does not necessarily imply lower performance compared to purpose-built ASICs. In order to prove it, the performance of a software implementation of an Ethernet Provider Backbone Edge Bridge is evaluated. The execution platform considered in that work is a hypothetical network processor, for which a high-level model is provided. The authors do not aim at providing a universal modeling approach for generic network functions. Rather, their purpose is to use a specific sample network function to demonstrate that, even for very specific tasks, an NPU-based software implementation offers performance only slightly lower than purpose designed chips.
A modeling approach for describing packet processing in middleboxes and the ways they can be deployed is presented in [4] and applied to a NAT, an L4 load balancer, and an L7 load balancer. The proposed model is not aimed at estimating performance and resource requirements; rather, it focuses on accurately describing middlebox functionalities to support decisions about their deployment.
On the other hand, a VNF modeling approach aimed at performance estimation would be greatly beneficial to cloud platforms where the performance of the network infrastructure is taken into account when placing VMs [5]-[7]. For example, [7] describes the changes needed in the OpenStack software platform, the open-source reference cloud management system, to enable the Nova scheduler to plan VM allocation based on network property data and a set of constraints provided by the orchestrator. We argue that, in order to infer such constraints, the orchestrator needs a VNF model like the ones generated by the approach presented in this paper.
III. METHODOLOGY
The proposed modeling approach is based on the definition of a set of processing steps, here called Elementary Operations (EOs), that are common throughout various NF implementations. This stems from the observation that, generally, most NFs perform a rather small set of operations when processing the average packet, namely, a well-defined alteration of packet headers coupled with a data structure lookup.
An EO is informally defined as the longest sequence of elementary steps (e.g., CPU instructions or ASIC transactions) that is common among multiple NF processing tasks. As a consequence, an EO has variable granularity, ranging from a simple I/O or memory load operation to a whole IP checksum computation. On the other hand, EOs are defined so that each can be potentially used in multiple NF models.
An NF is modeled as a sequence of EOs that represent the actions performed for the vast majority of packets. Since we are interested in performance estimation, we ignore handling that affects only a small number of packets (i.e., less than 1%), since these tasks have a negligible impact on performance, even when they are more complex and resource intensive than the most common ones. Accordingly, exceptions, such as failures, configuration changes, etc., are not considered.
It is important to highlight that NF models produced with this approach are hardware independent, which ensures that they can be applied when NFs are deployed on different execution platforms. In order to estimate the performance of an NF on a specific hardware platform, each EO must be mapped on the hardware components involved in its execution and their features. This mapping makes it possible to take into account the limits of the involved hardware components and to gather a set of constraints that affect the performance (e.g., clock frequency). Moreover, the load incurred by each component when executing each EO must be estimated, whether through actual experiments or based on nominal hardware specifications. The data collected during such mapping are specific to the EOs and the hardware platform, but not to a particular NF. Hence, they can be applied to estimate the performance of any NF starting from its model. Specifically, the performance of each individual EO involved in the NF model is computed and composed considering the cumulative load that all EOs impose on the hardware components of the execution platform, while heeding all of the applicable constraints.
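As an illustration of this composition step, the Python sketch below (our own notation, not part of the original methodology) combines per-EO cost figures into a throughput bound by taking the most loaded resource as the bottleneck; the cost values in the example are placeholders, not measurements.

```python
def estimate_nf_throughput(eo_costs, clock_hz, dram_access_rate):
    """Compose per-EO costs into a packets-per-second estimate.

    eo_costs: list of (clock_cycles, dram_accesses) tuples, one per EO
              in the NF model.
    The NF rate is bounded by the most loaded resource (CPU or memory),
    one reading of "composing the cumulative load while heeding all of
    the applicable constraints".
    """
    total_cycles = sum(c for c, _ in eo_costs)
    total_dram = sum(d for _, d in eo_costs)
    cpu_bound = clock_hz / total_cycles if total_cycles else float("inf")
    mem_bound = dram_access_rate / total_dram if total_dram else float("inf")
    return min(cpu_bound, mem_bound)  # packets per second

# Example with made-up costs for a 4-EO model on a 2.8 GHz core:
pps = estimate_nf_throughput(
    [(160, 0), (4, 0), (120, 1), (160, 0)],
    clock_hz=2.8e9, dram_access_rate=70e6)
```

Here the CPU is the bottleneck (444 cycles per packet versus a single DRAM access), so the estimate is clock_hz / 444.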
Figure 1 summarizes the steps and intermediate outputs of
the proposed approach.
[Figure: an NF is expressed as an NF Model (a sequence of EOs); the model is then mapped onto a HW architecture using per-EO performance figures and constraints, yielding the NF performance estimate.]
Fig. 1: NF modeling and performance estimation approach.
Table I presents a sample list of EOs that we identified when modeling a number of NFs. Such a list is by no means meant to be exhaustive; rather, it should be incrementally extended whenever a newly considered NF cannot be described with previously identified EOs. When defining an EO, it is important to identify the parameters related to traffic characteristics that significantly affect its execution and resource consumption.
TABLE I: Sample list of EOs

EO | Name         | Parameters    | Description
1  | mem_I/O      | L1n, L2n      | Packet copy between I/O and (cache) memory
2  | parse        | b             | Parsing a data field
3  | increase     | b             | Increase/decrease a field
4  | array_access | es, max       | Direct access to a byte array in memory
5  | hash_lookup  | N, HE, max, p | Simple hash table lookup
6  | checksum     | b             | Compute IP checksum
7  | sum          | b             | Sum 2 operands
A succinct description of the EOs listed in Table I is provided below.
1) Packet copy between I/O and memory: A packet is copied from/to an I/O buffer to/from memory. L1n is the number of bytes that are preferably stored in L1 cache memory, otherwise in L2 cache or external RAM. L2n bytes are preferably stored in L2 cache memory, otherwise in external RAM. The parameters have been chosen taking into consideration that some NPUs provide a manual cache that can be explicitly loaded with the data that need fast access. General purpose CPUs may have assembler instructions (e.g., PREFETCHh) to explicitly influence the cache logic.
2) Parsing a data field: A data field of b bytes stored in memory is parsed. A parsing operation is necessary before performing any computation on a field (it corresponds to loading a processor register). This EO can also be used to model the dual operation, i.e., encapsulation, which implies storing back into memory a properly constructed sequence of fields.
3) Increase/decrease a field: Increase/decrease the numerical value contained in a field of b bytes. The field to increase must have already been parsed.
4) Direct access to a byte array in memory: This EO performs a direct access to an element of an array in memory using an index. Each array entry has size es, while the array has at most max entries.
5) Simple hash table lookup: A simple lookup in a direct, XOR-based hash table is performed. The hash key consists of N components and each entry has size equal to HE. The table has at most max entries. The collision probability is p.
6) Compute IP checksum: The standard IP checksum computation is performed on b bytes.
7) Sum 2 operands: Two operands of b bytes are added.
For the sake of simplicity (and without affecting the validity of the approach, as shown by the results in Section V), in modeling NFs by means of EOs we assume that the number of processor registers is larger than the number of packet fields that must be processed simultaneously; therefore, there is no competition for this resource.
IV. A MODELING USE CASE
This section demonstrates the application of the modeling
approach described in the previous section. EOs are used to
describe the operation of an Ethernet switch and then they are
mapped to a general purpose hardware platform.
A. Ethernet Switch Model
For each packet the switch selects the output interface
where it must be forwarded, retrieving it from a hash table
keyed by the destination MAC address extracted from the
packet.
When the network interface receives a packet, it is first stored in an I/O buffer. In order to access the Ethernet header, the CPU/NPU must first copy the packet into cache or main memory. Since the switch operates only on the Ethernet header, which is of limited size (14 bytes), the header is copied into the L1 cache, while the rest of the packet (up to 1486 bytes) can be copied into L2 cache or main memory. To ensure generality, we consider that an incoming packet cannot be copied directly from one I/O buffer to another; it must always first be copied into (cache) memory.
The switch must then read the destination MAC address (6 bytes) prior to using it to access the hash table to get the appropriate output interface. The hash table has one key (the destination MAC) and consists of 12-byte entries composed of the key and the output interface MAC address.
Here we consider that the output interface is identified by its Ethernet address. Different implementations can use a different identifier, which leads to a minor variation in the model.
The average number of entries in a real case scenario is ≈ 2M; with 12-byte entries, this amounts to ≈ 24 MB, so the table cannot be fully stored in cache under all traffic conditions. Here we assume that the collision probability is negligible (i.e., the hash table is sufficiently sparse).
The packet can then be moved to the buffer of the selected
output I/O device. The resulting model is summarized in
Figure 2.
mem_I/O(14, 1486)
parse(6)
hash_lookup(1, 12, 2M, 0)
mem_I/O(14, 1486)
Fig. 2: Ethernet switch model.
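For reference, the model in Figure 2 can be written down as a plain sequence of parameterized EO invocations; the Python tuple notation below is our own illustration, with parameters as in Table I.

```python
# Ethernet switch model from Fig. 2 as a list of (EO name, parameters)
# pairs. Parameters follow Table I: mem_I/O(L1n, L2n), parse(b),
# hash_lookup(N, HE, max, p).
ETH_SWITCH_MODEL = [
    ("mem_I/O", {"L1n": 14, "L2n": 1486}),  # copy packet from input I/O buffer
    ("parse", {"b": 6}),                    # read destination MAC address
    ("hash_lookup", {"N": 1, "HE": 12, "max": 2_000_000, "p": 0}),  # MAC table
    ("mem_I/O", {"L1n": 14, "L2n": 1486}),  # copy packet to output I/O buffer
]
```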
B. Mapping to Hardware
We now proceed to map the described EOs to a specific hardware platform. Figure 3 provides a schematic representation of the platform's main components and related constraints, using the template proposed in [3]: an Intel® Xeon E5-2630 CPU, a DDR3 RAM module and a 10Gb Ethernet controller. Using the CPU reference manual [8], it is possible to determine the operations required for the execution of each EO in Table I and estimate the achievable performance.
[Figure: Intel Xeon E5-2630 platform. CPU: x86-64, 6 cores per slot, 2 threads per core, 2.3-2.8 GHz, AVX, VT-d, VT-x + EPT. Caches: L1 per core (i=32 KB, d=32 KB), L2 per core (256 KB), L3 per slot (15 MB). Memory controller (MCT): 4 channels, DDR3, max 340.8 Gbps. DDR3: 1333 Mtps, max 85.2 Gbps, CAS latency 9. I/O: PCIe v3.0, 8 Gtps, 126 Gbps (x16). NIC: 2x 10 GbE, 5 Gtps, PCIe v2.0 (x8), max 32 Gbps.]
Fig. 3: Hardware architecture description.
1. mem_I/O(L1n, L2n)
The CPU L1 and L2 data caches can move one cache line, i.e., 512 bits (64 bytes), in 4 and 12 clock cycles respectively, and their maximum sizes are 32 KB and 256 KB, respectively. Moreover, read and write operations on I/O buffers require on average 40 clock cycles. On the whole, the execution of this EO requires:

4 * ⌈min(32KB, L1n) / 64B⌉ + 12 * ⌈min(256KB, max(0, L1n - 32KB) + L2n) / 64B⌉ + 40 * ⌈(L1n + L2n) / 64B⌉

clock cycles and

⌈max(0, max(0, L1n - 32KB) + L2n - 256KB) / 64B⌉

L3 cache or DRAM accesses.
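The formula above translates directly into code; the Python helper below (our own transcription, with the function name as an assumption) computes the mem_I/O cost on this platform.

```python
from math import ceil

KB = 1024

def mem_io_cost(l1n, l2n):
    """Clock cycles and L3/DRAM accesses for the mem_I/O EO on the
    mapped Xeon E5-2630: 64-byte cache lines, 4-cycle L1 and 12-cycle
    L2 line moves, 40-cycle I/O buffer accesses, per the formulas in
    the text."""
    l1_bytes = min(32 * KB, l1n)
    l2_bytes = min(256 * KB, max(0, l1n - 32 * KB) + l2n)
    cycles = (4 * ceil(l1_bytes / 64)
              + 12 * ceil(l2_bytes / 64)
              + 40 * ceil((l1n + l2n) / 64))
    dram = ceil(max(0, max(0, l1n - 32 * KB) + l2n - 256 * KB) / 64)
    return cycles, dram

# Ethernet header in L1, rest of a 1500-byte frame toward L2:
cycles, dram = mem_io_cost(14, 1486)
```

For the 14 + 1486 byte split of the switch model this yields 1252 cycles and no L3/DRAM access, consistent with the 2630-cycle per-packet total derived in Section IV-C (two mem_I/O at 1252 each, plus parse and hash_lookup).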
2. parse(b)
Loading a 64-bit register requires 4 clock cycles if the data is in L1 cache or 12 clock cycles if it is in L2 cache; otherwise an additional L3 cache or DRAM memory access is required to retrieve a 64-byte line and store it in L1 or L2, respectively:

4 * ⌈b / 8B⌉ clock cycles {+ ⌈b / 64B⌉ L3 or DRAM accesses}

or

12 * ⌈b / 8B⌉ clock cycles {+ ⌈b / 64B⌉ L3 or DRAM accesses}
3. increase(b)
Whether a processor provides a dedicated increment instruction or one for adding a constant value to a 64-bit register, this EO requires 1 clock cycle to complete. However, thanks to pipelining, up to 3 such independent instructions can be executed in 1 clock cycle:

⌈0.33 * b / 8B⌉ clock cycles
4. array_access(es, max)
A direct array access needs to execute an "ADD" instruction (1 clock cycle) to compute the index and a "LOAD" instruction, resulting in a direct memory access and as many clock cycles as the number of CPU registers required to load the selected array element:

1 + ⌈es / 8B⌉ clock cycles + ⌈es / 64B⌉ DRAM accesses
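Like mem_I/O, the parse, increase and array_access formulas can be transcribed as small Python helpers (the function names are our own); parse is shown for the L1 and L2 cases, omitting the optional L3/DRAM term.

```python
from math import ceil

def parse_cost(b, in_l1=True):
    """Clock cycles for parsing a b-byte field: 4 cycles per 64-bit
    register load from L1, 12 from L2 (plus ceil(b/64) L3/DRAM
    accesses when the data is in neither, not modeled here)."""
    per_register = 4 if in_l1 else 12
    return per_register * ceil(b / 8)

def increase_cost(b):
    """Clock cycles to increase/decrease a b-byte field; up to 3
    independent increments can be pipelined per cycle."""
    return ceil(0.33 * b / 8)

def array_access_cost(es):
    """(clock cycles, DRAM accesses) for a direct access to an array
    element of es bytes: 1 cycle for the index ADD plus one cycle per
    8-byte register loaded."""
    return 1 + ceil(es / 8), ceil(es / 64)
```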
5. hash_lookup(N, HE, max, p)
We assume that a simple hash lookup is implemented
according to the pseudo-code described in [3] and shown in
Figure 4 for ease of reference.
Register $1-N: key components
Register $HL: hash length
Register $HP: hash array pointer
Register $HE: hash entry size
Register $Z: result
Pseudo code:
# hash key calculation
eor $tmp, $tmp
for i in 1 ... N
eor $tmp, $i
# key is available in $tmp
# calculate hash index from key
udiv $tmp2, $tmp, $HL
mls $tmp2, $tmp2, $HL, $tmp
# index is available in $tmp2
# index -> hash entry pointer
mul $tmp, $tmp2, $HE
add $tmp, $HP
# entry pointer available in $tmp
<prefetch entry to L1 memory>
# pointer to L1 entry -> $tmp2
# hash key check (entry vs. key)
for i in 1 ... N
ldr $Z, [$tmp2], #4
# check keys
cmp $i, $Z
bne collision
# no jump means matching keys
# pointer to data available in $Z
Fig. 4: Hash lookup pseudo-code.
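The index computation in the pseudo-code (XOR-fold the key components, take the remainder modulo the table length, then scale by the entry size) can be paraphrased in Python as follows; this is our own rendering of Figure 4, not code from the paper, and it omits the final key-check loop.

```python
def hash_index(key_components, table_len, entry_size):
    """Return the byte offset of the hash entry for the given key,
    following the XOR-based scheme of Fig. 4."""
    key = 0
    for component in key_components:  # eor $tmp, $i for each component
        key ^= component
    index = key % table_len           # udiv + mls compute the remainder
    return index * entry_size         # mul + add yield the entry offset

# Single-component key (the destination MAC as one value), 2M-entry
# table of 12-byte entries, as in the switch model:
offset = hash_index([0x1111], 2_000_000, 12)
```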
Considering that the hash entry needs to be loaded from memory into the L1 cache, a simple hash lookup would require approximately:

⌈(4 * N + 106 + 4 * ⌈HE / 8B⌉ + 4 * ⌈HE / 32B⌉) * (1 + p)⌉

clock cycles and

⌈⌈HE / 64B⌉ * (1 + p)⌉

DRAM accesses.
Otherwise, if the entry is already in the cache, the memory accesses and cache store operations are not required. Notice that in order for the whole table to fit in cache, its size should be limited to:

max * HE ≤ 32KB + 256KB = 288KB

So, in the average case, a mix of cache hits and misses will take place, depending on the specific traffic profile.
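These formulas, too, transcribe directly; a Python sketch (the helper name is our own) for the case in which the entry is loaded from memory:

```python
from math import ceil

def hash_lookup_cost(n, he, p):
    """Clock cycles and DRAM accesses for the hash_lookup EO when the
    entry must be fetched from memory, per the formulas above: n key
    components, he-byte entries, collision probability p."""
    cycles = ceil((4 * n + 106 + 4 * ceil(he / 8) + 4 * ceil(he / 32))
                  * (1 + p))
    dram = ceil(ceil(he / 64) * (1 + p))
    return cycles, dram

# MAC table lookup of the switch model: 1 key component, 12-byte
# entries, negligible collision probability.
cycles, dram = hash_lookup_cost(1, 12, 0)
```

This gives 122 cycles and 1 DRAM access, which accounts for the single DRAM access in the per-packet totals of Section IV-C.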
6. checksum(b)
Figure 5 shows sample assembly code to compute a checksum on an Intel® x86-64 processor. Assuming that the data on which the checksum is computed is not in the L1/L2 cache, according to the Intel® documentation [8], the execution of this code requires:

7 * ⌈b / 2⌉ + 8 clock cycles + ⌈b / 64B⌉ L3 or DRAM accesses

Register ECX: number of bytes b
Register EDX: pointer to the buffer
Register EBX: checksum

CHECKSUM_LOOP:
XOR EAX, EAX           ;EAX=0
MOV AX, WORD PTR [EDX] ;AX <- next word
ADD EBX, EAX           ;add to checksum
SUB ECX, 2             ;update number of bytes
ADD EDX, 2             ;update buffer pointer
CMP ECX, 1             ;check if ended
JG CHECKSUM_LOOP
MOV EAX, EBX           ;EAX=EBX=checksum
SHR EAX, 16            ;EAX=checksum>>16 (the carry)
AND EBX, 0xffff        ;EBX=checksum&0xffff
ADD EAX, EBX           ;EAX=(checksum>>16)+(checksum&0xffff)
MOV EBX, EAX           ;EBX=checksum
SHR EBX, 16            ;EBX=checksum>>16
ADD EAX, EBX           ;checksum+=(checksum>>16)
MOV checksum, EAX      ;checksum=EAX

Fig. 5: Sample Intel® x86 assembly code for checksum computation.
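The carry folding performed by the assembly can be mirrored in Python; the sketch below follows Figure 5 (a sum of little-endian 16-bit words with a double carry fold) rather than the full RFC 1071 checksum, so no final one's complement is taken, and the function name is our own.

```python
import struct

def checksum_fold(data: bytes) -> int:
    """Sum 16-bit little-endian words and fold the carries back in
    twice, mirroring the loop and the epilogue of Fig. 5. Assumes
    len(data) is even, as the assembly loop effectively does."""
    total = 0
    for (word,) in struct.iter_unpack("<H", data):
        total += word
    total = (total >> 16) + (total & 0xFFFF)  # first carry fold
    total += total >> 16                      # second fold
    return total & 0xFFFF
```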
7. sum(b)
On the considered architecture, the execution of this EO is
equivalent to the increase(b) EO. Please note that this is
not necessarily the case on every architecture.
TABLE II: Estimates for different packet sizes

Packet size (bytes) | Mpps  | Gbps
64                  | 12.05 | 7.91
128                 | 8.38  | 9.69
256                 | 5.21  | 11.24
512                 | 2.97  | 12.34
1024                | 1.59  | 13.01
1500                | 1.09  | 12.95
C. Performance Estimation
Applying the above mapping of EOs to the Ethernet switch model devised in Section IV-A and shown in Figure 2, we can estimate that forwarding a packet of the maximum size (1500 bytes) requires:

2630 clock cycles + 1 DRAM access

As a consequence, a single core of an Intel® Xeon E5-2630 operating at 2.8 GHz can process ≈ 1.09 Mpps, while the DDR3 memory can support 70.16 Mpps. The memory throughput is estimated considering that each packet requires a 12-byte memory access to read the hash table entry, and the time to read the second 8-byte word from memory is:

((CAS latency * 2) + 1) / data rate

As a result, a single core can process ≈ 12.95 Gbps.
If we consider minimum size (64-byte) packets (i.e., an unrealistic, worst-case scenario), the Ethernet switch requires:

238 clock cycles + 1 DRAM access

which means that a single core at 2.8 GHz can process ≈ 12.05 Mpps (while the load and throughput of the memory remain the same), which translates into ≈ 7.9 Gbps. Estimates calculated for different packet sizes are reported in Table II.
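A quick, CPU-bound sanity check of these figures can be scripted as follows; the helper is our own, and its raw results come out slightly below Table II, which may reflect rounding or a different assumed operating frequency.

```python
def core_throughput(cycles_per_packet, packet_bytes, clock_hz=2.8e9):
    """Packets/s and bits/s a single core can sustain when the CPU is
    the only bottleneck (DRAM is far from saturated here)."""
    pps = clock_hz / cycles_per_packet
    bps = pps * packet_bytes * 8
    return pps, bps

pps, bps = core_throughput(2630, 1500)      # ~1.06 Mpps, ~12.8 Gbps
pps64, bps64 = core_throughput(238, 64)     # ~11.8 Mpps, ~6.0 Gbps
```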
V. EXPERIMENTAL VALIDATION
In order to evaluate the accuracy of the estimates provided by the proposed modeling approach, in this section we show measurements made in a lab setting with software switch implementations running on the presented hardware platform. Three software switches are used in the experiments: Open vSwitch (OVS), eXtensible DataPath Daemon (xDPd) and Ericsson Research Flow Switch (ERFS). These switches are configured via the OpenFlow protocol to perform a single destination MAC address-based output port selection and forward packets on the selected interface. The execution platform is equipped with two Xeon E5-2630 processors, whose model is provided in Figure 3. To minimize the interference of the operating system drivers, the network interfaces are managed by the Intel® DPDK drivers. These drivers are designed for fast packet processing, enabling applications (i.e., the switch implementation in this case) to receive and send packets directly from/to a network interface card within the minimum possible number of CPU cycles. A separate PC with the same hardware configuration is used as a traffic generator, running the DPDK-based pktgen traffic generator, which is capable of saturating a 10GbE link with minimum size packets.
[Figure: throughput (Mpps) vs. packet size (bytes) for the estimate and for OVS-DPDK, ERFS and xDPd-0.6.]
Fig. 6: Performance with 100 flows.
The test traffic consists of Ethernet packets with different destination MAC addresses in order to prevent inter-packet caching. The total number of packets sent for each test is equal to 100,000,000 * transmission rate (in Gbps). The generator PC is also used to compute statistics on the received packets.
Figure 6 shows the results obtained using each of the above listed switches and generating 100 concurrent flows with different destination MAC addresses. From the results it is clear that in this scenario the switches can achieve a throughput up to the link capacity, except with very small packets. The estimated value is above the measured value, as expected, since the estimation considers the hardware computational capability and not the transmission rate of the physical links. For small packets, the fully-optimized pipeline of ERFS outperforms xDPd and OVS. With 64-byte packets the measured throughput of ERFS significantly exceeds the estimated value, which in turn is above the measured values for the other two switches.
In order to further test the accuracy of the estimates, we run additional tests with bi-directional flows. The generated traffic has the same characteristics as in the previous tests, and in this case we calculate aggregate statistics on all output interfaces. In this way the traffic processed by the switch can hypothetically reach 20 Gbps. We test this configuration with increasing packet sizes, until the link capacity is reached. The results obtained, which involved 2 different cores, are presented in Figure 7, together with the values estimated with the modeling approach. As correctly estimated, a rate of only around 22 Mpps can be reached with small packets. As is visible, version 0.6 of xDPd has internal scalability problems, while the other two switches are capable of scaling as needed. The above results show that the model provides a good estimation of the throughput limit. In the case of bi-directional flows, the computed estimation has a 9% error for 64-byte packets, 0.2% for 128-byte packets and 6% for 256-byte packets. The error increases for bigger packets because the computational capabilities, which are what the model takes into account, are no longer the factor limiting performance.
[Figure: aggregate throughput (Mpps) vs. packet size (bytes) for the estimate and for OVS-DPDK, ERFS and xDPd-0.6 with bi-directional traffic.]
Fig. 7: Performance with 100 flows and bi-directional traffic (using 2 cores).
The results show that the proposed modeling approach provides the means to produce a valuable estimation of network function performance. This methodology will be further improved by considering also the effects of packet interactions and concurrency.
ACKNOWLEDGMENT
This work was conducted within the framework of the FP7 UNIFY project (http://www.fp7-unify.eu/), which is partially funded by the Commission of the European Union. Study sponsors had no role in writing this report. The views expressed do not necessarily represent the views of the authors' employers, the UNIFY project, or the Commission of the European Union.
REFERENCES
[1] ETSI ISG for NFV, "ETSI GS NFV-INF 001, Network Functions Virtualisation (NFV); Infrastructure Overview," http://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_NFV-INF001v010101p.pdf [Online; accessed 19-May-2015].
[2] M. Baldi and A. Sapio, "A network function modeling approach for performance estimation," in 2015 IEEE 1st International Forum on Research and Technologies for Society and Industry Leveraging a better tomorrow (RTSI 2015), Torino, Italy, Sep. 2015.
[3] G. Pongrácz, L. Molnár, Z. L. Kis, and Z. Turányi, "Cheap silicon: a myth or reality? Picking the right data plane hardware for software defined networking," in Proceedings of the Second ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking. ACM, 2013, pp. 103-108.
[4] D. Joseph and I. Stoica, "Modeling middleboxes," IEEE Network, vol. 22, no. 5, pp. 20-25, 2008.
[5] A. Gember, A. Krishnamurthy, S. S. John, R. Grandl, X. Gao, A. Anand, T. Benson, A. Akella, and V. Sekar, "Stratos: A network-aware orchestration layer for middleboxes in the cloud," Technical Report, 2013.
[6] J. Soares, M. Dias, J. Carapinha, B. Parreira, and S. Sargento, "Cloud4NFV: A platform for virtual network functions," in Cloud Networking (CloudNet), 2014 IEEE 3rd International Conference on. IEEE, 2014, pp. 288-293.
[7] F. Lucrezia, G. Marchetto, F. G. O. Risso, and V. Vercellone, "Introducing network-aware scheduling capabilities in OpenStack," in Network Softwarization (NetSoft), 2015 IEEE 1st Conference on, 2015.
[8] "Intel 64 and IA-32 Architectures Optimization Reference Manual," http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-optimization-manual.html [Online; accessed 19-May-2015].