The NECOS project focuses on cloud slicing as a novel approach to provide uniform management and automation over federated data centers and network domains. It introduces lightweight processes and technologies to deliver simplified and resource-efficient usage of currently separated computing, connectivity, and storage domains. The goal is to address limitations of current cloud infrastructures and respond to new service demands through a reference implementation of "Slice as a Service" based on resource virtualization.
We have the Bricks to Build Cloud-native Cathedrals - But do we have the mortar? by Nane Kratzke
This is some input for a panel discussion about "Challenges of Cloud Computing-based Systems" that I attended at the 9th International Conference on Cloud Computing, GRIDs, and Virtualization (CLOUD COMPUTING 2018) in Barcelona, Spain, in February 2018.
Cloud-native applications (CNA) are increasingly built according to microservice and independent system architecture (ISA) approaches. ISA involves two architecture layers: the macro and the micro architecture layer. Software engineering outcomes on the micro layer are often distributed in a standardized form as self-contained deployment units (so-called container images). Plenty of programming languages exist to implement these units: Java, C, C++, JavaScript, Python, R, PHP, Ruby, ... (the list is almost endless). But on the macro layer, one might mention TOSCA and little more. TOSCA is an OASIS deployment and orchestration standard language for describing a topology of cloud-based web services, their components, relationships, and the processes that manage them. This works for static deployments. However, CNA are elastic and self-adaptive - almost the exact opposite of what can be defined efficiently using TOSCA. For these kinds of scenarios one might mention Kubernetes or Docker Swarm, container orchestrators that are intentionally built to operate elastic services formed of containers. But these operating platforms do not provide expressive and pragmatic programming languages covering the macro layer of cloud-native applications.
So it seems there is a gap, and the question arises: do we need further macro layer languages for CNA, and if so, of what kind?
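To make the macro/micro distinction concrete, here is a minimal sketch of what an elastic, policy-carrying macro-layer description might look like. Everything in it is hypothetical: the `Service` class, its fields, and the linear scaling rule are illustrative assumptions, not part of TOSCA, Kubernetes, or any existing language.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """One micro-layer deployment unit (a container image) plus macro-layer policy."""
    name: str
    image: str
    min_replicas: int = 1
    max_replicas: int = 1
    depends_on: list = field(default_factory=list)

def scale(service: Service, cpu_load: float) -> int:
    """Toy elasticity rule: scale replicas with CPU load, clamped to declared bounds."""
    target = round(service.min_replicas
                   + cpu_load * (service.max_replicas - service.min_replicas))
    return max(service.min_replicas, min(service.max_replicas, target))

# A tiny application topology: a web front end depending on an API tier.
api = Service("api", "example/api:1.0", min_replicas=2, max_replicas=10)
web = Service("web", "example/web:1.0", min_replicas=1, max_replicas=4,
              depends_on=[api])
```

The point is not this particular rule, but that the topology and its elastic behavior are expressed together in one executable description - exactly what the static-deployment languages and the container orchestrators each provide only half of.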
GaruaGeo: Global Scale Data Aggregation in Hybrid Edge and Cloud Computing En... by Otávio Carvalho
Research work published at the 9th International Conference on Cloud Computing and Services Science (CLOSER 2019), held in Heraklion, Crete.
The combination of Edge Computing devices and Cloud Computing resources brings the best of both worlds: data aggregation closer to the source and scalable resources to grow the network on demand. However, the ability to leverage increasingly powerful edge nodes to decentralize data processing and aggregation is still a significant challenge for both industry and academia. In this work, we extend the Garua platform to analyze the impact of a model for data aggregation on a global-scale smart grid application dataset. The platform is extended to support global data aggregators placed near the edge nodes where data is being collected. This way, it is possible not only to aggregate data at the edge of the network but also to pre-process data in nearby geographic areas before sending it to be aggregated globally by centralization nodes. The results of this work show that the implemented testbed application, through edge node aggregation, geographically distributed data aggregators, and messaging windows, can achieve collection rates above 400 million measurements per second.
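The hierarchical, window-based aggregation idea can be sketched in a few lines. This is not the Garua/GaruaGeo implementation: the function names and the fixed-size tumbling-window rule are assumptions chosen only to illustrate why per-window partial sums from edge nodes can be merged regionally and then globally without reprocessing raw measurements.

```python
from collections import defaultdict

def window_of(timestamp: float, width: float = 10.0) -> int:
    """Assign a measurement to a fixed-size (tumbling) time window."""
    return int(timestamp // width)

def edge_aggregate(measurements):
    """Edge-node step: sum readings per window instead of forwarding raw points."""
    windows = defaultdict(float)
    for ts, value in measurements:
        windows[window_of(ts)] += value
    return dict(windows)

def merge(*partials):
    """Regional/global step: merge per-window partial sums from many nodes."""
    merged = defaultdict(float)
    for partial in partials:
        for w, total in partial.items():
            merged[w] += total
    return dict(merged)
```

Because sums are associative, `merge` can be applied at any level of the hierarchy - edge, nearby geographic area, or global centralization node - and yields the same totals.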
Provably Secure Key-Aggregate Cryptosystems with Broadcast Aggregate Keys for... by Prasadu Peddi
This document proposes a provably secure key-aggregate cryptosystem that allows for efficient online data sharing on the cloud. It allows data owners to encrypt data and delegate decryption rights to users via a single broadcast aggregate key, while retaining the ability to revoke access. The proposed system provides data confidentiality, user revocation, and scalability, and prevents collusion. It is proven to be semantically secure and collusion resistant under appropriate security assumptions. Hardware requirements include a Pentium IV 2.4 GHz processor and 512 MB RAM, while software requirements include Windows XP/7, Java/J2EE, NetBeans 7.4, and a MySQL database.
Grid computing involves distributing computing resources across a network to tackle large problems. The Worldwide LHC Computing Grid (WLCG) was established to support the Large Hadron Collider (LHC) experiment, which produces around 15 petabytes of data annually. The WLCG uses a four-tiered model, with raw data stored at Tier-0 (CERN), copies distributed to Tier-1 data centers, computational resources provided by Tier-2 centers, and Tier-3 facilities providing additional analysis capabilities. This distributed model has proven effective in supporting the first year of LHC data collection and analysis through globally shared computing resources.
DEEP-Hybrid-DataCloud is a Horizon 2020 project that aims to promote intensive computing services for analyzing large datasets through a hybrid cloud approach. It received funding from the European Union to develop specialized computing infrastructure and integrate intensive computing services. The project involves nine academic partners and one industrial partner across six European countries. It will define a "DEEP as a Service" solution and evolve existing INDIGO components to better support intensive computing workloads and specialized hardware.
This document provides an overview and introduction to grid computing concepts. It discusses the benefits of grid computing such as exploiting underutilized resources and enabling collaboration. It also describes some key computational grid projects including a national fusion grid pilot project. The document outlines the layered architecture of grid systems and references some foundational projects and standards like Globus Toolkit and Global Grid Forum. Finally, it introduces the concepts of OGSA and OGSI which provide standard interfaces and behaviors for distributed system management in grid environments.
This document summarizes the PhD work of Pradeeban Kathiravelu on improving scalability and resilience in multi-tenant distributed clouds. It describes two approaches: 1) SMART uses SDN to provide differentiated quality of service and service level agreements by dynamically diverting and cloning priority network flows. 2) Mayan componentizes big data services as microservices that can be executed in a network-aware and scalable way across distributed clouds. Evaluation shows these approaches improve speedup and ensure SLAs for critical flows compared to network-agnostic distributed execution.
Towards the Intelligent Internet of Everything by RECAP Project
In this presentation, Prof. Theo Lynn (DCU) presented observations on Multi-disciplinary Challenges in Intelligent Systems Research at the RECAP consortium meeting in Dublin, Ireland, on 06 November 2018.
This is a poster I presented at the ACRO Summer School at Karlstad University; it presents my PhD work.
More details: http://kkpradeeban.blogspot.com/2017/07/my-first-polygonal-journey.html
This document describes SENDIM, a middleware for integrating network simulation, emulation, and deployment. SENDIM uses Software-Defined Networking (SDN) principles to enable incremental development from simulation to emulation to live deployment. It leverages configuration management tools and checkpointing/versioning through OSGi to minimize repetition across stages. The goal is to streamline the process of developing and testing cloud networks.
Dynamic module deployment in a fog computing platform by 霈萱 蔡
Several applications, such as smart cities, smart homes, and smart hospitals, adopt Internet of Things (IoT) networks to collect data from IoT devices. The rapidly growing number of IoT devices congests the networks, and the large amounts of data streamed to data centers for further analysis overload those data centers. In this paper, we implement a fog computing platform that leverages end devices, edge networks, and data centers to serve IoT applications, focusing on dynamically pushing programs to the devices. The programs pushed to the devices pre-process the data before transmitting them over the Internet, which reduces the network traffic and the load on data centers. We survey the existing platforms and virtualization technologies and leverage them to implement the fog computing platform. Moreover, we formulate a deployment problem for the programs. We propose an efficient heuristic deployment algorithm to solve the problem, and also implement an optimal algorithm for comparison. We conduct experiments with a real testbed to evaluate our algorithms and fog computing platform. The proposed algorithm shows near-optimal performance, deviating from the optimal algorithm by at most 2% in terms of satisfied requests. Moreover, the proposed algorithm runs in real time and is scalable: it computes 1000 requests with 500 devices in under 2 seconds. Last, the implemented fog computing platform achieves real-time deployment speed: it deploys 20 requests in under 10 seconds.
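The flavor of such a deployment problem can be illustrated with a toy greedy heuristic. This is not the paper's algorithm: the request/capacity model, the largest-first ordering, and the best-fit rule below are illustrative assumptions only.

```python
def greedy_deploy(requests, capacities):
    """Toy heuristic: place each requested program on the feasible device with
    the most remaining capacity, handling the largest requests first.
    requests: list of (request_id, demand); capacities: dict device -> capacity.
    Returns (placement dict, list of unsatisfied request ids)."""
    remaining = dict(capacities)
    placement, unsatisfied = {}, []
    for req_id, demand in sorted(requests, key=lambda r: -r[1]):
        # Pick the device with the most spare capacity for this request.
        best = max(remaining, key=remaining.get)
        if remaining[best] >= demand:
            placement[req_id] = best
            remaining[best] -= demand
        else:
            unsatisfied.append(req_id)
    return placement, unsatisfied
```

Heuristics of this shape run in near-linear time per request, which is why they can stay real-time at the scale the paper reports, while an exact optimal algorithm serves as the accuracy baseline.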
http://kkpradeeban.blogspot.com/2015/11/cassowary-middleware-platform-for.html
Abstract: Smart devices sense the environment through their sensors and leverage the contextual information derived from the sensor readings to satisfy system requirements such as energy and carbon efficiency and user preferences. Smart buildings are composed of smart devices, the devices' local sensors, and device controllers in a coordinated network. Software-Defined Networking (SDN) offers a centralized view of all the networking data plane elements to a logically centralized controller. While smart buildings and ubiquitous computing are heavily researched, recent advancements in networking are not exploited in achieving tenant-aware smart buildings.
This paper describes the design, prototype implementation, and preliminary assessment of Cassowary, a middleware platform for Context-Aware Smart Buildings with Software-Defined Sensor Networks. By extending the SDN paradigm and leveraging message-oriented middleware protocols to seamlessly connect the smart devices of the buildings to the centralized SDN controller, Cassowary enables context-aware Software-Defined Smart Buildings.
This document presents a time-aware publish/subscribe model for content dissemination in delay-tolerant networks (DTNs). It proposes TACO-DTN, a prototype that uses temporal descriptions of topics to match publications and subscriptions based on time. Simulations show it effectively routes events in a mixed DTN topology. Future work includes real mobility traces and applications to validate the approach.
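The core of time-aware matching can be sketched very compactly. This is a simplified illustration in the spirit of TACO-DTN, not the prototype's actual algorithm: the triple representation and the interval-overlap rule are assumptions.

```python
def matches(publication, subscription):
    """Deliver a publication only if the topic agrees and its validity interval
    overlaps the subscriber's interval of interest.
    Each argument is a (topic, start_time, end_time) triple."""
    p_topic, p_start, p_end = publication
    s_topic, s_start, s_end = subscription
    # Two closed intervals overlap iff each starts before the other ends.
    return p_topic == s_topic and p_start <= s_end and s_start <= p_end
```

In a DTN setting the same temporal descriptions can also drive routing: a relay node only carries an event toward subscribers whose interest intervals it can still reach in time.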
Sensors - The Sparkplug in the Engine of the Internet of Things by RECAP Project
This is a presentation on sensors and the Internet of Things (IoT) delivered by Prof. Theo Lynn (DCU) at the Siemens-IRDG Conference in Dublin, Ireland on 13 June 2018.
The document summarizes the results and lessons learned from the Helix Nebula Science Cloud (HNSciCloud) project. The key points are:
1. HNSciCloud was a joint procurement by several research organizations to deploy cloud services from commercial providers through a competitive tender process. This resulted in services from T-Systems and RHEA being selected.
2. The services were deployed in a hybrid cloud model using the commercial providers as well as the procurers' own data centers. A range of scientific workloads were supported.
3. Lessons learned included the need for a test validation suite, the challenges of repatriating data, and that vouchers provided a means to give researchers limited access to the commercial cloud services.
This document discusses grid architecture design. It covers building grid architectures, different types of grids such as computational and data grids, and common grid topologies including intra-, extra-, and inter-grids. It also outlines the phases and activities in grid design, such as deciding the grid type and using a methodology of workshops, documentation, and prototyping. Finally, it discusses benefits of grids such as exploiting underutilized resources, enabling parallel processing and collaboration, improving access to and balancing of resources, and better reliability and management.
This is the presentation I gave at the EMJD-DC Spring Event 2017 in Brussels to discuss my research. http://kkpradeeban.blogspot.be/2017/05/emjd-dc-spring-event-2017.html
An Efficient Cluster-Tree Based Data Collection Scheme for Large Mobile Wirel... by Nexgen Technology
CIDT is a cluster-based data collection scheme for large mobile wireless sensor networks. It constructs a data collection tree based on cluster head locations to minimize energy usage, end-to-end delay, and traffic for cluster heads. The data collection nodes in the tree simply collect and deliver data packets to the sink. Simulation results show CIDT provides better quality of service than existing methods in terms of energy consumption, throughput, end-to-end delay, and network lifetime for mobile sensor networks.
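A cluster-head collection tree of this kind can be illustrated with a small greedy construction. This is not the CIDT paper's exact algorithm: attaching each head to the nearest node already in the tree (starting from the sink) is an illustrative assumption that captures why location-based trees keep links, and hence transmission energy, small.

```python
import math

def build_collection_tree(sink, cluster_heads):
    """Grow a collection tree from the sink: repeatedly attach the cluster head
    closest to the current tree. Positions are (x, y) tuples; returns a dict of
    parent pointers keyed by position."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    in_tree = [sink]
    parent = {}
    heads = list(cluster_heads)
    while heads:
        # Among all (head, tree-node) pairs, pick the shortest attachment link.
        head, attach = min(
            ((h, t) for h in heads for t in in_tree),
            key=lambda pair: dist(pair[0], pair[1]),
        )
        parent[head] = attach
        in_tree.append(head)
        heads.remove(head)
    return parent
```

Data collection nodes then simply forward packets along the parent pointers toward the sink, so the per-hop work at each cluster head stays minimal.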
This is a RECAP project overview slide deck prepared by Thang Le Duc (UMU), P-O Östberg (UMU), and Tomas Brännström (Tieto). It starts with an introduction and continues with a section on challenges for a self-orchestrated, self-remediated cloud system. It then presents the RECAP vision and use cases and finishes with a conclusion.
My presentation describing my thesis work at the 2nd Portugal|UT Austin Summer School in Systems and Networking and the EMJD-DC Spring Event 2016, June 3, 2016, Costa da Caparica, Portugal.
Cloud computing allows for location-independent computing resources that can be accessed on demand. It has evolved from earlier technologies like utility computing and now commonly uses a client-server model. The key features of cloud computing include agility, cost savings, scalability, and reliability, though privacy and security concerns still need to be addressed.
The document discusses the objectives and timeline of the second Expert Group on the European Open Science Cloud (EOSC). The group aims to advise on practical implementation of EOSC by end of 2018, including governance and financing. Key dates include publishing an interim report in March 2018 and a final report in December 2018. The document also provides examples of EOSC in practice through the Helix Nebula Science Cloud and outlines some priorities for EOSC, including interoperability, capacity building, and analyzing national funding streams.
Summary
The Cytoscape Cyberinfrastructure (CI) extends the successful Cytoscape development and community model by enabling network biologists to contribute and leverage microservices deployable at scale. The CI solves many of Cytoscape’s limitations while also delivering novel and dynamic functionality to both Cytoscape and standalone workflows, thus further empowering the already vital network biology community.
Abstract
Cytoscape is an indispensable tool for network data analysis and visualization. One of Cytoscape’s greatest strengths is that it is powered by a vibrant array of developer-contributed apps. However, as network biologists’ requirements evolve, Cytoscape is challenged not only to keep pace, but to lead new and existing developers to create even greater value. Currently, multiscale and multifaceted networks push the memory limits of a Cytoscape workstation, while complex calculations such as Network Based Stratification and Network Based GWAS strain workstation processors. Increasingly, users demand support for collaborative projects, reproducible workflows, and interoperability with external tool chains. Finally, economic pressures favor solutions that promote code and algorithm reusability and evolvability.
In response, we have created the Cytoscape Cyberinfrastructure (CI), which is both an Internet-scale distributed system (based on Microservices [1]) and the network biology community it serves. Its mission is to enable and encourage network biologists to create and deploy high-quality, innovative, and scalable services focusing on network-based computation, collaboration, and visualization.
Microservices can be written in any language, and are highly testable and evolvable. They can run on servers ranging from a single thread to a large cloud-based cluster. They can easily be reused in reproducible workflows or can serve as components in larger services. The CI links microservices via a lightweight REST-based aspect-oriented interchange protocol (called CX), which enables tailored data streams while supporting service innovation via evolvable standards. CI infrastructure services support user authentication, long-duration job execution, and a service repository that enables researchers to publish their services or discover services published by others. This model builds on the successful Cytoscape app community, which is based on similar mechanisms though at the scale of individual workstations.
Prominent examples of microservices include NDEx [2] (a repository for biological networks), NodeWalker (which uses heat dispersion to identify the most relevant subnetworks containing a given set of genes), cyNetShare [3] (which visualizes a network in a browser) and Cytoscape itself (which can also call CI services). Interfaces are available for Python, IPython, R and Matlab. Future work includes adding clustering, analysis, layout, publishing and display microservices and interfaces to Galaxy and Taverna workflows.
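The composition model can be sketched as a chain of services that each consume and produce a serialized network. This is a generic illustration only: the CX format's actual structure is not reproduced, and both service bodies below (a seed-based subnetwork filter standing in for a NodeWalker-style service, and a trivial layout service) are hypothetical.

```python
import json

def subnetwork_service(payload: str) -> str:
    """Hypothetical service: keep only the nodes listed under 'seeds'."""
    net = json.loads(payload)
    seeds = set(net["seeds"])
    net["nodes"] = [n for n in net["nodes"] if n in seeds]
    return json.dumps(net)

def layout_service(payload: str) -> str:
    """Hypothetical service: attach simple coordinates to each node."""
    net = json.loads(payload)
    net["layout"] = {n: [i, 0] for i, n in enumerate(net["nodes"])}
    return json.dumps(net)

def pipeline(payload, services):
    """Chain services the way a workflow engine would stream data between
    REST endpoints: each service's output is the next service's input."""
    for service in services:
        payload = service(payload)
    return payload
```

Because each stage only reads the aspects it needs and passes the rest through, services stay independently testable and reusable across workflows - the property the CI's interchange protocol is designed to preserve at Internet scale.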
The document summarizes the evolution of the semantic grid from its origins in 2001 to the present. It describes how early work on the semantic grid aimed to close the gap between grid applications and the vision of global e-science collaboration. Key developments included linking grid services with semantic web technologies to enable automation and advanced functionality through machine-processable descriptions. The semantic grid is now seen as an important approach for virtual research environments that support both formal and informal scientific processes through collaborative tools and persistent representations of discussions.
3rd Workshop on Advances in Slicing for Softwarized Infrastructures (S4SI 2020)
Panel: Network Slicing is multifaceted but does its approach and understanding need to be fragmented?
Abstract: Network Slicing keeps growing in significance in the academic and industrial communities. Network Slicing can be defined from different functional or behavioral perspectives, as well as from different viewpoints depending on the stakeholder (e.g., verticals, solution providers, infrastructure owners) and the technical domain (e.g., cloud data centers, radio access, packet/optical transport networks). Standardization bodies and open source projects are involved in some forms of network slicing support. How far are these views from each other? Is fragmentation leading to incompatible approaches, or is there some hope of convergence, at least at the conceptual level? What is the next frontier in Network Slicing? These and other questions will be thrown to our panel experts after they introduce their lightning viewpoints.
Moderator: Christian Esteve Rothenberg, University of Campinas, Brazil
Panel Members
Constantine Polychronopoulos, Juniper Networks, USA
Uma Chunduri, Futurewei, USA
Slawomir Kuklinski, Orange Poland and Warsaw University of Technology, Poland
Stuart Clayman, University College London, UK
Augusto Venancio Neto, Federal University of Rio Grande do Norte, Brazil
Talk at WRNP/SBRC on 5 May 2018 (https://wrnp.rnp.br/programacao) presenting the state of affairs in Network Service Orchestration (NSO) and its role in the evolving landscape of network softwarization. Based on the NSO survey: https://arxiv.org/abs/1803.06596
This document provides an overview and introduction to grid computing concepts. It discusses the benefits of grid computing such as exploiting underutilized resources and enabling collaboration. It also describes some key computational grid projects including a national fusion grid pilot project. The document outlines the layered architecture of grid systems and references some foundational projects and standards like Globus Toolkit and Global Grid Forum. Finally, it introduces the concepts of OGSA and OGSI which provide standard interfaces and behaviors for distributed system management in grid environments.
This document summarizes the PhD work of Pradeeban Kathiravelu on improving scalability and resilience in multi-tenant distributed clouds. It describes two approaches: 1) SMART uses SDN to provide differentiated quality of service and service level agreements by dynamically diverting and cloning priority network flows. 2) Mayan componentizes big data services as microservices that can be executed in a network-aware and scalable way across distributed clouds. Evaluation shows these approaches improve speedup and ensure SLAs for critical flows compared to network-agnostic distributed execution.
Towards the Intelligent Internet of EverythingRECAP Project
In this presentation, Prof. Theo Lynn (DCU) was talking about observations on Multi-disciplinary Challenges in Intelligent Systems Research, at the RECAP consortium meeting in Dublin, Ireland on 06 November 2018.
This is a poster I presented at ACRO Summer School at Karlstad University. This presents my PhD work.
More details: http://kkpradeeban.blogspot.com/2017/07/my-first-polygonal-journey.html
This document describes SENDIM, a middleware for integrating network simulation, emulation, and deployment. SENDIM uses Software-Defined Networking (SDN) principles to enable incremental development from simulation to emulation to live deployment. It leverages configuration management tools and checkpointing/versioning through OSGi to minimize repetition across stages. The goal is to streamline the process of developing and testing cloud networks.
Dynamic module deployment in a fog computing platform霈萱 蔡
Several applications, such as smart cities, smart homes and smart hospitals adopt Internet of Things (IoT) networks to collect data from IoT devices. The incredible growing speed of the number of IoT devices congests the networks and the large amount of data, which are streamed to data centers for further analysis, overload the data centers. In this paper, we implement a fog computing platform that leverages end devices, edge networks, and data centers to serve the IoT applications. In this paper, we focus on implementing a fog computing platform, which dynamically pushes programs to the devices. The programs pushed to the devices pre-process the data before transmitting them over the Internet, which reduces the network traffic and the load of data centers. We survey the existing platforms and virtualization technologies, and leverage them to implement the fog computing platform. Moreover, we formulate a deployment problem of the programs. We propose an efficient heuristic deployment algorithm to solve the problem. We also implement an optimal algorithm for comparisons. We conduct experiments with a real testbed to evaluate our algorithms and fog computing platform. The proposed algorithm shows near-optimal performance, which only deviates from optimal algorithm by at most 2% in terms of satisfied requests. Moreover, the proposed algorithm runs in real-time, and is scalable. More precisely, it computes 1000 requests with 500 devices in <; 2 seconds. Last, the implemented fog computing platform results in real-time deployment speed: it deploys 20 requests <; 10 seconds.
http://kkpradeeban.blogspot.com/2015/11/cassowary-middleware-platform-for.html
Abstract: Smart devices sense the environment through their sensors and leverage the contextual information derived from the sensor readings to satisfy system requirements such as energy and carbon efficiency and user preferences. Smart buildings compose of smart devices, and local sensors of the devices and device controllers in a coordinated network. Software-Defined Networking (SDN) offers a centralized view of the entire networking data plane elements to a logically centralized controller. While smart buildings and ubiquitous computing are heavily researched, later advancements in networking are not exploited in achieving tenant-aware smart buildings.
This paper describes the research for the design, prototype implementation, and preliminary assessments of Cassowary, a middleware platform for Context-Aware Smart Buildings with Software-Defined Sensor Networks. By extending SDN paradigm and leveraging the message oriented middleware protocols to seamlessly connect the smart devices of the buildings to the centralized SDN controller, Cassowary enables context-aware Software-Defined Smart Buildings.
This document presents a time-aware publish/subscribe model for content dissemination in delay-tolerant networks (DTNs). It proposes TACO-DTN, a prototype that uses temporal descriptions of topics to match publications and subscriptions based on time. Simulations show it effectively routes events in a mixed DTN topology. Future work includes real mobility traces and applications to validate the approach.
Sensors - The Sparkplug in the Engine of the Internet of ThingsRECAP Project
This is a presentation on sensors and the Internet of Things (IoT) delivered by Prof. Theo Lynn (DCU) at the Siemens-IRDG Conference in Dublin, Ireland on 13 June 2018.
The document summarizes the results and lessons learned from the Helix Nebula Science Cloud (HNSciCloud) project. The key points are:
1. HNSciCloud was a joint procurement by several research organizations to deploy cloud services from commercial providers through a competitive tender process. This resulted in services from T-Systems and RHEA being selected.
2. The services were deployed in a hybrid cloud model using the commercial providers as well as the procurers' own data centers. A range of scientific workloads were supported.
3. Lessons included the need for a test validation suite, repatriating data, and that vouchers provided a means to give researchers limited access to the commercial
This document discusses grid architecture design. It covers building grid architectures, different types of grids like computational and data grids, common grid topologies including intra, extra, and inter grids. It also outlines the phases and activities in grid design like deciding the grid type, using a methodology of workshops, documentation, and prototyping. Finally, it discusses benefits of grids such as exploiting underutilized resources, enabling parallel processing and collaboration, improving access to and balancing of resources, and better reliability and management.
This is the presentation I did to the audience of EMJD-DC Spring Event 2017 Brussels to discuss my research. http://kkpradeeban.blogspot.be/2017/05/emjd-dc-spring-event-2017.html
An Efficient Cluster-Tree Based Data Collection Scheme for Large Mobile Wirel... - Nexgen Technology
CIDT is a cluster-based data collection scheme for large mobile wireless sensor networks. It constructs a data collection tree based on cluster head locations to minimize energy usage, end-to-end delay, and traffic for cluster heads. The data collection nodes in the tree simply collect and deliver data packets to the sink. Simulation results show CIDT provides better quality of service than existing methods in terms of energy consumption, throughput, end-to-end delay, and network lifetime for mobile sensor networks.
This is a RECAP project overview slide deck prepared by Thang Le Duc (UMU), P-O Östberg (UMU) and Tomas Brännström (Tieto). It starts with an introduction and continues with a section on challenges for a self-orchestrated, self-remediated cloud system. It then presents the RECAP vision and use cases and finishes with a conclusion.
My presentation describing my thesis work at the 2nd Portugal|UT Austin summer school in systems and networking and the EMJD-DC spring event 2016, June 3, 2016, Costa da Caparica, Portugal.
Cloud computing allows for location-independent computing resources that can be accessed on demand. It has evolved from earlier technologies like utility computing and now commonly uses a client-server model. The key features of cloud computing include agility, cost savings, scalability, and reliability, though privacy and security concerns still need to be addressed.
The document discusses the objectives and timeline of the second Expert Group on the European Open Science Cloud (EOSC). The group aims to advise on practical implementation of EOSC by end of 2018, including governance and financing. Key dates include publishing an interim report in March 2018 and a final report in December 2018. The document also provides examples of EOSC in practice through the Helix Nebula Science Cloud and outlines some priorities for EOSC, including interoperability, capacity building, and analyzing national funding streams.
Summary
The Cytoscape Cyberinfrastructure (CI) extends the successful Cytoscape development and community model by enabling network biologists to contribute and leverage microservices deployable at scale. The CI solves many of Cytoscape’s limitations while also delivering novel and dynamic functionality to both Cytoscape and standalone workflows, thus further empowering the already vital network biology community.
Abstract
Cytoscape is an indispensable tool for network data analysis and visualization. One of Cytoscape’s greatest strengths is that it is powered by a vibrant array of developer-contributed apps. However, as network biologists’ requirements evolve, Cytoscape is challenged not only to keep pace, but to lead new and existing developers to create even greater value. Currently, multiscale and multifaceted networks push the memory limits of a Cytoscape workstation, while complex calculations such as Network Based Stratification and Network Based GWAS strain workstation processors. Increasingly, users demand support for collaborative projects, reproducible workflows, and interoperability with external tool chains. Finally, economic pressures favor solutions that promote code and algorithm reusability and evolvability.
In response, we have created the Cytoscape Cyberinfrastructure (CI), which is both an Internet-scale distributed system (based on Microservices [1]) and the network biology community it serves. Its mission is to enable and encourage network biologists to create and deploy high quality, innovative and scalable services focusing on network-based computation, collaboration and visualization.
Microservices can be written in any language, and are highly testable and evolvable. They can run on servers ranging from a single thread to a large cloud-based cluster. They can easily be reused in reproducible workflows or can serve as components in larger services. The CI links microservices via a lightweight REST-based aspect-oriented interchange protocol (called CX), which enables tailored data streams while supporting service innovation via evolvable standards. CI infrastructure services support user authentication, long duration job execution, and a service repository that enables researchers to publish their services or discover services published by others. This model builds on the successful Cytoscape app community, which is based on similar mechanisms though at the scale of individual workstations.
Prominent examples of microservices include NDEx [2] (a repository for biological networks), NodeWalker (which uses heat dispersion to identify the most relevant subnetworks containing a given set of genes), cyNetShare [3] (which visualizes a network in a browser) and Cytoscape itself (which can also call CI services). Interfaces are available for Python, IPython, R and Matlab. Future work includes adding clustering, analysis, layout, publishing and display microservices and interfaces to Galaxy and Taverna workflows.
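To make the microservice idea above concrete, here is a toy, self-contained sketch: a tiny REST service that answers a network query over HTTP. The route, payload shape, and service itself are hypothetical, not the actual Cytoscape CI or CX protocol.

```python
# Purely illustrative REST microservice sketch; not the real CI/CX API.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

class NodeCountService(BaseHTTPRequestHandler):
    """Toy network-analysis microservice: counts distinct nodes in a posted edge list."""

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        nodes = {n for edge in body["edges"] for n in edge}
        reply = json.dumps({"node_count": len(nodes)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep the example's output quiet
        pass

# Run the service on an ephemeral local port.
server = HTTPServer(("127.0.0.1", 0), NodeCountService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: POST a tiny gene network and read back the JSON answer.
payload = json.dumps({"edges": [["TP53", "MDM2"], ["MDM2", "CDKN1A"]]}).encode()
req = request.Request(
    f"http://127.0.0.1:{server.server_port}/analyze",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with request.urlopen(req) as resp:
    answer = json.loads(resp.read())
server.shutdown()
print(answer["node_count"])  # 3 distinct genes in the toy network
```

Because the interface is plain HTTP plus JSON, the same client code could be written in Python, R, or Matlab, which is the language-agnosticism the CI relies on.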
The document summarizes the evolution of the semantic grid from its origins in 2001 to the present. It describes how early work on the semantic grid aimed to close the gap between grid applications and the vision of global e-science collaboration. Key developments included linking grid services with semantic web technologies to enable automation and advanced functionality through machine-processable descriptions. The semantic grid is now seen as an important approach for virtual research environments that support both formal and informal scientific processes through collaborative tools and persistent representations of discussions.
3rd Workshop on Advances in Slicing for Softwarized Infrastructures (S4SI 2020)
Panel: Network Slicing is multifaceted but does its approach and understanding need to be fragmented?
Abstract: Network Slicing keeps growing in significance in the academic and industrial communities. Network Slicing can be defined from different functional or behavioral perspectives, as well as from different viewpoints depending on the stakeholder (e.g., verticals, solution providers, infrastructure owners) and the technical domain (e.g. cloud data centers, radio access, packet/optical transport networks). Standardization bodies and open source projects are being involved in some forms of network slicing support. How far are these views from each other? Is fragmentation leading to incompatible approaches or is there some hope of convergence, at least at conceptual levels? What is the next frontier in Network Slicing? These and other questions will be thrown to our panel experts after introducing their lightning viewpoints.
Moderator: Christian Esteve Rothenberg, University of Campinas, Brazil
Panel Members
Constantine Polychronopoulos, Juniper Networks, USA
Uma Chunduri, Futurewei, USA
Slawomir Kuklinski, Orange Poland and Warsaw University of Technology, Poland
Stuart Clayman, University College London, UK
Augusto Venancio Neto, Federal University of Rio Grande do Norte, Brazil
Talk at WRNP/SBRC on 5-May-2018 (https://wrnp.rnp.br/programacao) presenting the state of affairs on Network Service Orchestration (NSO) and its role in the evolving landscape of network softwarization. Based on the NSO survey; https://arxiv.org/abs/1803.06596
"A programmable, flexible and scalable network architecture will be required to support efficiently any Industrial-IoT solution. Vendor-Independent Software Defined Network will play a key role to address low latency, secure and real-time solutions. "
A survey of models for computer networks management - IJCNCJournal
The virtualization concept along with its underlying technologies has been warmly adopted in many fields of computer science. In this direction, network virtualization research has presented considerable results. In a parallel development, the convergence of two distinct worlds, communications and computing, has increased the use of computing server resources (virtual machines and hypervisors acting as active network elements) in network implementations. As a result, the level of detail and complexity in such architectures has increased and new challenges need to be taken into account for effective network management. Information and data models facilitate infrastructure representation and management and have been used extensively in that direction. In this paper we survey available modelling approaches and discuss how these can be used in the virtual machine (host) based computer network landscape; we present a qualitative analysis of the current state-of-the-art and offer a set of recommendations on adopting any particular method.
The Abstracted Network for Industrial Internet - MeshDynamics
Widespread adoption of TCP/IP protocols over the last two decades appears on the surface to have created a lingua franca for computer networking. And with the emergence of IPv6 removing the addressing restrictions of earlier versions, it would appear that now every device in the world may easily be connected with a common protocol.
But three emerging factors are requiring a fresh look at this worldview. The first is the coming wave of sensors, actuators, and devices making up the Internet of Things (IOT). Although not yet widely recognized, it is beginning to be understood that a majority of these devices will be too small, too cheap, too dumb, and too copious to run the hegemonic IPv6 protocol. Instead, much simpler protocols will predominate (see below), which must somehow be incorporated into the IP networks of Enterprises and the Internet.
At the other end of the scale from these tiny devices are huge Enterprise networks, increasingly moving to the cloud for computing and communication resources. An important requirement of these Enterprises is the capacity to manage, control, and tune their networks using a variety of Software Defined Networking (SDN) technologies and protocols. These depend on computing resources at the edges of the network to manage the interactions.
The third element is a conundrum presented by the first two: Enterprises will be struggling with the need to bring vast numbers of simple IOT devices into their networks. Though many of these devices will lack computing and protocol smarts, the requirement will still remain to manage everything via SDN. Along with this, many legacy Machine-to-Machine (M2M) networks (such as those on the factory floor) present the same challenges as the IOT: simple and/or proprietary protocols operating in operational silos today that Enterprises desire to manage and tune with SDN techniques.
A Study of Protocols for Grid Computing Environment - CSCJournals
This document summarizes a study of communication protocols for grid computing environments. It discusses the limitations of TCP for high bandwidth-delay networks and the need for new protocols to efficiently transfer bulk data across long distances. It categorizes various protocols that have been proposed into TCP-based, UDP-based, and application-layer protocols and evaluates them based on their operation, congestion control, throughput, fairness and other factors. The document also outlines issues in designing high performance protocols for grid computing and reviews several TCP variants and reliable transport protocols developed to improve performance over high-speed networks.
AN EFFICIENT SECURE CRYPTOGRAPHY SCHEME FOR NEW ML-BASED RPL ROUTING PROTOCOL... - IJNSA Journal
The Internet of Things (IoT) offers reliable and seamless communication for heterogeneous, dynamic low-power and lossy networks (LLNs). To perform effective routing in IoT communication, the LLN Routing Protocol (RPL) was developed so that tiny nodes can establish connections using the default objective functions OF0 and MRHOF, under resource constraints such as battery power, computation capacity, and memory, with the communication link affecting varying traffic scenarios in terms of QoS metrics such as packet delivery ratio, delay, and secure communication channels. At present, conventional IoT deployments have issues securing the communication channels used for data transmission between nodes. To withstand these issues, it is necessary to balance the resource constraints of nodes in the network. In this paper, we develop a security algorithm for IoT networks with RPL routing. Initially, the constructed network incorporates optimization-based deep learning (reinforcement learning) for route establishment in IoT. Upon establishment of the route, the ClonQlearn-based security algorithm is implemented to improve security, based on an ECC scheme for encryption and decryption of data. The proposed security technique incorporates reinforcement-learning-based ClonQlearn integrated with ECC (ClonQlearn+ECC) for random key generation. The proposed ClonQlearn+ECC exhibits secure data transmission with improved network performance when compared with earlier works in simulation. The network performance shows that the proposed ClonQlearn+ECC improved packet delivery ratio by approximately 8%-10%, throughput by 7%-13%, end-to-end delay by 5%-10%, and power consumption variation by 3%-7%.
This document provides a survey of machine and deep learning techniques for resource allocation in multi-access edge computing (MEC). It first presents tutorials on applying machine learning (ML) and deep learning (DL) in MEC to address its challenges. It then discusses enabling technologies for running ML/DL training and inference quickly in MEC. It provides an in-depth survey of ML/DL methods for task offloading, scheduling, and joint resource allocation in MEC. Finally, it discusses key challenges and future research directions of applying ML/DL for resource allocation in MEC networks.
This document proposes an artificial intelligence enabled routing (AIER) mechanism for software defined networking (SDN) that can alleviate issues with monitoring periods in dynamic routing and provide superior route decisions using artificial neural networks (ANNs). The key aspects of the proposed AIER mechanism are:
1) It installs three additional modules in the SDN control plane: a topology discovery module, a monitoring period module, and an ANN module.
2) The ANN module is trained to learn from past routing experiences and avoid ineffective route decisions.
3) Evaluation on the Mininet simulator shows the AIER mechanism improves performance metrics like average throughput, packet loss ratio, and packet delay compared to different monitoring periods in dynamic
Grid computing, or network computing, was developed to make computing power available in the same way electric power is available from the grid: we just plug in, and whoever needs power may use it. In grid computing, if a system needs more power than it has available, it can share the computation with other machines connected in the grid. In this way we can use the power of a supercomputer without a huge cost, and the CPU cycles that were previously wasted can also be utilized. To perform grid computation on computers joined through the Internet, software that supports grid computation must be installed on each computer inside the VO. The software handles information queries, storage management, processing scheduling, authentication, and data encryption to ensure information security.
SECURITY AND PRIVACY AWARE PROGRAMMING MODEL FOR IOT APPLICATIONS IN CLOUD EN... - ijccsa
This document summarizes a research paper on privacy-preserving techniques for IoT data in cloud environments. It introduces two differential privacy algorithms: 1) Generic differential privacy (GenDP) which provides generalized privacy protection for homogeneous and heterogeneous IoT metadata through data portioning. 2) Cluster-based differential privacy which groups similar data into clusters before defining classifiers to validate privacy. The paper evaluates these techniques and finds the cluster-based approach offers better security than customized interactive algorithms while maintaining data utility. Overall, the study presents new differential privacy methods for anonymizing IoT metadata stored in the cloud.
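As a rough illustration of the differential-privacy primitive such schemes build on, here is a minimal Laplace-mechanism sketch; GenDP's actual data partitioning and clustering logic is not reproduced, and all names are illustrative.

```python
# Illustrative only: the classic Laplace mechanism underlying
# differential-privacy schemes; not the paper's GenDP algorithm.
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value with Laplace(0, sensitivity/epsilon) noise added,
    making the released statistic epsilon-differentially private."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                       # uniform in [-0.5, 0.5)
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: release a count query over IoT metadata (true count 42;
# sensitivity 1 because one record changes a count by at most 1).
random.seed(0)
noisy_count = laplace_mechanism(42, sensitivity=1, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; the cluster-based variant the paper evaluates would apply such noise per cluster rather than over the raw metadata.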
An overview on application of machine learning techniques in optical networks - Khaleda Ali
This document provides an overview of machine learning techniques applied to optical networks. It discusses how optical networks have become more complex with the introduction of technologies like coherent transmission and elastic optical networks. This increased complexity motivates the use of machine learning to analyze network data and make decisions. The document surveys existing work on machine learning applications in optical communications and networking. It aims to introduce researchers to this field and propose new research directions to further the application of machine learning to optical networks.
A Flexible Network Architecture for 5G Systems - Eiko Seidel
In this paper, we define a flexible, adaptable, and programmable architecture for 5G mobile networks, taking into consideration the requirements, KPIs, and the current gaps in the literature, based on three design fundamentals: (i) split of user and control plane, (ii) service-based architecture within the core network (in line with recent industry and standards consensus), and (iii) fully flexible support of E2E slicing via per-domain and cross-domain optimisation, devising inter-slice control and management functions, and refining the behavioural models via experiment-driven optimisation. The proposed architecture model further facilitates the realisation of slices providing specific functionality, such as network resilience, security functions, and network elasticity. The proposed architecture consists of four different layers identified as the network layer, controller layer, management and orchestration layer, and service layer. A key contribution of this paper is the definition of the role of each layer, the relationship between layers, and the identification of the required internal modules within each layer. In particular, the proposed architecture extends the reference architectures proposed by Standards Developing Organisations such as 3GPP and ETSI, building on these while addressing several gaps identified in the corresponding baseline models. We additionally present findings, design guidelines, and evaluation studies on a selected set of key concepts identified to enable flexible cloudification of the protocol stack, adaptive network slicing, and inter-slice control and management.
The document discusses grid computing, which connects many computers together into a network to solve large problems requiring massive computing power. It provides high-speed connections that are 10,000 times faster than broadband. Grid computing shares and aggregates resources like supercomputers, storage, and data sources across geographic locations. It has the potential to greatly change business, science, and society by enabling new forms of collaboration and computation. Developers must design applications to take advantage of this distributed, parallel environment.
This document proposes an approach to creating cyber resiliency using emerging technologies and network architectures. It identifies key technologies like deep packet inspection, application performance management, and control plane architectures that can be leveraged to build more resilient networks. The document then illustrates an example architecture and proposes validating cyber resiliency solutions using academic network infrastructure to test solutions on real networks at scale.
This document discusses security protocols for high performance grid computing architectures. It analyzes the different network layers in grid computing protocols and identifies various security disciplines. It also analyzes various security suites available in the TCP/IP protocol architecture. The paper aims to define security disciplines at different levels of cluster computing architecture and propose applicable security suites from the TCP/IP security protocol suite. Grid computing allows sharing and aggregation of distributed computing resources to enable more powerful applications. Security is an important consideration in grid computing due to sharing resources across administrative domains.
Big Data and Next Generation Network Challenges - PhD Assistance
This document provides an overview of next generation networks and big data challenges. It discusses how 5G networks will generate huge amounts of data from billions of wireless devices. Modern data analytics techniques like big data analytics will be needed to efficiently handle and extract insights from this large and diverse data. The document also outlines some of the key requirements for 5G networks, such as high data rates and low latency, and the underlying technologies being developed to achieve this, including millimeter wave spectrum and massive MIMO. It discusses open issues regarding security, privacy, and analyzing heterogeneous data sources.
Data Decentralisation: Efficiency, Privacy and Fair Monetisation - Angelo Corsaro
A presentation given at the European H-Cloud Conference to motivate decentralisation as a means to improve energy efficiency, privacy, and opportunities for monetisation of your digital footprint.
The NECOS project addresses the limitations of current cloud computing infrastructures to respond to the demand of new services, as presented in two use-cases, that will drive the whole execution of the project.
The NECOS platform will be based on state of the art open software platforms, which will be carefully selected, rather than start from scratch. This baseline platform will be enhanced with the management and orchestration algorithms and the APIs that will constitute the research activity of the project. Finally, the NECOS platform will be validated, in the context of the two proposed use cases, using the 5TONIC and FIBRE testing frameworks.
I am Tapas Kumar Palei. I am studying B.Tech CSE at Ajay Binay Institute Of Technology. Grid computing is my seminar presentation topic. I have tried to gather everything about grid computing in this seminar presentation.
Abstract:
Following the state of the art is paramount for sound and impactful scientific practice and for informed strategic R&D decisions. This seminar makes two main contributions: (i) providing a 10,000-foot view of 10 selected hot topics in networking, and (ii) an overview of recent practices in scientific events (e.g., digitalization, submission deadlines revisited, Open Science, Artifact Review Badging).
The 10 selected hot topics are as follows:
Intent-Based Networking (IBN)
Zero-Touch Management (ZTM)
Digital Twins (Networking for Digital Twins & Network Digital Twins)
Metaverse
Blockchain Networking
AI/ML (Network protocols meet AI/ML, Machine Learning for Networking)
High precision networking
Quantum Communications & Computing
6G (Beyond 5G)
OpenRAN
This is the welcome lecture for IA377. Topics to be addressed:
Organization
Seminar Dynamics
Evaluation
Tentative Topics and Schedule
Round of introductions
https://ia377-feec-unicamp.github.io/classes/2023/03/02/Welcome.html
This introductory lecture for IA377 will be devoted to the topic of “Literature Review”.
What is a literature review?
Methodology, best practices, tips, tools, etc.
Practical example
Application to IA377 seminar activities.
https://ia377-feec-unicamp.github.io/classes/2023/03/09/Literature-Review.html
Community LTE Networks in Brazil: Sustainable Modeling, Deployment, and Maintenance Based on New Networking Paradigms.
Project funded by FAPESP, Process: 18/23101-0
Abstract
According to a report published by the Brazilian Internet Steering Committee (CGI.br) in 2018, broadband Internet access in Brazil shows a wide inequality between economic classes A/B (higher) and D/E (lower), a fact made evident in the analyses of urban versus rural areas. Besides showing that about 34% of Brazilians still have no Internet access, the report also explains that Internet access is a catalyst for social, economic, and technological development: a fact established in several international studies and emphasized by the Internet Society. Community wireless networks have become a sustainable way to provide affordable means of Internet connectivity, both in remote rural areas and in dense urban regions. The vast majority of community wireless networks adopt WiFi technology; only recently, thanks to the development of open-source and low-cost technologies, has the Long-Term Evolution (LTE) standard begun to be explored for these purposes. Hence, there are no known studies in the academic literature that seek to use and improve the LTE standard applied to community wireless networks. In this scope, this proposal seeks to bring innovative concepts from new networking paradigms, Software-Defined Networking (SDN) and Network Functions Virtualization (NFV), to the development of community LTE networks. Through an agile testing methodology, SDN and NFV concepts will be applied in the development of mechanisms that perform intelligent management of community LTE network resources, aiming at efficient performance and robust fault tolerance, i.e., the sustainability of the network. All these studies will build on a survey of the characteristics of community wireless networks operating in Brazil, proposed for the beginning of the project.
In the end, the execution of this proposal will produce didactic material elucidating how to model, deploy, and sustainably maintain a community LTE network along the lines of the studies carried out in this proposal (i.e., with all the data, evaluations, methodologies, and prototypes). This material will be used as the basis of a proposal for deploying a community LTE network in Brazil through the Internet Society's "Beyond the Net" programme.
Event: https://www.lasse.ufpa.br/co5gam/
Video: https://www.youtube.com/watch?v=5dEb9oIAaPY
The document summarizes several demos of a cloud network slicing platform called NECOS. It describes demos of multi-tenant network slicing with two example services, a scalability demo over Fed4Fire, lightweight slicing with VIM, and intelligent orchestration of slices using machine learning. The goals are to create two end-to-end cloud network slices between Brazil and Europe, one for an IoT service using the Dojot platform and one for a touristic CDN video service. It demonstrates the slices, services, and horizontal elasticity functionality.
The document provides a summary of the NECOS Project Technical Highlights from an industrial workshop held on October 18th, 2019. It discusses the following key points:
- The NECOS approach of Lightweight Slice Defined Cloud (LSDC) to abstract, isolate, orchestrate, and separate logical behaviors from physical network and cloud resources.
- Results and achievements of the project including developing the LSDC platform, Slice-as-a-Service model, use case specifications and scenarios, architecture design, and five prototype demos.
- Dissemination efforts including numerous publications, contributions to standards bodies, workshops organized, keynotes sponsored, and tutorials held to promote the project outcomes.
The document proposes a network slicing technique for IEEE 802.11ah networks that dynamically manages radio resources by reconfiguring Restricted Access Window (RAW) parameters over time. A Virtual Network Slicing Broker is introduced as a virtual network function that defines network slices based on service features and quality of service restrictions. It communicates with an IEEE 802.11ah access point to monitor network statistics and enforce slicing configurations using a static or dynamic approach. Simulation results show the dynamic approach allows the broker to reallocate resources between slices by updating the RAW configurations, improving overall network performance.
The 5G-RANGE project aims to provide broadband internet access to remote and rural areas using unused TV white space spectrum in an economically viable way. The project involves developing a new 5G access network architecture that uses cooperative spectrum sensing to identify unused TV channels and allows 5G signals from a base station to opportunistically transmit in the vacant channels. The project has 10 partners and will run for 30 months, developing a proof-of-concept using software-defined radios to demonstrate the new cognitive MAC layer and dynamic spectrum access capabilities.
5G technology will connect the world through a wireless infrastructure and multifunctional platform using multiple spectrum bands to meet various requirements and use cases. 5G enables innovations like mobile network architecture evolution towards cloud RAN and network virtualization. It utilizes a 5G core, network slicing to partition resources, and multi-access edge computing to help reduce latency, which is important for applications like AR/VR. Optical transport networks will be essential for 5G given maximum transport distances of 20 km.
Network slicing allows building logical networks on a shared infrastructure. Each slice is designed for a business purpose and comprises all required resources. Slices are created, changed, and removed by management functions. Slices are defined using Network Slice Templates that describe the structure, components, and configuration. Network Slice Instances are realized by configuring and instantiating resources and network functions. Automation is key to managing the complexity of multiple slices.
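As a rough sketch of the template-to-instance flow described above, the following code creates Network Slice Instances from a template; the field names are illustrative only, not taken from any standard Network Slice Template.

```python
# Illustrative sketch of slice templates vs. instances; field names are
# hypothetical, not from 3GPP/GSMA template definitions.
import copy
import itertools

SLICE_TEMPLATE = {
    "name": "low-latency-iot",
    "components": ["ran", "core", "transport"],
    "config": {"max_latency_ms": 10, "bandwidth_mbps": 50},
}

_instance_ids = itertools.count(1)

def instantiate(template, **config_overrides):
    """Create a Network Slice Instance: copy the template, apply per-tenant config."""
    instance = copy.deepcopy(template)
    instance["instance_id"] = next(_instance_ids)
    instance["config"].update(config_overrides)
    return instance

def remove(instance):
    """Removal would release the instance's resources; here it just marks the state."""
    instance["state"] = "terminated"

slice_a = instantiate(SLICE_TEMPLATE, bandwidth_mbps=100)  # tenant needs more bandwidth
slice_b = instantiate(SLICE_TEMPLATE)                      # plain template defaults
print(slice_a["config"]["bandwidth_mbps"], slice_b["config"]["bandwidth_mbps"])  # 100 50
```

The deep copy matters: each instance must be configurable independently of the shared template, which is exactly why automation is needed once many slices coexist.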
Nokia's network slicing automation vision involves hierarchical closed-loop management across domains using a unified data model. This includes end-to-end service orchestration, assurance, and slice-specific network functions and data layers managed by domain controllers and an NFV orchestrator. AI/ML is used across various layers for tasks like capacity planning, inventory, and experience assurance.
RNP is the national advanced network for higher education, research, and innovation institutions in Brazil. It connects over 600 organizations and 1,500 campuses across Brazil. RNP developed a software defined infrastructure (SDI) composed of an overlay SDN and nationwide distributed edge cloud with an orchestration platform to provision remote computing and networking resources using open source solutions. The SDI supports evolving RNP's existing applications and services to a hybrid cloud model and allows new methods of validation, testing, and provisioning. Current activities with the SDI include improving network and application architectures, automation, security, and advanced services like CDN and video delivery.
5G networks and cloudification enable network slicing, which provides logical network segments tailored for specific use cases and customers. Key points:
- 5G requirements like high bandwidth, low latency, and reliability, combined with network virtualization through cloudification, allow networks to be sliced for different customer needs.
- Slicing provides dedicated virtual networks on a shared physical infrastructure for various use cases like enhanced mobile broadband, massive IoT, and critical communications.
- An end-to-end network slice spans the radio access network, core network, transport network, and edge computing resources, and is orchestrated as one unified product for customers.
Towards Deep Programmable Slicing. IEEE Netsoft'19 Distinguished Expert Panel Theme: Barriers and Frontiers of Softwarization for the Network of 2030, Paris, 2019. https://netsoft2019.ieee-netsoft.org/program/distinguished-expert-panel/
This document provides an overview of network refactoring and offloading trends, including fluid network planes. It discusses the evolution of SDN from 2009 to 2019 and concepts like network softwarization. Instances of fluid network planes are described, such as RouteFlow, NFV layers, and VNF offloading to hardware or multi-vendor P4 fabrics. The document also covers slicing for IoT analytics and references recent works on in-network computing, fast connectivity recovery, and scaling distributed machine learning with in-network aggregation.
The realization of network softwarization, an overarching term encompassing the software-centric developments of the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) trends, is being enabled by a set of innovations in high-speed data plane design and implementation. Recent efforts include re-architecting the hardware-software interfaces and exposing programmatic interfaces (e.g., OpenFlow), programmable hardware-based pipelines (e.g., the Protocol Independent Switch Architecture, PISA) along with suitable programming languages (e.g., P4), and multiple advances in low-overhead virtualization and fast packet-processing libraries (e.g., DPDK, FD.io) for Linux-based general-purpose processor platforms. This talk provides an overview of relevant ongoing work and discusses the trade-offs of each design and implementation choice of software-defined data planes regarding programmability, performance, and portability.
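The match-action pipeline idea behind OpenFlow and PISA/P4 can be illustrated with a minimal sketch. This is a toy Python model, not P4 and not any real switch API: `Packet`, `MatchActionTable`, and `forward` are invented names used only to show how a control plane populates table entries that the data plane then applies per packet.

```python
# Toy model of a match-action pipeline stage (illustrative only; real
# programmable pipelines are written in P4 and compiled to hardware).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Packet:
    dst_ip: str
    port: Optional[int] = None  # egress port, set by the pipeline

@dataclass
class MatchActionTable:
    """One pipeline stage: exact match on a header field, then an action."""
    field_name: str
    entries: dict = field(default_factory=dict)  # match value -> action

    def apply(self, pkt: Packet) -> Packet:
        action = self.entries.get(getattr(pkt, self.field_name))
        if action is not None:
            action(pkt)
        return pkt

def forward(out_port: int):
    """Action factory: set the packet's egress port."""
    def action(pkt: Packet) -> None:
        pkt.port = out_port
    return action

# The control plane populates table entries at runtime (the role that
# OpenFlow or P4Runtime plays for real switches).
ipv4_table = MatchActionTable("dst_ip")
ipv4_table.entries["10.0.0.1"] = forward(1)
ipv4_table.entries["10.0.0.2"] = forward(2)

pkt = ipv4_table.apply(Packet(dst_ip="10.0.0.2"))
print(pkt.port)  # -> 2
```

The separation shown here — fixed parsing/matching machinery, behavior injected by a controller — is the essence of the programmability the talk discusses; hardware pipelines trade some of this flexibility for performance.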
This talk provides a 2017 updated view of SDN and the broader network softwarization trend (e.g., NFV, P4), aiming to clarify the evolving SDN definitions (beyond a purist view) by explaining the main characteristics of SDN embodiments in 2017 and beyond.
The document discusses a panel at a conference on Software Defined Networking (SDN). The panel will discuss whether SDN is a promise or a reality, and features panelists from industry and academia including representatives from Datacom, UFSCar, Algar, and NIC.BR.
Deep Slicing and Loops in a Loop: Multi-Tenancy and Smart Closed-Loop Control Gone Wild
1. On Deep Slicing and Loops in a Loop
Multi-Tenancy and Smart Closed-Loop Control Gone Wild
Prof. Dr. Christian Esteve Rothenberg (University of Campinas), Brazil
chesteve@dca.fee.unicamp.br
https://intrig.dca.fee.unicamp.br/christian
First ITU Workshop on Network 2030, New York, United States, 2 October 2018
https://www.itu.int/en/ITU-T/Workshops-and-Seminars/201810/Pages/Programme.aspx
2. Slicing Journey: from 5G towards 2030
- From siloed slices to generalized network cloud slicing
- Deep, massive resource sharing & multi-tenancy
- New tenant-provider relationships and power of choices
Source: Adapted from a slide courtesy of Luis M. Contreras, Telefonica
Executive Summary
4. History of Network Slicing
● Early references: Programmable Networks research & Federated Testbed research (1995-2012)
● GENI Slice (2008): "A GENI slice is the unit of isolation for experiments; a container for resources used in an experiment; a unit of access control."
● ITU-T Slicing (2011), as defined in [ITU-T Y.3011] and [ITU-T Y.3012]: slicing allows logically isolated network partitions (LINP), with a slice considered a unit of programmable resources such as network, computation, and storage
● Many more...
○ See: Alex Galis, Netsoft 2018 Tutorial: "Network Slicing Landscape: A holistic architectural approach"
http://www.maps.upc.edu/public/presentations/netsoft18_slicingtutorial_v1.0.pdf
[Not today: towards 2030]
6. Main relevant standardization activities related to Slicing
● NGMN Slices consist of 3 layers: 1) Service Instance Layer, 2) Network Slice Instance Layer, and 3) Resource Layer (2016)
● 3GPP: SA2 23.799 Study Item "Network Slicing" (2016); SA5 TR 28.801 Study Item "Network Slicing" (2017)
● ITU-T IMT2020 Recommendations: 5G Architecture, Management of 5G, Network Softwarization and Slicing (2016-2017)
● ONF Recommendation TR-526, "Applying SDN architecture to Network Slicing" (2016)
● BBF: Requirements / architecture of transport network slicing, SD-406: E2E Network Slicing (2017)
● ETSI: NFV priorities for 5G (white paper, 2017); ZSM ISG automation technology for network slice management (2018); MEC support for network slicing (2018)
● IETF: no specific WG (despite attempts in 2017-2018). Relevant drafts:
draft-galis-netslices-revised-problemstatement-03, draft-geng-netslices-architecture-02,
draft-geng-coms-architecture-01, draft-netslices-usecases-01, draft-qiang-coms-use-cases-00,
draft-qiang-coms-netslicing-information-model-02, draft-galis-anima-autonomic-slice-networking-04,
draft-defoy-coms-subnetinterconnection-03, draft-homma-coms-slicegateway-01
7. Slicing in Scope
Network Slice: a jointly managed group of subsets of resources and network functions / network virtual functions at the data, control, management/orchestration, and service planes at any given time.
Cross-domain management of network slices spans network infrastructure and service functions.
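The definition above — a slice as a jointly managed group of resource subsets across planes and domains — can be sketched as a small data model. This is a hypothetical illustration; `ResourceSubset` and `NetworkSlice` are invented names, not part of any standard or NECOS API.

```python
# Illustrative data model for the slide's slice definition (hypothetical names).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ResourceSubset:
    domain: str                # e.g. "RAN", "transport", "edge-DC"
    plane: str                 # "data", "control", "management", or "service"
    resources: Dict[str, int]  # e.g. {"vCPU": 8, "bandwidth_mbps": 500}

@dataclass
class NetworkSlice:
    slice_id: str
    tenant: str
    subsets: List[ResourceSubset] = field(default_factory=list)

    def domains(self) -> List[str]:
        """Cross-domain view: which domains the slice spans."""
        return sorted({s.domain for s in self.subsets})

slice_42 = NetworkSlice("slice-42", "vertical-x", [
    ResourceSubset("RAN", "data", {"PRBs": 20}),
    ResourceSubset("edge-DC", "service", {"vCPU": 8}),
])
print(slice_42.domains())  # -> ['RAN', 'edge-DC']
```

The point of the model is that a slice is not one resource pool but a *grouping* of per-domain, per-plane subsets managed jointly — which is exactly what makes cross-domain management the hard part.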
8. SDN & Virtualization vs Slicing
Source: The NECOS project, Novel Enablers for Cloud Slicing. http://www.h2020-necos.eu/
9. Different Slicing Models & Approaches
[Figure: NECOS slicing architecture spanning the infrastructure plane, the control & management plane, and the business (application & service) plane. Slicing is applied to network resources (via the NIM) and compute resources (via the VIM), with monitoring and (resource) orchestration supporting vertical use cases, network functions, and L7 application services. Four modes are distinguished:]
- Mode 0: VIM-independent [Infrastructure Slice as a Service; bare-metal slice]
- Mode 1: VIM-dependent [Platform Slice as a Service]
- Mode 2: MANO-based [NFV as a Service], via network service orchestration
- Mode 3: Service-based [Service Slice as a Service]
Source: The NECOS project, Novel Enablers for Cloud Slicing. http://www.h2020-necos.eu/
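The four NECOS modes can be captured as a simple lookup table. The mode names and offerings follow the NECOS figure; the dictionary itself is purely illustrative and not a NECOS API.

```python
# Illustrative lookup of the NECOS slicing modes (not a NECOS API).
SLICING_MODES = {
    0: ("VIM-independent", "Infrastructure Slice as a Service (bare-metal)"),
    1: ("VIM-dependent", "Platform Slice as a Service"),
    2: ("MANO-based", "NFV as a Service (network service orchestration)"),
    3: ("Service-based", "Service Slice as a Service"),
}

def describe(mode: int) -> str:
    """Render one mode as a human-readable line."""
    name, offering = SLICING_MODES[mode]
    return f"Mode {mode} ({name}): {offering}"

for m in SLICING_MODES:
    print(describe(m))
```

The ordering matters conceptually: each mode hands the tenant a higher-level abstraction, from raw infrastructure (Mode 0) up to a fully managed service slice (Mode 3).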
10. Types of slices and control responsibilities
[Figure: provider infrastructure hosting three types of slices, split between provider and tenants:]
- Internal slices
- External / provider-managed slices
- External / tenant-managed slices
Source: A Network Service Provider Perspective on Network Slicing. Luis M. Contreras and Diego R. López. IEEE Softwarization, January 2018
11. Multi-Domain Slicing Scenario
Source: Adapted from a slide courtesy of Luis M. Contreras, Telefonica
12. Why is slice-ready federation needed?
• Vertical customers can request services that lie outside the footprint of their primary provider
• Interaction with other providers is needed, but...
– How can we charge and bill for that service?
– How can we ensure SLAs among providers?
– How can we learn the capabilities of other providers for comprehensive e2e service provisioning?
• The current interconnection model is not aware of a peer's network resources (i.e., load conditions, etc.)
• All these environments are static, requiring long interactions to set up any inter-provider connection
• Automation of both the interconnection sessions and the service deployment on top of them is needed to reach the goal of flexibility and dynamicity
Source: Luis M. Contreras, Telefonica
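The automation step the slide calls for — a provider checking whether peers can honor an end-to-end SLA before extending a slice — can be sketched minimally. All names here (`PeerProvider`, `can_host`, `candidate_peers`) are hypothetical; a real federation interface would negotiate and commit SLAs rather than compare advertised numbers.

```python
# Hedged sketch of automated peer selection for slice federation
# (hypothetical names; not any standardized federation API).
from dataclasses import dataclass
from typing import List

@dataclass
class PeerProvider:
    name: str
    free_bandwidth_mbps: int
    base_latency_ms: float

    def can_host(self, bw_mbps: int, latency_budget_ms: float) -> bool:
        # Naive feasibility check on advertised capabilities only.
        return (self.free_bandwidth_mbps >= bw_mbps
                and self.base_latency_ms <= latency_budget_ms)

def candidate_peers(peers: List[PeerProvider], bw_mbps: int,
                    latency_budget_ms: float) -> List[str]:
    """Return peers that advertise enough headroom for the slice extension."""
    return [p.name for p in peers if p.can_host(bw_mbps, latency_budget_ms)]

peers = [PeerProvider("operator-A", 200, 30.0),
         PeerProvider("operator-B", 1000, 8.0)]
print(candidate_peers(peers, 500, 10.0))  # -> ['operator-B']
```

Even this trivial check presupposes exactly what the slide says is missing today: a standard way for peers to expose capabilities and load, and machine-driven interconnection instead of long manual interactions.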
13. Towards Deep Slices: Observations
[Figure: "Deep Slicing" at the center of standardization efforts spread across IETF, MEF, 3GPP, ETSI, ONF, and BBF.]
Business and technological challenges:
- From infrastructure sharing to any-layer resource sharing (from PHY to APP)
- Deep, end-to-end, multi-domain
- Tenant choice & control
- Isolation and dimensioning / scaling
- Fragmented standardization
14. Deep Slicing: Challenges up front
The standardization gap goes hand in hand with a series of key challenges from the provider's perspective on (i) scalability, (ii) arbitration, (iii) slice planning and dimensioning, and (iv) multi-domain operation (cf. [FG-NET-Contribution]). Both business and technical implications must be addressed in such a multi-operator slice provisioning context.
From the business side, key implications include: (i) coordination models, (ii) inter-provider SLAs, (iii) pricing schemes, (iv) service specification, and (v) customer-facing advertisement.
From a technical perspective, we highlight (i) slice decomposition, (ii) discovery of domains, (iii) common abstraction models, and (iv) standard interfaces/protocols and APIs.
Source & further reading: Doc. 6, ITU-T FG 2030 contribution: Network 2030 Challenges and Opportunities in Network Slicing
https://extranet.itu.int/sites/itu-t/focusgroups/net-2030/_layouts/15/WopiFrame.aspx?sourcedoc=%7bC4E9266E-1058-4035-AA25-451ABCB5C07B%7d&file=NET2030-I-006.docx&action=default
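Of the technical challenges listed, slice decomposition is the most amenable to a concrete sketch: splitting one end-to-end request into per-domain sub-requests. The function below is a deliberately naive illustration under stated assumptions (bandwidth must hold in every domain; the latency budget is split evenly), and none of the names come from any standard.

```python
# Naive illustration of slice decomposition (hypothetical, not standardized).
from typing import Dict, List

def decompose(request: Dict[str, float],
              domains: List[str]) -> Dict[str, Dict[str, float]]:
    """Split an e2e request into per-domain sub-requests.
    Assumptions: bandwidth is an end-to-end property (same requirement in
    each domain); latency is an additive budget divided evenly."""
    share = 1.0 / len(domains)
    return {
        d: {"bandwidth_mbps": request["bandwidth_mbps"],
            "latency_ms": request["latency_ms"] * share}
        for d in domains
    }

e2e_request = {"bandwidth_mbps": 100, "latency_ms": 30.0}
sub_requests = decompose(e2e_request, ["access", "transport", "core"])
print(sub_requests["transport"])  # -> {'bandwidth_mbps': 100, 'latency_ms': 10.0}
```

Real decomposition is much harder than an even split: latency budgets depend on domain topology and load, which loops back to the discovery and common-abstraction challenges named above.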
15. Deep Slicing: Ambitious Challenges
Source: Inspired by the author's (C. Rothenberg) P3 trade-offs: Programmability, Performance, Portability.
https://www.slideshare.net/chesteve/ieee-hpsr-2017-keynote-softwarized-dataplanes-and-the-p3-tradeoffs-programmability-performance-portabiilty
21. Acknowledgments
Work by Christian Rothenberg was supported by the Innovation Center, Ericsson Telecomunicações S.A., Brazil, under grant agreement UNI.64.
Thanks to Mateus Santos and Pedro Gomes for their input and insights.
This work was partially funded by the EU-Brazil NECOS project under grant agreement no. 777067.
Luis M. Contreras and Alex Galis co-authored ITU-T FG 2030 input Doc. 6: Network 2030 Challenges and Opportunities in Network Slicing.
Raphael Rosa (PhD candidate at UNICAMP) contributed to the vision around unfolding slices, control loops (in a loop), disaggregated metrics/prices, and smart peering.
24. What do we mean by Network Slices?
Network Slice: a managed group of subsets of resources and network functions / network virtual functions at the data, control, management/orchestration, and service planes at any given time.
The behaviour of the network slice is realized via network slice instances (i.e., activated network slices, dynamically and non-disruptively re-provisioned).
A network slice is programmable and has the ability to expose its capabilities.
→ A network slice supports at least one type of service.
→ A network slice may consist of cross-domain components from separate domains in the same or different administrations, or components applicable to the access, transport, core, and edge networks.
→ A resource-only partition is one of the components of a network slice; on its own, however, it does not fully represent a network slice.
→ Underlays / overlays supporting all services equally ("best effort" support) do not fully represent a network slice.