Initially proposed to interconnect computers worldwide, the Internet has evolved in two decades into a key element of almost all our activities. This (r)evolution relies mainly on progress in the computation and communication fields, which led to the well-known and widespread Cloud Computing paradigm.
With the emergence of the Internet of Things (IoT), stakeholders expect a new revolution that will once again push the limits of the Internet, in particular by favouring the convergence of the physical and virtual worlds. This convergence is being made possible by the development of devices ranging from minimalist sensors to complex industrial machines that can be connected to the Internet through edge computing infrastructures.
Among the obstacles to this new generation of Internet services is the lack of a convenient and powerful framework that would allow operators and DevOps teams to manage the life cycle of both the digital infrastructures and the applications deployed on top of them, across the cloud-to-IoT continuum.
In this keynote, Frédéric Desprez and his colleague Adrien Lebre present research issues and provide preliminary answers to determine whether the challenges brought by this new paradigm are an evolution or a revolution for our community.
A shallow look at all one needs to know when dealing with Distributed Systems, such as the CAP theorem, Harvest/Yield metrics, Partitioning vs. Replication, and Consensus Algorithms.
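The Harvest/Yield framing mentioned above can be made concrete with a toy calculation; this is a sketch with made-up numbers, not material from the talk:

```python
# Hypothetical illustration of the Harvest/Yield metrics (Fox & Brewer):
# yield = fraction of requests answered, harvest = fraction of the data
# reflected in each answer. All numbers below are invented.

def availability_metrics(answered, total_requests, shards_up, total_shards):
    """Return (yield, harvest) for a partitioned service."""
    request_yield = answered / total_requests
    harvest = shards_up / total_shards
    return request_yield, harvest

# A 10-shard store with one shard down, still answering every request
# with partial results: yield stays 1.0 while harvest drops to 0.9.
y, h = availability_metrics(1000, 1000, 9, 10)
print(y, h)  # 1.0 0.9
```

The point of the metric pair is that a system can trade completeness of answers (harvest) against the fraction of requests it serves at all (yield) when partitions or failures occur.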
Hybrid cloud: why and how to connect your datacenters to OVHcloud? (OVHcloud)
Across our products or between OVHcloud and your own datacenters, Oliver Bédouet will detail network architectures you can build and their advantages, from vRack to OVHcloud Connect.
OpenStack and Kubernetes - A match made for Telco Heaven (Trinath Somanchi)
With the advent of containerization of telco clouds for NFV- and SDN-based deployments, combining OpenStack with Kubernetes is a strong option for building a containerized telco cloud. This can take the form of "Kubernetes in OpenStack", "OpenStack in Kubernetes", or independent OpenStack and Kubernetes deployments. With this complementary collaboration, within OpenStack's open-infrastructure ecosystem, telecom giants are developing cloud-native solutions to fit next-generation networking deployments. In this presentation, we talk about containerization and its benefits and the OpenStack/Kubernetes matchmaking, and we give a brief overview of the Airship and Kata Containers projects.
CNCF TUG (Telecom User Group) - 5G New Service Capabilities, Rev pa10 (Ike Alisson)
5G New Service Capabilities, with an overview of the synergy between the 5G Core Network and RAN (O-RAN specifications) via CUPS, and some of the enhancements for URLLC use cases.
As the shift to cloud native spreads, the microservices architecture, which splits an application into small, mutually independent components, is gaining attention.
Microservices make applications easier to scale and shorten the time to release new features.
On the other hand, as an application grows and multiple instances of the same service run concurrently, communication between microservices becomes complex.
A service mesh is a technology born to address these microservices traffic problems:
a networking model focused on managing network traffic between services.
By recording how smoothly different applications interact, it optimizes communication and helps avoid downtime as applications scale.
This talk introduces the origins and features of the service mesh, and the service mesh solutions currently available as open source.
Step 1. Cloud Native Trail Map
Step 2. Service Proxy, Discovery, & Mesh
Step 3. Service Mesh solutions
Step 4. Service Mesh demos - Istio / linkerd
Step 5. Multi-cluster (linkerd)
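As a rough illustration of the proxy-per-service idea the steps above cover, here is a minimal sketch of a sidecar that routes calls and records per-route metrics. The class and service names are invented for illustration; this is not Istio's or linkerd's actual API.

```python
from collections import defaultdict

# Hypothetical sketch of what a sidecar proxy does in a service mesh:
# forward each service-to-service call and record per-route metrics,
# which is what lets the mesh observe and optimize traffic.

class SidecarProxy:
    def __init__(self, upstream):
        self.upstream = upstream  # callable that actually serves the request
        self.metrics = defaultdict(lambda: {"calls": 0, "errors": 0})

    def call(self, route, payload):
        stats = self.metrics[route]
        stats["calls"] += 1
        try:
            return self.upstream(route, payload)
        except Exception:
            stats["errors"] += 1
            raise

def billing_service(route, payload):
    # Stand-in for a real microservice endpoint.
    return {"route": route, "ok": True}

proxy = SidecarProxy(billing_service)
proxy.call("/invoices", {"id": 1})
proxy.call("/invoices", {"id": 2})
print(proxy.metrics["/invoices"]["calls"])  # 2
```

Real meshes move exactly this bookkeeping (plus retries, mTLS, and load balancing) out of application code and into the proxy layer.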
Creating Highly Available MongoDB Microservices with Docker Containers and Ku... (MongoDB)
In this webinar recording we explored how to define a database infrastructure with MongoDB running in Docker containers, how to orchestrate MongoDB containers with Kubernetes across multiple environments, considerations and strategies for managing stateful MongoDB containers, and how to maintain high availability and resiliency in a distributed system running on a container technology such as Kubernetes.
Designing a complete CI/CD pipeline using Argo Events, Workflows, and CD products (Julian Mazzitelli)
https://www.youtube.com/watch?v=YmIAatr3Who
Presented at Cloud and AI DevFest GDG Montreal on September 27, 2019.
Are you looking to get more flexibility out of your CI/CD platform? Interested in how GitOps fits into the mix? Learn how Argo CD, Workflows, and Events can be combined to craft custom CI/CD flows, all while staying Kubernetes-native, enabling you to leverage existing observability tooling.
Comparing Service-Oriented Architecture (SOA), Microservices and Service-Based Architecture (SBA - SOA and Microservices Hybrid) patterns.
Also discussing coupling and cohesion concepts in relation to systems design.
Advanced DNS Traffic Management using Amazon Route 53 - AWS Online Tech Talks (Amazon Web Services)
Dynamically managing routing and traffic to multiple network resources, such as web servers, app servers, and load balancers across multiple locations is challenging. Amazon Route 53 Traffic Flow provides a visual editor that helps you quickly create sophisticated trees that route traffic to the best endpoint for your application based on latency, health, and other considerations. The tech talk will explain how to use Traffic Flow to solve routing and traffic management use cases like disaster recovery, blue/green deployments, and A/B testing.
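The routing decision Traffic Flow automates can be sketched roughly as follows. Endpoint names and latencies are invented, and real Route 53 policies are configured in the visual editor rather than coded, so treat this only as the shape of the decision:

```python
# Toy version of a latency-plus-health routing policy: among healthy
# endpoints, pick the one with the lowest observed latency; with no
# healthy endpoint, fail over (e.g. to a disaster-recovery site).

def pick_endpoint(endpoints):
    """endpoints: list of dicts with 'name', 'healthy', 'latency_ms'."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoint: fail over to DR site")
    return min(healthy, key=lambda e: e["latency_ms"])["name"]

fleet = [
    {"name": "eu-west", "healthy": True, "latency_ms": 40},
    {"name": "us-east", "healthy": True, "latency_ms": 90},
    {"name": "ap-south", "healthy": False, "latency_ms": 10},
]
print(pick_endpoint(fleet))  # eu-west
```

Blue/green deployments and A/B testing follow the same pattern with weighted rather than latency-based selection.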
A brief study on Kubernetes and its components (Ramit Surana)
Kubernetes is an open-source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. Using the concepts of "labels" and "pods", it groups the containers that make up an application into logical units for easy management and discovery.
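The label/pod grouping idea can be sketched in a few lines. The pod names and labels below are invented, and real selection happens inside the Kubernetes API server, not in client code like this:

```python
# Sketch of how a label selector groups pods into a logical unit:
# a pod matches when its labels include every key/value in the selector.

def select(pods, selector):
    """Return names of pods whose labels satisfy the selector."""
    return [name for name, labels in pods.items()
            if all(labels.get(k) == v for k, v in selector.items())]

pods = {
    "web-1": {"app": "shop", "tier": "frontend"},
    "web-2": {"app": "shop", "tier": "frontend"},
    "db-1":  {"app": "shop", "tier": "backend"},
}
print(select(pods, {"app": "shop", "tier": "frontend"}))  # ['web-1', 'web-2']
```

This is exactly how a Service or ReplicationController decides which pods it manages: by selector match, not by explicit enumeration.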
The Technology Introduction Series brings you tutorials from experts and organisations across the telecom industry.
In this video, Jim Morrish, Founding Partner of Transforma Insights provides a tutorial on Edge Computing. Transforma Insights is a leading research firm focused on the world of Digital Transformation (DX).
In this presentation, Jim covers the following topics:
Definitions of Edge Computing.
How and why Edge Computing is used.
Planning for deployment of Edge Computing.
Forecasts for Edge Computing.
All our #3G4G5G slides, videos, blogs and tutorials are available at:
Videos: https://www.youtube.com/3G4G5G
Slides: https://www.slideshare.net/3G4GLtd
6G and Beyond-5G Page: https://www.3g4g.co.uk/6G/
Free Training Videos: https://www.3g4g.co.uk/Training/
3G4G Website – https://www.3g4g.co.uk/
3G4G Blog – https://blog.3g4g.co.uk/
Telecoms Infrastructure Blog – https://www.telecomsinfrastructure.com/
Operator Watch Blog – https://www.operatorwatch.com/
Connectivity Technology Blog – https://www.connectivity.technology/
Free 5G Training – https://www.free5gtraining.com/
Free 6G Training – https://www.free6gtraining.com/
Cloud computing is introduced with emphasis on the underlying technology, explaining that more than virtualization is involved. Topics covered include: Cloud Technologies, Web Applications, Clustering, Terminal Services, Application Servers, Virtualization, Hypervisors, Service Models, Deployment Models, and Cloud Security.
Join us to learn the concepts and terminology of Kubernetes, such as Nodes, Labels, Pods, Replication Controllers, and Services. After taking a closer look at the Kubernetes master and the nodes, we will walk you through the process of building, deploying, and scaling microservices applications. Each attendee gets $100 credit to start using Google Container Engine. The source code is available at https://github.com/janakiramm/kubernetes-101
The use cases for blockchain in healthcare will start in small projects that reduce duplicative work but can eventually shift to a system where patients control access rights to their data.
What is Your Edge? From the Cloud to the Edge, Extending Your Reach (SUSE)
As companies continue to take advantage of the benefits of cloud – increased flexibility, faster innovation, and rapid response to business demands – it is no wonder that they want to extend these benefits to the edge. But there are still a lot of questions.
It is an exciting time in computing, with sea changes happening on both the technology and application fronts. Networked sensors and embedded platforms with significant computational capabilities and access to backend utility computing resources offer a tremendous opportunity to realize large-scale cyber-physical systems (CPS) that address many societal challenges, including emergency response, disaster recovery, surveillance, and transportation. Referred to as situation-awareness applications, they are latency-sensitive, data-intensive, involve heavy-duty processing, run 24x7, and result in actuation with possible retargeting of sensors. Examples include surveillance deploying large-scale distributed camera networks, and personalized traffic alerts in vehicular networks using road and traffic sensing. This talk covers ongoing research in Professor Ramachandran's embedded pervasive lab to provide system support for the Internet of Things.
Walking through the fog (computing) - Keynote talk at Italian Networking Work... (FBK CREATE-NET)
"Walking through the fog (computing): trends, use-cases and open issues"
Despite its huge success in many IT-enabled application scenarios, cloud computing has shown some intrinsic limitations that may severely limit its adoption in contexts where constraints such as preserving data locally, ensuring real-time reactivity, or guaranteeing continuity of operation despite a lack of Internet connectivity (or a combination of these) are mandatory. These distinguishing requirements have fostered increased interest in computing approaches that inherit the flexibility and adaptability of the cloud paradigm while acting in proximity to a specific scenario. As a consequence, the emergence of this "proximity computing" approach has exploded into a plethora of architectural solutions (and novel terms) such as fog computing, edge computing, dew computing, and mist computing, but also cloudlets, mobile cloud computing, and mobile edge computing (and probably a few others I may not be aware of…). The talk will initially attempt to introduce some clarity among these "foggy" definitions by proposing a taxonomy that helps identify their peculiarities as well as their overlaps. Afterwards, the most important components of a generalized proximity computing architecture will be explained, followed by descriptions of a few research works and use cases investigated within our Center and based on this emerging paradigm. An overview of open issues and interesting research directions will conclude the talk.
Over the past five years, cloud computing has gone from a curiosity to a core scientific technology. The cloud's relative simplicity, instant availability, and reasonable cost have made it attractive to scientists, especially in domains relatively new to large-scale data analysis. This trend will continue into the foreseeable future, challenging resource providers to adapt their services, to provide easy federation with other providers, and to accommodate many different scientific disciplines. For developers of cloud services, there are also many challenges. Efficient access to, and curation of, large data sets remain largely unsolved problems. Image management also raises new issues, especially if these images are to be shared and trusted. This presentation reviews the current status of cloud computing and presents some ideas on how the upcoming challenges might be met.
Presented at CNAF in Bologna, Italy by Charles Loomis in May 2013.
A description of Grid Computing.
It gives a deep idea of the grid: what grid computing is, why we need it, and why it works the way it does. The history and architecture of grid computing are also covered, along with its advantages, disadvantages, and a conclusion.
Professor Michael Devetsikiotis gave a lecture on "Networked 3-D Virtual Collaboration in Science and Education: Towards 'Web 3.0' (A Modeling Perspective) " in the Distinguished Lecturer Series - Leon The Mathematician.
More Information available at:
http://goo.gl/U5nGq
"A programmable, flexible and scalable network architecture will be required to support efficiently any Industrial-IoT solution. Vendor-Independent Software Defined Network will play a key role to address low latency, secure and real-time solutions. "
SILECS/SLICES - Super Infrastructure for Large-Scale Experimental Computer Sc... (Frederic Desprez)
The aim of the SILECS and SLICES projects is to design and build a large infrastructure for experimental research on various aspects of distributed computing, from small connected objects to the large data centres of tomorrow. This infrastructure will allow end-to-end experimentation with software and applications at all levels of the software layers, from event capture (sensors, actuators) to data processing and storage, radio transmission management, and dynamic deployment of edge computing services, enabling reproducible research on networks programmable at every point. SILECS is the French node of a European infrastructure called SLICES.
Super Infrastructure for Large-Scale Experimental Computer Science: (almost) everything you wanted to know about SILECS/SLICES but didn't dare to ask. Presentation at the "journées du GDR RSD", Nantes, Jan. 23, 2020.
SILECS: Super Infrastructure for Large-scale Experimental Computer Science (Frederic Desprez)
SILECS, based on two existing infrastructures (FIT and Grid'5000), aims to provide a large, robust, trustable, and scalable instrument for research in distributed computing and networks. Experiments involving the Internet of Things, data centers, cloud computing, security services, and the networks connecting them will be possible, in a reproducible way, on various hardware and software. This instrument will offer a multi-platform experimental infrastructure (HPC, cloud, big data, software-defined storage, IoT, wireless, software-defined network/radio) capable of exploring the infrastructures that will be deployed tomorrow and of assisting researchers and industry in how to design, build, and operate multi-scale, robust, and safe computer systems. Diverse digital resources (compute, storage, links, I/O devices) are assembled to support a "playground" at scale.
Challenges and Issues of Next Cloud Computing Platforms (Frederic Desprez)
Cloud computing has now crossed the frontiers of research to reach industry. It is used every day, whether to exchange emails or make reservations on web sites. However, much research remains to be done to improve the performance and functionality of these platforms of tomorrow. In this talk, I will give an overview of some of the theoretical and applied research done at INRIA, particularly around cloud distribution, energy monitoring and management, massive data processing and exchange, and resource management.
Grid'5000: Running a Large Instrument for Parallel and Distributed Computing ... (Frederic Desprez)
The increasing complexity of available infrastructures (hierarchical, parallel, distributed, etc.) with specific features (caches, hyper-threading, dual cores, etc.) makes it extremely difficult to build analytical models that allow for satisfying predictions. This raises the question of how to validate algorithms and software systems if a realistic analytic study is not possible. As for many other sciences, one answer is experimental validation. However, such experimentation relies on the availability of an instrument able to validate every level of the software stack and offering different hardware and software facilities for compute, storage, and network resources.
Almost ten years after its beginnings, the Grid'5000 testbed has become one of the most complete testbeds for designing and evaluating large-scale distributed systems. Initially dedicated to the study of large HPC facilities, Grid'5000 has evolved to address wider concerns related to desktop computing, the Internet of Services, and more recently the Cloud Computing paradigm. We now target new processor features such as hyperthreading, turbo boost, and power management, as well as large applications managing big data. In this keynote we address both the issue of experiments in HPC and computer science, and the design and usage of the Grid'5000 platform for various kinds of applications.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
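Under the hood of the integration the webinar describes, JMeter's InfluxDB backend listener ships samples to the database using InfluxDB's line protocol. A hand-rolled sketch of building one such line follows; the tag and field names mirror common JMeter metrics but are illustrative, not the listener's exact output:

```python
# InfluxDB line protocol: measurement,tag=val,... field=val,... timestamp
# This builds one line by hand to show the format Grafana queries against.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "jmeter",
    {"application": "shop", "transaction": "login"},
    {"count": 42, "avg": 187.5},
    1700000000000000000,
)
print(line)
```

Grafana dashboards then simply query these measurements by tag, which is why the JMeter/InfluxDB/Grafana stack needs no custom glue code.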
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
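As a taste of what the grid simulation tools above involve, here is a toy DC power flow, the linearized model that full power-flow engines such as PowSyBl's generalize. The 3-bus network, its reactances, and its injections are invented for illustration; this is not PowSyBl's API.

```python
# Toy DC power flow: solve B·theta = P for bus voltage angles, then
# derive line flows as (theta_i - theta_j) / x_ij. Bus 0 is the slack
# bus (angle fixed at 0), so we solve the reduced 2x2 system for
# buses 1 and 2 using Cramer's rule.

def dc_power_flow(susceptance, injections):
    """Solve the 2x2 reduced system B·theta = P."""
    (a, b), (c, d) = susceptance
    p1, p2 = injections
    det = a * d - b * c
    return ((p1 * d - b * p2) / det, (a * p2 - p1 * c) / det)

# Lines 0-1, 0-2, 1-2, each with reactance x = 0.1 p.u. (susceptance 10),
# 1 p.u. generation at bus 1 and 1 p.u. load at bus 2.
B = ((20.0, -10.0), (-10.0, 20.0))  # reduced bus susceptance matrix
P = (1.0, -1.0)                     # net injections at buses 1 and 2
theta1, theta2 = dc_power_flow(B, P)
flow_1_to_2 = (theta1 - theta2) * 10  # p.u. flow on line 1-2
print(round(flow_1_to_2, 3))          # 0.667
```

Two thirds of the injected power takes the direct line 1-2 and one third detours through the slack bus, a small example of how flows split across parallel paths in a meshed grid.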
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
(R)evolution of the computing continuum - A few challenges
1. (R)evolution of the computing continuum - A few challenges…
International Symposium on Stabilization, Safety, and Security of Distributed Systems
F. Desprez (INRIA), A. Lebre (IMT Atlantique)
2. Agenda
• Introduction, context, and research issues
• Some recent challenges/scientific issues addressed by the Stack team
1. How to operate a geo-distributed infrastructure
2. Services placement
3. Decentralized indexation
• Experimental infrastructures
• Conclusions
3. Why do we need a computing continuum? (Mahadev Satyanarayanan)
4. Introduction and context
• Huge increase in generated data (2.5 exabytes of new data generated each day)
• More than 50 billion connected devices around the world
• Moving the data from IoT devices to the cloud is an issue
• New applications (time-sensitive, location-aware) with ultra-low latency requirements
• Privacy issues
• Solution: a computing paradigm closer to where the data is generated and used
Impossible!
6. Several ‘flavors’ of distributed computing
• Cloud computing
  • Ubiquitous, on-demand access to shared computing resources. Virtualization. Elasticity. IaaS, PaaS, SaaS.
• Fog computing
  • “Horizontal system-level architecture that distributes computing, storage, control, and networking closer to the users along a cloud-to-thing continuum” (OpenFog Consortium).
• Mobile computing
  • Mobile, resource-constrained devices, connected through Bluetooth, WiFi, ZigBee, …
• Mobile cloud computing (MCC)
  • An infrastructure where both the data storage and the data processing occur outside of the mobile device, bringing mobile computing applications not just to smartphone users but to a much broader range of mobile subscribers.
• Mobile and ad hoc cloud computing
  • Mobile devices in an ad hoc mobile network form a highly dynamic topology; the network must accommodate devices that continuously join or leave.
All one needs to know about fog computing and related edge computing paradigms: A complete survey, A. Yousefpour et al., Journal of Syst. Arch., Vol 98, Sep. 2019
7. Several ‘flavors’ of distributed computing, contd.
• Edge computing
  • “Computation done at the edge of the network through small data centers that are close to users” (OpenEdge Computing).
• Multi-access Edge Computing (MEC)
  • “A platform that provides IT and cloud-computing capabilities within the Radio Access Network (RAN) in 4G and 5G, in close proximity to mobile subscribers” (ETSI).
• Cloudlet computing
  • Trusted, resource-rich computer or cluster of computers with a strong connection to the Internet, utilized by nearby mobile devices (Carnegie Mellon University).
• Mist computing
  • Dispersed computing at the extreme edge (the IoT devices themselves).
8. Some common characteristics
• Low Latency
  • Nodes are closer to the end users and can offer faster analysis of, and response to, the data generated and requested by the users.
• Geographic Distribution
  • Geo-distributed deployment and management.
• Heterogeneity
  • Collection and processing of information obtained from different sources and collected by several means of network communication.
• Interoperability and Federation
  • Resources must be able to interoperate with each other, and services and applications must be federated across domains.
• Real-Time Interactions
  • Services and applications involve real-time interaction, not just batch processing.
• Scalability
  • Fast detection of variations in workload response time and of changes in network and device conditions, supporting elasticity of resources.
Orchestration in Fog Computing: A Comprehensive Survey, B. Costa et al., ACM Computing Surveys, Vol. 55, No. 2, Jan. 2022.
9. Some research issues
Application lifecycle management (initial deployment, configuration, reconfiguration, maintenance)
• Abstracting the description of the whole application structure; globally optimizing the resources used with respect to multi-criteria objectives (price, deadline, performance, energy, etc.); models and associated languages to describe applications and their objective functions; placement and scheduling algorithms supporting system- and application-level criteria; ...
Infrastructure management
• Virtualization (hyper-converged 2.0 architecture, complexity, heterogeneity, dynamicity, scaling and locality), storage (trade-off between moving computation vs. data; files, BLOBs, key/value systems, geo-distributed graph databases, …), and administration (intelligent orchestrators, geo-distributed scale, automatic adaptation to users' needs, ...)
Hardware
• Trusted hardware solutions, architectural support for high-level features, energy-reduction solutions, new accelerators, …
Security
• Vulnerabilities in VMs, hypervisors and orchestrators; virtual network technologies (SDN, NFV); programming or access interfaces; adapting security policies to a more complex environment; ...
Energy
• End-to-end energy analysis and management of large-scale hierarchical Cloud/Edge/Fog infrastructures covering processing, network, and storage aspects; trade-offs between energy efficiency and other performance metrics in virtualized infrastructures; eco-design of digital applications and services; ...
• …
10. CLOUDLET/FOG/EDGE/CLOUD-TO-IOT CONTINUUM COMPUTING
[Diagram: the continuum from Cloud Computing (cloud latency > 100 ms) down through micro/nano DCs (inter-micro-DC latency 50-100 ms, intra-DC latency < 10 ms) to the edge and extreme-edge frontiers of domestic and enterprise networks, interconnected by wired, wireless, and hybrid links.]
11. CHALLENGE 1: HOW TO GEO-DISTRIBUTE CLOUD APPLICATIONS TO THE EDGE
12. De facto open-source standard to administrate/virtualize/use the resources of one DC
• Scalability?
• Latency/throughput impact?
• Network partitioning issues?
• …
From LAN to WAN? ⇒ Bring Cloud applications to the Edge
INITIATING THE DEBATE WITH OPENSTACK (2016-2021)
13. [Same continuum diagram as slide 10: Cloud Computing (cloud latency > 100 ms), micro/nano DCs (inter-micro-DC latency 50-100 ms, intra-DC latency < 10 ms), edge and extreme-edge frontiers. Annotated: WAN-wide? Collaborative?]
Bring Cloud applications to the Edge
INITIATING THE DEBATE WITH OPENSTACK (2016-2021)
14. 13 million LOC, 186 subservices
Designed for a single location
OPENSTACK (THE DEVIL IS IN THE DETAILS)
16. Collaboration code is required in every service
A broker per service must be implemented
DB values might be location-dependent
Bring Cloud applications to the Edge
COLLABORATION: ADDITIONAL PIECES OF CODE ARE REQUIRED
18. The SCOPE lang: Andy defines the scope of the request in the CLI.
The scope specifies where the request applies.
Bring Cloud applications to the Edge
A SERVICE DEDICATED TO ON-DEMAND COLLABORATIONS
19. openstack server create my-vm --flavor m1.tiny --image cirros.uec --scope {compute: Nantes, image: Paris}
OpenStack Summit Berlin - Nov 2018, Hacking the Edge hosted by Open Telekom Cloud
• A complete model to enhance the scope description with site compositions (e.g., AND, OR)
• List VMs on Nantes and Paris:
openstack server list --scope {compute:Nantes&Paris}
Bring Cloud applications to the Edge
A SERVICE DEDICATED TO ON-DEMAND COLLABORATIONS
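The scope expressions above ({compute: Nantes, image: Paris}, {compute:Nantes&Paris}) suggest a small grammar: per-service site lists composed with AND/OR. A minimal parsing sketch, assuming a simplified grammar in which & means AND and | means OR; this is illustrative only, not Cheops' actual implementation:

```python
# Hypothetical parser for a Cheops-style scope expression such as
# "{compute: Nantes & Paris, image: Paris}". The grammar (comma-separated
# service clauses, '&' for AND, '|' for OR) is an assumption for illustration.

def parse_scope(expr: str) -> dict:
    """Parse '{svc: SiteA & SiteB, other: SiteC}' into
    {'svc': ('AND', ['SiteA', 'SiteB']), 'other': ('AND', ['SiteC'])}."""
    expr = expr.strip().lstrip("{").rstrip("}")
    scope = {}
    for clause in expr.split(","):
        service, _, sites = clause.partition(":")
        if "&" in sites:
            op, parts = "AND", sites.split("&")
        elif "|" in sites:
            op, parts = "OR", sites.split("|")
        else:
            op, parts = "AND", [sites]
        scope[service.strip()] = (op, [s.strip() for s in parts])
    return scope

print(parse_scope("{compute: Nantes & Paris, image: Paris}"))
```

A service receiving the request could then forward each sub-request to the sites named in its own clause.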
21. • Expose consistency policies at the user level (extend the scope syntax)
• Manage the dependencies between resources
• Notion of a replication set: manage a fixed pool of resources with an automatic control loop (implemented in a geo-distributed way at the Cheops level)
Replication overview/challenge
Bring Cloud applications to the Edge
CHEOPS AS A BUILDING BLOCK TO DEAL WITH GEO-DISTRIBUTION
22. Manage partition issues using appropriate replication/aggregation policies
Cross overview/challenge
Bring Cloud applications to the Edge
CHEOPS AS A BUILDING BLOCK TO DEAL WITH GEO-DISTRIBUTION
23. A bit more complicated than it looks…
Delavergne, Marie; Antony, Geo Johns; Lebre, Adrien. Cheops, a service to blow away Cloud applications to the Edge. To appear in ICSOC 2022.
Bring Cloud applications to the Edge
TOWARD A GENERALISATION OF THE SERVICE (OpenStack/Kubernetes/…)
25. Service placement problems
How to assign IoT applications to computing nodes (Fog nodes) distributed in a Fog environment?
• Different kinds of applications
  • Monolithic services, data pipelines, sets of inter-dependent components, Directed Acyclic Graphs (DAGs)
• Several constraints
  • Computing and networking resources are heterogeneous and dynamic; computing and network resources are not always available; services cannot be processed everywhere
• Different approaches
  • Centralized or distributed approaches
  • Online or offline placement
  • Static or dynamic
  • Mobility support
• Different performance criteria
  • Execution time, quality of service, latency, energy consumption
• Problem formulations
  • Linear programming: Integer Linear Programming (ILP), Integer Nonlinear Programming (INLP), Mixed Integer Linear Programming (MILP), Mixed Integer Nonlinear Programming (MINLP), Mixed Integer Quadratic Programming (MIQP)
  • Constraint programming, Markov decision processes, stochastic optimization, potential games, …
An overview of service placement problems in Fog and Edge Computing. F. Ait-Salaht, F. Desprez, and A. Lebre. ACM Computing Surveys, Vol. 53, Issue 3, May 2021
26. Service Placement Problem using constraint programming and the Choco solver
• Goals
  • Elaborate a generic and easy-to-upgrade model
  • Define a new formulation of the placement problem considering a general definition of service and infrastructure networks through graphs, using constraint programming
Service Placement in Fog Computing Using Constraint Programming. F. Ait-Salaht, F. Desprez, A. Lebre, C. Prud’homme and M. Abderrahim. IEEE
27. System model and problem formulation
• Infrastructure
  • A directed graph G = <V,E> represents the network
  • V: set of vertices or nodes (servers)
  • E: set of edges or arcs (connections)
  • Each node defines CPU and RAM capacities
  • Each arc defines a latency and a bandwidth capacity
• Application
  • An application is an ordered set of components
  • A component requires CPU/RAM to work
  • A component can send data (bandwidth, latency)
  • Some components are fixed (e.g., cameras)
28. System model and problem formulation
Placement (mapping): assign services (each component and each edge) to the network infrastructure (nodes and links) such that:
• The CPU capacity of each node is respected
• The same goes for RAM capacity
• Bandwidth capacity is respected on arcs too
• Latencies are satisfied
29. Constraint Programming model (CP)
What is CP?
• CP stands for Constraint Programming
• CP is a general-purpose implementation of Mathematical Programming (MP)
  • MP theoretically studies optimization problems and resolution techniques
• It aims at describing real combinatorial problems in the form of Constraint Satisfaction Problems and solving them with Constraint Programming techniques
• The problem is solved by alternating constraint-filtering algorithms with a search mechanism
• Modeling steps (3)
  • Declare the variables and their domains
  • Find the relations between them
  • Declare an objective function, if any
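The modeling steps above (variables, constraints, objective) can be sketched with a toy brute-force search. The actual work uses the Choco CP solver; the two-node infrastructure, the two-component application, and all numeric values below are invented for illustration:

```python
# Toy sketch of the placement problem: variables are component-to-node
# assignments, constraints are CPU/RAM capacities and link latencies,
# and the objective minimizes total link latency. Values are illustrative.
from itertools import product

# Infrastructure: node capacities and pairwise latencies (assumed values).
nodes = {"fog1": {"cpu": 4, "ram": 8}, "fog2": {"cpu": 2, "ram": 4}}
latency = {("fog1", "fog2"): 20, ("fog2", "fog1"): 20,
           ("fog1", "fog1"): 0, ("fog2", "fog2"): 0}

# Application: components with demands, plus inter-component latency bounds.
components = {"cam": {"cpu": 1, "ram": 1}, "detect": {"cpu": 2, "ram": 4}}
links = [("cam", "detect", 25)]   # (src, dst, max latency)
fixed = {"cam": "fog2"}           # some components are pinned (e.g., cameras)

def feasible(assign):
    # Capacity constraints: per-node CPU/RAM demand sums must fit.
    for n, caps in nodes.items():
        placed = [c for c, host in assign.items() if host == n]
        if sum(components[c]["cpu"] for c in placed) > caps["cpu"]:
            return False
        if sum(components[c]["ram"] for c in placed) > caps["ram"]:
            return False
    # Latency constraints on application links.
    return all(latency[(assign[s], assign[d])] <= lim for s, d, lim in links)

def best_placement():
    names = list(components)
    best, best_cost = None, float("inf")
    for hosts in product(nodes, repeat=len(names)):
        assign = dict(zip(names, hosts))
        if any(assign[c] != n for c, n in fixed.items()):
            continue  # respect pinned components
        if not feasible(assign):
            continue
        cost = sum(latency[(assign[s], assign[d])] for s, d, _ in links)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

print(best_placement())
```

A real CP solver replaces this exhaustive enumeration with constraint filtering plus search, which is what makes the approach scale to the 91-node experiment below.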
39. Experiment 1
Infrastructure: 91 fog nodes. Smart bell application: 86 sensors.
• Requirements
‣ Resources: CPU, RAM, DISK
‣ Networking: latency and bandwidth
‣ Locality
• Objective
‣ Minimize average latency
Implementation of the model with the Choco solver (free open-source Java library dedicated to Constraint Programming)
44. Where is the content I’m looking for?
Locating the closest replica of a specific content requires indexing every live replica along with its location.
Existing solutions:
• Remote services (centralized index, DHT)
  • In contradiction with the objectives of Edge infrastructures: the indexing information might be stored in a node that is far away (or even unreachable) while the replica could be in the vicinity
• Broadcast
  • Maintaining such an index at every node would prove overly costly in terms of memory and traffic (it does not confine the traffic)
• Epidemic propagation
46. Challenges
How to maintain such a logical partitioning in a dynamic environment where…
• Nodes can ADD or DELETE content at any time (no synchronization)
• Nodes can join or leave the system at any time (without any warning)
…while limiting the scope of transferred information as much as possible
48. Lock Down the Traffic of Decentralized Content Indexing at the Edge, B. Nedelec et al., ICA3PP 2022
A preliminary step toward a complete solution:
• Definitions of the properties that guarantee decentralized consistent partitioning in dynamic infrastructures
• Demonstration that concurrent creation and removal of partitions may impair the propagation of control information
• Proposal of a first algorithm solving this dynamic partitioning problem (and its evaluation by simulations)
50. Experimental infrastructures
SILECS/SLICES: Super Infrastructure for Large-Scale Experimental Computer Science
• The discipline of computing: an experimental science
  • The studied objects are more and more complex (hardware, systems, networks, programs, protocols, data, algorithms, …)
• A good experiment should fulfill the following properties
  • Reproducibility: must give the same result with the same input
  • Extensibility: must target possible comparisons with other works and extensions (more/other processors, larger data sets, different architectures)
  • Applicability: must define realistic parameters and must allow for an easy calibration
  • “Revisability”: when an implementation does not perform as expected, must help to identify the reasons
• ACM Artifact Review and Badging
51. SILECS/Grid’5000
• Testbed for research on distributed systems
  • Born in 2003 from the observation that we need a better and larger testbed
  • HPC, Grids, P2P, and now Cloud computing and Big Data systems
• Complete access to the nodes’ hardware in an exclusive mode (from one node to the whole infrastructure)
  • Dedicated network (RENATER)
  • Reconfigurable: nodes with Kadeploy and network with KaVLAN
• Current status
  • 8 sites, 36 clusters, 838 nodes, 15116 cores
  • Memory: ~100 TiB RAM + 6.0 TiB PMEM; storage: 1.42 PB (1515 SSDs and 953 HDDs on nodes); 617.0 TFLOPS (excluding GPUs)
  • Diverse technologies/resources (Intel, AMD, Myrinet, InfiniBand, two GPU clusters, energy probes)
• Some experiment examples
  • In situ analytics, Big Data management
  • HPC programming approaches, batch scheduler optimization
  • Network modeling and simulation
  • Energy consumption evaluation
  • Large virtual machine deployments
52. SILECS/FIT
Providing Internet players access to a variety of fixed and mobile technologies and services, thus accelerating the design of advanced technologies for the Future Internet
53. Experiments
• Discovering resources from their description
• Reconfiguring the testbed to meet experimental needs
• Monitoring experiments, extracting and analyzing data
• Controlling experiments: API
59. Conclusion
Disconnection is the norm
• High latency, unreliable connections
• Logical partitioning (Edge areas/zones)
A (r)evolution of distributed systems and networks?
• Algorithms and (distributed) system building blocks should be revised to satisfy geo-distributed constraints
• Decentralized vs. collaborative (e.g., DHT, network ASes)
60. Questions / THANKS
Post-scriptum
• We are looking for students, PhD candidates, postdocs, engineers, researchers, associate professors (AI/infrastructure experts, this is trendy ;-)), use cases, funding, collaborations…
• We propose … a lot of fun and work!
http://stack.inria.fr
Editor's Notes
CDF = cumulative Distribution Function of responses times (3 runs)
E2E = Eye to Eye
VR < 20 ms
Todo: comparative orders of magnitude
A simplified version of the edge, but can we already operate such an infrastructure?
First application: OpenStack
Can we operate an edge infrastructure with a single instance (aka. a single controlplane) of OpenStack?
Good results, but with OpenStack it is impossible to provision VMs during a network disconnection
One version of OpenStack on every site
Studying a biological system (one that evolves drastically every 6 months)
Nova is the OpenStack project that provides a way to provision compute instances (aka virtual servers). Glance: OpenStack Image service
Last scenario when Andy wants to launch a VM instance which is only available on site 2
Having a solution without changing the code
Cheops: new dedicated service acting as a proxy
K8S = Kubernetes
ILP = CPLEX, First fit with backtrack, Genetic alg., Xia et al., Choco
If we look at the state of the art, when a client wants to access a specific content, it has to ask a remote node to provide at least one node identity to retrieve this content from. After retrieving the content, the client can create another replica to improve the performance of future accesses, but it must then recontact the indexing service to notify it of the creation of this new replica.
This approach has two drawbacks.
- First, accessing a remote node to request content location(s) raises hot-spot and availability issues. Most importantly, it results in additional delays [3,12] that occur even before the actual download starts.
- Second, the client gets a list of content locations at the discretion of content indexing services. Without information about these locations, it often ends up downloading from multiple hosts, yet only keeping the fastest answer. In turn, clients either waste network resources or face slower response times.
A naive approach would be for every node to index and rank every live replica along with its location information. When creating or destroying a replica, a node would notify all other nodes by broadcasting its operation.
This flooding approach performs poorly, as a node may acknowledge the existence of replicas at the other side of the network while a replica already exists next to it.
A promising approach would be to use epidemic propagation by limiting the propagation only to a subset of relevant nodes. To better understand this idea, let’s discuss a concrete example.
In this example, we consider a Node R that creates a new content and that efficiently advertises its content by epidemic propagation. At the end of the epidemic phase, every node can request R to get the content if needed
Let’s consider that Node G gets the content and creates a second replica, splitting the red set in two (now we have a set of nodes in red that should request R and a set in green that should request G in order to reach the closest replica host).
In this example, we consider the geographical distance but the notion of distance can be defined in a more advanced way considering latency, throughput, robustness etc.).
Now, let’s consider that Node B creates another replica. Node B needs to notify only a small subset of nodes (resulting in 3 sets: red, green, and blue).
Finally, let’s consider G destroys its replica. Nodes that belonged to its partition must find the closest partition they are in, resulting at the end in two sets (red and blue).
While it makes sense for Node G to broadcast its removal, Node B and Node R cannot afford to continuously advertise their replicas to fill the gap left open by Node G. A better approach would be to trigger notifications at the bordering nodes of the red and blue partitions once again. In other words, the indexing problem can be seen as a distributed and dynamic partitioning problem.
This dynamic partitioning raises additional challenges related to concurrent operations where removed partitions could block the propagation of other partitions.
So the problem that should be tackled is: how can we maintain such a logical partitioning in a dynamic environment where
nodes can add….
Node can join…
While limiting the scope of the messages between nodes as much as possible (network confinement)
Just to give you an idea of the consistency issue, let’s consider the following example.
In the first part (a)): nodes a and c create a new replica of a the same content.
The colors illustrate in which partition node belong to (black no partition).
So node a belongs to the blue partition and node c to the green one.
Both nodes send a creation message to their neighbour (here b).
b) Let’s consider node b receives the notification of node a, so it joins the blue partition and forwards the notification towards C (alpha a3,3 since distance equals AB+BC: 2+1).
Meanwhile, node a and node c delete the replica and so send a new notification related to the removal to b (respectively deltaA and deltaB). Once again nodes evolve independently from the broadcasted messages.
c) The creation notification from c to b (alpha c1) is finally received on b, so b joins the green partition since the distance is better and forwards this message to A (alpha c3).
The removal message from node a is received on b. Since node b belongs to the green partition, it does not consider the notification related to the removal of the replica sent by node a (deltaA is discarded). Remember that the goal is to mitigate the network traffic as much as possible.
Meanwhile, node c receives the initial creation of node a. Since it does not have the replica anymore, it joins the blue partition (here we have a first inconsistency, since b believes its closest replica is on node C while node C believes it is on node a going through node b, which is obviously not possible). Anyway let’s continue the scenario.
d) node A receives the initial notification of node c, since it does not have the replica anymore, it joins the green partition (although the content has been already deleted at C but there is no way for node A to be aware of that). Node b receives the removal notification from node c and so leaves the green partition and forwards the notification to node A.
e) node A receives the removal notification and leaves in its turn the green partition.
f) at the end, node C belongs to a partition that does not exist anymore, and if C had children, they would stay in the wrong partition too.
Without diving into details or presenting the algorithm, the idea is to echo creation and removal notifications. If a node receives a notification that it has already processed and that it knows is deprecated, it will echo the previous message (in the previous case, the removal notification that was discarded on node B will be triggered once again toward node C). For further details, please refer to the article.