Our slide deck, used at LinuxCon+ContainerCon+CLOUDOPEN China 2018, on Kubernetes cluster design considerations and our journey to a 1000+ node single cluster with IBM Cloud.
Gaetano Borgione's presentation from the 2017 Open Networking Summit.
Networking is vital for cloud-native apps, where distributed computing and development models require speed, simplicity, and scale for a massive number of ephemeral containers. Two of the most prevalent container networking models are CNI and CNM for developers using Docker, Mesos, or Kubernetes. This session will present an overview of distributed development, how the CNI and CNM models work, and how container frameworks use these models for networking. Gaetano will also discuss the additional functions users need to consider in the control plane and data plane to achieve operational scale and efficiency.
Keeping your Kubernetes Cluster Secure, by Gene Gotimer
Many organizations are shifting to containers and Kubernetes, and that move means learning new ways to secure their environments. Kubernetes clusters have to be hardened at different levels. We have to consider the nodes where the Kubernetes control plane is running. We also need to secure the Kubernetes workloads and check the files that create them. And we need to inspect the containers we are using for vulnerabilities and unusual behavior.
Gene will show you some open-source tools that can find issues and vulnerabilities at each layer. You will see how they can be used in a pipeline to build your Kubernetes cluster safely and keep it secure.
Kubernetes is one of the biggest players when it comes to container orchestration. Since Kubernetes 1.7, RBAC (Role-Based Access Control) has been the default for authorizing actions in your cluster. There are many other components, like Pod Security Policies, Network Policies, and Admission Controllers, that allow you to secure your Kubernetes cluster.
In this talk I will show you how these components work together and which problems they try to solve. I will also give an overview of how other tools, like Vault, can fit into the Kubernetes ecosystem to make your platform more secure.
Event: DevFest Karlsruhe, 09.12.2017
Speaker: Johannes M. Scheuermann
More tech talks: https://www.inovex.de/de/content-pool/vortraege/
More tech articles: https://www.inovex.de/blog/
Kubernetes is the new cloud OS, and enterprises are rapidly migrating existing applications to Kubernetes as well as creating new Kubernetes-native applications. However, Kubernetes configuration management remains complex, and due to this complexity, most implementations do not leverage Kubernetes constructs for security.
In this session you will learn:
- Key Kubernetes constructs to use for properly securing application workloads in any cloud
- How to manage Kubernetes configurations across multiple clusters and cloud providers
- How to audit and enforce enterprise-wide Kubernetes best practices
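One of the key security constructs referenced above is the NetworkPolicy. As a sketch (the `app: backend` and `app: frontend` labels and the port are hypothetical), a policy that only admits traffic to a backend from frontend pods might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend          # pods this policy applies to
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080
```

Note that NetworkPolicies are only enforced if the cluster's CNI plugin supports them (e.g. Calico or Cilium do; the basic bridge plugin does not).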
The Good, the Bad and the Ugly of Networking for Microservices, by Mathew Lodg... (Docker, Inc.)
The good: Advances in container networking in the past year, including the emergence of “Micro SDNs” as the way to simplify Docker deployments
The bad: Understanding live network behavior and troubleshooting
The ugly: Multicast, security, robustness and resiliency at scale
In this talk you will learn how to weave Dockerized microservices together without tying yourself in knots or putting your head in a noose. You’ll learn how to effectively use micro SDNs, service discovery and request routing. You'll also see how to solve the bad and the ugly connectivity challenges in your development and production environments.
Kuryr-Kubernetes: The perfect match for networking cloud native workloads - I... (Cloud Native Day Tel Aviv)
The Kuryr project offers an interesting approach to networking cloud native workloads by enabling container orchestration engines to consume network services from OpenStack Neutron. With pod-in-VM support, Kuryr-Kubernetes enables a whole slew of new hybrid workloads, like bare metal or in-VM pods accessing services that run on VMs, multiple COEs (e.g. Docker Swarm and Kubernetes), and more. Unified networking simplifies deployment and configuration, and provides a single pane of glass for management and troubleshooting.
Let's dive into Kuryr-Kubernetes and learn how different open source technologies can complement each other to enable a number of complex deployment scenarios.
Secure Your Containers: What Network Admins Should Know When Moving Into Prod..., by Cynthia Thomas
This session offers techniques for securing Docker containers and hosts using open source network virtualization technologies to implement microsegmentation. Come learn real tips and tricks that you can apply to keep your production environment secure.
Docker Online Meetup #22: Docker Networking, by Docker, Inc.
Building on top of his talk at DockerCon 2015, Jana Radhakrishnan, Lead Software Engineer at Docker, does a deep dive into Docker Networking with additional demos and insights on the product roadmap.
Containers require a new approach to networking. How are your containers communicating with each other? This talk walks through the different network topologies of Kubernetes: how Kubernetes addresses networking compared to traditional physical networking concepts, what your options for networking in Kubernetes are, and what the CNI (Container Network Interface) is and how it affects Kubernetes networking.
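For context on the CNI piece: a CNI plugin is configured by a small JSON file placed on each node (conventionally under /etc/cni/net.d/). A minimal sketch, assuming the standard `bridge` and `host-local` IPAM plugins and a made-up subnet:

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

The kubelet hands each new pod's network namespace to the configured plugin, which here attaches a veth pair to the `cni0` bridge and allocates the pod an IP from the stated subnet.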
Security best practices for Kubernetes deployment, by Michael Cherny
Security best practices for a Kubernetes deployment - from development through build, ship, networking, and runtime controls.
Was presented at New York Kubernetes meetup https://www.meetup.com/New-York-Kubernetes-Meetup/events/237790149/
Slides for the OpenStack Newton Summit in Austin that cover the changes made during the Mitaka cycle and the direction we will take for Neutron. Swarm and Kubernetes integrations are explained.
Meetup 12-12-2017 - Application Isolation on Kubernetes, by dtoledo67
Here are the slides I presented on 12-12-2017 at the Bay Area Microservices Meeting, covering some of the best practices for achieving application isolation on Kubernetes.
A basic introductory slide set on Kubernetes: What does Kubernetes do, what does Kubernetes not do, which terms are used (Containers, Pods, Services, Replica Sets, Deployments, etc...) and how basic interaction with a Kubernetes cluster is done.
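To make those basic terms concrete, here is a minimal sketch of a Deployment, which manages a ReplicaSet, which in turn keeps a set of identical Pods running; the name, labels, and image below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3                 # desired number of pod replicas
  selector:
    matchLabels:
      app: hello-web
  template:                   # pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.21     # illustrative image
        ports:
        - containerPort: 80
```

Basic interaction then happens through kubectl, e.g. `kubectl apply -f deployment.yaml` to create it and `kubectl get pods` to watch the three replicas come up; a Service would typically be added to give them a stable virtual IP.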
DockerCon EU 2015: The Missing Piece: when Docker networking unleashing soft ..., by Docker, Inc.
Presented by Adrien Blind, DevOps Coach, Société Générale and Laurent Grangeau, Solutions Architect, Finaxys
Docker now provides several building blocks, combining engine, clustering, and componentization, while the new networking and service features enable many new use cases such as multi-tenancy.
In this session, you will first discover the new experimental networking and service features expected soon, and then drift rapidly to software architecture, explaining how a complete Docker stack unleashes microservices paradigms.
Using Apache Spark to analyze large datasets in the cloud presents a range of challenges. Different stages of your pipeline may be constrained by CPU, memory, disk and/or network IO. But what if all those stages have to run on the same cluster? In the cloud, you have limited control over the hardware your cluster runs on.
You may have even less control over the size and format of your raw input files. Performance tuning is an iterative and experimental process. It’s frustrating with very large datasets: what worked great with 30 billion rows may not work at all with 400 billion rows. But with strategic optimizations and compromises, 50+ TiB datasets can be no big deal.
Using the Spark UI and simple metrics, we explore how to diagnose and remedy issues in jobs:
Sizing the cluster based on your dataset (shuffle partitions)
Ingestion challenges – well begun is half done (globbing S3, small files)
Managing memory (sorting GC – when to go parallel, when to go G1, when offheap can help you)
Shuffle (give a little to get a lot – configs for better out of box shuffle) – Spill (partitioning for the win)
Scheduling (FAIR vs FIFO, is there a difference for your pipeline?)
Caching and persistence (it’s the cost of doing business, so what are your options?)
Fault tolerance (blacklisting, speculation, task reaping)
Making the best of a bad deal (skew joins, windowing, UDFs, very large query plans)
Writing to S3 (dealing with write partitions, HDFS and s3DistCp vs writing directly to S3)
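As a rough illustration of the first bullet (sizing shuffle partitions to your dataset), one common heuristic, and it is only a heuristic, is to target a fixed per-partition size on the order of 100-200 MiB; the function name and default below are our own choices, not from the talk:

```python
import math

def suggest_shuffle_partitions(shuffle_input_bytes,
                               target_partition_bytes=128 * 1024**2):
    """Suggest a value for spark.sql.shuffle.partitions.

    Heuristic only: aim for shuffle partitions of roughly 128 MiB
    each, and never suggest fewer than one partition.
    """
    return max(1, math.ceil(shuffle_input_bytes / target_partition_bytes))

# 1 TiB of shuffle data at ~128 MiB per partition -> 8192 partitions
print(suggest_shuffle_partitions(1024**4))
```

The result would then be applied with `spark.conf.set("spark.sql.shuffle.partitions", n)`; in practice you iterate, since skewed keys can make a few partitions far larger than the average.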
Presented at Spark+AI Summit Europe 2019
https://databricks.com/session_eu19/apache-spark-at-scale-in-the-cloud
Burst workloads: Cutting costs with Kubernetes and Virtual Kubelet, by Adi Polak
By running your workloads in Kubernetes, you can focus on designing and building your applications instead of managing the infrastructure that runs them.
But wait! What about the cost?
With the Virtual Kubelet provider for AKS and Azure Container Instances, both Linux and Windows containers can be scheduled on a container instance as if it is a standard Kubernetes node. This configuration allows you to take advantage of both the capabilities of Kubernetes and the management value and cost-benefit of container instances.
In this talk you will learn how to deploy an application to AKS and ACI with Virtual Kubelet, while leveraging the scalability of Kubernetes and the cost efficiency of ACI.
https://dev.to/adipolak/kubernetes-and-virtual-kubelet-in-a-nutshell-gn4
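To give a flavor of the scheduling described above: virtual nodes created by the Virtual Kubelet are tainted, so a pod must opt in to running there. A sketch following the pattern Azure documents for ACI virtual nodes (the image name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aci-example
spec:
  containers:
  - name: app
    image: myregistry/myapp:latest   # placeholder image
  nodeSelector:
    kubernetes.io/role: agent
    type: virtual-kubelet            # target the virtual node
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists                 # tolerate the virtual node's taint
```

Pods without this toleration keep landing on regular VM-backed nodes, which is what makes the virtual node safe to add for burst capacity only.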
CloudZone's Meetup at Google offices, 20.08.2018
Covering Google Cloud Platform Kubernetes Engine in Depth, including networking, compute, storage, monitoring & logging
Latest (storage IO) patterns for cloud-native applications, by OpenEBS
Applying microservice patterns to storage by giving each workload its own Container Attached Storage (CAS) system. This puts the DevOps persona in full control of the storage requirements and brings data agility to k8s persistent workloads. We will go over the concept and implementation of CAS, as well as its orchestration.
SQL Start! 2020 - SQL Server Lift & Shift on Azure, by Marco Obinu
Slides from the session delivered during SQL Start! 2020, where I illustrate different approaches to determining the best landing zone for your SQL Server workloads.
Video (ITA): https://youtu.be/1hqT_xHs0Qs
Migration of an Enterprise UI Microservice System from Cloud Foundry to Kuber..., by Tony Erwin
Presented at Open Source Summit Japan with Jonathan Schweikhart on June 21, 2018.
Abstract: The 40 Node.js microservices making up the IBM Cloud UI historically have been deployed as apps on Cloud Foundry (CF), an open source PaaS. But, recently, this enterprise microservice system has been migrated to run on Kubernetes to take advantage of improved orchestration, higher availability, and better performance. Tony Erwin & Jonathan Schweikhart will discuss their team's journey and provide you with insights into the advantages of Kube over CF. Even more importantly, they will describe approaches to solving new problems that took the place of old ones, such as: 1) adapting PaaS apps to run as containers on Kube, 2) enabling geo load balancing between the different runtimes (to vet Kube before completely turning off CF), 3) integrating tools like Prometheus into existing monitoring systems, and more! Their team's first-hand experiences will help you avoid pitfalls as you prepare your own migrations to Kube!
Link to Info on Talk: https://ossalsjp18.sched.com/event/EaYj/migration-of-an-enterprise-ui-microservice-system-from-cloud-foundry-to-kubernetes-tony-erwin-jonathan-schweikhart-ibm?iframe=no
NOTE: CF is always evolving and the limitations on private networking and private host names mentioned in the slides are no longer current. If you have access to CF API 2.115.0 or higher (released on June 25, 2018), you can leverage CF's service discovery feature (see https://docs.cloudfoundry.org/devguide/deploy-apps/cf-networking.html#discovery ).
Simplify Your Way To Expert Kubernetes Management, by DevOps.com
Kubernetes is a deep and complex technology that is evolving fast with new functionality and a growing ecosystem of cloud-native solutions. While the public cloud delivers an almost frictionless user experience, configuring and managing a production Kubernetes environment is an enormous technical challenge for the majority of enterprises that choose to do so on premises. Without the right approach, operationalizing Kubernetes in the data center can take upwards of 6 months, jeopardizing developer productivity and speed-to-market.
In this webinar, you’ll learn from Nutanix cloud native experts on how to fast-track your way to operationalizing a production-ready Kubernetes environment on-prem.
Specifically, we’ll talk about:
How containerized applications use IT resources (and why legacy infrastructure isn’t built for Kubernetes);
The main advantages of running Kubernetes on prem (as part of a multi-cloud strategy);
Key aspects of Kubernetes lifecycle management that greatly benefit from automation.
Why Kubernetes as a container orchestrator is a right choice for running spar..., by DataWorks Summit
Building and deploying an analytic service in the cloud is a challenge; maintaining that service is a bigger one. In a world where users are gravitating towards a model in which cluster instances are provisioned on the fly, used for analytics or other purposes, and then shut down when the jobs are done, the relevance of containers and container orchestration is more important than ever.
Container orchestrators like Kubernetes can be used to deploy and distribute modules quickly, easily, and reliably. The intent of this talk is to share the experience of building such a service and deploying it on a Kubernetes cluster. In this talk, we will discuss all the requirements which an enterprise grade Hadoop/Spark cluster running on containers bring in for a container orchestrator.
This talk will cover in detail how the Kubernetes orchestrator can be used to meet all our needs for resource management, scheduling, networking, network isolation, volume management, etc. We will discuss how we replaced our home-grown container orchestrator, which used to manage the container lifecycle and resources in accordance with our requirements, with Kubernetes. We will also discuss the container orchestrator features that are helping us deploy and patch thousands of containers, as well as those we believe need improvement or could be enhanced.
Speaker
Rachit Arora, SSE, IBM
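For context on the abstract above: submitting a Spark job in Spark's native Kubernetes mode is done through spark-submit. The sketch below uses real Spark-on-Kubernetes flags, but the API server address, namespace, and image are placeholders, and this is our illustration rather than the configuration from the talk:

```shell
# Submit SparkPi to a Kubernetes cluster in cluster deploy mode.
spark-submit \
  --master k8s://https://my-apiserver:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=5 \
  --conf spark.executor.memory=4g \
  --conf spark.executor.cores=2 \
  --conf spark.kubernetes.namespace=analytics \
  --conf spark.kubernetes.container.image=myregistry/spark:latest \
  local:///opt/spark/examples/jars/spark-examples.jar
```

Kubernetes then schedules the driver and executor pods like any other workload, which is how resource management, isolation, and patching fold into the cluster's normal operations.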
Kubernetes at NU.nl (Kubernetes meetup 2019-09-05), by Tibo Beijen
Slides of the presentation about Kubernetes practices and learnings at NU.nl.
This presentation was the first of two at the Dutch Kubernetes meetup at the Sanoma Netherlands offices, which took place on Sept. 5th, 2019.
Grid middleware is easy to install, configure, secure, debug and manage acros..., by Paul Brebner
A presentation made in 2004 while I was managing the UK OGSA Evaluation Project at the UCL Computer Science department, on leave from CSIRO and working with Wolfgang Emmerich: in which we "believe 6 impossible things before breakfast". This project encountered, and partially solved, many of the problems that cloud computing finally solved.
Paul Brebner, Oxford University Computing Laboratory invited talk: "Grid middleware is easy to install, configure, debug and manage - across multiple sites (One can't believe impossible things)", 15 October 2004.
The project web site is still here (2020): http://sse.cs.ucl.ac.uk/UK-OGSA/
Cloud providers like Amazon or Google have a great user experience for creating and managing PaaS and IaaS services. But is it possible to reproduce the same experience and flexibility locally, in an on-premises datacenter? This talk describes the success story of creating a private cloud based on a DC/OS cluster. It is used to host and share services like Hadoop or Kafka for development teams, and to dynamically manage services and resource pools with GKE integration.
Sergey Dzyuban, "To Build My Own Cloud with Blackjack…" (Fwdays)
Cloud providers like Amazon or Google have a great user experience for creating and managing PaaS. But is it possible to reproduce the same experience and flexibility locally, in an on-premises datacenter? What if your own infrastructure grows too fast and your team can't deal with it in the old way? What do Jenkins, .NET microservices, and TVs for daily meetings have in common?
This talk shares our experience using DC/OS (datacenter operating system) for building flexible and stable infrastructure. I will show the evolution of private cloud from the first steps with Vagrant to the hybrid cloud with instance groups in Google Cloud, the benefits it gives us and the problems we get instead.
Similar to Lc3 beijing-june262018-sahdev zala-guangya (20)
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes hard work. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Rik Marselis's and my slides from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally testing in DevOps. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
1. Not one size fits all, how to size
Kubernetes clusters
Sahdev Zala / Guang Ya Liu
@sp_zala / @gyliu513
spzala@us.ibm.com / liugya@cn.ibm.com
2. Outline
• Some basics
– Containers, Kubernetes
• Kubernetes cluster
– What is it?
– Design and sizing consideration
– Optimization techniques
• Large scale enterprise cluster
– How we created a 1000+ node cluster
– Lessons learned
3. What is Kubernetes?
• Enterprise level container orchestration
• Provision, manage, scale applications (containers)
across a cluster
• Manage infrastructure resources needed by
applications
• Compute
• Volumes
• Networks
• And many many many more...
• Declarative model
• Provide the "desired state" and Kubernetes will make it happen
• What's in a name?
• Kubernetes (K8s/Kube): "Helmsman" in ancient Greek
4. K8s – API vs Compute Resources
• Pod
• ReplicaSet
• Deployment
• Service
• ConfigMap
• Secrets
• Jobs
• …But what about your cluster and
compute resources?
– Node, CPU, Memory ..
5. Kubernetes Cluster
• A running Kubernetes cluster contains a cluster control plane (AKA master) and worker node(s), with cluster state backed by a distributed storage system (etcd)
• A cluster can range from a single node to several nodes. The effort required to set up a cluster varies from running a single command to crafting your own customized cluster
6. Kubernetes Cluster Choices
• Local-machine Solutions
– Minikube, DIND, Ubuntu on LXD
– IBM Cloud Private Community Edition (CE) - https://hub.docker.com/r/ibmcom/cfc-installer/
– Running Kubernetes locally has obvious development advantages, such as lower cost and faster iteration than constantly deploying and tearing down clusters on a public cloud
• Cloud Provider Solutions
– AKS, EKS, GKE, IKS ..
• Hosted/managed clusters
• On-Premises Solutions
– Allow you to create Kubernetes clusters on your internal, secure, cloud network with only a few commands
• IBM Cloud Private, Kubermatic ..
• Turnkey Solutions, Custom Clusters etc.
7. Factors Impacting Cluster Size
• Single node to several nodes - what’s the right size for me?
• What do you want to do with it?
– Just kick the tires - i.e. learn/play/development
– Production-level cluster
– Managed by a cloud provider, or do you want to manage it yourself?
• What do you want to run on it?
– One, a few, or many applications
– Kind of applications
• Big Data/Artificial Intelligence (AI) applications
• CPU- vs memory-intensive
• Stateless or stateful
– Scale vs performance
– Networking needs
– Monitoring and logging needs
• What kind of traffic do you expect?
– Steady heavy traffic
– Burst traffic
• What is your budget?
– Hardware/virtualization infrastructure setup
• Etc.
8. Scale vs Performance
• After reaching a recommended threshold, scale and performance are inversely proportional
– Kubernetes has defined two service level objectives
• Return 99% of all API calls in < 1 sec
• Start 99% of pods within < 5 sec
– According to scalability studies, clusters with more than 5,000 nodes may not be able to achieve these service level objectives
– From what we learned, a single cluster with a maximum of 2,500 nodes should be good enough
• Above that, go for a multi-cluster approach. This is not very stable yet but it’s a WIP. Learn more here: https://github.com/kubernetes-sigs/federation-v2
9. Networking Need
• NodePort, LoadBalancer, or Ingress services?
• Also, what are you using for networking - e.g. Calico, Flannel?
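To make the service-type question concrete, here is a minimal sketch of a NodePort service, assuming hypothetical names: an app whose pods are labeled app: web and listen on container port 8080.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport        # hypothetical name
spec:
  type: NodePort
  selector:
    app: web                # assumes pods labeled app: web
  ports:
  - port: 80                # cluster-internal service port
    targetPort: 8080        # container port the pods listen on
    nodePort: 30080         # exposed on every node (30000-32767 range)
```

A LoadBalancer service is the same manifest with type: LoadBalancer, while Ingress adds an extra routing layer in front of plain ClusterIP services; each choice has different implications for node ports, external IPs, and traffic flow at scale.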
10. Master Node Considerations
• A big cluster with a single master may not be enough
– You may need multiple master nodes and want to divide the master's load across multiple servers
• In our case,
– We had 3 master nodes
– Added management node for Monitoring, Logging
– Multiple Proxy nodes
– Etcd with SSD
11. Requests and Limits
• Help manage compute resources for containers
– Specify how much CPU and memory (RAM) each container needs by using requests and limits
• Requests determine the minimum CPU/memory required by a container
• Limits set the maximum CPU/memory allowed
• Improve scheduler efficiency
– Allow Kubernetes to increase utilization, while at the same time maintaining resource guarantees for the containers that need them
(Example from the slide: a request of 250 millicores/millicpu and 64 MiB)
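The 250 millicore / 64 MiB example from the slide can be sketched as a minimal pod spec; the pod and container names and the image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend             # hypothetical name
spec:
  containers:
  - name: app
    image: nginx             # placeholder image
    resources:
      requests:
        cpu: 250m            # 250 millicores: minimum CPU the scheduler reserves
        memory: 64Mi         # minimum memory guaranteed to the container
      limits:
        cpu: 500m            # hard cap on CPU usage
        memory: 128Mi        # container is OOM-killed if it exceeds this
```

The scheduler places the pod based on requests, so setting them accurately is what lets Kubernetes pack nodes tightly without starving containers.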
12. Node Selector
• Provides control over how to assign a pod to nodes
• The simplest form of constraint
• Constrains a pod to run only on nodes with a specific label
• Useful in certain circumstances
– Ensure that a pod ends up on a machine with an SSD attached to it
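The SSD example can be sketched as follows; the node name and the disktype=ssd label are illustrative choices, not fixed conventions.

```yaml
# First label the node (shell):
#   kubectl label nodes node1 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod              # hypothetical name
spec:
  nodeSelector:
    disktype: ssd            # pod only schedules onto nodes carrying this label
  containers:
  - name: app
    image: nginx             # placeholder image
```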
13. Node Affinity / Anti-Affinity
• Node affinity expands the types of constraints compared to Node Selector
– Allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node
– Two types
• Hard - requiredDuringSchedulingIgnoredDuringExecution
– must be met for a pod to be scheduled onto a node
• Soft - preferredDuringSchedulingIgnoredDuringExecution
– the scheduler will try to enforce it but will not guarantee it
• Anti-affinity is just the opposite
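A minimal sketch combining both rule types, assuming hypothetical labels (disktype=ssd as a hard requirement, a zone label as a soft preference):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod         # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard rule
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
      preferredDuringSchedulingIgnoredDuringExecution:  # soft rule
      - weight: 1
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values: ["us-south-1"]                      # illustrative zone label
  containers:
  - name: app
    image: nginx             # placeholder image
```

Anti-affinity is expressed with the same blocks by using the NotIn or DoesNotExist operators.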
14. Pod Affinity / Anti-Affinity
• Allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on pods that are already running on the node rather than on labels on nodes. Anti-affinity is just the opposite
• Inter-pod affinity and anti-affinity require a substantial amount of processing, which can slow down scheduling in large clusters significantly. Using them is not recommended in clusters larger than several hundred nodes
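A common use is pod anti-affinity to spread replicas across nodes; a minimal sketch, assuming a hypothetical app: web label:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["web"]
            topologyKey: kubernetes.io/hostname  # no two replicas on the same node
      containers:
      - name: app
        image: nginx         # placeholder image
```

The topologyKey is what makes this expensive: the scheduler must compare pod labels across every node in the chosen topology domain, which is why the slide cautions against it beyond several hundred nodes.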
15. Taints and Tolerations
• Taints are key-value pairs associated with an effect
• Together with tolerations, they ensure that pods are not scheduled onto inappropriate nodes
• Provide a flexible way to steer pods away from certain nodes
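A minimal sketch of the pair: tainting a node, then the corresponding toleration on a pod. The node name and the dedicated=gpu key/value are illustrative.

```yaml
# Taint the node first (shell):
#   kubectl taint nodes node1 dedicated=gpu:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod              # hypothetical name
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"     # matches the taint, so this pod may schedule there
  containers:
  - name: app
    image: nginx             # placeholder image
```

Note the asymmetry: a taint repels all pods without a matching toleration, but a toleration does not force the pod onto the tainted node; combine it with node affinity for that.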
16. Kubernetes Based IBM Cloud Private
• Kubernetes is not enough
• An enterprise Kubernetes distribution should also include some other core services for logging, monitoring, etc.
• Learn more about IBM Cloud Private here: https://www-03.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/kc_welcome_containers.html
18. 500 Nodes Deployment Arch
• IBM Cloud Private 2.1.0.2, which was released in March 2018
• Calico v2 with node-to-node mesh
• Sharing one etcd cluster between Kubernetes and Calico
19. Network Impact for 500+ Nodes
• Kubernetes claims to support 5,000 nodes - why couldn't IBM Cloud Private 2.1.0.2?
– IBM Cloud Private uses Calico as the default network
– Calico used a node-to-node mesh to configure BGP peering between all Calico nodes
– etcd load was very high when deploying a 1000-node cluster; most of the load came from Calico
– The node-to-node mesh stops working if there are more than 700 nodes in the cluster
– In a 1000-node cluster the full mesh would require about 500,000 (1000 × 999 / 2) peerings, which is not acceptable!
https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/integration#requirements
http://fuel-ccp.readthedocs.io/en/latest/design/k8s_1000_nodes_architecture.html
https://coreos.com/etcd/docs/latest/tuning.html
20. ETCD Benchmark Test
• ETCD Benchmark Comparison
– Calico V2 with ETCD V2 API
– Calico V3 with ETCD V3 API
• Conclusion
– Migrate to Calico V3 and use
ETCD V3 API for IBM Cloud
Private
https://coreos.com/etcd/docs/latest/op-guide/v2-migration.html
21. 1000+ Nodes Deployment Arch
• IBM Cloud Private 2.1.0.3, which was released in May 2018
• Calico v3 with Route Reflector
• ETCD V3
22. Deployment Topology Changes
etcd
Kubernetes Calico 2.6 with Node Node Mesh
etcd1
Kubernetes Calico v3.0.4 with Router Reflector
Etcd2(Optional) Router Reflector
IBM Cloud Private 2.1.0.2 (k8s 1.9)
IBM Cloud Private 2.1.0.3 (k8s 1.10)
23. Summary
• Sizing a Kubernetes cluster can be challenging, especially for a large-scale cluster
• Benefit from the experience of others
– Do good research on what others recommend. Learn from already-proven approaches
• Understand scheduler optimization techniques in Kubernetes
• Etcd storage on SSD gives better performance