Continuous Deployment with Jenkins on Kubernetes - Matt Baldwin
Google Senior Software Engineer Evan Brown's presentation from the March 18, 2016 Seattle Kubernetes meetup hosted by StackPointCloud. Evan shows how to deploy Jenkins into Kubernetes, then takes us through CD and canary deployments. Join us in Seattle: http://www.meetup.com/Seattle-Kubernetes-Meetup/
Monitoring, Logging and Tracing on Kubernetes - Martin Etmajer
In this presentation, I'll describe a variety of tools, like the Kubernetes Dashboard, Heapster, Grafana, Fluentd, Elasticsearch, Kibana, Jolokia and OpenTracing to bring Monitoring, Logging and Tracing to the Kubernetes container platform.
XP Days Ukraine 2015 Talk http://xpdays.com.ua/programs/scaling-docker-with-kubernetes/
Kubernetes is an open source project to manage a cluster of Linux containers as a single system, managing and running Docker containers across multiple Docker hosts and offering co-location of containers, service discovery and replication control. It was started by Google and is now supported by Microsoft, Red Hat, IBM and Docker Inc., amongst others.
Once you are using Docker containers, the next question is how to scale and start containers across multiple Docker hosts, balancing the containers across them. Kubernetes also adds a higher-level API to define how containers are logically grouped, allowing you to define pools of containers, load balancing and affinity.
Kubernetes is a great tool to run (Docker) containers in a clustered production environment. When deploying to production often, we need fully automated blue-green deployments, which make it possible to deploy without any downtime. We also need to handle external HTTP requests and SSL offloading, which requires integration with a load balancer like HAProxy. Another concern is (semi-)automatic scaling of the Kubernetes cluster itself when running in a cloud environment, e.g. partially scaling down the cluster at night.
In this technical deep dive you will learn how to set up Kubernetes together with other open source components to achieve a production-ready environment that takes code from git commit to production without downtime.
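The zero-downtime cut-over at the heart of blue-green deployment can be sketched in a few lines. This is a minimal in-process simulation, not a real Kubernetes API: the dict-based "Service" and "Deployment" objects and the names "blue"/"green" are illustrative stand-ins for the label-selector flip a Kubernetes Service would perform.

```python
# Toy blue-green switch: a "Service" routes traffic by a version label, and we
# only flip the label once the new version's replicas are all ready.
def switch_service(service, deployments, new_version):
    """Point the service at new_version only if it is fully ready."""
    target = deployments[new_version]
    if target["ready_replicas"] < target["desired_replicas"]:
        raise RuntimeError(f"{new_version} not fully ready; old version stays live")
    service["selector"]["version"] = new_version  # atomic cut-over, no downtime
    return service

deployments = {
    "blue":  {"desired_replicas": 3, "ready_replicas": 3},
    "green": {"desired_replicas": 3, "ready_replicas": 3},
}
service = {"name": "web", "selector": {"app": "web", "version": "blue"}}

switch_service(service, deployments, "green")
print(service["selector"]["version"])  # -> green
```

The key property, mirrored from the real pattern, is that traffic only moves after the green environment is verified healthy, so a failed rollout never takes down the live version.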
Google has been running everything in containers for the past 15 years, but how do we orchestrate and manage all those containers? We've built and released the open source Kubernetes (http://kubernetes.io), which is based on years of running containers internally at Google. Join us for an introduction to containers and Kubernetes, followed by a hands-on workshop building and deploying your own Kubernetes cluster with multiple front end, database and caching instances.
Docker containers help solve the issue of process-level reproducibility by packaging up your apps and execution environments into a number of containers. But once you have a lot of containers running, you'll need to coordinate them across a cluster of machines while keeping them healthy and making sure they can find each other. This can quickly turn into an unmanageable mess! Wouldn't it be helpful if you could declare what you wanted, and then have the cluster assign the resources to get it done, recover from failures, and scale on demand? Kubernetes is here to help!
Key takeaways
- Gentle introduction into containers: why and how
- Learn how Google manages applications using containers
- Intro to Kubernetes: managing applications and services
- Build and deploy your own multi-tier application using Kubernetes
Overview of kubernetes and its use as a DevOps cluster management framework.
Problems with deployment via kube-up.sh and improving Kubernetes on AWS via a custom CloudFormation template.
Deep dive in container service discovery - Docker, Inc.
Service discovery and traffic load-balancing in the container ecosystem rely on different technologies, such as IPVS and iptables, and container orchestrators use different approaches. This talk will present in detail how Docker Swarm and Kubernetes achieve this. The talk will continue with a demo showing how applications that are not managed by Kubernetes can take advantage of its native load-balancing. Finally, it will compare these approaches to service-mesh solutions.
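The virtual-IP load balancing mentioned above can be illustrated with a toy round-robin resolver. The real implementations program kernel-level iptables or IPVS rules; this is only an in-process sketch, and the addresses are made up.

```python
# Toy round-robin endpoint selection, in the spirit of kube-proxy's
# virtual-IP load balancing: a stable service address fans out across
# whatever backend endpoints currently exist.
import itertools

class VirtualIP:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)  # endless round-robin iterator

    def pick(self):
        """Return the next backend to receive a connection."""
        return next(self._cycle)

svc = VirtualIP(["10.1.0.4:8080", "10.1.0.5:8080", "10.1.0.6:8080"])
picks = [svc.pick() for _ in range(4)]
print(picks)  # -> ['10.1.0.4:8080', '10.1.0.5:8080', '10.1.0.6:8080', '10.1.0.4:8080']
```

The point of the abstraction is that clients only ever see the stable virtual address; backends can come and go without clients noticing, which is exactly what makes container-native service discovery work.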
Docker storage: designing a platform for persistent data - Docker, Inc.
Docker containers have popularised the concept of read-only/immutable infrastructure and led to changes in system and application architecture across the IT industry. However, nearly every application generates some data that will need to persist long after the lifespan of the container that generated it. This talk will look at the best practices around persistent storage with containers, from design advice around the construction of your application/container to the functionality provided by storage vendors through the Docker Volume driver plugins.
The Nova driver for Docker has been maturing rapidly since its removal from mainline in Icehouse. During the Juno cycle, substantial improvements have been made to the driver, and greater parity has been reached with other virtualization drivers. We will explore these improvements and what they mean to deployers. Eric will additionally showcase scenarios for deploying OpenStack itself inside and underneath Docker to power traditional VM-based computing, storage, and other cloud services. Finally, users should expect a preview of the planned integration with the new OpenStack Containers Service effort to provide automation of advanced container functionality and Docker-API semantics inside of an OpenStack cloud.
Note that the included Heat templates are NOT usable. See the linked Heat resources for viable templates and examples.
Orchestration tool roundup: Kubernetes vs. Docker vs. Heat vs. Terraform vs... - Nati Shalom
Video recording: https://www.youtube.com/watch?v=tGlIgUeoGz8
It’s no news that containers represent a portable unit of deployment, and OpenStack has proven an ideal environment for running container workloads. Where it usually becomes more complex is that an application is often built out of multiple containers. What’s more, setting up a cluster of container images can be fairly cumbersome: you need to make one container aware of another and expose the intimate details they require to communicate, which is not trivial, especially if they’re not on the same host.
These scenarios have instigated the demand for some kind of orchestrator. The list of container orchestrators is growing fairly fast. This session will compare the different orchestration projects out there - from Heat to Kubernetes to TOSCA - and help you choose the right tool for the job.
Session link from the summit: https://openstacksummitmay2015vancouver.sched.org/event/abd484e0dedcb9774edda1548ad47518#.VV5eh5NViko
Traditional virtualization technologies have been used by cloud infrastructure providers for many years to provide isolated environments for hosting applications. These technologies make use of full-blown operating system images for creating virtual machines (VMs). In this architecture, each VM needs its own guest operating system to run application processes. More recently, with the introduction of the Docker project, the Linux Container (LXC) virtualization technology became popular and attracted widespread attention. Unlike VMs, containers do not need a dedicated guest operating system to provide OS-level isolation; rather, they can provide the same level of isolation on top of a single operating system instance.
An enterprise application may need to run a server cluster to handle high request volumes. Running an entire server cluster in Docker containers on a single Docker host introduces the risk of a single point of failure. Google started a project called Kubernetes to solve this problem. Kubernetes provides a cluster of Docker hosts for managing Docker containers in a clustered environment. It provides an API on top of the Docker API for managing Docker containers on multiple Docker hosts, with many more features.
Tectonic Summit 2016: Networking for Kubernetes - CoreOS
Sreekanth Pothanis, Cloud Engineering, eBay shares a networking Kubernetes tale from the trenches.
Networking is the hardest component of anyone's infrastructure; everything depends on it, especially with web-scale infrastructure spanning tens of thousands of servers. eBay is investing heavily in Kubernetes, and networking is again one of the areas we have the most difficulty with.
During the course of this talk we will go through various approaches we tried to make container networking conform to Kubernetes networking principles, while ensuring that it adapts to the existing networking models our infrastructure supports.
We will also cover how we have automated the process of setting up networking for Kubernetes clusters and how it offers seamless integration with non-Kubernetes workloads.
12/12/16
In this webinar, Alex Casalboni will give an overview of the main FaaS (Function as a Service) concepts and best practices, explore the open-source FaaS options, and discuss the pros and cons of deploying and managing your own serverless platform on Kubernetes.
Slides from the talk given to the Startup Berlin Slack Group that demonstrates how TruckIN is implementing its continuous delivery workflow using technologies and open-source tools.
Topics that are covered: Automated Cloud Provisioning (Network, Subnets, VMs, Kubernetes Cluster, Firewall, Disks, Credentials, Private Docker Registry); Configuration Management (Salt Stack), Continuous Integration (Jenkins CI), Continuous Delivery/Deployment (Salt API/Reactor + Kubernetes) to a Google Cloud Kubernetes Cluster, Remote Application Debugging, Managing Google Cloud Kubernetes Cluster, Logging, Monitoring and ChatOps (Slack and operable.io)
Scaling Jenkins with Docker and Kubernetes - Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed micro-services architectures. Kubernetes is an open source project to manage a cluster of Linux containers as a single system, managing and running Docker containers across multiple Docker hosts and offering co-location of containers, service discovery and replication control. It was started by Google and is now supported by Microsoft, Red Hat, IBM and Docker Inc., amongst others. A Jenkins Continuous Integration environment can be dynamically scaled by using the Kubernetes and Docker plugins, using containers to run slaves and jobs and also to isolate job execution.
WSO2Con US 2015 Kubernetes: a platform for automating deployment, scaling, an... - Brian Grant
Kubernetes can run application containers on clusters of physical or virtual machines.
It can also do much more than that.
Kubernetes satisfies a number of common needs of applications running in production, such as co-locating helper processes, mounting storage systems, distributing secrets, application health checking, replicating application instances, horizontal auto-scaling, load balancing, rolling updates, and resource monitoring.
However, even though Kubernetes provides a lot of functionality, there are always new scenarios that would benefit from new features. Ad hoc orchestration that is acceptable initially often requires robust automation at scale. Application-specific workflows can be streamlined to accelerate developer velocity.
This is why Kubernetes was also designed to serve as a platform for building an ecosystem of components and tools to make it easier to deploy, scale, and manage applications. The Kubernetes control plane is built upon the same APIs that are available to developers and users, implementing resilient control loops that continuously drive the current state towards the desired state. This design has enabled Apache Stratos and a number of other Platform as a Service and Continuous Integration and Deployment systems to build atop Kubernetes.
This presentation introduces Kubernetes’s core primitives, shows how some of its better known features are built on them, and introduces some of the new capabilities that are being added.
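The "resilient control loops that continuously drive the current state towards the desired state" described above can be sketched as a minimal reconcile loop. This is only an in-process illustration with a dict standing in for cluster state; real controllers watch the API server and act on real objects.

```python
# Stripped-down Kubernetes-style reconcile loop: observe the current state,
# diff it against the desired state, and emit corrective actions until the
# two converge.
def reconcile(desired, current):
    """Return the actions needed to drive current state toward desired state."""
    actions = []
    if current["replicas"] < desired["replicas"]:
        actions += ["create-pod"] * (desired["replicas"] - current["replicas"])
    elif current["replicas"] > desired["replicas"]:
        actions += ["delete-pod"] * (current["replicas"] - desired["replicas"])
    return actions  # empty list means the states already match

desired = {"replicas": 3}
current = {"replicas": 1}
while actions := reconcile(desired, current):
    for action in actions:  # "apply" each corrective action to the toy cluster
        current["replicas"] += 1 if action == "create-pod" else -1
print(current["replicas"])  # -> 3
```

Because the loop compares states rather than replaying events, it is self-healing: whether a pod was never created or crashed later, the next iteration observes the discrepancy and corrects it.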
Kubernetes: dealing with storage and persistence - Janakiram MSV
Storage is a critical part of running containers, and Kubernetes offers some powerful primitives for managing it. This webinar discusses various strategies for adding persistence to containerised workloads.
In celebration of the launch of the Knative Cookbook, we will run a fast-paced live code demonstration of the coolest Knative-based techniques that we can imagine that include Kafka and Kamel.
Kubernetes install and practice
* Environment (bare metal installation, not using a cloud service)
- VM 1: Master node, 30GB disk, 2 vCPU, 4GB memory
- VM 2: Worker node, 30GB disk, 2 vCPU, 4GB memory
* Practice
- deploying a pod, creating a Deployment and a Service
- exposing a Service using Ingress (nginx-ingress)
Rob Hirschfeld talk at the 2017 KubeCon in Austin, TX. In this talk he presents an Immutable Bootstrap demo of Kubernetes using Kubeadm to provision on bare metal. Talk URL http://sched.co/CU8h.
Continuous Delivery the Hard Way with Kubernetes - Weaveworks
How do you version control your deployment config + automate the delivery of your software through a CI/CD pipeline?
We will cover open source tools that facilitate:
• Simultaneously updating configuration and performing releases to Kubernetes
• Rolling back and pinning releases
• Having different release policies to different environments
Read the blog: https://www.weave.works/blog/continuous-delivery-the-hard-way
Visit Weave Cloud: https://www.weave.works/product/cloud/
For more free talks, join our Weave Online User Group: https://www.meetup.com/Weave-User-Group/
Kubernetes Clusters as a Service with Gardener - QAware GmbH
Cloud Native Night November 2018, Munich: Talk by Dirk Marwinski (SAP).
Join our Meetup: www.meetup.com/cloud-native-muc
Abstract: There are many open source tools which help in creating and updating single Kubernetes clusters. Corporations usually require many clusters; depending on their size, they may require hundreds or even thousands of clusters. However, the more clusters you need, the harder it becomes to operate, monitor, manage, and keep all of them alive and up-to-date.
That is exactly what open source project “Gardener” focuses on. It is not just another provisioning tool, but it is rather designed to manage Kubernetes clusters as a service. It provides Kubernetes-conformant clusters on various cloud providers and the ability to maintain hundreds or thousands of them at scale. At SAP, we face this heterogeneous multi-cloud & on-premise challenge not only in our own platform, but also encounter the same demand at all our larger and smaller customers implementing Kubernetes & Cloud Native.
Inspired by the possibilities of Kubernetes and the ability to self-host, the foundation of Gardener is Kubernetes itself. While self-hosting, that is, running Kubernetes components inside Kubernetes, is a popular topic in the community, we apply a special pattern catering to the needs of operating a huge number of clusters with minimal total cost of ownership.
In this session Dirk will provide a comprehensive overview of Gardener, the underlying concepts, and interesting implementation details. In addition, there will be a hands-on session where attendees will be given free access to a Gardener instance and the opportunity to dynamically create Kubernetes clusters and test them.
Kubernetes on Bare Metal at the Kitchener-Waterloo Kubernetes and Cloud Nativ... - CloudOps2005
Charlie Drage discussed Kubernetes on bare metal at last week's Kubernetes and Cloud Native meetup in Kitchener-Waterloo. His presentation demonstrated how to deploy Kubernetes on bare metal servers. Charlie is an active Kubernetes maintainer, and his contributions have included fixing some common issues with bare metal servers and using Ansible to build clusters with kubeadm.
Project Gardener - EclipseCon Europe - 2018-10-23 - msohn
Open source project Gardener (https://gardener.cloud) is a production-grade Kubernetes-as-a-Service management tool that works across various cloud platforms (e.g., AWS, Azure, GCP, Alibaba and SAP data centers) and on-premise (e.g., with OpenStack).
Jonathan Donaldson, VP & GM, Cloud and Infrastructure Technologies, Intel Corporation talks about Intel's work in the community to help make Kubernetes ready for the enterprise.
Clair is an open-source container image security analyzer and was recently launched by CoreOS for production workloads. This is a powerful and extensible tool that inspects container images for known security flaws and enables developers to build services that scan containers for security threats and vulnerabilities.
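The core idea behind an image scanner like Clair can be sketched as matching the package versions found in an image's layers against a database of known-vulnerable version ranges. This is only an illustrative toy, not Clair's actual data model; the CVE IDs, packages, and tuple-based version comparison below are all invented for the example.

```python
# Toy image-vulnerability matcher: report every known CVE whose fix version
# is newer than the version installed in the image.
def scan(installed, vuln_db):
    """Return (package, cve) pairs for unpatched vulnerabilities."""
    findings = []
    for pkg, version in installed.items():
        for cve, fixed_in in vuln_db.get(pkg, []):
            if version < fixed_in:  # tuple comparison stands in for real version parsing
                findings.append((pkg, cve))
    return findings

# Hypothetical package inventory extracted from an image's layers.
installed = {"openssl": (1, 0, 1), "zlib": (1, 2, 12)}
# Hypothetical vulnerability feed: CVE -> first fixed version.
vuln_db = {
    "openssl": [("CVE-XXXX-0001", (1, 0, 2))],   # fixed in 1.0.2 -> still vulnerable
    "zlib":    [("CVE-XXXX-0002", (1, 2, 12))],  # fixed in 1.2.12 -> already patched
}
print(scan(installed, vuln_db))  # -> [('openssl', 'CVE-XXXX-0001')]
```

A real scanner additionally has to unpack layers, identify the base distribution, and track distro-specific backported fixes, which is where most of the engineering effort goes.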
Tectonic Summit 2015: Containers Across the Cloud and Data Center - CoreOS
At Tectonic Summit in December 2015, Rob Cornish, CTO of International Securities Exchange, and Paul Morgan, Systems Architect, International Securities Exchange spoke about how they use containers across the cloud and data center.
The last decade belonged to virtual machines and the next one belongs to containers. CoreOS is a new Linux distribution designed specifically for application containers and running them at scale. This talk will examine all the major components of CoreOS (etcd, fleet, docker, systemd) and how these components work together.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
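To make concrete what "running a power flow" means, here is a self-contained DC power flow on a tiny invented 3-bus network: solve the reduced susceptance system B'·θ = P for the non-slack bus angles, then compute line flows from angle differences. This toy is not PowSyBl code (PowSyBl ships full AC/DC engines, with a Python binding); the network data and values are made up for illustration.

```python
# DC power flow on a 3-bus network: bus 0 is the slack (angle 0),
# lines 0-1 (x=0.1), 1-2 (x=0.2), 0-2 (x=0.2). In the DC approximation,
# flow(i->j) = (theta_i - theta_j) / x_ij and B' * theta = P.
def dc_power_flow(p1, p2):
    """Solve the 2x2 reduced system for the angles of buses 1 and 2 (Cramer's rule)."""
    b11, b12 = 1 / 0.1 + 1 / 0.2, -1 / 0.2   # self/mutual susceptances, bus 1
    b21, b22 = -1 / 0.2, 1 / 0.2 + 1 / 0.2   # self/mutual susceptances, bus 2
    det = b11 * b22 - b12 * b21
    theta1 = (b22 * p1 - b12 * p2) / det
    theta2 = (b11 * p2 - b21 * p1) / det
    return theta1, theta2

# Invented injections: 1.0 p.u. load at bus 1, 0.5 p.u. generation at bus 2.
theta1, theta2 = dc_power_flow(p1=-1.0, p2=0.5)
flow_0_1 = (0.0 - theta1) / 0.1  # power flowing from the slack bus toward bus 1
print(round(flow_0_1, 3))  # -> 0.6
```

The flows balance at every bus (the load at bus 1 is fed 0.6 p.u. directly from the slack and 0.4 p.u. via bus 2), which is the sanity check any power-flow result must pass.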
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
7. Problem: Setting up a Kubernetes cluster is hard
Today:
Use kube-up.sh (and hope you don’t have to customize)
Compile from HEAD and manually address security
Use a third-party tool (some of which are great!)
Simplified Setup
10–14. Solution: kubeadm!
Simplified Setup
master.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
master.myco.com# kubeadm init
Kubernetes master initialized successfully!
You can now join any number of nodes by running the following command:
kubeadm join --token 48b69e.b61e2d0dd5c 10.140.0.3
node-01.myco.com# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
node-01.myco.com# kubeadm join --token 48b69e.b61e2d0dd5c 10.140.0.3
Node join complete.
master.myco.com# kubectl apply -f https://git.io/weave-kube
Network setup complete.
15. Problem: Using multiple clusters is hard
Today:
Clusters as multiple independent silos
Set up Kubernetes federation from scratch
Simplified Setup: Federation Edition
25. Sophisticated Scheduling
Problem: Deploying and managing workloads on large, heterogeneous clusters is hard
Today:
Liberal use of labels (and keeping your team in sync)
Manual tooling
Didn’t you use Kubernetes to avoid this?
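The “liberal use of labels” workaround can be sketched with a plain nodeSelector. This is a minimal illustration, not from the deck: the label key/value (disktype=ssd) and pod name are hypothetical.

```yaml
# Assumes the node was labeled out of band:
#   kubectl label nodes node-01 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical name
spec:
  nodeSelector:
    disktype: ssd           # pod schedules only onto nodes carrying this label
  containers:
  - name: app
    image: nginx
```

This is exactly the manual bookkeeping the slide complains about: every team must agree on label keys, and nothing stops an unlabeled pod from landing anywhere.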
58. Sophisticated Scheduling: Taints/Tolerations
SCENARIO: Reserved instances
[Diagram: a Kubernetes cluster with two Premium nodes (Node 1, Node 2) and one Regular node (Node 3). Premium Pods run on the Premium nodes; a Regular Pod says “I will fail to schedule even though there’s a spot for me,” because the Premium nodes are tainted against regular workloads.]
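A sketch of the reserved-instances scenario using today’s taint/toleration syntax (in the Kubernetes 1.5 era this feature was alpha and annotation-based); the taint key/value `tier=premium` and the pod name are hypothetical:

```yaml
# Taint the premium nodes so ordinary pods are repelled:
#   kubectl taint nodes node-1 tier=premium:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: premium-pod         # hypothetical name
spec:
  tolerations:
  - key: "tier"
    operator: "Equal"
    value: "premium"
    effect: "NoSchedule"    # this pod tolerates the taint, so it may use premium nodes
  containers:
  - name: app
    image: nginx
```

A Regular pod carrying no such toleration fails to schedule onto Node 1 or Node 2, matching the diagram.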
74–79. Sophisticated Scheduling: Taints/Tolerations
SCENARIO: Hardware failing (but not failed)
[Diagram sequence: in a three-node cluster, a Pod runs on Node 1, whose hardware is degrading. The API Server taints the node, then schedules a new Pod onto a healthy node and kills the old one.]
81–97. Sophisticated Scheduling: Forgiveness
SCENARIO: Supporting network failure
[Diagram sequence: Node 1 runs two Pods with forgiveness timers of t=5m and t=30m. The API Server loses contact with Node 1 and counts up: “It’s been 1m… 2m… 5m since I heard from Node 1.” At the 5-minute mark it treats the t=5m Pod as dead and schedules a replacement on another node, while the t=30m Pod is forgiven and stays bound. When 30 minutes pass with no contact, it treats the t=30m Pod as dead as well and schedules a new 30m Pod.]
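Forgiveness later shipped as `tolerationSeconds` on a NoExecute toleration. A hedged sketch of the t=30m pod from the diagram (the pod name is made up; in 1.5 itself this was expressed through alpha annotations rather than this field):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: patient-pod         # hypothetical; corresponds to the t=30m pod above
spec:
  tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 1800 # forgive an unreachable node for 30 minutes before eviction
  containers:
  - name: app
    image: nginx
```

A pod with `tolerationSeconds: 300` instead would play the role of the t=5m pod, evicted after only five minutes of silence.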
98–109. Sophisticated Scheduling: Disruption Budget
SCENARIO: Cluster upgrades with stateful workloads
[Diagram sequence: a Two Pod Set (A) and a Two Pod Set (B) run across three nodes. Time to upgrade to Kubernetes 1.5! The API Server orders “Evict A!”; Set A shuts down and reschedules. It then orders “Evict B!” but the disruption budget answers “Sorry, can’t!” while Set A is still recovering. Once Set A is healthy again, “OK, now evict B!” succeeds, and Set B shuts down and reschedules in turn.]
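The “Sorry, can’t!” behavior comes from a PodDisruptionBudget object. A minimal sketch, assuming the Set (A) pods carry a label like `app: set-a` (the names and labels here are hypothetical):

```yaml
apiVersion: policy/v1       # was policy/v1beta1 in the Kubernetes 1.5 era
kind: PodDisruptionBudget
metadata:
  name: set-a-pdb           # hypothetical name
spec:
  minAvailable: 1           # eviction requests that would leave fewer than 1
                            # ready pod of this set are refused
  selector:
    matchLabels:
      app: set-a            # hypothetical label on the "Two Pod Set (A)" pods
```

Voluntary disruptions (drains, upgrades) respect this budget; involuntary failures like a node crash do not.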
110. Network Policy
Problem: Network policy is complicated!
Today:
Use VM tooling to support security (but limit VM utilization)
Managing port-level security
Proxying everything
127. Network Policy Object
SCENARIO: Two-tier app needs to be locked down
[Diagram: a Kubernetes cluster spanning VM 1, VM 2, and VM 3. A Network Policy Object declares that “Green” can talk to “Red”, and only those flows are permitted (✓).]
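The “Green can talk to Red” rule can be sketched as a NetworkPolicy; the `color` labels and object name are hypothetical, and at the time of this deck the resource lived at extensions/v1beta1 rather than the GA group shown here:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-green-to-red         # hypothetical name
spec:
  podSelector:
    matchLabels:
      color: red                   # the policy guards the "Red" tier
  ingress:
  - from:
    - podSelector:
        matchLabels:
          color: green             # only "Green" pods may connect
```

Note that enforcement requires a network plugin that implements policy, such as the Weave network installed during cluster setup earlier in the deck.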
128. Problem: I need to deploy complicated apps!
Today:
Manually deploy applications once per cluster
Manually publish global endpoints and load balance
Build a control plane for monitoring applications
Solution: Helm
143. Accelerating Stateful Applications
Container-optimized servers for compute and storage
Management of storage and data for stateful applications on Kubernetes
Management of Kubernetes at enterprise scale
+
Automated Stateful Apps on K8S
148. What’s Next
Nothing!*
* for large values of “Nothing”
Bringing many features from alpha to beta & GA, including:
Federated deployments and daemon sets
Improved RBAC
StatefulSet upgrades
Improved scaling & etcd 3
Easy cluster setup for high-availability configurations
Integrated Metrics API
149. Kubernetes is Open
• open community
• open design
• open source
• open to ideas
Twitter: @aronchick
Email: aronchick@google.com
• kubernetes.io
• github.com/kubernetes/kubernetes
• slack.kubernetes.io
• twitter: @kubernetesio