While network automation is becoming a key part of the overall digital transformation agenda, the reality is that previous attempts to drive network automation (a.k.a. NFV) have succeeded only about 30% of the time; statistics show that 70% of transformation projects fail!
Kafka and Kafka Streams in the Global Schibsted Data Platform | Fredrik Vraalsen
In this talk we will present how we at Schibsted have set up a new global streaming data platform using Kafka and Kafka Streams, replacing a homegrown solution based on Kinesis and micro-batches in Amazon S3.
Talk presented at Kafka Summit 2018 in San Francisco: https://kafka-summit.org/sessions/kafka-kafka-streams-global-schibsted-data-platform/
See Consul running on Kubernetes and learn how to use Consul as a universal service mesh to securely connect your applications running on different platforms.
Alex Nauda [Nobl9] | How Not to Build an SLO Platform | InfluxDays NA 2021 | InfluxData
Nobl9 is a Service Level Objective Platform for measuring and monitoring reliability. We will look under the hood of an SLO platform using InfluxDB as part of the core architecture. We’ll talk about the project, the decisions we took, the challenges we faced, the mistakes we made, and the lessons learned.
Concept to reality: An advanced agile integration blueprint | Eric D. Schabell
Red Hat Summit 2020:
Are you all in on the concept of agile integration, or just getting your toes wet? Whether you are an expert or just getting started with concepts like integration, microservices, message integration, process integration, APIs, and everything else that makes your customers' experience the best it can be, this session has something for all levels. It walks attendees through the architecture concepts with whiteboard diagrams and easy-to-grasp images, taking a bottom-up approach to connecting the dots of an integration architecture. Once the groundwork has been laid, the second half of the session takes a look at an integration architecture blueprint based on three successful customer integration solutions. Presenting the results of researching these successful solution architectures gives attendees a clear blueprint to match against their own architectures, or helps solidify their plans for architecting successful integration solutions.
(Joseph deBuzna + Zulfikar Quereshi, HVR) Kafka Summit SF 2018
This presentation is a customer story about France-based regional airline HOP! and their need to make better use of data that was contained in various applications. They also needed this information to be available in real time. As one can imagine, airlines manage a wide variety of information such as weather, customer information, flight plans, sensor data from planes and much more.
In this presentation, Joe will discuss how HOP! was delivering their data before and the limitations associated with delivering this data. Joe will then talk about HOP!’s selection of Kafka and HVR as a solution to enabling data availability and real-time information for analysis and action.
In this session, attendees will learn:
- How Kafka was selected as the solution for HOP!'s complex challenges
- The architecture and capabilities implemented to feed data from multiple sources into Kafka
- Considerations and challenges with this approach
- Business results and future plans
stackconf 2021 | Reference Architecture for a Cloud Native Digital Enterprise | NETWAYS
In an era of digital transformation, (digital) enterprises are looking for fast innovation through effective collaboration to deliver more value to their customers with dramatically less effort. Digital enterprises enable companies of every sector to integrate, expose, and monetize their business capabilities by digitizing entire value chains. As a result, APIs have become the norm to expose integrated business functionalities to deliver an enhanced digital experience. Enterprises can start their digital transformation in greenfield or brownfield; in both cases, having a well-defined API-led integration architecture is important. Apart from integration and API platforms, these architectures should be able to provide agility, flexibility, and scalability. This session discusses a vendor/technology-neutral reference architecture for a cloud native digital enterprise to increase productivity by having agility, flexibility, and scalability through automation and services. The architecture discussed in this session can be mapped into different cloud-native platforms (Kubernetes and service mesh), different cloud providers (Microsoft Azure, Amazon AWS, and Google GCP), and infrastructure services to perform the implementation.
A Walkthrough of InfluxCloud 2.0 by Tim Hall | InfluxData
Tim Hall, VP of Products at InfluxData, demonstrates how to set up and use the new InfluxCloud 2.0 in this InfluxDays NYC 2019 presentation. He provides a brief history of InfluxCloud, followed by an overview of InfluxDB 2.0 and a demo.
Building a Codeless Log Pipeline w/ Confluent Sink Connector | Pollyanna Vale... | HostedbyConfluent
Kubernetes has become the de facto standard for running cloud-native applications, and many users turn to it to run stateful applications such as Apache Kafka as well. You can use different tools to deploy Kafka on Kubernetes: write your own YAML files, use Helm charts, or go for one of the available operators. But all of these have one thing in common: you still need very good knowledge of Kubernetes to make sure your Kafka cluster works properly in all situations. This talk covers different Kubernetes features such as resources, affinity, tolerations, pod disruption budgets, topology spread constraints, and more, and explains why they are important for Apache Kafka and how to use them. If you are interested in running Kafka on Kubernetes and do not know all of these, this is the talk for you.
Monitor Kubernetes in Rancher using InfluxData | InfluxData
Containers make software development easier, enabling you to write code faster and run it better. However, running containers in production can be hard.
Rancher includes everything you need to manage containers in production—you no longer need to build container management platforms from scratch using multiple open source technologies. Infrastructure services management and the overlay networking, storage, and load balancing capabilities provide the basis for portability across infrastructure providers.
In this webinar, William Jiminez, Solutions Architect at Rancher Labs, and Gunnar Aasen, Partner Engineering, provide an introduction to Rancher and InfluxData. From there, they show you how to use the two together to set up and monitor your containers and microservices, so you can properly manage your infrastructure and track key metrics (CPU, RAM, storage, network utilization) as well as the availability of your application endpoints.
This talk is part of the www.tid-x.com 2018 conference. It explains basic views and high-level concepts about cloud services and what considerations should be taken into account when building them. The presentation provides the visuals for this blog post https://medium.com/@ruben.gblanco/being-a-cloud-service-vs-a-service-in-the-cloud-9452f79d03eb and the following video is the talk itself https://www.youtube.com/watch?v=qPVIFCY6q1g&index=2&list=PL94ziy7W5BvtHjQ7cOqzfolaKSlaWYjlz
Sam Dillard [InfluxData] | Performance Optimization in InfluxDB | InfluxDays... | InfluxData
Like my past talks on this topic, this one gives a rundown of the different levers you can pull to make InfluxDB perform better for your use case. With each iteration of this talk, I add new slides to the topic.
Most of the presentation focuses on the write path, as that is what defines the schema and, ultimately, how queries will perform against the database.
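As a rough illustration of why the write path defines the schema: in InfluxDB's line protocol, the tags chosen at write time become the indexed series key, while fields carry the values. The formatter below is a simplified, hypothetical sketch (real client libraries also escape spaces, commas, and quotes):

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Render one point in InfluxDB line protocol.

    Tags define the series and are indexed; fields are not.
    Simplified illustration only: no escaping of special characters.
    """
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

line = to_line_protocol(
    "cpu",
    tags={"host": "web01", "region": "eu"},
    fields={"usage": 12.5},
    ts_ns=1609459200000000000,
)
# line == 'cpu,host=web01,region=eu usage=12.5 1609459200000000000'
```

Choosing a value as a tag here (e.g. `host`) makes it part of the series key, which is exactly the schema decision the talk argues you make at write time.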
Edge Computing: A Unified Infrastructure for all the Different Pieces | Cloudify Community
Edge Computing along with 5G promises to revolutionize customer experience with immersive applications that we can only imagine at this point. The edge will include PNFs, VNFs, and mobile-edge applications; requiring containers, virtual machines and bare-metal compute. But while edge computing promises numerous new revenue streams, managing and orchestrating these edge infrastructure environments is not going to be a seamless, instant process. In this webinar, experts in NFV orchestration discuss the concerns you must address in the transition to the edge, and show how you can use available open source tools to create a single management environment for PNFs, VNFs, and mobile-edge applications.
Data governance and discoverability at AO.com | Jon Vines, AO.com and Christo... | HostedbyConfluent
One challenge of widespread adoption of any technology within an organisation is balancing organic growth with maintaining standards and best practice. At AO.com - one of the UK's largest online electrical retailers - we’ve invested in tooling to simplify application onboarding into the event processing platform. This includes creating topics, defining access to the platform and supporting our governance functions.
A key part of any data function is data governance and discoverability. Through standardised definitions of Kafka topics and use of Avro schemas, we can map which topics exist, what data they contain and who has access to them. This allows us to support multiple cross-functional teams using automatic data gathering.
In this session, AO.com and Confluent Professional Services will share how we tackled the challenge of platform adoption and provide hands-on examples of the open-source tools, "Kafka Clusterstate Tools" and "Kafka Streams Inspector", we developed.
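The kind of standardised topic definition described above can be sketched as a small registry that admits topics only when they follow a naming convention and records their schema and owner. The convention, field names, and helper below are hypothetical, not AO.com's actual scheme:

```python
import re

# Hypothetical naming convention: <domain>.<team>.<entity>.v<version>
TOPIC_PATTERN = re.compile(r"^[a-z]+\.[a-z-]+\.[a-z-]+\.v\d+$")

registry = {}  # topic name -> governance metadata

def register_topic(name, avro_subject, owner):
    """Admit a topic only if it follows the naming convention,
    recording which Avro schema it carries and which team owns it."""
    if not TOPIC_PATTERN.match(name):
        raise ValueError(f"topic {name!r} violates the naming convention")
    registry[name] = {"schema": avro_subject, "owner": owner}

register_topic(
    "orders.checkout.order-placed.v1",
    avro_subject="order-placed-value",
    owner="checkout-team",
)
```

With every topic passing through a gate like this, answering "which topics exist, what data do they contain, and who owns them" becomes a lookup rather than an archaeology exercise.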
Gain Deep Visibility into APIs and Integrations with Anypoint Monitoring | InfluxData
On average, a business supporting digital transactions now crosses 35 backend systems—and legacy tools haven’t been able to keep up. This session will cover how MuleSoft uses InfluxCloud to help power their monitoring and diagnostic solutions as well as provide end-to-end actionable visibility to APIs and integrations to help customers identify and resolve issues quickly.
Commerce as a Service with Cloud Foundry (Cloud Foundry Summit 2014) | VMware Tanzu
Keynote delivered by Rene Welches, Product Owner – PaaS Cloud Foundry at hybris.
[y]aaS is a multi-tenant cloud platform which allows everyone to easily develop, extend, and sell commerce services and apps. [y]aaS is based on a steadily growing microservice architecture running on Cloud Foundry as its foundation. All services within [y]aaS are exposed through a consistent RESTful API. Besides the API, [y]aaS also includes a marketplace for hybris services as well as 3rd-party services, an on-demand storefront, and a back-office application, all running on Cloud Foundry.
In this talk we will share our experience developing such an architecture and how Cloud Foundry helped us to streamline and speed up our development.
Moving 150 TB of data resiliently on Kafka With Quorum Controller on Kubernet... | HostedbyConfluent
At Wells Fargo, we move 150 TB of log data from our syslogs to Splunk forwarders, where it gets indexed and organized for analytic queries. As we modernize and migrate our applications to our hybrid cloud, the performance expectations for this infrastructure will proportionately increase, including the resilience of the end-to-end infrastructure. First, we decoupled the applications from their logging interface through a log library, which split the streams of logs from their sources into Kafka, which in turn routed them to two separate destinations, Splunk and ELK. We also used Prometheus and Grafana for monitoring the metrics, and deployed Kafka, Splunk, ELK, Prometheus, and Grafana on our Kubernetes clusters. Confluent had released a version of Kafka without ZooKeeper, replacing its functionality with the Quorum Controller. The Quorum Controller version exhibited better disposability, one of the 12 factors that is important for cloud-nativeness. We packaged this version with a Kubernetes operator called KEDA and deployed it for auto-scaling, and we tested it by simulating the amount of log data that we typically generate in production. On top of this, we have also implemented distributed tracing and are working to make it just as resilient. We will share our lessons learned, and the patterns and practices used to modernize both our underlying runtime platforms and our applications with highly performant and resilient event-driven architectures.
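The decoupling described above, one log stream split across two destinations, can be sketched as a simple fan-out. In this illustration, in-memory lists stand in for the real Kafka-backed Splunk and ELK sinks:

```python
class LogRouter:
    """Fan one stream of log records out to several sinks.

    In the architecture above, Kafka sits between the log library and
    the destinations (Splunk and ELK); here plain lists stand in for
    those destinations to illustrate the split.
    """

    def __init__(self):
        self.sinks = []

    def add_sink(self, sink):
        self.sinks.append(sink)

    def publish(self, record):
        # Every registered sink receives a copy of every record.
        for sink in self.sinks:
            sink.append(record)

splunk, elk = [], []
router = LogRouter()
router.add_sink(splunk)
router.add_sink(elk)
router.publish({"host": "app01", "msg": "login ok"})
# both sinks now hold the same record
```

Because applications only talk to the router (the log library), destinations can be added or swapped without touching application code, which is the point of the decoupling.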
Nicolas Steinmetz [CérénIT] | Sustain Your Observability from Bare Metal TICK... | InfluxData
When moving your apps to Kubernetes, you need to keep your existing observability at the same level or better. Kubernetes will give you some challenges, as you can’t deploy the TICK Stack exactly as you did before, but it also opens up some opportunities. The talk is about my journey on this topic and will cover Telegraf as a DaemonSet to fetch node resources, as a Deployment to fetch metrics from different endpoints, and hopefully with the Telegraf operator to illustrate sidecar deployment. All these metrics will be pushed to InfluxDB (v1/v2) and may be visualized in Chronograf or Grafana.
Three Ways InfluxDB Enables You to Use Time Series Data Across Your Entire En... | InfluxData
The more your team can collaborate around data, the more useful that data is. This is especially true for time-series data that is increasingly the heartbeat of your business. When your entire team can utilize time series data, they know the pulse of your devices, your equipment, your customers, and your software -- and can act accordingly.
In this webinar, product manager Russ Savage will show you three new ways for your team to collaborate around time-series data.
First, InfluxDB Notebooks let you create and share computational narratives that combine live code, visualizations, and explanatory notes, which can output to your InfluxDB Dashboards, Tasks, and Buckets. You can use Notebooks to better document your downsampling, data processing, incident investigations, postmortems, and runbooks.
Next, InfluxDB Annotations let you explain the why behind time series data trends. Annotations can be used to communicate how time series data is impacted by changes to software deployments (like configurations, upgrades, or outages), user behavior (Cyber Monday, deadlines), business activities (ad campaign, sales incentives), or external events (natural disasters, weather). With team members sharing contextual clues, you’ll more quickly determine root cause and restore services faster.
Finally, learn how to apply GitOps practices to managing InfluxDB configurations, dashboards, tasks, and alerts, as well as Telegraf configurations, ensuring better collaboration workflows between developers, SREs, and every stakeholder involved in time series collection, enrichment, and analysis.
Setting Up InfluxDB for IoT by David G Simmons | InfluxData
David walks you through a typical data architecture for an IoT device, then turns it into a hands-on workshop: gather data from the device, display it on a dashboard, and trigger alerts based on thresholds that you set. View this InfluxDays NYC 2019 presentation to learn about setting up InfluxDB for IoT.
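A threshold-based alert of the kind the workshop configures can be sketched in a few lines; the field names and limits below are illustrative assumptions, since in the workshop the thresholds are whatever you set:

```python
def check_thresholds(reading, limits):
    """Return an alert message for each field exceeding its limit.

    `reading` is one sensor sample; `limits` maps field name to the
    maximum allowed value. Both are illustrative, not a real device API.
    """
    alerts = []
    for field, limit in limits.items():
        value = reading.get(field)
        if value is not None and value > limit:
            alerts.append(f"{field}={value} exceeds {limit}")
    return alerts

alerts = check_thresholds(
    {"temperature": 31.2, "humidity": 40},
    {"temperature": 30.0, "humidity": 80},
)
# -> ['temperature=31.2 exceeds 30.0']
```

In a real deployment the same check would run as a task over data queried from InfluxDB, with the alert wired to a notification endpoint instead of a returned list.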
This presentation provides an introduction to the Cloudify integration plugin with Terraform.
This integration allows Terraform users to use Cloudify to manage the configuration and workflows of applications on top of infrastructure that was created by Terraform.
Join our webinar on dealing with too many automation tools and platforms, and learn how the newest Cloudify 5.1 release introduces the Orchestrator of Orchestrators and how this helps.
Hitchhiker's guide to Cloud-Native Build Pipelines and Infrastructure as Code | Robert van Mölken
As more and more application deployments move to the cloud, the scale and complexity become harder to manage. Instead of a handful of large instances, you might have many smaller instances, so there are many more things you need to provision. Because of this, cloud vendors provide API abstractions of their compute, storage, network, and other platform services. In this talk I present a guide to provisioning these services, such as a Kubernetes cluster, using infrastructure as code, and deploying your applications through cloud-native build pipelines. Get to know the concepts behind these DevOps practices and hear which tools, like Terraform and Oracle Container Pipelines, to use to automate these laborious tasks on Oracle Cloud Infrastructure.
A short introduction into how the GitOps toolkit can be used to deploy Confluent for Kubernetes.
The demo covers:
1. Building a clear Kafka vision
2. Declarative cluster management (including Connectors)
3. Automating Confluent Cloud
4. Demoing GitOps with Terraform provisioning of Confluent Cloud
All code for this Demo can be found here: https://github.com/osodevops/confluent-gitops-demo
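At its core, the GitOps pattern demoed above is a reconcile loop: compare the desired state declared in Git with the actual state of the platform, then apply the difference. A minimal Python sketch of that idea follows; the resource names and state shapes are illustrative and not the GitOps toolkit's or Confluent's actual APIs:

```python
def reconcile(desired, actual):
    """Compare desired state (from Git) with actual state; return the changes to apply."""
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_update = {k: v for k, v in desired.items() if k in actual and actual[k] != v}
    to_delete = [k for k in actual if k not in desired]
    return to_create, to_update, to_delete

# Desired state, as it would be declared in a Git repository
desired = {"kafka-cluster": {"brokers": 3}, "connector-s3": {"tasks": 2}}
# Actual state, as reported by the platform
actual = {"kafka-cluster": {"brokers": 1}, "old-topic": {"partitions": 6}}

create, update, delete = reconcile(desired, actual)
print(create)  # {'connector-s3': {'tasks': 2}}
print(update)  # {'kafka-cluster': {'brokers': 3}}
print(delete)  # ['old-topic']
```

A real controller would run this loop continuously and apply the resulting changes through the platform's API, but the diffing step is the heart of declarative management.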
The Oracle Cloud allows you to build and configure various infrastructure resources. But you won't get far by just clicking through the Web Console, especially if you want to build several similar and complex environments: mouse clicks cannot simply be saved and replayed. Oracle offers several APIs to create and manage objects in OCI, e.g. the OCI command-line utility, the OCI SDK, and the Terraform provider. This presentation will explain how to implement Infrastructure as Code in OCI using Terraform and the Oracle Terraform provider. Using a training environment as an example, it will be shown how to build server, database, and network components with Terraform, and how to scale them in terms of resources or number.
Presenting the newest version of Cloudify, 4.6, including an orchestrated SD-WAN demo from MEF18 where Cloudify is used as the orchestration platform for container-based uCPE.
Moving at the speed of startup with Pivotal Cloud Foundry 1.11 | VMware Tanzu
Pivotal Cloud Foundry 1.11 is now generally available. Join Jared Ruckle and Pieter Humphrey for a deeper look at new capabilities, along with a Q&A about many of the new product features, including:
CredHub Bootstrapping
- A new way to manage and secure credentials for Pivotal Cloud Foundry
Container Networking
- Create app-level security policies and run modern apps in a "zero trust" environment
Volume Services
- Bring stateful apps to Pivotal Cloud Foundry
New Spring Boot Actuator
- Integrations with Apps Manager to ease troubleshooting
PCF Metrics 1.4
- New custom metrics tracking as a result of a tighter integration with Spring Boot
Attend this webinar and learn how to get the most from the enhancements to Pivotal Cloud Foundry 1.11, the leading multi-cloud app development platform.
Presenter : Jared Ruckle, Mukesh Gadiya and Pieter Humphrey, Pivotal
https://content.pivotal.io/webinars/jul-19-pivotal-cloud-foundry-1-11-credhub-container-networking-spring-boot-actuator-webinar
What a No-Compromises Hybrid Cloud Looks Like | Nati Shalom
Expectation vs. reality of a typical enterprise cloud journey
Lessons learned on how to set a cloud-native strategy without compromising on the least common denominator or going through a complete rewrite
It has long been debated whether OpenStack is production ready. In this session you will learn how a major bank went to production with more than 5,000 VMs, delivering a 40% decrease in cost, deployment times reduced to hours rather than weeks, 56 new technologies introduced, and 7 new platforms launched, all in under a year. Learn how their platform, built on Rackspace and RHEL and coupled with best-of-breed open source tooling (SaltStack, Jenkins, Cloudify, and Nexus), is the enabler for production-grade OpenStack.
http://sched.co/7fH1
Orchestration Tool Roundup: Kubernetes vs. Docker vs. Heat vs. Terraform vs... | Nati Shalom
Video recording: https://www.youtube.com/watch?v=tGlIgUeoGz8
It’s no news that containers represent a portable unit of deployment, and OpenStack has proven an ideal environment for running container workloads. However, things usually become more complex because an application is often built out of multiple containers. What’s more, setting up a cluster of container images can be fairly cumbersome: you need to make one container aware of another and expose the intimate details required for them to communicate, which is not trivial, especially if they’re not on the same host.
These scenarios have instigated the demand for some kind of orchestrator, and the list of container orchestrators is growing fairly fast. This session will compare the different orchestration projects out there, from Heat to Kubernetes to TOSCA, and help you choose the right tool for the job.
Session link from the summit: https://openstacksummitmay2015vancouver.sched.org/event/abd484e0dedcb9774edda1548ad47518#.VV5eh5NViko
Real World Application Orchestration Made Easy on VMware vCloud Air, vSphere ... | Nati Shalom
Looking for application orchestration in a hybrid or multi-cloud environment? You’ve got to hear about TOSCA orchestration. TOSCA (Topology and Orchestration Specification for Cloud Applications), brought to you by the same people who brought us XML, enables you to seamlessly migrate your workloads across environments or build a hybrid deployment that runs simultaneously across the VMware cloud offering.
Join our Cloud Online Meetup to learn how Cloudify’s TOSCA-compliant orchestration can be your common management interface across the VMware cloud offering, OpenStack and heterogeneous cloud environments.
Speakers:
Nati Shalom, Founder and CTO at GigaSpaces, is a thought leader in cloud computing and big data technologies. Shalom was recently recognized as a Top Cloud Computing Blogger for CIOs by The CIO Magazine, and his blog is listed as an excellent blog by Y Combinator. Shalom is the founder and one of the leaders of the OpenStack Israel group, and is a frequent presenter at industry conferences.
Paco Gomez, Senior Solution Architect at VMware vCloud Air. Paco evaluates and integrates strategic solutions that help vCloud Air clients benefit from VMware's hybrid cloud and application services. Paco is a seasoned technologist, having extensive experience in diverse fields including mainframes, distributed systems, enterprise development, cloud computing, mobile, assistive technology, electrical engineering and embedded systems. Across his career, Paco has held positions in consulting, sales engineering
OpenStack Juno: The Complete Lowdown and Tales from the Summit | Nati Shalom
This presentation covers the main points from the summit and the OpenStack Juno release.
It also covers how users use OpenStack, based on the recent user survey.
Application and Network Orchestration using Heat & TOSCA | Nati Shalom
The buzzwords Neutron, Heat, and TOSCA come up quite often when it comes to OpenStack, and many of us are still trying to make sense of the terminology and its place in the OpenStack world.
Where OpenStack Neutron provides APIs for creating network elements, OpenStack Heat provides an orchestration engine for automating the setup and configuration of OpenStack infrastructure, while TOSCA is a standard for templating and defining application topology and policies (that form the basis for Heat). In this context, it really makes sense to put these all together to achieve application and network automation for OpenStack on steroids.
In this session we will learn how we can use the robust combination of Heat and TOSCA to configure and control resources on Nova and Neutron in order to automate the network configuration as part of the application deployment.
The session will include a demo and code examples that show how you can configure virtual networks, attach public IPs, set up security groups, set up load balancing and automatically scale up/down and more. You will leave this session understanding where Neutron meets Heat and TOSCA.
This talk was delivered as part of OpenStack Paris summit - 2014 - http://openstacksummitnovember2014paris.sched.org/event/2b85b682ccaf3a5961e463b61e2403f8#.VFeuG_TF8mc
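To make the Heat side of the session concrete, here is a minimal sketch of a Heat (HOT) template composed as plain Python data and printed as JSON. The `OS::Neutron::*` resource types and the `get_resource` intrinsic are standard Heat constructs, but the property values and overall layout here are illustrative examples only:

```python
import json

# Sketch of a HOT-style template: a network, a subnet on it, a security
# group opening port 80, and a floating IP on the public network.
template = {
    "heat_template_version": "2014-10-16",
    "resources": {
        "app_net": {"type": "OS::Neutron::Net"},
        "app_subnet": {
            "type": "OS::Neutron::Subnet",
            "properties": {
                "network_id": {"get_resource": "app_net"},
                "cidr": "10.0.0.0/24",
            },
        },
        "web_secgroup": {
            "type": "OS::Neutron::SecurityGroup",
            "properties": {
                "rules": [{"protocol": "tcp",
                           "port_range_min": 80,
                           "port_range_max": 80}],
            },
        },
        "public_ip": {
            "type": "OS::Neutron::FloatingIP",
            "properties": {"floating_network": "public"},
        },
    },
}
print(json.dumps(template, indent=2))
```

In practice such a template would be written in YAML and fed to Heat, which resolves the `get_resource` references and creates the Neutron resources in dependency order.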
During the past few years we’ve seen our entire data center become software defined. This includes compute, storage, network, and also configuration. This new data center is the cloud.
The missing piece in the puzzle:
While this is pretty much old news, one big thing is still missing in this puzzle: the operator itself.
The operator is responsible for running processes such as:
* Installation of new apps
* Upgrades and updates of new features or patches
* Performance tuning
* Handling failures
* Managing capacity to meet scaling demand
Most of those tasks today involve lots of human intervention. Users who recognize that gap try to mitigate it with their own custom automation, usually in the form of scripts on top of configuration management. Those custom scripts tend to grow fairly quickly, to the point where they become unmanageable.
This presentation will introduce how we can use an orchestrator to automate those tasks and thereby create a software-defined operator.
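As a miniature sketch of what a software-defined operator could look like, the following Python snippet models day-2 tasks as declarative workflows of named steps rather than ad-hoc scripts. The step names and logic are purely illustrative, not any specific orchestrator's API:

```python
# Each workflow is an ordered list of step functions sharing a context dict,
# so every task is executed and logged uniformly instead of via one-off scripts.

def install(ctx): ctx["installed"] = True
def configure(ctx): ctx["configured"] = True
def health_check(ctx): ctx["healthy"] = ctx.get("installed", False)

WORKFLOWS = {
    "install": [install, configure, health_check],
    "heal": [health_check, configure],
}

def run_workflow(name, ctx=None):
    ctx = ctx if ctx is not None else {}
    for step in WORKFLOWS[name]:
        step(ctx)  # each step mutates the shared context
        print(f"{name}: ran step {step.__name__}")
    return ctx

state = run_workflow("install")
print(state["healthy"])  # True
```

The point of the structure is that the same engine can run install, upgrade, or heal workflows, trace each step, and retry on failure, which is exactly what grows unmanageable when done with bespoke scripts.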
Complex Analytics with a NoSQL Data Store in Real Time | Nati Shalom
NoSQL stores are often limited in the types of queries they can support due to the distributed nature of the data. In this session we will learn patterns for overcoming this limitation and combining multiple query semantics with NoSQL-based engines.
We will specifically demonstrate a combination of key/value, SQL-like, document-model, and graph-based queries, as well as more advanced topics such as handling partial updates and querying through projection. We will also demonstrate how to create a mashup between those APIs, i.e. write fast through a key/value API and execute complex queries on that same data through SQL.
- See more at: http://nosql2014.dataversity.net/sessionPop.cfm?confid=81&proposalid=6335
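The write-through-key/value, query-through-SQL mashup can be shown with a small self-contained Python sketch. Here `sqlite3` merely stands in for a distributed NoSQL engine, and the schema (a projected `age` column alongside the JSON document) is an assumption made for illustration:

```python
import json
import sqlite3

# One table serves both APIs: the key/value path reads/writes the JSON doc,
# while the projected 'age' column lets SQL query the same data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, doc TEXT, age INTEGER)")

def put(key, doc):
    # key/value write path; 'age' is projected out for SQL querying
    db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?, ?)",
               (key, json.dumps(doc), doc.get("age")))

def get(key):
    # key/value read path
    row = db.execute("SELECT doc FROM kv WHERE key = ?", (key,)).fetchone()
    return json.loads(row[0]) if row else None

put("user:1", {"name": "Ada", "age": 36})
put("user:2", {"name": "Linus", "age": 28})

print(get("user:1")["name"])  # Ada
adults = db.execute("SELECT key FROM kv WHERE age > 30").fetchall()
print([k for (k,) in adults])  # ['user:1']
```

The projection step is the key trick: by materializing selected fields into queryable columns at write time, fast key/value writes and rich SQL reads coexist over the same records.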
Is Orchestration the Next Big Thing in DevOps? | Nati Shalom
DevOps processes (such as continuous deployment and delivery) often involve writing many custom scripts that are triggered by the build system. With that approach, it is relatively hard to trace the deployment process and troubleshoot when something goes wrong. Additionally, custom scripts are often not written in an easily understood manner. In this session we will walk through specific DevOps workflows (such as install, update, etc.) using Riemann as the framework in question, and see the steps required to automate those processes. We will also discuss how Cloudify uses Riemann to provide simple execution and monitoring of those workflow processes. We will share how one customer, PaddyPower, was able to leverage Cloudify to transition their traditional IT into a DevOps environment, bridging the gap between Dev and Ops.
When Networks Meet Apps (OpenStack Atlanta) | Nati Shalom
Recent advancements in OpenStack capabilities have made the cloud better tuned to enterprise needs by introducing much more flexible network designs and networking services, with the tradeoff of making the cloud more complex.
In this session we will describe how to leverage the power of these new networking advancements without exposing the complexity to the end user. We will present alternative approaches, and their tradeoffs, for automating the deployment of a typical n-tier enterprise application: a multi-tenant environment, separate networks for admin and applications, a cross-region network, attaching a floating IP, setting up security groups, etc., all through a combination of Heat, TOSCA, Chef, Puppet, and more.
The experience of automating continuous delivery processes with Chef and Cloudify through an application-centric approach to DevOps, and how this model transformed PaddyPower's traditional IT into DevOps, keeping their Devs and their Ops happy.
References:
---------------
- Cloudify & Chef : http://www.cloudifysource.org/guide/2.7/integrations/chef_documentation
- Blog Post: http://www.cloudifysource.org/2013/10/27/application_centric_approach_to_devops.html
- Earlier Video Presentation : http://www.youtube.com/watch?v=YhDNKyP_s7U
Real-Time Big Data at In-Memory Speed, Using Storm | Nati Shalom
Storm, a popular framework from Twitter, is used for real-time event processing. The challenge presented is how to manage the state of your real-time data processing at all times. In addition, you need Storm to integrate with your batch processing system (such as Hadoop) in a consistent manner.
This session will demonstrate how to integrate Storm with an in-memory database/grid, and explore various strategies for integrating the data grid seamlessly with Hadoop and Cassandra. By achieving smooth integration with consistent management, you will be able to easily manage all the tiers of your Big Data stack in a consistent and effective way.
- See more at: http://nosql2013.dataversity.net/sessionPop.cfm?confid=74&proposalid=5526
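A toy Python version of the core idea: keep the streaming state in memory (as a data grid would) and periodically snapshot it toward a batch layer such as Hadoop or Cassandra. The class, thresholds, and sink are illustrative, not Storm's actual bolt API:

```python
from collections import Counter

class CountingBolt:
    """Counts stream events in memory and flushes snapshots to a batch sink."""
    def __init__(self, flush_every=3):
        self.state = Counter()   # in-memory state of the stream
        self.flushed = []        # stands in for the batch-layer sink
        self.flush_every = flush_every
        self.seen = 0

    def process(self, event):
        self.state[event] += 1
        self.seen += 1
        if self.seen % self.flush_every == 0:
            # take a consistent snapshot of the state for the batch layer
            self.flushed.append(dict(self.state))

bolt = CountingBolt()
for ev in ["click", "view", "click", "click", "view", "click"]:
    bolt.process(ev)

print(bolt.state["click"])  # 4
print(len(bolt.flushed))    # 2
```

The periodic snapshot is what makes the real-time tier and the batch tier consistent: the grid holds the hot state, while the batch store receives durable, point-in-time views of it.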
Disaster Recovery on Demand on the Cloud | Nati Shalom
How to avoid Cloud Outages and leverage cloud economics to keep the cost down through automation of disaster recovery processes and on-demand deployment of the backup nodes.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... | UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 | Tobias Schneck
As AI technology pushes into IT, I was wondering, as an “infrastructure container Kubernetes guy”, how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I will give an overview of the infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
UiPath Test Automation using UiPath Test Suite series, part 4 | DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... | James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
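As an illustration of what capturing a deployment bill of materials might involve, here is a hedged Python sketch that records artifact names, versions, and content digests at deploy time. The field names and helper function are assumptions for illustration, not a DBOM standard or the speakers' tooling:

```python
import hashlib
import json

def make_dbom(environment, artifacts):
    """Build a deployment bill of materials: what was deployed, where, and its digest."""
    entries = []
    for name, version, content in artifacts:
        digest = hashlib.sha256(content).hexdigest()
        entries.append({"name": name, "version": version, "sha256": digest})
    return {"environment": environment, "artifacts": entries}

# Hypothetical artifacts captured at deploy time
dbom = make_dbom("prod", [("payment-svc", "1.4.2", b"binary-bytes"),
                          ("frontend", "2.0.0", b"other-bytes")])
print(json.dumps(dbom, indent=2))
```

Recording digests alongside versions is what lets a deployment firewall later verify that what is running in production matches exactly what was approved for release.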
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Accelerate your Kubernetes clusters with Varnish Caching | Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality | Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI support | Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
"Impact of front-end architecture on development cost", Viktor Turskyi | Fwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen developers implement features on the front-end just following the standard rules for a framework, thinking that this is enough to successfully launch the project, and then the project fails. How do you prevent this, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... | DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions), and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
2. Cloudify & Terraform
[Architecture diagram: infrastructure orchestration and service orchestration layers; application services "contained in" hosts and other resources; enterprise service management with a portal service catalogue, RBAC & multi-tenancy, and logging & monitoring]
● Targeted at existing TF users as an extension
● TF developers continue using TF as is
● Cloudify syncs with the TF backend:
○ Pulls new states
○ Applies updates
● A Cloudify deployment exposes selected data from TF as Cloudify nodes. This allows executing workflows and triggering auto-scaling and healing, using the standard Cloudify platform, on resources that were created by TF.
3. Cloudify & Terraform: The Big Picture
[Architecture diagram: uService orchestration layered over service orchestration and infrastructure orchestration (e.g. Azure ARM); services "contained in" hosts and other resources; enterprise service management with a portal service catalog, RBAC & multi-tenancy, and logging & monitoring]
4. Show me the code...
Cloudify compute nodes get their IP from a host that has been created by Terraform.
The Cloudify Terraform module creates the underlying infrastructure by executing the relevant Terraform file and populates its state as runtime properties.
https://github.com/cloudify-incubator/cloudify-terraform-plugin
Cloudify passes input variables to TF through Cloudify secrets.
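To illustrate how state from a Terraform run can surface as runtime properties, here is a hedged Python sketch that pulls a single attribute out of a parsed, simplified tfstate document. The state layout below is a trimmed example of the real tfstate format, and the helper is illustrative rather than the plugin's actual API:

```python
import json

# A simplified excerpt of a Terraform state file (tfstate)
tfstate = json.loads("""
{
  "resources": [
    {
      "type": "aws_instance",
      "name": "app_host",
      "instances": [{"attributes": {"id": "i-0abc", "private_ip": "10.0.0.12"}}]
    }
  ]
}
""")

def find_attribute(state, rtype, name, attr):
    """Pull one attribute of a named resource out of a parsed tfstate document."""
    for res in state["resources"]:
        if res["type"] == rtype and res["name"] == name:
            return res["instances"][0]["attributes"].get(attr)
    return None

# e.g. a compute node resolving its IP from a Terraform-created host
ip = find_attribute(tfstate, "aws_instance", "app_host", "private_ip")
print(ip)  # 10.0.0.12
```

This is the essence of the integration: once Terraform's state is parsed and stored as runtime properties, downstream nodes and workflows can consume values like the host IP without knowing how the resource was created.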