The KELK Stack - Kinesis, Elasticsearch, Lambda, Kibana.
This presentation walks through an example of a serverless logging pattern built from AWS services, capable of processing large volumes of streaming data with low latency.
Serverless architectures are promising and will play an important role in the coming years, but the ecosystem around serverless is still pretty young. We have been operating Lambda-based applications for about a year and have faced several challenges. In this presentation we share these challenges and propose some solutions to work around them.
Automate all your EMR related activities (Eitan Sela)
This presentation was part of "AWS Big Data Demystified #5 | Automate all your EMR related activities" meetup.
In this presentation I shared, from my own experience, how we managed to automate EMR cluster creation for scheduled ETL Spark jobs, submit ad-hoc Spark steps, and create EMR clusters per developer request using Slack, with the help of the chatbot developed at WeissBeerger.
How to move a mission critical system to 4 AWS regions in one year? (Wojciech Gawroński)
A year ago our team was challenged to enhance the scope and scale of an existing platform that provides significant revenue for our client. As the designers and maintainers of that solution, we decided to leverage the AWS cloud during that transition. In the presentation, I discuss how we tackled that migration, with the assumption that we had to move in a resource-limited, hybrid cloud environment, working in close cooperation with teams responsible for other parts of the system. As I stated previously, it was a challenge, and I talk about the problems we solved during that process and the services we leveraged to smooth the transition. Last, but not least, I present how we maintained the delivery pipeline, automation and a massive pile of CloudFormation templates, and why AWS Lambda is an excellent glue for any operational work you have to do in the cloud. Our hard work paid off: in October 2017 we deployed our system into a 4th AWS region. Bear with me during the talk, and you will learn how we achieved that.
Connect Code to Resource Consumption to Scale Your Production Spark Applicati... (Databricks)
Apache Spark is a dynamic execution engine that can take relatively simple Scala code and create complex and optimized execution plans. In this talk, we will describe how user code translates into Spark drivers, executors, stages, tasks, transformations, and shuffles. We will also discuss various sources of information on how Spark applications use hardware resources, and show how application developers can use this information to write more efficient code. We will show how Pepperdata’s products can clearly identify such usages and tie them to specific lines of code. We will show how Spark application owners can quickly identify the root causes of such common problems as job slowdowns, inadequate memory configuration, and Java garbage collection issues.
Presented at the Auckland AWS Meet-up:
In this meet-up, Chris will take us through an interactive session that will examine log solutions in the cloud.
We'll take a look at some possible build-your-own architectures on AWS, common tools and practices, and commercial options. We'll then demo logging data from an EC2 Instance using Amazon Kinesis, Amazon Elasticsearch Service and S3.
Docker and AWS have been working together to improve the Docker experience you already know and love. Deploying from Docker straight to AWS with your existing workflow has never been easier. Developers can use Docker Compose and Docker Desktop to deploy applications on Amazon ECS on AWS Fargate. This new functionality streamlines the process of deploying and managing containers in AWS from a local development environment running Docker. Join us for a hands-on walk through of how you can get started today.
Automating Application over OpenStack using Workflows (Yaron Parasol)
OpenStack Heat is gaining momentum as a DevOps tool to orchestrate the creation of OpenStack cloud environments. Heat is based on a DSL describing simple orchestration of cloud objects, but lacks better representation of the middleware and the application components as well as more complex deployment and post-deployment orchestration workflows. The Heat community has started discussing a higher level DSL that will support not just infrastructure components.
This session will present a further extended suggestion for a DSL based on the TOSCA specification, which covers broader aspects of an application behavior and deployment such as the installation, configuration management, continuous deployment, auto-healing and scaling. We will also share some of our thoughts on how this DSL can interface with native OpenStack projects, such as Heat, Keystone and Ceilometer.
All the Ops: DataOps with GitOps for Streaming data on Kafka and Kubernetes (DevOps.com)
Running Apache Kafka and Kubernetes is synonymous with containerized real time data. Many users have adopted the pairing to deploy and manage individual distributed real time applications.
While Kubernetes allows developers to scale applications in microservices quicker, there are still productivity blockers such as visibility and governance.
Enter DataOps.
In this webinar, you'll learn how to:
* Enhance the productivity of your Kafka & Kubernetes stream with DataOps
* Enable enterprise adoption and scaling
* Govern & secure your stream
Auditing data and answering the life long question, is it the end of the day ... (Simona Meriam)
At Nielsen, data is very important. Being the core of our business, we love it and there’s lots of it. We don’t want to lose it, and at the same time, we don’t want to duplicate it.
Our data goes through a robust Kafka architecture, into several ETLs, receiving, transforming and storing the data.
While we clearly understood our ETLs’ workflow, we had no visibility into what parts of the data, if any, were lost or duplicated, and in which stage or stages of the workflow, from source to destination.
But how much do we know about the way our data makes it through our systems? And what about the lifelong question: is it the end of the day yet?
In this talk I’m going to present the design process behind our data auditing system, Life Line: from tracking and producing to analysing and storing auditing information, using technologies such as Kafka, Avro, Spark, Lambda functions and complex SQL queries. We’re going to cover:
* AVRO Audit header
* Auditing heartbeat - designing your metadata
* Designing and optimising your auditing table - what does this data look like anyway?
* Creating an alert based monitoring system
* Answering the most important question of all - is it the end of the day yet?
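The "audit header" idea above can be sketched in a few lines: each record carries provenance metadata so that lost or duplicated data can be traced back to a pipeline stage. This is a minimal illustration only; the field names are hypothetical, not Nielsen's actual Avro schema.

```python
# Hypothetical audit header attached to every record before it is produced
# downstream. Field names are illustrative, not the schema from the talk.
import time
import uuid

def with_audit_header(payload: dict, stage: str, source: str) -> dict:
    """Wrap a record with provenance metadata for later auditing."""
    return {
        "audit": {
            "record_id": str(uuid.uuid4()),   # unique id, used to detect duplicates
            "stage": stage,                   # which ETL stage emitted this record
            "source": source,                 # originating system or topic
            "emitted_at_ms": int(time.time() * 1000),
        },
        "payload": payload,
    }

record = with_audit_header({"views": 42}, stage="ingest", source="kafka-topic-a")
```

Grouping these headers per stage and time window is what makes the "is it the end of the day yet?" question answerable: counts per `record_id` reveal loss and duplication between stages.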
Continuous Deployment to the Cloud using Spinnaker (Tim Ysewyn)
In our quest to get to production faster, we've tackled culture, architecture and infrastructure: organizing ourselves in cross-functional DevOps teams, embracing microservice architectures, and deploying to the various clouds out there. Along the way we’ve learned some best practices about how to deploy software at velocity — things like automated releases, immutable infrastructure, gradual rollouts and fast rollbacks.
Back in 2014, Netflix started building Spinnaker, an open-source multi-cloud continuous delivery platform that embodied these core principles of safe, frequent and reliable releases. In June Spinnaker 1.0 was released, with core contributions from Google, Microsoft, Oracle and Pivotal to name a few.
Built on Spring Boot, its architecture is surprisingly familiar. During this session we'll give you a tour of how Spinnaker works, how we are using it at our clients, as well as what it could do for your continuous delivery pipeline.
How do you improve the visibility of your logs while running Spark on EMR? If you're tired of ssh-ing into your servers and searching log files, this architecture design is for you.
The OSCAR framework is previewed, automatically deploying an elastic Kubernetes cluster with Minio (as the storage back-end), OpenFaaS (as the FaaS framework to execute functions in response to events) and Event Gateway (to route events).
Open-source development in GitHub: https://github.com/grycap/oscar
Nielsen Presents: Fun with Kafka, Spark and Offset Management (Simona Meriam)
We ingest billions of events per day into our big data stores, and we need to do it in a scalable, cost-efficient and consistent way. When working with Spark and Kafka, how and where you manage your consumer offsets has major implications for that. We will go in depth on the solution we ended up implementing and discuss the working process and the dos and don'ts that led us to its final design.
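The core pattern behind this kind of offset management can be sketched without any framework: commit the offset only after processing succeeds, which gives at-least-once delivery. This is a simplified, framework-agnostic illustration; the in-memory dict stands in for Kafka's committed offsets (or an external store), and partition handling is reduced to a single key.

```python
# "Commit after processing" sketch: a crash mid-batch replays records
# (possible duplicates) but never silently loses them. The offset_store
# dict is a stand-in for Kafka's committed offsets or an external store.
def process_batch(records, handler, offset_store, topic_partition):
    """Process (offset, record) pairs, then commit the last offset."""
    last_offset = offset_store.get(topic_partition, -1)
    for offset, record in records:
        if offset <= last_offset:
            continue  # already handled in a previous (possibly replayed) batch
        handler(record)
        last_offset = offset
    # Commit only once the whole batch has been processed successfully.
    offset_store[topic_partition] = last_offset
    return last_offset

offsets = {}
seen = []
process_batch([(0, "a"), (1, "b")], seen.append, offsets, "t-0")
# A replayed batch overlaps the previous one; the duplicate is skipped.
process_batch([(1, "b"), (2, "c")], seen.append, offsets, "t-0")
```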
(SEC403) Diving into AWS CloudTrail Events w/ Apache Spark on EMR (Amazon Web Services)
Do you want to analyze AWS CloudTrail events within minutes of them arriving in your Amazon S3 bucket? Would you like to learn how to run expressive queries over your CloudTrail logs? We will demonstrate Apache Spark and Apache Spark Streaming as two tools to analyze recent and historical security logs for your accounts. To do so, we will use Amazon Elastic MapReduce (EMR), your logs stored in S3, and Amazon SNS to generate alerts. With these tools at your fingertips, you will be the first to know about security events that require your attention, and you will be able to quickly identify and evaluate the relevant security log entries.
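The filtering logic behind such queries can be prototyped locally before scaling out with Spark on EMR. CloudTrail delivers gzipped JSON objects with a top-level "Records" array; the event below is a trimmed, made-up example, and the watched event names are just an illustration of logging-tampering detection.

```python
# Prototype of a CloudTrail filter: flag events that suggest someone is
# tampering with audit logging. The sample event is fabricated.
import json

def suspicious_events(records, watched=("DeleteTrail", "StopLogging")):
    """Yield CloudTrail records whose eventName is on the watch list."""
    for r in records:
        if r.get("eventName") in watched:
            yield r

log = json.loads(
    '{"Records": [{"eventName": "StopLogging", '
    '"userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"}}]}'
)
hits = list(suspicious_events(log["Records"]))
```

In the session's architecture the same predicate would run inside a Spark job over the S3-hosted logs, with matches published to SNS.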
Serverless design considerations for Cloud Native workloads (Tensult)
We have built a news website with more than a billion views per month and we are sharing the learnings from that experience covering Serverless architectures, Design considerations, and Gotchas.
AWS to Bare Metal: Motivation, Pitfalls, and Results (MongoDB)
Like many startups, Wish grew up on AWS. As our cluster grew and the price of SSDs fell, we started exploring bare metal. Fast-forward 2 years and we have hundreds of MongoDB instances on bare metal fully integrated with our AWS infrastructure. It wasn't all smooth sailing, but the performance & cost improvements were worth it! Hear the story of how we did it and gain a framework for thinking about how to make the leap from cloud-centric architecture to a hybrid model.
Serverless aims to be the future of software development, but what does running without servers really mean? In this session we will explain how to build a serverless application on top of AWS. We will understand how AWS Lambda functions work, how to use them properly, and how we can debug and monitor serverless applications.
AWS re:Invent 2016: Running Batch Jobs on Amazon ECS (CON310) (Amazon Web Services)
Batch computing is a common way for developers, scientists and engineers to run a series of jobs on a large pool of shared compute resources, such as servers, virtual machines, and containers. Amazon ECS makes it easy to run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. In this session we will show you how to run batch jobs using Amazon ECS together with other AWS services, such as AWS Lambda and Amazon SQS. We will see how you can leverage Amazon EC2 Spot Instances to power your ECS cluster and easily scale your batch workloads. You'll hear from Mapbox on how they use ECS to power their entire batch processing architecture to collect and process over 100 million miles of sensor data per day that they use for powering their maps. Mapbox will also discuss how they optimize their batch processing framework on ECS using Spot Instances and demo their open source framework that will help you get up and running with ECS in minutes.
AWS Summit 2014 Brisbane - Breakout 6
Technical deep dive in to 10 AWS Cloud best practices with in-depth look at the tips and tricks of architecting on the AWS platform.
Presenter: Dean Samuels, Solutions Architect, Amazon Web Services
Scality CTO Giorgio Regni and Software Engineer Lauren Spiegel talk about the open source S3 clone, written in Node.js. This presentation was given at a meetup on September 1, 2016 in San Francisco.
AWS Summit 2014 Perth - Breakout 3
Technical deep dive in to 10 AWS Cloud best practices with in-depth look at the tips and tricks of architecting on the AWS platform.
Presenter: Dean Samuels, Solutions Architect, Amazon Web Services
AWS Summit 2014 Melbourne - Breakout 5
Technical deep dive in to 10 AWS Cloud best practices with in-depth look at the tips and tricks of architecting on the AWS platform.
Presenter: Dean Samuels, Solutions Architect, Amazon Web Services
AWS Certified Solutions Architect Professional Course S15-S18 (Neal Davis)
This deck contains the slides from our AWS Certified Solutions Architect Professional video course. It covers:
Section 15 Analytics Services
Section 16 Monitoring, Logging and Auditing
Section 17 Security: Defense in Depth
Section 18 Cost Management
Full course can be found here: https://digitalcloud.training/courses/aws-certified-solutions-architect-professional-video-course/
Apache Hadoop and Spark on AWS: Getting started with Amazon EMR - Pop-up Loft... (Amazon Web Services)
Amazon EMR is a managed service that makes it easy for customers to use big data frameworks and applications like Apache Hadoop, Spark, and Presto to analyze data stored in HDFS or on Amazon S3, Amazon’s highly scalable object storage service. In this session, we will introduce Amazon EMR and the greater Apache Hadoop ecosystem, and show how customers use them to implement and scale common big data use cases such as batch analytics, real-time data processing, interactive data science, and more. Then, we will walk through a demo to show how you can start processing your data at scale within minutes.
Containers Managing Secrets for Containers with Amazon ECS - AWS Online Tech ... (Amazon Web Services)
- Common methods and risks of injecting and sharing secrets for containerized applications
- Learn how to manage and insert secrets for containers using IAM roles and Amazon S3
- Learn how to configure container networking for security
Introducing Amazon EMR Release 5.0 - August 2016 Monthly Webinar Series (Amazon Web Services)
Amazon EMR is a managed Hadoop service that makes it easy for customers to use big data frameworks and applications like Hadoop, Spark, and Presto to analyze data stored in HDFS or on Amazon S3, Amazon’s highly scalable object storage service. In this webinar, we will introduce the latest release of Amazon EMR. With Amazon EMR release 5.0, customers can now launch the latest versions of popular open source frameworks including Apache Spark 2.0, Hive 2.1, Presto 0.151, Tez 0.8.4, and Apache Hadoop 2.7.2. We will walk through a demo to show you how to deploy a Hadoop environment within minutes. We will cover common use cases and best practices to lower costs using Amazon S3 as your data store and Amazon EC2 Spot Instances, which allow you to bid on spare Amazon EC2 computing capacity.
Learning Objectives:
• Describe the new features and updated frameworks in Amazon EMR 5.0
• Learn best practices and real-world applications for Amazon EMR
• Understand how to use EC2 Spot pricing to save costs
• Explain the advantages of decoupling storage and compute with Amazon S3 as storage layer for EMR workloads
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The talk covers the key trends across hardware, cloud and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to part 3 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover desktop automation along with UI automation.
Topics covered:
* UI automation introduction
* UI automation sample
* Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and get things to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. To finish, we held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation takes much work: it takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
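In practice JMeter's backend listener handles the write to InfluxDB, but the wire format underneath is InfluxDB's line protocol, which is easy to see by hand. A minimal sketch follows; the measurement and tag names are illustrative, and quoting/type-suffix rules (e.g. `i` for integers) are omitted for brevity.

```python
# Build one InfluxDB line-protocol point: measurement,tags fields timestamp.
# Simplified: string-field quoting and integer "i" suffixes are omitted.
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render a single line-protocol point from dicts of tags and fields."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "jmeter", {"transaction": "login"}, {"avg": 240.5, "count": 12},
    1700000000000000000,
)
```

A Grafana dashboard then queries the `jmeter` measurement, grouping by the `transaction` tag, which is essentially what the demo in the webinar shows.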
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
4. No centralised logging
• User needs OS knowledge
• Distribution of keys
• Enemy of autoscaling
• Log rotation
• Users download logs unnecessarily
• Doesn’t scale to many servers
• Slow to find issues
• Alerting is hard
• Sshing to servers :(
7. KELK on AWS (Steamhaus)
• Low maintenance - no EC2; uses entirely AWS serverless technologies and services
• ALB, CloudFront and CloudTrail logs are ingested as well as EC2 logs
• Logs are archived in S3 for long term storage, and indexed in Elasticsearch for short term analytics
• Automated with Terraform
• Open source
Kinesis: buffering and delivering instance logs
Elasticsearch: indexing and log storage
Lambda: processing and delivering S3 logs
Kibana: search and analytics
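The Lambda step above can be sketched as a handler that decodes log lines from a Kinesis event and builds an Elasticsearch `_bulk` payload. This is a minimal illustration, not code from the kelk-example repo: the index name and the delivery step (a signed HTTP POST to the Elasticsearch domain) are assumed.

```python
# Sketch of the Lambda role in the KELK stack: Kinesis event in,
# Elasticsearch bulk-API NDJSON out. Index name is illustrative.
import base64
import json

def kinesis_event_to_bulk(event, index="logs"):
    """Turn a Kinesis Lambda event into an Elasticsearch _bulk body."""
    lines = []
    for record in event["Records"]:
        # Kinesis delivers each record's payload base64-encoded.
        doc = json.loads(base64.b64decode(record["kinesis"]["data"]))
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # the bulk API requires a trailing newline

event = {"Records": [{"kinesis": {
    "data": base64.b64encode(b'{"message": "hello", "host": "web-1"}').decode()
}}]}
payload = kinesis_event_to_bulk(event)
```

The same shape of handler, pointed at S3 `ObjectCreated` events instead of Kinesis, covers the ALB/CloudFront/CloudTrail log path.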
21. Automation code
[Architecture diagram: a sample web stack (VPC, ALB, EC2) feeding a logging stack (Kinesis, Elasticsearch Service, Lambda, S3, CloudFront), automated with Python and Terraform]
Do try this at home! github.com/steamhaus/kelk-example
22. Callouts from the build
• It’s not production-ready; built for readability
• Nailing IAM and bucket policies can take a while!
• Testing Lambda - create a test event in the UI
• Use Terraform, rinse and repeat