Kamon is an open-source tool for monitoring JVM applications like those using Akka. It provides metrics collection and distributed tracing capabilities. The document discusses how Kamon 1.0 can be used to monitor Akka applications by collecting automatic and custom metrics. It also describes how to set up Kamon with Prometheus and Grafana for metrics storage and visualization. The experience of instrumenting an application at EMnify with Kamon is presented as an example.
2. Insights into the inner workings of an application become crucial, at the latest, when performance and scalability issues are encountered. This becomes especially challenging in distributed systems, for example when using Akka Cluster.
A popular open-source solution for monitoring on the JVM in general, and Akka in particular, is Kamon. With its recently reached 1.0 milestone, it provides means for both metrics collection and tracing of Akka applications, whether running standalone or distributed.
This talk gives an introduction to Kamon 1.0 with a focus on its metrics features. The basic setup using Prometheus and Grafana is described, as well as an overview of the different modules and their APIs for implementing custom metrics. The resulting setup allows recording both automatically exposed metrics about Akka's actor systems and metrics tailored to the monitored application's domain and service level indicators.
Finally, learnings from a first-time user's experience of getting started with Kamon are reported. The example of adding instrumentation to EMnify's mobile core application illustrates how easy it is to get started, and how to kill Prometheus on a daily basis.
Abstract
3. • Steffen
• has a heart beating for infrastructure
• writes code at EMnify
• PhD in computer science, topic: software-based networks
• EMnify
• MVNO focused on IoT
• runs a virtualized mobile core network
• Würzburg/Berlin, Germany
About Me & Us
@StGebert
Slides available at st-g.de/speaking
10. • Tracing
• Per-request call graph
• Context propagation across nodes
• Example objectives:
• Request profiling
• Understanding call graph
• Metrics
• Time series data
• Counters / gauges / distributions
• Example objectives:
• Function call counts and latency
• Open DB connections
• User logins
• Generated revenue
Kamon: Feature Set
11. • Custom Metrics
• added to your code where it makes sense
• Automatic Instrumentation
• integrations into Akka, Akka HTTP, Play, JDBC, Servlet
• system and JVM metrics
Metrics
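Custom metrics of the kinds shown on the following slides are created through Kamon's companion object. A minimal sketch, assuming the Kamon 1.0 metrics API (kamon-core on the classpath, so it is not runnable standalone; all metric names here are invented):

```scala
import kamon.Kamon
import kamon.metric.MeasurementUnit

// Counter: monotonically increasing, e.g. customers buying our product
val purchases = Kamon.counter("shop.purchases")
purchases.increment()

// Gauge: a value that goes up and down, e.g. open DB connections
val dbConnections = Kamon.gauge("db.connections.open")
dbConnections.increment()
dbConnections.decrement()

// Histogram: distribution of observed values, e.g. shopping cart totals
val cartTotal = Kamon.histogram("shop.cart.total")
cartTotal.record(4999)

// Timer: a histogram of latencies with start/stop convenience
val latency = Kamon.timer("http.request.latency")
val started = latency.start()
started.stop()

// RangeSampler: tracks the value range within each tick, e.g. mailbox size
val mailbox = Kamon.rangeSampler("actor.mailbox.size")
mailbox.increment()

// Tags are added via refine(), yielding a tagged instrument
val billingPurchases = purchases.refine("service" -> "billing")
billingPurchases.increment()
```

Each factory call returns the untagged default instrument; refine() derives tagged variants from it.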
12. • Counter
• function calls
• customer buying our product
• Gauge
• number of open DB connections
• mailbox size
Custom Metric Types
13. • Histogram
• latencies
• shopping cart total prices
• Timer
• latencies
• RangeSampler
• number of open DB connections
• mailbox size
Custom Metric Types (2)
[Figure: a single-sample histogram of observation counts per value (10-50)]
15. • Actor system metrics
• processed messages
• active actors
• unhandled messages
• dead letters
• Per actor performance metrics
• processing time (per message)
• time in mailbox
• mailbox sizes
• errors
Kamon Akka
[Diagram: a message passing through the mailboxes of Actors A, B, and C]
16. • Metrics related to
• routers
• dispatchers
• executors
• actor groups
• remoting (with kamon-akka-remote)
• Requirement (AOP)
• AspectJ Weaver or
• Kanela (Kamon Agent)
Kamon Akka (2)
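Attaching the weaver boils down to passing a -javaagent flag to the JVM; with sbt this can be sketched as follows (the weaver path and version are placeholders for whatever your build resolves):

```scala
// build.sbt -- sketch: run the app in a forked JVM with the AspectJ weaver attached
fork in run := true
javaOptions in run += "-javaagent:/path/to/aspectjweaver-1.8.13.jar"
```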
18. Related Projects
[Diagram: Targets → Time Series DB → Dashboard]
• Client libraries for targets: simple_client, DropWizard Metrics, Micrometer
• Commercial tools: Datadog, Dynatrace, Instana, NewRelic, etc.
19. • Time Series Database
• collection, storage & query of metrics data
• based on Google's Borgmon, CNCF project
• Pull-based model
• scrapes configured targets
• HTTP endpoints on monitored targets
• Easy deployment
• statically linked Golang binaries
• single YAML config file
• Alertmanager.. for alerting ;-)
Prometheus
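For the pull-based model above, a minimal scrape configuration might look like this (job name and host are made up; 9095 is the kamon-prometheus port shown in the setup slide later):

```yaml
# prometheus.yml -- minimal example
global:
  scrape_interval: 60s             # how often targets are scraped

scrape_configs:
  - job_name: my-akka-app
    static_configs:
      - targets: ['app-host:9095'] # kamon-prometheus HTTP endpoint
```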
20. • Integrated time series database
• on disk, no external dependency
• fixed retention period, no long-term storage / downsampling
• very efficient storage [1]
• query language PromQL
Prometheus TSDB
[1] Storing 16 bytes at scale, Fabian Reinartz @ PromCon 2017
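As a taste of PromQL, the per-second message-processing rate of an instrumented actor system could be queried roughly like this (a sketch; the label matchers depend on your setup):

```promql
# per-second rate over the last 5 minutes, summed per actor path
sum by (path) (rate(akka_actor_processing_time_seconds_count[5m]))
```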
25. • Tick interval (Kamon) and scrape frequency (Prometheus)
• both should match!
• usually (?) 30s or 60s
• for load tests, we went for 5s
• hope to go for 15s in production
• Deployment [for development / load tests]
• EC2 instances tagged in CloudFormation plus EC2 service discovery
• started simple (stupid): Prometheus in container on AWS ECS with EFS
Our Experiences with Kamon+Prometheus
Docker automated build config github.com/EMnify/prometheus-docker
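Matching the two intervals means aligning Kamon's tick interval (a key from kamon-core's reference configuration) with Prometheus' scrape interval; a sketch:

```hocon
# application.conf (Kamon side)
kamon.metric.tick-interval = 60 seconds
```

```yaml
# prometheus.yml (Prometheus side) -- must match the tick interval
global:
  scrape_interval: 60s
```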
26. • Little CPU resources + NFS storage + high cardinality = a dead Prometheus
• High cardinality?
• akka_actor_processing_time_seconds_bucket{
    class="com.example.SomethingFrequentlyUsed",
    le="0.33", …,
    path="mysystem/some-supervisor/$aX"}
How to Kill Prometheus (Regularly)
27. • Define actor groups
kamon.akka.actor-groups += "mygroup"
kamon.util.filters {
"akka.tracked-actor" {
excludes = ["mysystem/some-supervisor/*"]
}
mygroup {
includes = ["mysystem/some-supervisor/*"]
}
}
• Delete Prometheus data to recover
• Continue to watch out for metrics with unnamed actors
How to Fix Kamon to Not Kill Prometheus
28. • Limit the number of samples per scrape:
<scrape_config>
# Per-scrape limit on number of scraped samples that will be accepted.
[ sample_limit: <int> | default = 0 ]
• Watch for limit kicking in:
prometheus_target_scrapes_exceeded_sample_limit_total
How to Fix Prometheus to Not Kill Itself
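To get notified when the limit kicks in, a Prometheus 2.x alerting rule on that counter could be sketched like this (group and alert names are made up):

```yaml
# rules.yml
groups:
  - name: scrape-health
    rules:
      - alert: ScrapeSampleLimitExceeded
        expr: increase(prometheus_target_scrapes_exceeded_sample_limit_total[5m]) > 0
        for: 10m
        annotations:
          summary: "A target exceeded sample_limit; its scrapes are being dropped"
```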
30. • Hosted service
• by Kamon developers
• currently in private beta
• no price tags, yet
• Great user experience for us
• tailored to Akka monitoring
• distributions over time
• still, a few rough edges
Kamino Hosted Service
[Diagram: Targets → Time Series DB → Dashboard]
33. • Kamon offers a wide range of APM features
• customized and automated metric collection
• works with both on-prem/OSS and SaaS "backends"
• super friendly community, thanks Ivan!
• distributed tracing
• Monitor your application (from the inside!)
• now!
• better start small
Summary & Conclusion
34. Find me at the Speaker‘s Roundtable
Questions, please!
38. Setup with Kamon
[Diagram: your application (port 80) runs on the JVM together with Kamon; the kamon-prometheus module exposes metrics on port 9095. Prometheus (port 9090) scrapes that endpoint and a Node Exporter (port 9100), handling storage and retrieval via PromQL. Grafana queries Prometheus through its Prometheus data source.]
41. • Kamon core trackable values
• highest trackable values for range sampler / histogram
• can be adjusted per metric
• Default Prometheus histogram buckets might not fit
• global default can be adjusted
• PR pending for overriding per metric [1]
Adjusting Value Ranges / Aggregation
[1] kamon-io/kamon-prometheus#12
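The global bucket defaults live in the kamon-prometheus configuration; adjusting them can be sketched as follows (assuming the 1.x reference configuration keys; the values are arbitrary examples):

```hocon
# application.conf -- override kamon-prometheus default histogram buckets
kamon.prometheus.buckets {
  default-buckets = [10, 30, 100, 300, 1000, 3000, 10000, 30000, 100000]
  time-buckets    = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]
}
```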
42. Histograms
[Figures: a histogram evolving over time (value axis 10/30/50, time axis t) and a single-sample histogram of observation counts per value, from 0 to max]
• Describe values better than avg/min/max do
• Can be aggregated across nodes
• Usually percentiles/quantiles are computed
• Xth percentile: X% of the values are lower than <n>
• Median (= 50th percentile)
• SLO/SLA candidates: 90/95/99th percentiles of response times
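The percentile definition from the bullets above can be illustrated with a toy nearest-rank computation (illustration only, not Kamon code):

```scala
// Nearest-rank percentile: the smallest observed value such that at least
// p% of all observations are at or below it.
def percentile(p: Double, xs: Seq[Double]): Double = {
  require(xs.nonEmpty && p > 0 && p <= 100)
  val sorted = xs.sorted
  val rank = math.ceil(p / 100.0 * sorted.size).toInt
  sorted(rank - 1)
}

// Ten response-time observations in milliseconds, with one outlier.
val latenciesMs = Vector(12.0, 15.0, 11.0, 250.0, 14.0, 13.0, 16.0, 12.0, 18.0, 17.0)
println(percentile(50, latenciesMs)) // median -> 14.0
println(percentile(99, latenciesMs)) // 99th percentile -> 250.0 (the outlier)
```

This is also why percentiles describe a latency distribution better than an average: the single 250 ms outlier barely moves the median but dominates the upper percentiles. In PromQL, the same idea is computed from histogram buckets with the histogram_quantile() function.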