This document discusses monitoring Java Virtual Machine (JVM) performance. It covers monitoring garbage collection, the JIT compiler, class loading, and Java applications. It provides details on metrics, commands, and tools to monitor these aspects of the JVM and Java applications, including JConsole, VisualVM, and JMX. Monitoring is important for tuning performance and troubleshooting issues.
This presentation was given to the system administration team to give them an idea of how GC works and what to look for when a bottleneck or other trouble occurs.
Thread dumps provide snapshots of a Java application's threads and their states. When a slowdown occurs, get multiple thread dumps over time to analyze thread activity and identify potential issues like:
1) Lock contention between threads waiting to enter synchronized methods or blocks.
2) Deadlocks from circular wait conditions that can hang applications.
3) Threads waiting for I/O responses from databases or networks.
4) High CPU usage by specific threads as shown through monitoring tools.
Analyzing thread dumps helps locate performance bottlenecks and fix synchronization, resource contention, or inefficient code issues degrading application speed.
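The same per-thread state information that a thread dump shows can also be read programmatically through the standard ThreadMXBean API. A minimal sketch (the class name ThreadDumpSketch is my own) that counts live threads by state and asks the JVM whether any monitor deadlock exists:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.EnumMap;
import java.util.Map;

public class ThreadDumpSketch {
    // Count live threads by state, similar to scanning several jstack dumps.
    static Map<Thread.State, Integer> threadStates() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        Map<Thread.State, Integer> counts = new EnumMap<>(Thread.State.class);
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            counts.merge(info.getThreadState(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // A non-null result here means the JVM detected a monitor deadlock (issue 2 above).
        long[] deadlocked = ManagementFactory.getThreadMXBean().findDeadlockedThreads();
        System.out.println("deadlocked threads: " + (deadlocked == null ? 0 : deadlocked.length));
        threadStates().forEach((state, n) -> System.out.println(state + ": " + n));
    }
}
```

Many BLOCKED threads in such a snapshot point at lock contention (issue 1), while many threads stuck in WAITING or TIMED_WAITING often correspond to I/O waits (issue 3).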
An introduction to the Garbage First (G1) garbage collector for the JVM. The session covers general GC concepts, the fundamentals of G1, and how to set up and tune the JVM for G1.
How to monitor Java application and JVM performance with Flight Recorder and Mission Control. Starts with a discussion of general JVM performance considerations like GC, JIT and threads.
Viacheslav Blinov, "Java Garbage Collection: A Performance Impact" (Anna Shymchenko)
This document discusses Java garbage collection and its performance impact. It provides an overview of garbage collection, including that garbage collectors reclaim memory from objects no longer in use. It describes the different Java GC algorithms like serial, parallel, CMS, and G1 collectors and how to choose between them based on factors like heap size and CPU availability. It also gives guidance on basic GC tuning techniques like sizing the heap and generations as well as using adaptive sizing controls.
This document summarizes changes to the Java programming language from JDK 9 to JDK 16, including new features and modules. Some key points:
- Java has moved to a six-month release cycle, delivering more features faster than before.
- Modules were introduced in JDK 9 to improve modularity. Modules group related code and dependencies.
- Incubator modules and preview features allow testing of non-final APIs before inclusion in the Java SE platform.
- Local variable type inference using 'var' was added in JDK 10 for simpler declaration of local variables when types can be inferred.
- Modules, the module system, and tools like jlink and jdeps help manage dependencies
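The 'var' feature from JDK 10 mentioned above can be illustrated in a few lines; the class and method names in this sketch are made up for the example:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class VarDemo {
    // Build a word-length map using 'var' for the local declarations.
    static Map<String, Integer> lengths(List<String> words) {
        var result = new HashMap<String, Integer>(); // avoids repeating the generic type
        for (var word : words) {                     // inferred as String
            result.put(word, word.length());
        }
        return result;
    }

    public static void main(String[] args) {
        var words = List.of("jlink", "jdeps", "var"); // inferred as List<String>
        System.out.println(lengths(words));
    }
}
```

Note that 'var' is only legal for local variables with an initializer; fields, method parameters, and return types still require explicit types.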
The document discusses performance tuning of Java applications. It covers identifying bottlenecks in Java applications, techniques for performance engineering like defining problems, breaking down into sections, isolating issues and finding bottlenecks. It also provides examples of common bottlenecks like lock contention, deadlocks and waiting for I/O responses. Specific cases discussed include threads waiting for locks, circular waiting conditions causing hangs, and threads blocked waiting for database or network responses.
The document discusses tuning the Java Virtual Machine (JVM) and garbage collection (GC) to improve performance and prevent issues like memory leaks and out-of-memory errors. It explains how the JVM manages memory using different generations like the young and old generations and different GC algorithms for each. It also provides recommendations on how to select GC settings and monitor performance based on factors like throughput, pause times, and hardware.
A technical presentation on how Zing changes parts of the JVM to eliminate GC pauses, generate more heavily optimised code from the JIT and reduce the warm up time.
The workshop is based on several Nikita Salnikov-Tarnovski lectures plus my own research. The workshop consists of two parts. The first part covers:
- the different Java GCs, their main features, advantages, and disadvantages;
- principles of GC tuning;
- working with GCViewer as a tool for GC analysis;
- a first-steps tuning demo;
- a comparison of the primary GCs on Java 1.7 and Java 1.8.
The second part covers:
- working with off-heap storage: ByteBuffer / Direct ByteBuffer / Unsafe / MapDB;
- examples and a comparison of the approaches.
The off-heap-demo: https://github.com/moisieienko-valerii/off-heap-demo
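To illustrate the Direct ByteBuffer approach from part two, a minimal round-trip sketch using nothing beyond the standard java.nio API (the class and method names are my own):

```java
import java.nio.ByteBuffer;

public class OffHeapSketch {
    // Write and read a long outside the Java heap via a direct buffer.
    static long roundTrip(long value) {
        ByteBuffer buf = ByteBuffer.allocateDirect(Long.BYTES); // memory lives off-heap
        buf.putLong(0, value);   // absolute put: no position bookkeeping needed
        return buf.getLong(0);   // the GC never copies this memory during collections
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(42L));
    }
}
```

Because direct buffers sit outside the heap, large long-lived data stored this way adds no GC pressure, at the cost of manual serialization into and out of the buffer.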
Use OpenStack to run Java programs inside a Docker container (Miano Sebastiano)
This document discusses running Java programs inside Docker containers on OpenStack. It proposes modifying OpenStack to launch Java Archive (JAR) files inside Docker containers with a JVM instead of migrating entire virtual machines. The implementation would involve changes to OpenStack components like Nova to support this new object type and integrate a C library bridge to forward network traffic between virtual network interfaces. Performance analysis shows the potential benefits of migrating applications rather than whole virtual machines.
The document provides an overview of Java Virtual Machine (JVM) memory management and garbage collection strategies. It discusses the basic architecture of the JVM and how memory is divided into generations (young and old). The young generation uses strategies like mark-and-sweep and mark-and-copy for fast garbage collection of short-lived objects. The old generation uses mark-and-compact for longer-lived objects, which has higher overhead. It also describes different garbage collector implementations and considerations for selecting and tuning collectors based on application needs.
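The generational layout described above can be observed from inside a running JVM through the standard MemoryPoolMXBean API. A small sketch; note that the pool names vary by collector and JDK version (e.g. "G1 Eden Space" and "G1 Old Gen" under G1):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class GenerationSketch {
    public static void main(String[] args) {
        // Each memory pool maps onto a region of the collector's heap layout,
        // so this listing shows the young/old split the document describes.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-25s type=%s used=%d bytes%n",
                    pool.getName(), pool.getType(), pool.getUsage().getUsed());
        }
    }
}
```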
Discussion of Java 9 and its latest features (NexSoftsys)
The upcoming version of Java will be transformational for the platform, and developers can take comfort that Java 9 officially launches in 2017. This presentation also discusses the latest features included in Java 9.
This document provides an overview of the kubeadm tool for bootstrapping Kubernetes clusters. It discusses initializing the control plane with kubeadm init and joining nodes to the cluster with kubeadm join. Key phases of the kubeadm init command include preflight checks, generating certs, configuring the control plane and etcd, and uploading certificates. The kubeadm join phases validate the joining node and connect it to the control plane. Additional documentation links are also provided.
GlassFish v3 Prelude is a lightweight, modular application server featuring enhancements such as modular OSGi architecture, dynamic deployment capabilities, and support for Java EE 6 technologies. It provides simplified development features like auto redeployment and session retention. The lightweight server can be used for Java, Groovy, Ruby on Rails, and other applications and includes tools like the update center and embedded usage.
This document discusses tools and techniques for profiling Java applications, including the Solaris Studio Performance Analyzer and NetBeans IDE Profiler. It provides details on collecting profiling data, viewing results through the analyzer GUI or command line, and analyzing metrics like method execution times, memory usage, and thread activity to identify performance bottlenecks.
Did your application crash? Is it consuming too much memory? Running slowly? Let's talk about how the JVM works, how to collect metrics, and how to tune application performance using the JVM's native tools, as well as how to detect and fix problems such as memory leaks or freezes caused by the garbage collector.
The document describes the step-by-step methodology for tuning the Java Virtual Machine (JVM). It discusses determining system requirements and performance goals, choosing a JVM deployment model and runtime configuration, tuning garbage collection fundamentals, analyzing memory footprint and tuning for latency/responsiveness. Specific techniques covered include sizing the young and old generations, tuning survivor space size, CMS initiation occupancy and pause time.
This talk was given at the InfinIT event "Temadag: Java for real-time and embedded systems", held on 12 and 13 September 2013. Read more about the event here: http://infinit.dk/dk/arrangementer/tidligere_arrangementer/temadag_java_for_real-time_and_embedded_systems.htm
Java is finally elastic! OpenJDK improvements and new features in garbage collection technology have enhanced Java's vertical scaling and resource consumption. The JVM can now promptly return unused memory and, as a result, scale up and down automatically. In this presentation, we cover the main achievements in vertical scaling, share the peculiarities and tuning details of different GCs, and show how to make your Java environments more elastic so they follow the load and lower the total cost of ownership at large scale.
Java Performance and Using Java Flight Recorder (Isuru Perera)
Slides used for an internal training. Explains why throughput and latency are important when measuring performance. How Java Flight Recording can be used to analyze performance issues.
This document provides an overview of the Java Virtual Machine (JVM). It discusses the key components of the HotSpot JVM including the architecture, runtime environment, class loading process, bytecode interpretation, exception handling, synchronization, thread management, and Java Native Interface. The runtime environment is responsible for command line parsing, the JVM lifecycle such as loading, linking and initialization of classes, bytecode interpretation, and thread management. The document also describes class loading in the JVM, bytecode verification, and class data sharing. It provides details on synchronization approaches like object monitor mapping and biased locks. Finally, it discusses thread management aspects including JVM internal threads, safe points, and thread lifecycles.
Software Profiling: Understanding Java Performance and how to profile in Java (Isuru Perera)
Guest lecture at University of Colombo School of Computing on 27th May 2017
Covers following topics:
Software Profiling
Measuring Performance
Java Garbage Collection
Sampling vs Instrumentation
Java Profilers. Java Flight Recorder
Java Just-in-Time (JIT) compilation
Flame Graphs
Linux Profiling
This document discusses benchmarking Java applications. It covers challenges like warmup time, garbage collection, and Java time APIs. It also discusses designing experiments through clearly stating questions and hypotheses testing. Statistical methods for benchmarking are presented, including averaging, standard deviation, confidence intervals, and hypothesis testing.
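The statistical methods mentioned (averaging, standard deviation, confidence intervals) can be sketched in a few lines of plain Java. The sample latencies and the use of the normal-distribution critical value 1.96 (strictly appropriate only for large sample sizes) are illustrative assumptions:

```java
public class BenchStats {
    // Mean of benchmark samples.
    static double mean(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    // Sample standard deviation (n - 1 in the denominator).
    static double stdDev(double[] xs) {
        double m = mean(xs), ss = 0;
        for (double x : xs) ss += (x - m) * (x - m);
        return Math.sqrt(ss / (xs.length - 1));
    }

    // Approximate 95% confidence interval for the mean.
    static double[] ci95(double[] xs) {
        double half = 1.96 * stdDev(xs) / Math.sqrt(xs.length);
        return new double[] { mean(xs) - half, mean(xs) + half };
    }

    public static void main(String[] args) {
        double[] latenciesMs = { 10, 12, 11, 13, 14 }; // made-up sample timings
        System.out.printf("mean=%.2f sd=%.2f%n", mean(latenciesMs), stdDev(latenciesMs));
        double[] ci = ci95(latenciesMs);
        System.out.printf("95%% CI [%.2f, %.2f]%n", ci[0], ci[1]);
    }
}
```

Reporting the interval rather than a single average makes warmup and GC-induced variance visible instead of hiding it.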
After many years of development, Oracle finally published GraalVM and sparked a lot of interest in the community. GraalVM is a high-performance polyglot VM with a number of potentially interesting traits we can take advantage of, like increased performance and lower cost. It can also tackle shortcomings of the JVM/Scala that we have struggled with for years, like slow startup times or large jars. Lastly, thanks to its polyglot nature, it can open interesting doors we may want to explore. On the other hand, GraalVM may still be bleeding-edge technology and have a hard time delivering the promised features. In this talk, I'd like to discuss the advantages and disadvantages of adopting GraalVM, provide guidance if you decide to do so, and share our story in this area, including various samples and recommendations. This talk is focused on the JVM and Scala but should be beneficial for everyone interested in this topic.
This session brings to your attention how several millions of dollars are wasted and what you can do to save money. Optimizing garbage collection performance not only saves money, but also improves the overall customer experience as well.
IBM Monitoring and Diagnostic Tools - GCMV 2.8 (Chris Bailey)
Overview of IBM Monitoring and Diagnostics Tools - Garbage Collection and Memory Visualizer 2.8, which provides offline memory and Garbage Collection monitoring for Java and Node.js applications
This document provides an overview and agenda for the "Busy Java Developer's Guide to WebSphere Debugging & Troubleshooting" presentation. The presentation covers various WebSphere Application Server components, troubleshooting tools like IBM Support Assistant, JVM troubleshooting tools, problem determination tools, common problem scenarios, how customers run into trouble, and includes a demo and Q&A section. It provides an in-depth look at debugging and resolving issues with WebSphere Application Server.
Are you a Java developer wondering what it means to have your application running in the cloud? This session will provide a peek into how the JVM is adapting to running in the cloud and what Java developers need to be aware of to ensure they get the most out of it.
The session will pick an example Spring application and tune it stage by stage, ending with an application that is fully optimized and takes advantage of every aspect of running in the cloud.
JRuby on Rails Deployment: What They Didn't Tell You (elliando dias)
This document summarizes a presentation on deploying JRuby on Rails applications. It discusses:
1) The mechanics of running Rails applications on JRuby and the Java virtual machine, including concurrency and threading considerations.
2) Preparations for deployment such as installing necessary gems, configuring databases, and examining dependencies.
3) Packaging applications into WAR files using the Warbler gem and configuring settings like the runtime pool size.
4) Additional post-deployment considerations for logging, sessions, caching, and performance.
Elastic JVM for Scalable Java EE Applications Running in Containers #Jakart... (Jelastic Multi-Cloud PaaS)
Configured smartly, Java can be scalable and cost-effective for all ranges of projects, from cloud-native startups to legacy enterprise applications. During this session, we will share our experience in tuning RAM usage in a Java process to make it more elastic and gain the benefits of faster scaling and a lower total cost of ownership (TCO). With microservices, cloud hosting, and vertical scaling in mind, we'll compare the top Java garbage collectors to see how efficiently they handle memory resources. The provided results of testing the G1, Parallel, ConcMarkSweep, Serial, Shenandoah, ZGC, and OpenJ9 garbage collectors while scaling Java EE applications vertically will help you make the right choice for your own projects.
More details about Garbage Collector types https://jelastic.com/blog/garbage-collection/
Free registration at Jelastic https://jelastic.com/
The document discusses CloudStack test automation and continuous integration using Jenkins. It describes using the Marvin testing framework to automate deploying CloudStack infrastructure and running tests. The continuous integration process involves building CloudStack, deploying it to hypervisors and storage, then using Marvin to run integration tests on the deployed environment. Jenkins is used as the continuous integration server to trigger builds, deployments, and tests on a schedule or with each code change. The goal is to automate testing to speed up the process and catch issues early in development.
This document discusses Java performance and contains sections on Java Persistence and EJB performance. It provides details on the Java Persistence API and its reference implementation, monitoring and tuning the EJB container including the thread pool and transaction isolation levels. It also outlines best practices for Enterprise Java Beans and Java Persistence including using appropriate fetch types, bulk updates and inheritance strategies.
This document discusses benchmarking multitiered Java applications. It covers challenges in benchmarking these applications, considerations for enterprise benchmarking including defining the system under test and performance metrics. It also discusses application server monitoring and profiling enterprise applications.
This document discusses strategies for optimizing web services performance. It covers topics like XML performance, using the appropriate XML parsing API, validating XML efficiently, optimizing external entity resolution, partial XML processing, web service benchmarking, factors affecting performance like message size and schema complexity, and best practices such as using MTOM, custom providers, and Fast Infoset.
This document discusses strategies for improving web application performance in Java. It covers monitoring and tuning web containers, including optimizing thread pool parameters, connection queues, and request handling. The document also provides best practices for Java servlets, JSPs, caching content, managing sessions, leveraging HTTP server file caches, and analyzing access logs to enhance performance.
This document discusses strategies for improving Java application performance, including reducing system CPU usage, addressing lock contention, optimizing volatile usage, resizing data structures efficiently, increasing parallelism, and other tips. Specific strategies covered in more depth include using Java NIO for non-blocking operations to reduce system CPU, identifying and fixing lock contention issues, replacing unnecessary volatile usage, ensuring data structures resize appropriately, keeping threads busy by addressing idle threads, and various other performance tuning techniques.
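As one concrete example of the lock-contention fixes mentioned above, a heavily contended synchronized counter can be replaced with java.util.concurrent.LongAdder, which stripes updates across internal cells so threads rarely collide. A minimal sketch (class and method names are my own):

```java
import java.util.concurrent.atomic.LongAdder;

public class ContentionSketch {
    // Sum increments from several threads without a single shared lock.
    static long countWith(int threads, int perThread) {
        LongAdder counter = new LongAdder();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) counter.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            // Wait for every worker so all increments are visible before sum().
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return counter.sum();
    }

    public static void main(String[] args) {
        System.out.println(countWith(4, 100_000)); // prints 400000
    }
}
```

LongAdder trades an exact instantaneous value (sum() is not atomic with respect to concurrent updates) for much lower contention than a synchronized or AtomicLong counter under heavy write load.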
This document discusses strategies for monitoring operating system performance on Java applications. It covers monitoring CPU utilization, memory usage, disk I/O, network I/O, and more using tools like Windows Performance Monitor, vmstat, mpstat, and sar. The document provides guidance on measuring performance on Windows, Linux, Solaris and SPARC systems. It is part of a larger document on Java performance that also discusses profiling Java applications, tuning the JVM, and benchmarking.
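Some of the same OS-level numbers that vmstat, mpstat, and sar report are reachable from inside the JVM via the standard OperatingSystemMXBean. A minimal sketch; note that getSystemLoadAverage() returns -1.0 on platforms, such as Windows, that do not expose a load average:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class OsMonitorSketch {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        System.out.println("arch = " + os.getArch());
        System.out.println("cpus = " + os.getAvailableProcessors());
        // One-minute load average, or -1.0 where the platform does not provide it.
        System.out.println("load = " + os.getSystemLoadAverage());
    }
}
```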
This document outlines an introduction to Java performance, including 12 chapters on strategies, approaches and methodologies for Java performance. It discusses forces at play in performance, top-down versus bottom-up approaches, choosing the right platform and CPU architecture, and evaluating system performance. It also provides an overview of operating system and JVM monitoring, Java application profiling tips, tuning the JVM, benchmarking Java applications, and analyzing web and web service performance.
This document compares containers and virtual machines for application deployment architecture. It outlines the architectural differences between containers running applications directly on the host operating system compared to virtual machines running isolated operating systems. A hybrid approach is discussed that uses both containers and virtual machines. Finally, it summarizes that virtual machines are better suited for legacy applications while containers are better for new web applications, and that a combined approach can utilize the advantages of both technologies.
Monitoring and Managing Anomaly Detection on OpenShift (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
6. JVM Performance Monitoring
SNo | Description | Metric Type
1 | Garbage Collector | Type
2 | Java Heap | Size
3 | Young Generation & Old Generation Space | Size
4 | Permanent Generation Space | Size
5 | Minor Garbage Collection | Duration
6 | Minor Garbage Collection | Frequency
7 | Minor Garbage Collection | Reclaimed Space
8 | Full Garbage Collection | Duration
9 | Full Garbage Collection | Frequency
10 | Concurrent Garbage Collection | Space Reclaimed
11 | Java Heap before & after Garbage Collection | Occupancy
12 | Young & Old Gen before & after Garbage Collection | Occupancy
13 | Perm Gen space before & after Garbage Collection | Occupancy
14 | Old or Perm Gen trigger Full Garbage Collection | Boolean
15 | Explicit GC Calls | Boolean
Garbage Collection
7. JVM Performance Monitoring
Garbage Collection
-XX:+PrintGCDetails
Minor GC
Young Generation Occupancy (before & after) Young Generation Size
Total Occupancy (Before & After)
Total Heap Size
Old Gen Size = Total Heap Size - Young Gen Size: 764672K - 109312K = 655360K
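The old generation arithmetic above can be checked mechanically. Below is a minimal sketch that pulls the two capacity figures out of a -XX:+PrintGCDetails minor GC line; the sample log line is hypothetical apart from the 109312K and 764672K capacities taken from the slide.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogMath {
    // In a -XX:+PrintGCDetails minor GC line, the first "(...K)" capacity is
    // the young generation size and the last is the total heap size.
    static final Pattern CAPACITY = Pattern.compile("\\((\\d+)K\\)");

    // Old Gen Size = Total Heap Size - Young Gen Size
    static long oldGenK(String gcLogLine) {
        Matcher m = CAPACITY.matcher(gcLogLine);
        long first = -1, last = -1;
        while (m.find()) {
            long v = Long.parseLong(m.group(1));
            if (first < 0) first = v;
            last = v;
        }
        return last - first;
    }

    public static void main(String[] args) {
        // Hypothetical line shaped like the slide's example
        String line = "[GC [PSYoungGen: 99952K->14688K(109312K)] "
                + "422212K->341136K(764672K), 0.0631991 secs]";
        System.out.println(oldGenK(line) + "K"); // 764672 - 109312 = 655360K
    }
}
```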
13. JVM Performance Monitoring
Garbage Collection
-XX:+PrintTenuringDistribution
Utilization > Target
Max #GC cycles before promotion
New Tenuring Threshold
14. JVM Performance Monitoring
Garbage Collection
-XX:+PrintGCTimeStamps
With date and/or time stamps included, you can calculate the expected frequency at which minor and full garbage collections occur
26. JVM Performance Monitoring
Garbage Collection
Graphical Tools (JConsole)
1 Used
The amount of memory currently used, including the memory
occupied by all Java objects, both reachable and unreachable
2 Committed
The amount of memory guaranteed to be available for use by
the JVM
3 Max
The maximum amount of memory that can be used for memory
management
4 GC Time
The cumulative time spent in stop-the-world garbage collections
and the total number of garbage collection invocations including
concurrent garbage collection cycles
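The four metrics above are also available programmatically through the platform MXBeans, which is the same data JConsole reads over JMX. A minimal in-process sketch (the output values naturally depend on the running JVM):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class JvmStats {
    public static void main(String[] args) {
        // Used / Committed / Max, as plotted on the JConsole Memory tab
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("used=" + heap.getUsed()
                + " committed=" + heap.getCommitted()
                + " max=" + heap.getMax());
        // Cumulative GC invocation count and time per collector ("GC Time")
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": count=" + gc.getCollectionCount()
                    + " time=" + gc.getCollectionTime() + "ms");
        }
    }
}
```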
40. JVM Performance Monitoring
Garbage Collection
Dump File
JAVA 5 jmap -heap:format=b <jvm pid>
JAVA 6 jmap -dump:format=b,file=<filename> <jvm pid>
A binary heap dump is a snapshot of all the objects in the JVM heap at the time
when the heap dump is taken
Next we will discuss the topic of JVM performance monitoring.
What is performance monitoring?
Monitoring data is collected for the following areas: garbage collection, the JIT compiler, and class loading. We will also discuss application-level monitoring.
Let's first look at garbage collection monitoring.
List of metrics to monitor in the garbage collector.
This slide shows an entry in the GC log produced by enabling the -XX:+PrintGCDetails flag. It shows the type of GC, the young gen occupancy before and after GC, and the total size of the young gen; it also shows the total heap occupancy before and after GC and the total heap size.
0.0631991 secs indicates the elapsed time for the garbage collection. [Times: user=0.83 sys=0.00, real=0.06 secs] provides CPU usage and elapsed time information.
The value to the right of user is the CPU time used by the garbage collector executing instructions outside the operating system. In this example, the garbage collector used 0.83 seconds of user CPU time. The user time includes the total user-mode CPU time across all CPUs and hence can be higher than the wall clock (real) time.
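Because user time is summed across all GC worker threads, the ratio user/real gives a rough feel for how many CPUs the collection kept busy. A small sketch using the figures from the slide (this ratio is a rule of thumb, not an exact thread count):

```java
public class GcCpuTimes {
    // With a parallel collector, user time is the sum of CPU time across all
    // GC threads, so user/real roughly approximates how many CPUs the
    // collection kept busy.
    static double parallelism(double userSecs, double realSecs) {
        return userSecs / realSecs;
    }

    public static void main(String[] args) {
        // Figures from the slide: user=0.83, real=0.06
        System.out.printf("~%.1f CPUs busy during GC%n", parallelism(0.83, 0.06));
    }
}
```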
This slide highlights another example of -XX:+PrintGCDetails output. In this example the GC is a full GC, and the live data size is the occupancy of the old gen after the full GC. Note that the occupancy of the old gen is almost equal to the size of the old generation in this case. The log shows 1.117891 secs were spent on the full GC.
The user-mode, sys, and wall clock times for the GC are shown below.
This is an example of a young generation GC using the CMS garbage collector. In this example the minor GC took 0.083 sec.
Note that the total user time of 0.02 secs includes the time for all the threads on different CPUs taken together.
0.0838519 secs indicates the elapsed time for the minor garbage collection, including the time it took to garbage collect the young generation space and promote any objects to the old generation, along with any remaining final cleanup work.
This slide shows another example of a CMS GC with the mark and sweep phases indicated on the slide.
In this slide note that there is little change in the occupancy of the old gen space before and after GC.
If there is little change in occupancy between the start and end of the CMS concurrent sweep phase, then either few objects are being garbage collected, meaning the CMS garbage collection cycles are finding few unreachable objects to collect and as a result are wasting CPU, or objects are being promoted into the old generation space at a rate that is equal to or greater than the rate at which the CMS concurrent sweep phase is able to garbage collect them.
In this slide we look at the output of the JVM option -XX:+PrintTenuringDistribution.
This prints details of the heap space used by objects of different ages.
The first line tells us that the target utilization of the "To" survivor space is about 75 MB. It also shows information about the "tenuring threshold", which represents the number of GCs that an object may stay in the young generation before it is moved into the old generation (i.e., the maximum age of the object before it gets promoted). In this example, we see that the current tenuring threshold is 15 and that its maximum value is 15 as well.
In this slide note that the total utilization of ~83 MB is greater than the target utilization of 75 MB, resulting in survivor space overflow.
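The overflow check described above can be sketched as a small helper that sums the per-age occupancies printed by -XX:+PrintTenuringDistribution and compares them against the desired survivor size. The 75 MB target matches the slide; the per-age byte counts below are hypothetical values adding up to roughly the slide's ~83 MB:

```java
public class TenuringCheck {
    // Survivor overflow: total of the per-age occupancies exceeds the
    // desired survivor size printed by -XX:+PrintTenuringDistribution.
    static boolean overflowing(long[] ageBytes, long desiredBytes) {
        long total = 0;
        for (long b : ageBytes) total += b;
        return total > desiredBytes;
    }

    public static void main(String[] args) {
        long desired = 75L * 1024 * 1024;           // ~75 MB target from the slide
        long[] ages = {50_000_000L, 33_000_000L};   // hypothetical per-age totals, ~83 MB
        System.out.println(overflowing(ages, desired)); // true -> survivor overflow
    }
}
```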
This slide shows an example of using the JVM option -XX:+PrintGCTimeStamps (and the related -XX:+PrintGCDateStamps).
The format of the date/time printed is as described below:
/*
YYYY is the four-digit year.
MM is the two-digit month; single-digit months are prefixed with 0.
DD is the two-digit day of the month; single-digit days are prefixed with 0.
T is a literal that denotes a date to the left of the literal and a time of day to the right.
HH is the two-digit hour; single-digit hours are prefixed with 0.
MM is the two-digit minute; single-digit minutes are prefixed with 0.
SS is the two-digit second; single-digit seconds are prefixed with 0.
mmm is the three-digit milliseconds; single- and two-digit milliseconds are prefixed with 00 and 0, respectively.
TZ is the time zone offset from GMT.
*/
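A date stamp in this format can be parsed with java.time, which is handy when post-processing GC logs. A minimal sketch, assuming a stamp shaped exactly as described above (the sample value itself is hypothetical); note the zone offset is written without a colon, which the pattern letter Z accepts:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class GcDateStamp {
    // Matches YYYY-MM-DDTHH:MM:SS.mmm followed by a GMT offset like -0500
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ");

    static OffsetDateTime parse(String stamp) {
        return OffsetDateTime.parse(stamp, FMT);
    }

    public static void main(String[] args) {
        // Hypothetical stamp in the documented format
        OffsetDateTime t = parse("2014-03-18T16:34:18.501-0500");
        System.out.println(t.getYear() + " " + t.getHour()); // 2014 16
    }
}
```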
Let us now discuss two additional command-line options.
-XX:+PrintGCApplicationStoppedTime shows how much time the application was stopped at a safepoint.
-XX:+PrintGCApplicationConcurrentTime shows how much time the application worked without stopping, i.e. the time between two successive safepoints.
The output in this slide shows that the application ran for approximately 0.53 and 0.91 seconds, with minor garbage collection pauses of approximately 0.046 seconds. That equates to about 5% to 8% overhead for minor garbage collections.
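The overhead percentage quoted above comes from dividing the stopped time by the total of stopped and running time. A small sketch using the slide's figures:

```java
public class GcOverhead {
    // Minor GC overhead = stopped time / (stopped time + running time)
    static double overheadPercent(double stoppedSecs, double runningSecs) {
        return 100.0 * stoppedSecs / (stoppedSecs + runningSecs);
    }

    public static void main(String[] args) {
        // Figures from the slide: ~0.046s pauses around ~0.91s and ~0.53s of running
        System.out.printf("%.1f%% .. %.1f%%%n",
                overheadPercent(0.046, 0.91), overheadPercent(0.046, 0.53));
    }
}
```

With these inputs the result spans roughly 4.8% to 8.0%, matching the "about 5% to 8%" stated on the slide.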
This slide shows the System keyword used in the GC log to identify an explicit call for garbage collection (i.e., System.gc()).
This slide shows the usage of the offline GCHisto tool to identify issues with the garbage collector.
Take note of the Overhead column and try to tune the GC to achieve < 10% overhead and a max GC time less than the SLA.
The garbage collection overhead (the Overhead % column) is an indicator of how well the garbage collector is tuned. As a general guideline, concurrent garbage collection overhead should be less than 10%. It may be possible to achieve 1% to 3%. For the throughput garbage collector, garbage collection overhead near 1% is considered as having a well-tuned garbage collector. 3% or higher can be an indication that tuning the garbage collector may improve the application's performance. It is important to understand there is a relationship between garbage collection overhead and the size of the Java heap.
This slide shows the histogram of pause times for the young generation, where the x-axis is the duration and the y-axis is the count.
A wide distribution indicates swings in object allocation rates or promotion rates. If wide swings are observed, looking at the GC Timeline tab can help identify peaks in GC activity.
This slide shows the timeline view of the GCHisto tool.
The default view for the GC Timeline shows all garbage collection pauses through the entire time line. To see time stamps at the bottom of the graph (the x-axis), you must have garbage collection statistics that include either -XX:+PrintGCTimeStamps or -XX:+PrintGCDateStamps.
A tick is placed on the graph at the time, and with the duration, at which each GC pause occurred.
Selecting only full garbage collections as the pause type is useful for this analysis. With the timeline you can observe when the full garbage collections
occur relative to the start of the JVM to get a sense of when they occurred
Selecting only minor garbage collections as the pause type to show allows you to observe peaks, or possibly repeating peaks, in garbage collection duration over time.
Any observed peaks or repeating patterns can be mapped back to application logs to get a sense of what is happening in the system at that time when the peaks occur. The
use cases being executed at those time periods can be candidates to further explore for object allocation and object retention reduction opportunities. Reducing object
allocation and object retention during these busiest garbage collection activity time periods reduces the frequency of minor garbage collections and potentially reduces
the frequency of full garbage collections
This slide shows the zoom feature of GC timeline tab
Zooming in allows you to narrow the focus of the time line to a specific area to see
each garbage collection pause. You can zoom back out by pressing the right mouse
button anywhere in the graph and selecting Auto Range > Both Axes from the context
sensitive menu
GCHisto also provides the capability to load more than one garbage collection log at a time via the Trace Management tab. When multiple garbage collection logs are loaded, there is a separate tab for each garbage collection log, which allows you to easily switch between logs. This can be useful when you want to compare garbage collection logs between different Java heap configurations or between different application loads.
This slide shows the command to start the demo application on Linux and Windows.
This slide shows the JConsole tool startup screen. When JConsole is launched it automatically discovers and provides the opportunity to connect to Java applications running locally or remotely.
This slide shows the new connection dialog screen in JConsole.
Select Name and PID of the application from the list and click the Connect button.
To monitor an application on a remote system, the application to be monitored must be started with remote management enabled. Enabling remote management involves identifying a port number to communicate with the monitored application and establishing password authentication along with optionally using SSL for security.
This slide shows graphical heap memory usage in the JConsole Memory tab.
Once JConsole is connected to an application it will load six tabs. The default JConsole display differs between Java 5 and Java 6. Java 6's JConsole displays a graphical representation of heap memory, thread, classes, and CPU usage. In contrast, Java 5's JConsole displays the same information in textual form. For the purposes of monitoring JVM garbage collection, the Memory tab is the most useful. The Memory tab is the same in both Java 5 and Java 6 JConsole. The figure shows the JConsole Memory tab.
A pattern to watch for is whether the survivor space remains full for an extended
period of time. This is an indication that survivor spaces are overflowing and objects
are getting promoted into the old generation space before they have an opportunity
to age
Tuning the young generation space can address survivor spaces overflowing
This slide shows the difference between the Used, Committed, Max, and GC Time labels shown in the JConsole Memory tab.
In this slide we introduce the VisualVM tool.
This is a second generation of the JConsole tool and integrates with several existing JDK software tools and lightweight memory monitoring tools such as JConsole along with adding profiling capabilities found in the popular NetBeans Profiler. The tool utilizes the NetBeans plug-in architecture, which allows the ability to easily add components, add plug-ins, or extend VisualVM’s existing components or plug-ins to performance monitor or profile any application
This slide shows the command to launch VisualVM.
VisualVM can be launched from Windows, Linux, or Solaris using the following command line. (Note the command name is jvisualvm, not just visualvm.)
This slide shows the applications panel of VisualVM that has three major nodes in an expandable
tree. The first major node, Local, contains a list of local Java applications VisualVM
can monitor. The second node, Remote, contains a list of remote hosts and Java
applications on each remote host VisualVM can monitor. The third node, Snapshots,
contains a list of snapshot files. With VisualVM you can take a snapshot of a Java
application’s state. When a snapshot is taken, the Java application’s state is saved
to a file and listed under the Snapshots node. Snapshots can be useful
to capture important state about the application or to compare it against
a different snapshot
This slide depicts the setup to configure VisualVM for remote monitoring of a JVM.
The remote system must be configured to run the jstatd daemon.
The jstatd daemon launches a Java RMI server application that watches for the
creation and termination of HotSpot VMs and provides an interface to allow remote
monitoring tools such as VisualVM to attach and monitor Java applications remotely.
The jstatd daemon must be run with the same user credentials as those of the Java
applications to be monitored
This slide shows a sample policy file for remote monitoring setup.
Example policy file that can be used with jstatd
This policy is less liberal than granting all permissions to all codebases but is
more liberal than a policy that grants the minimal permissions to run the jstatd server
Example of command to start the jstatd daemon
jps is a command that lists the Java applications that can be monitored. When jps is supplied a hostname, it attempts
to connect to the remote system’s jstatd daemon to discover which Java applications can be monitored remotely
Figure shows VisualVM with a remote system configured and the Java applications it can monitor.
To monitor an application double-click an application name or icon under the Local or Remote node or right-click on the application
name or icon and select Open. Any of these actions opens a window tab in the right panel of VisualVM
This slide shows the Overview window, which provides a high-level overview of the monitored application by showing the process id, the host name where the application is running, the main Java class name, any arguments passed to the application, the JVM name, the path to the JVM, any JVM flags used, whether heap dump on out of memory error is enabled or disabled, the number of thread dumps or heap dumps that have been taken, and, if available, the monitored application's system properties.
This slide shows the Monitor subtab of VisualVM.
The Monitor window displays heap usage, permanent generation space usage, classes loaded information, and the number of threads. An example of the Monitor window monitoring an application running remotely under Java 6 is shown in the figure.
Link to documentation to configure VisualVM with JMX
This slide depicts a remote connection in VisualVM.
After a JMX connection is configured, an additional icon is displayed in VisualVM Application panel representing that a remote JMX connection has been configured
to the remote application. Configuring a JMX connection for remote applications in VisualVM increases the monitoring capabilities. For example, the Monitor window
also shows CPU usage by the application and the ability to induce a full garbage collection or heap dump, as shown in Figure
In addition to more capabilities in the Monitor window, an additional Threads window is also available
This slide describes the Threads tab in VisualVM.
The Threads window offers insight into which threads are most active and which are involved in acquiring and releasing locks.
The Threads window can be useful for observing specific thread behavior in an application, especially when operating system monitoring suggests the application may be experiencing lock contention.
An additional option available in the Threads window is the ability to create a thread dump by clicking the Thread Dump button. When a thread dump is requested,
VisualVM adds a window tab displaying the thread dump and also appends a thread dump entry to the monitored application entry in the Application’s window below
the application being monitored. It is important to note that thread dumps are not persisted or available once VisualVM has been closed unless they are saved.
Thread dumps can be saved by right-clicking on the thread dump icon or label below the application listed in the Applications panel. Thread dumps can be reloaded in
VisualVM at a later time by selecting the File > Load menu item and traversing to the directory where the thread dump was saved
VisualVM also offers profiling capabilities to both local and remote applications. Local profiling capabilities include both CPU and memory profiling for Java 6 applications
Being able to monitor CPU utilization while an application is running can provide information as to which methods are the busiest during times when specific events are occurring. For example, a GUI application may exhibit performance issues only in a specific view. Hence, being able
to monitor the GUI application when it is in that view can be helpful in isolating the root cause
Remote profiling requires a JMX connection to be configured and is limited to CPU profiling. It does not include memory profiling.
But heap dumps can be generated from the Sampler window. They can also be generated from the Threads window. Heap dumps can be loaded into VisualVM to analyze memory usage
Figure shows the Sampler window after clicking the CPU button. The view of CPU utilization is presented with the method name consuming the most time at the top. The second column, Self Time %, provides a histogram view of the method time spent per method relative to the time spent in other methods. The Self Time column represents the amount of wall clock time the method has consumed. The remaining column, Self Time (CPU), reports the amount of CPU time the method has consumed. Any of the columns can be sorted in ascending or descending order by clicking on the column name. A second click on a column causes the ordering to toggle back and forth between ascending or descending
In the snapshot window, the call tree showing the call stacks for all threads in the
captured snapshot are displayed. Each call tree can be expanded to observe the call
stack and method consuming the most time and CPU. At the bottom of the snapshot
window you can also view Hot Spots, which is a listing of methods with the method
consuming the most Self Time at the top of the table. A combined view of the Call
Tree and Hot Spots is also available. In the combined view, as you click on a call stack
in the Call Tree, the table of Hot Spot methods is updated to show only the methods
in the selected call stack
VisualVM also has the capability to load binary heap dumps generated using jmap, JConsole, or upon reaching an OutOfMemoryError when using the -XX:+HeapDumpOnOutOfMemoryError HotSpot VM command line option.
VisualGC is a plug-in for VisualVM. VisualGC can monitor garbage collection, class loader, and JIT compilation activities. It was originally developed as a standalone
GUI program. It can be used as both a standalone GUI or as a plug-in for VisualVM to monitor 1.4.2, Java 5, and Java 6 JVMs. When VisualGC was ported to a VisualVM
plug-in some additional enhancements were made to make it easier to discover and connect to JVMs
After the VisualGC plug-in has been added to VisualVM, when you monitor an application listed in the Applications panel, an additional window tab is displayed
in the right panel labeled VisualGC
VisualGC displays two or three panels depending on the garbage collector being used. When the throughput garbage collector is used, VisualGC shows two panels:
the Spaces and Graphs panels. When the concurrent or serial garbage collector is used a third panel is shown below the Spaces and Graphs panels called Histogram.
Figure shows VisualGC with all panel spaces
The Spaces panel provides a graphical view of the garbage collection spaces and their space utilization. This panel is divided into three vertical sections, one for
each of the garbage collection spaces: Perm (Permanent) space, Old (or Tenured) space, and the young generation space consisting of eden, and two survivor spaces,
S0 and S1.
The screen areas representing these garbage collection spaces are sized proportionately to the maximum capacities of the spaces as they are allocated by
the JVM
Uncommitted memory (virtual allocation) is represented by a light gray colored portion of the grid, whereas committed memory (physical ram) is represented by a
darker gray colored portion
The two survivor spaces are usually identical in size, and their memory is typically fully committed. The eden space may be only partially committed, especially early in an application's life cycle.
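The committed-versus-reserved distinction VisualGC draws can also be observed programmatically through the standard java.lang.management API. The sketch below (class name `PoolCommitted` is illustrative) prints committed and maximum sizes for each memory pool:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolCommitted {
    public static void main(String[] args) {
        // Committed memory is backed by physical RAM; max is the virtual
        // reservation, so committed may lag behind max early in the run
        // (max is -1 when the pool has no defined maximum).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("%-30s committed=%d max=%d%n",
                    pool.getName(), u.getCommitted(), u.getMax());
        }
    }
}
```

Pool names vary by collector and JVM version, so match on substrings like "Eden" or "Survivor" rather than exact names.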
Watch for survivor spaces overflowing. Survivor space overflow can be identified by observing their occupancies at minor garbage collections. Each rise and fall in the graph represents a minor GC.
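Minor GC activity can also be sampled without VisualGC, using the garbage collector MXBeans. A minimal sketch (class name `GcCounts` is illustrative); sampling these counters periodically gives the same rise-and-fall picture:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCounts {
    public static void main(String[] args) {
        // Each collector bean reports how many collections it has run and
        // the accumulated collection time; the young-generation collector's
        // count increments once per minor GC.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d timeMs=%d%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```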
Although JIT compilation results in a faster running application, it requires computing resources such as CPU cycles and memory to do its work. Hence, it is useful to observe JIT compiler behavior.
The title bar of the display shows the total number of JIT compilation tasks and the accumulated amount of time spent performing compilation activity.
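The accumulated compilation time shown in the title bar is also exposed through the standard CompilationMXBean. A minimal sketch (class name `JitStats` is illustrative):

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class JitStats {
    public static void main(String[] args) {
        // getCompilationMXBean() returns null on a JVM with no JIT compiler.
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        if (jit != null && jit.isCompilationTimeMonitoringSupported()) {
            System.out.println(jit.getName()
                    + " total compile time ms: " + jit.getTotalCompilationTime());
        }
    }
}
```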
Next we will look at class loading.
The reporting of classes being unloaded during full garbage collections provides evidence that the permanent generation space may need a larger maximum size, or a larger initial size.
Set these with the -XX:PermSize and -XX:MaxPermSize command line options.
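For example, on a pre-Java-8 HotSpot JVM the options might be set as below (`app.jar` is a placeholder; note the permanent generation was removed in Java 8 in favor of Metaspace, where -XX:MetaspaceSize and -XX:MaxMetaspaceSize apply instead):

```
java -XX:PermSize=128m -XX:MaxPermSize=256m -jar app.jar
```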
JConsole's Classes tab shows → total loaded classes and currently loaded classes.
VisualVM's Monitor tab shows the number of loaded classes, including classes loaded in shared memory, over a time window.
VisualGC ClassLoader Panel shows the number of classes loaded, the number of classes unloaded, and the accumulated class loading time since the start of the application
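The loaded/unloaded counts these tools display come from the standard ClassLoadingMXBean and can be read directly. A minimal sketch (class name `ClassLoadStats` is illustrative):

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassLoadStats {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        // Total loaded = currently loaded + unloaded since JVM start.
        System.out.println("currently loaded = " + cl.getLoadedClassCount());
        System.out.println("total loaded     = " + cl.getTotalLoadedClassCount());
        System.out.println("unloaded         = " + cl.getUnloadedClassCount());
    }
}
```

A steadily climbing unloaded count alongside frequent full GCs is the symptom described above.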
Reader thread 33 acquired lock with hex address 0x22e88b10
Writer thread 29 is waiting to lock 0x22e88b10
Observation of multiple thread stack traces trying to lock the same lock address is an indication the application is experiencing lock contention.
It is important to note that the stack trace output of jstack provides the specific source code location of the contended lock.
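The same lock identities jstack prints as hex addresses can be obtained from inside the JVM with the ThreadMXBean. A minimal sketch (class name `ProgrammaticDump` is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ProgrammaticDump {
    public static void main(String[] args) {
        // dumpAllThreads(true, true) includes the monitors and ownable
        // synchronizers each thread holds or is waiting on; a BLOCKED
        // thread's getLockName() names the contended lock.
        ThreadMXBean tm = ManagementFactory.getThreadMXBean();
        for (ThreadInfo ti : tm.dumpAllThreads(true, true)) {
            System.out.printf("\"%s\" state=%s waitingOn=%s%n",
                    ti.getThreadName(), ti.getThreadState(), ti.getLockName());
        }
    }
}
```

ThreadMXBean also offers findDeadlockedThreads() for detecting the circular wait conditions mentioned earlier.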
Next we will talk about Java application monitoring.
Monitoring at the application level usually involves observing application logs that contain events of interest, or instrumentation that provides some level of information about the application's performance.
Some applications also build in monitoring and management capabilities using MBeans via Java SE's monitoring and management APIs. These MBeans can be viewed and monitored using JMX-compliant tools such as JConsole, or using the VisualVM-MBeans plug-in within VisualVM.
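As a sketch of how an application exposes such a bean, the example below registers a hypothetical `RequestStats` MBean (the interface/class names and the `com.example:type=RequestStats` ObjectName are illustrative, not from the source) with the platform MBean server, where JConsole or the VisualVM-MBeans plug-in can then browse it:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class RegisterMBean {
    // Standard MBean convention: interface name = class name + "MBean".
    public interface RequestStatsMBean {
        long getRequestCount();
    }

    public static class RequestStats implements RequestStatsMBean {
        public long getRequestCount() { return 42; } // illustrative value
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=RequestStats");
        server.registerMBean(new RequestStats(), name);
        // The attribute is now readable over JMX, e.g. from JConsole.
        System.out.println(server.getAttribute(name, "RequestCount"));
    }
}
```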
Figure shows a portion of the many Tomcat MBeans in the MBeans window in VisualVM using the VisualVM-MBeans plug-in
Example of jps & jstack command usage for lock detection