Slides used for an internal training session. They explain why throughput and latency are important when measuring performance, and how Java Flight Recorder can be used to analyze performance issues.
Using Java Mission Control & Java Flight Recorder, by Isuru Perera
This presentation explains Java Mission Control (JMC) and how to use it.
Java Mission Control has two main tools: the JMX Console and Java Flight Recorder (JFR). These are very powerful tools provided by the Oracle JDK.
Software Profiling: Java Performance, Profiling and Flamegraphs, by Isuru Perera
Guest lecture at University of Colombo School of Computing on 30th May 2018
Covers the following topics:
Software Profiling
Measuring Performance
Java Garbage Collection
Sampling vs Instrumentation
Java Profilers and Java Flight Recorder
Java Just-in-Time (JIT) compilation
Flame Graphs
Linux Profiling
Slides used for an internal training session. They explain how to generate Flame Graphs from Java Flight Recorder dumps, with an example of using Linux "perf_events" to generate a Java mixed-mode Flame Graph.
Presentation slides used at the 17th Java Colombo Meetup.
http://www.meetup.com/java-colombo/events/218658123/
This presentation explains the JMX Console and Java Flight Recorder (JFR) tools in Java Mission Control (JMC).
Software Profiling: Understanding Java Performance and how to profile in Java, by Isuru Perera
Guest lecture at University of Colombo School of Computing on 27th May 2017
Covers the following topics:
Software Profiling
Measuring Performance
Java Garbage Collection
Sampling vs Instrumentation
Java Profilers and Java Flight Recorder
Java Just-in-Time (JIT) compilation
Flame Graphs
Linux Profiling
Java Colombo Meetup on 22nd March 2018
Speaker: Isuru Perera, Technical Lead at WSO2
Flame graphs are a visualization of profiled software, developed by Brendan Gregg, an industry expert in computing performance and cloud computing. Finding out why CPUs are busy is an important task when troubleshooting performance issues, and we often use a sampling profiler to see which code paths are hot. However, a profiler will dump a lot of data, often thousands of lines, and it is not easy to go through all of it. With Flame Graphs, we can identify the most frequent code paths quickly and accurately: a Flame Graph simply visualizes the stack-trace output of a sampling profiler.
There are many ways to profile Java applications, and Java Flight Recorder (JFR) is a really good tool for profiling a Java application with very low overhead. I will show how we can generate a Flame Graph from a Java Flight Recording using the JFR Flame Graph tool (https://github.com/chrishantha/jfr-flame-graph) I developed.
Since Flame Graphs can visualize any stack profile, we can also use a Linux system profiler (perf) and create a Java mixed-mode Flame Graph, which shows how much CPU time is spent in Java methods, system libraries and the kernel. With a flame graph showing profile information from both system and Java code paths, we can easily troubleshoot performance issues related to high CPU usage. I will discuss how we can use the -XX:+PreserveFramePointer JDK option and the perf system profiler to generate a Java mixed-mode flame graph.
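To make the folded-stack input these tools consume concrete, here is a hedged Java sketch (not from the talk) that samples the JVM's own thread stacks with Thread.getAllStackTraces and collapses them into the `frame;frame;frame count` lines that flamegraph.pl expects; the class name, sample count and interval are illustrative choices.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: sample all thread stacks a few times and fold them
// into the "a;b;c count" lines consumed by flame graph tools.
public class FoldedStacks {
    public static Map<String, Integer> sample(int iterations, long intervalMs)
            throws InterruptedException {
        Map<String, Integer> folded = new HashMap<>();
        for (int i = 0; i < iterations; i++) {
            for (Map.Entry<Thread, StackTraceElement[]> e
                    : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = e.getValue();
                if (stack.length == 0) continue;
                StringBuilder sb = new StringBuilder();
                // Fold root-first, as flame graph tools expect.
                for (int j = stack.length - 1; j >= 0; j--) {
                    if (sb.length() > 0) sb.append(';');
                    sb.append(stack[j].getClassName())
                      .append('.').append(stack[j].getMethodName());
                }
                folded.merge(sb.toString(), 1, Integer::sum);
            }
            Thread.sleep(intervalMs);
        }
        return folded;
    }

    public static void main(String[] args) throws InterruptedException {
        sample(5, 10).forEach((stack, count) ->
                System.out.println(stack + " " + count));
    }
}
```

In a real setup, JFR or perf does the sampling; this sketch only shows the shape of the data a flame graph is built from.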
This is an overview of Java Mission Control and Java Flight Recorder, which have been part of the Oracle JDK since JDK 7u40. The purpose of JFR is to keep a continuous recording of the behavior of the JVM and the Java application at the same time. You can walk back in time and find out what was going on, to discover a specific problem situation in the past.
How to monitor Java application and JVM performance with Flight Recorder and Mission Control. Starts with a discussion of general JVM performance considerations such as GC, JIT and threads.
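On recent OpenJDK builds (9 and later), a flight recording can also be started from inside the application through the jdk.jfr API. A minimal sketch, assuming a JDK where JFR is available without commercial unlock flags; the event names, workload, and file path are illustrative:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import jdk.jfr.Recording;

// Minimal sketch: start a flight recording in-process, run some work,
// then dump it to a file that can be opened in Java Mission Control.
public class JfrDemo {
    public static Path record() throws Exception {
        try (Recording recording = new Recording()) {
            recording.enable("jdk.JavaMonitorWait");   // pick events of interest
            recording.enable("jdk.GarbageCollection");
            recording.start();

            // ... the workload to analyze would run here ...
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;

            recording.stop();
            Path out = Files.createTempFile("demo", ".jfr");
            recording.dump(out);
            return out;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Recording written to " + record());
    }
}
```

The same can be done externally with `jcmd <pid> JFR.start` and `JFR.dump`, which is closer to how the slides use JFR.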
JCConf 2020 - New Java Features Released in 2020, by Joseph Kuo
In 2020, Java 14 and 15 were released with many great features, including ZGC, Shenandoah GC, helpful NullPointerExceptions, pattern matching for instanceof, switch expressions, text blocks, records, hidden classes, and sealed classes. They not only improve the performance of GC and Java applications, but also introduce new syntax that makes it easier to write readable and efficient code. Let's take a look at those features!
https://cyberjos.blog/java/seminar/jcconf-2020-new-java-features-released-in-2020/
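A few of the listed features can be sketched in one small program. Note that records and pattern matching for instanceof were preview features in Java 14/15 and became final in Java 16, so this hedged example assumes a JDK where they are final; the class and method names are illustrative.

```java
// Minimal sketch of several features the talk covers: records,
// text blocks, switch expressions, and pattern matching for instanceof.
public class NewFeatures {
    record Point(int x, int y) { }  // compact immutable data carrier

    static String describe(Object obj) {
        // Pattern matching for instanceof: no explicit cast needed.
        if (obj instanceof Point p) {
            // Switch expression: yields a value, no fall-through.
            return switch (Integer.signum(p.x())) {
                case -1 -> "left of origin";
                case 0 -> "on the y-axis";
                default -> "right of origin";
            };
        }
        return "not a point";
    }

    public static void main(String[] args) {
        String textBlock = """
                Java 14/15 brought text blocks:
                multi-line strings without escapes.""";
        System.out.println(textBlock);
        System.out.println(describe(new Point(3, 4)));  // right of origin
        System.out.println(describe("hello"));          // not a point
    }
}
```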
Are you a Java developer wondering what it means to have your application running in the cloud? This session will provide a peek into how the JVM is adapting to running in the cloud and what Java developers need to be aware of to ensure they get the most out of it.
The session will pick an example Spring application and tune it stage by stage, at the end of which we have an application that is fully optimized and takes advantage of every aspect of running in a cloud.
For more information, refer to the Java EE 7 Performance Tuning and Optimization book:
The book is published by Packt Publishing:
http://www.packtpub.com/java-ee-7-performance-tuning-and-optimization/book
Performance has always been a major concern in software development and should not be taken lightly, even now that commodity computers have multicore CPUs and a few gigabytes of RAM. One of the handiest, simplest tools for performance testing is the microbenchmark. Unfortunately, developing correct Java microbenchmarks is a complex task with many pitfalls along the way. This presentation is about the do's and don'ts of Java microbenchmarking and about what tools are out there to help with this tricky task.
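To make two of the classic pitfalls concrete, here is a hedged sketch (not from the presentation) of a naive timing harness: it shows why warm-up rounds matter on a JIT-compiling JVM, and uses a volatile sink so the measured work cannot be eliminated as dead code. A real benchmark should use a harness such as JMH; all counts here are arbitrary.

```java
// Naive harness illustrating two classic microbenchmark pitfalls:
// (1) timing before JIT warm-up, (2) dead-code elimination of the
// measured work. Real benchmarks should use JMH instead.
public class NaiveBenchmark {
    // Publishing the result to a volatile field keeps the JIT from
    // proving the work is unused and eliminating it entirely.
    public static volatile long sink;

    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += (long) i * i;
        return sum;
    }

    /** Times one batch of calls, in nanoseconds. */
    static long timeBatch(int calls) {
        long start = System.nanoTime();
        for (int i = 0; i < calls; i++) sink = work(10_000);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long cold = timeBatch(1_000);  // includes interpreter + JIT compile time
        for (int i = 0; i < 5; i++) timeBatch(1_000);  // warm-up rounds
        long warm = timeBatch(1_000);  // mostly JIT-compiled code
        System.out.printf("cold: %d ns, warm: %d ns%n", cold, warm);
    }
}
```

Typically the warm batch is markedly faster than the cold one, though the exact ratio depends on the JVM and hardware.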
Delivered as plenary at USENIX LISA 2013. video here: https://www.youtube.com/watch?v=nZfNehCzGdw and https://www.usenix.org/conference/lisa13/technical-sessions/plenary/gregg . "How did we ever analyze performance before Flame Graphs?" This new visualization invented by Brendan can help you quickly understand application and kernel performance, especially CPU usage, where stacks (call graphs) can be sampled and then visualized as an interactive flame graph. Flame Graphs are now used for a growing variety of targets: for applications and kernels on Linux, SmartOS, Mac OS X, and Windows; for languages including C, C++, node.js, ruby, and Lua; and in WebKit Web Inspector. This talk will explain them and provide use cases and new visualizations for other event types, including I/O, memory usage, and latency.
A long time ago in a galaxy far, far away...
Java open source developers managed to see the previously secret plans for the Empire's ultimate weapon, the JAVA™ COLLECTIONS FRAMEWORK.
Evading the dreaded Imperial Starfleet, a group of freedom fighters investigate common developer errors and bugs to help protect their vital software. In addition, they investigate the performance of the Empire's most popular weapon: HashMap. With this newfound knowledge they strike back!
Pursued by the Empire's sinister agents, JDuchess races home aboard her JVM, investigating proposed future changes to the Java Collections and other options such as Immutable Persistent Collections which could save her people and restore freedom to the galaxy....
Jump Start with Apache Spark 2.0 on Databricks, by Databricks
Apache Spark 2.0 has laid the foundation for many new features and functionality. Its main three themes—easier, faster, and smarter—are pervasive in its unified and simplified high-level APIs for Structured data.
In this introductory part lecture and part hands-on workshop you’ll learn how to apply some of these new APIs using Databricks Community Edition. In particular, we will cover the following areas:
What’s new in Spark 2.0
SparkSessions vs SparkContexts
Datasets/Dataframes and Spark SQL
Introduction to Structured Streaming concepts and APIs
Spark Summit Europe 2016 Keynote - Databricks CEO, by Databricks
The machine learning algorithm itself is rarely the main barrier in building AI applications. Instead, the real culprit is the set of complex systems that prepares large-scale training and test data for the ML algorithms.
Apache Spark is a huge leap forward in democratizing AI. However, it does not solve all the problems. Databricks CEO Ali Ghodsi explains how Databricks democratizes AI by making it easier to build end-to-end machine learning pipelines with Apache Spark.
Sudarshan Kadambi presented this talk at the Bay Area Spark Meetup @ Bloomberg. He covered the Bloomberg Apache Spark Server and contributions to Apache Spark. The talk also covered the challenges of doing high-volume online analytics while still observing strict SLAs.
Spark Summit EU 2016: The Next AMPLab: Real-time Intelligent Secure Execution, by Databricks
Committed to the goal of building open-source frameworks, tools, and algorithms for real-time applications that make decisions on live data with stronger security, the RISELab is set to innovate and enhance Spark.
Spark Summit EU 2016 Keynote - Simplifying Big Data in Apache Spark 2.0, by Databricks
Apache Spark 2.0 was released this summer and is already being widely adopted. In this presentation Matei talks about how changes in the API have made it easier to write batch, streaming and realtime applications. The Dataset API, which is now integrated with DataFrames, makes it possible to benefit from powerful optimizations such as pushing queries into data sources, while the Structured Streaming extension to this API makes it possible to run many of the same computations in a streaming fashion automatically.
Insights Without Tradeoffs: Using Structured Streaming, by Databricks
Apache Spark 2.0 introduced Structured Streaming which allows users to continually and incrementally update your view of the world as new data arrives while still using the same familiar Spark SQL abstractions. Michael Armbrust from Databricks talks about the progress made since the release of Spark 2.0 on robustness, latency, expressiveness and observability, using examples of production end-to-end continuous applications.
Speaker: Michael Armbrust
Video: http://go.databricks.com/videos/spark-summit-east-2017/using-structured-streaming-apache-spark
This talk was originally presented at Spark Summit East 2017.
Slides for JavaOne 2015 talk by Brendan Gregg, Netflix (video/audio, of some sort, hopefully pending: follow @brendangregg on twitter for updates). Description: "At Netflix we dreamed of one visualization to show all CPU consumers: Java methods, GC, JVM internals, system libraries, and the kernel. With the help of Oracle this is now possible on x86 systems using system profilers (eg, Linux perf_events) and the new JDK option -XX:+PreserveFramePointer. This lets us create Java mixed-mode CPU flame graphs, exposing all CPU consumers. We can also use system profilers to analyze memory page faults, TCP events, storage I/O, and scheduler events, also with Java method context. This talk describes the background for this work, instructions for generating Java mixed-mode flame graphs, and examples from our use at Netflix where Java on x86 is the primary platform for the Netflix cloud."
Keeping Spark on Track: Productionizing Spark for ETL, by Databricks
ETL is the first phase when building a big data processing platform. Data is available from various sources and formats, and transforming the data into a compact binary format (Parquet, ORC, etc.) allows Apache Spark to process it in the most efficient manner. This talk will discuss common issues and best practices for speeding up your ETL workflows, handling dirty data, and debugging tips for identifying errors.
Speakers: Kyle Pistor & Miklos Christine
This talk was originally presented at Spark Summit East 2017.
Making Structured Streaming Ready for Production, by Databricks
In mid-2016, we introduced Structured Streaming, a new stream processing engine built on Spark SQL that revolutionized how developers can write stream processing applications without having to reason about streaming. It allows users to express their streaming computations the same way they would express a batch computation on static data. The Spark SQL engine takes care of running it incrementally and continuously, updating the final result as streaming data continues to arrive. It truly unifies batch, streaming and interactive processing in the same Datasets/DataFrames API and the same optimized Spark SQL processing engine.
The initial alpha release of Structured Streaming in Apache Spark 2.0 introduced the basic aggregation APIs and files as a streaming source and sink. Since then, we have put in a lot of work to make it ready for production use. In this talk, Tathagata Das will cover in more detail the major features we have added, the recipes for using them in production, and the exciting new features we have planned for future releases. Some of these features are as follows:
- Design and use of the Kafka Source
- Support for watermarks and event-time processing
- Support for more operations and output modes
Speaker: Tathagata Das
This talk was originally presented at Spark Summit East 2017.
Parallelizing Existing R Packages with SparkR, by Databricks
R is the latest language added to Apache Spark, and the SparkR API is slightly different from PySpark. With the release of Spark 2.0, the R API officially supports executing user code on distributed data. This is done through a family of apply() functions. In this talk, Hossein Falaki gives an overview of this new functionality in SparkR. Using this API requires some changes to regular code with dapply(). This talk will focus on how to correctly use this API to parallelize existing R packages. Most important topics of consideration will be performance and correctness when using the apply family of functions in SparkR.
Speaker: Hossein Falaki
This talk was originally presented at Spark Summit East 2017.
Talk for SCaLE13x. Video: https://www.youtube.com/watch?v=_Ik8oiQvWgo . Profiling can show what your Linux kernel and applications are doing in detail, across all software stack layers. This talk shows how we are using Linux perf_events (aka "perf") and flame graphs at Netflix to understand CPU usage in detail, to optimize our cloud usage, solve performance issues, and identify regressions. This will be more than just an intro: profiling difficult targets, including Java and Node.js, will be covered, which includes ways to resolve JITed symbols and broken stacks. Included are the easy examples, the hard, and the cutting edge.
Introducing apache prediction io (incubating) (bay area spark meetup at sales..., by Databricks
PredictionIO cofounder and creator Donald Szeto presents what, why and how of Apache PredictionIO as a Machine Learning framework running on top of Apache Spark.
Apache® Spark™ MLlib 2.x: migrating ML workloads to DataFrames, by Databricks
In the Apache Spark 2.x releases, Machine Learning (ML) is focusing on DataFrame-based APIs. This webinar is aimed at helping users take full advantage of the new APIs. Topics will include migrating workloads from RDDs to DataFrames, ML persistence for saving and loading models, and the roadmap ahead.
Migrating ML workloads to use Spark DataFrames and Datasets allows users to benefit from simpler APIs, plus speed and scalability improvements. As the DataFrame/Dataset API becomes the primary API for data in Spark, this migration will become increasingly important to MLlib users, especially for integrating ML with the rest of Spark data processing workloads. We will give a tutorial covering best practices and some of the immediate and future benefits to expect.
ML persistence is one of the biggest improvements in the DataFrame-based API. With Spark 2.0, almost all ML algorithms can be saved and loaded, even across languages. ML persistence dramatically simplifies collaborating across teams and moving ML models to production. We will demonstrate how to use persistence, and we will discuss a few existing issues and workarounds.
At the end of the webinar, we will discuss major roadmap items. These include API coverage, major speed and scalability improvements to certain algorithms, and integration with structured streaming.
Exceptions are the Norm: Dealing with Bad Actors in ETL, by Databricks
Stable and robust data pipelines are a critical component of the data infrastructure of enterprises. Most commonly, data pipelines ingest messy data sources with incorrect, incomplete or inconsistent records and produce curated and/or summarized data for consumption by subsequent applications.
In this talk, we go over new and upcoming features in Spark that enable it to better serve such workloads. These features include isolation of corrupt input records and files, useful diagnostic feedback to users, and improved support for nested type handling, which is common in ETL jobs.
Speaker: Sameer Agarwal
This talk was originally presented at Spark Summit East 2017.
Java is one of the most popular languages, and it's very important to understand the performance of Java servers. Modern JVMs compile Java code at runtime using the Just-In-Time (JIT) compiler, and such JIT-compiled code runs very close to optimized native code in terms of speed.
To understand performance, it's important to know how Java works, and we can measure performance using key metrics such as throughput and latency. After measuring the performance, we can use profilers to understand the application's behavior and find performance bottlenecks.
In this session, we will look at how Java manages memory and how it optimizes Java code using JIT compilation. We will also look at how we can use the Java Flight Recorder (JFR) to profile the JVM and find performance bottlenecks.
Finally, we will look at how "Flame Graphs" can be used to identify the most frequent code paths quickly and accurately.
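As a small illustration of the two key metrics mentioned above, here is a hedged sketch (not from the session) that measures per-operation latency, reports nearest-rank p50/p99 percentiles, and derives throughput; the stand-in workload and sample count are arbitrary.

```java
import java.util.Arrays;

// Minimal sketch of two key performance metrics: latency percentiles
// (p50/p99) and throughput (operations per second), for a stand-in op.
public class Metrics {
    static volatile double sink;  // keeps the stand-in work observable

    /** Runs op `samples` times and returns the sorted latencies in ns. */
    static long[] measure(Runnable op, int samples) {
        long[] latencies = new long[samples];
        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            op.run();
            latencies[i] = System.nanoTime() - start;
        }
        Arrays.sort(latencies);
        return latencies;
    }

    /** Nearest-rank percentile over sorted latencies. */
    static long percentile(long[] sorted, double pct) {
        int idx = (int) Math.ceil(pct / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, idx)];
    }

    public static void main(String[] args) {
        long[] lat = measure(() -> sink = Math.log(sink + 2.0), 10_000);
        double totalSec = Arrays.stream(lat).sum() / 1_000_000_000.0;
        System.out.printf("throughput: %.0f ops/s, p50: %d ns, p99: %d ns%n",
                lat.length / totalSec, percentile(lat, 50), percentile(lat, 99));
    }
}
```

The gap between p50 and p99 is often where GC pauses and JIT recompilations show up, which is what JFR then helps explain.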
This presentation was given to the system administration team to give them an idea of how GC works and what to look for when there is a bottleneck or trouble.
DevoxxUK: Optimizing Application Performance on Kubernetes, by Dinakar Guniguntala
Now that you have your apps running on K8s, are you wondering how to get the response time that you need? Tuning a polyglot set of microservices to get the performance you need can be challenging in Kubernetes. The key to overcoming this is observability. Luckily there are a number of tools, such as Prometheus, that can provide all the metrics you need, but here is the catch: there is so much data that it is difficult to make sense of it all. This is where hyperparameter tuning can come to the rescue and help build the right models.
This talk covers best practices that will help attendees:
1. Understand and avoid common performance-related problems.
2. Discuss observability tools and how they can help identify performance issues.
3. Take a closer look at Kruize Autotune, an open-source autonomous performance tuning tool for Kubernetes, and where it can help.
Java is finally elastic! OpenJDK improvements and new features in Garbage Collection technology have enhanced Java's vertical scaling and resource consumption. Now the JVM can promptly return unused memory and, as a result, grow and shrink automatically. In this presentation, we cover the main achievements in vertical scaling, as well as the peculiarities and tuning details of different GCs. Find out how to make your Java environments more elastic to follow the load and lower the total cost of ownership at large scale.
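From inside the JVM, heap elasticity can be observed through the difference between used and committed heap memory. A minimal sketch, with the caveat that whether committed memory is actually returned to the OS depends on the garbage collector and its tuning flags; the allocation size and class name are illustrative:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Minimal sketch of observing heap elasticity from inside the JVM:
// "used" is live data, "committed" is memory actually taken from the OS.
// Whether committed memory shrinks back after a GC depends on the
// collector (e.g. G1 periodic GC, Shenandoah, ZGC with uncommit).
public class HeapElasticity {
    static MemoryUsage heap() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    }

    public static void main(String[] args) {
        MemoryUsage before = heap();
        // Allocate ~64 MB of short-lived garbage.
        for (int i = 0; i < 64; i++) {
            byte[] garbage = new byte[1024 * 1024];
            garbage[0] = 1;
        }
        System.gc();  // a hint only; the JVM may ignore it
        MemoryUsage after = heap();
        System.out.printf("committed before: %d MB, after: %d MB, used after: %d MB%n",
                before.getCommitted() >> 20, after.getCommitted() >> 20,
                after.getUsed() >> 20);
    }
}
```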
This talk was given at the InfinIT event "Temadag: Java for real-time and embedded systems", held on 12 and 13 September 2013. Read more about the event here: http://infinit.dk/dk/arrangementer/tidligere_arrangementer/temadag_java_for_real-time_and_embedded_systems.htm
Elastic JVM for Scalable Java EE Applications Running in Containers #Jakart..., by Jelastic Multi-Cloud PaaS
Being configured smartly, Java can be scalable and cost-effective for all ranges of projects, from cloud-native startups to legacy enterprise applications. During this session, we will share our experiences in tuning RAM usage in a Java process to make it more elastic and gain the benefits of faster scaling and lower total cost of ownership (TCO). With microservices, cloud hosting, and vertical scaling in mind, we'll compare the top Java garbage collectors to see how efficiently they handle memory resources. The provided results of testing the G1, Parallel, ConcMarkSweep, Serial, Shenandoah, ZGC and OpenJ9 garbage collectors while scaling Java EE applications vertically will help you make the right choice for your own projects.
More details about Garbage Collector types https://jelastic.com/blog/garbage-collection/
Free registration at Jelastic https://jelastic.com/
Optimizing Your Java Applications for Multi-Core Hardware, by IndicThreads
Session Presented at 5th IndicThreads.com Conference On Java held on 10-11 December 2010 in Pune, India
WEB: http://J10.IndicThreads.com
------------
Rising power dissipation in microprocessor chips is leading to a trend towards increasing the number of cores on a chip (multi-core processors) rather than increasing clock frequency as the primary basis for increasing system performance. Consequently the number of threads in commodity hardware has also exploded. This leads to complexity in designing and configuring high performance Java applications that make effective use of new hardware. In this talk we provide a summary of the changes happening in the multi-core world and subsequently discuss about some of the JVM features which exploit the multi-core capabilities of the underlying hardware. We also explain techniques to analyze and optimize your application for highly concurrent systems. Key topics include an overview of Java Virtual Machine features & configuration, ways to correctly leverage java.util.concurrent package to achieve enhanced parallelism for applications in a multi-core environment, operating system issues, virtualization, Java code optimizations and useful profiling tools and techniques.
Takeaways for the Audience
Attendees will leave with a better understanding of the new multi-core world, an understanding of the Java Virtual Machine features which exploit multi-core hardware, and the techniques they can apply to ensure their Java applications run well in a multi-core environment.
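As a tiny illustration of the java.util.concurrent techniques the session covers, here is a hedged sketch (not from the talk) that sizes a fixed thread pool to the available cores and splits a summation across it; the class name and workload are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch of leveraging java.util.concurrent on multi-core
// hardware: split a computation into per-core chunks and run them
// on a fixed thread pool.
public class ParallelSum {
    public static long sum(long n) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        try {
            long chunk = n / cores + 1;
            List<Future<Long>> parts = new ArrayList<>();
            for (int c = 0; c < cores; c++) {
                long lo = c * chunk, hi = Math.min(n, lo + chunk);
                parts.add(pool.submit(() -> {
                    long s = 0;
                    for (long i = lo; i < hi; i++) s += i;  // sum [lo, hi)
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sum(1_000_000));  // 499999500000
    }
}
```

For this kind of embarrassingly parallel reduction, a parallel stream (`LongStream.range(0, n).parallel().sum()`) would do the same job; the explicit pool makes the thread-per-core sizing visible.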
Maxim Salnikov - Service Worker: taking the best from the past experience for..., by Codemotion
There is no doubt that 2018 is the year when Progressive Web Apps will get really broad adoption and recognition by all the involved parties: browser vendors (finally, all the major ones), developers, and users. The speed and smoothness of this process heavily depend on how correctly we, developers, use the power of the new APIs. In my session, based on the accumulated experience of developing and maintaining PWAs, we go through a list of advanced tips & tricks, showcase best practices, learn how to avoid common pitfalls, and have a look at the latest browser support and known limitations.
Adopting GraalVM - Scale by the Bay 2018, by Petr Zapletal
After many years of development, Oracle finally published GraalVM and sparked a lot of interest in the community. GraalVM is a high-performance polyglot VM with a number of potentially interesting traits we can take advantage of, like increased performance and lowered cost. It can also tackle shortcomings of the JVM/Scala we have been struggling with for years, like slow startup times or large jars. Lastly, thanks to its polyglot nature, it can open interesting doors we may want to explore. On the other hand, GraalVM may still be bleeding-edge technology and have a hard time delivering the promised features. In this talk, I'd like to discuss the advantages and disadvantages of adopting GraalVM, provide guidance if you decide to do so, and share our story in this area, including various samples and recommendations. This talk is focused on the JVM and Scala but should be beneficial for everyone interested in this topic.
An Enterprise Resource Planning system includes various modules that reduce a business's workload. Additionally, it organizes workflows, which drives enhanced productivity. Here is a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Do you want software for your business? Visit Deuglo.
Deuglo has top software developers in India. They are experts in software development and help design and create custom software solutions.
Deuglo follows a seven-step method for delivering services to customers, called the software development life cycle (SDLC):
Requirement — collecting the requirements is the first phase of the SDLC process.
Feasibility Study — after the requirements are collected, the feasibility of the project is assessed.
Design — in this phase, they start designing the software.
Coding — when the design is completed, the developers start coding the software.
Testing — when the coding of the software is done, the testing team starts testing.
Installation — after testing is completed, the application is deployed to the live server and launched!
Maintenance — after the software is delivered, customers start using it and it is maintained.
Graspan: A Big Data System for Big Code Analysis, by Aftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
Atelier - Innover avec l’IA Générative et les graphes de connaissancesNeo4j
Atelier - Innover avec l’IA Générative et les graphes de connaissances
Allez au-delà du battage médiatique autour de l’IA et découvrez des techniques pratiques pour utiliser l’IA de manière responsable à travers les données de votre organisation. Explorez comment utiliser les graphes de connaissances pour augmenter la précision, la transparence et la capacité d’explication dans les systèmes d’IA générative. Vous partirez avec une expérience pratique combinant les relations entre les données et les LLM pour apporter du contexte spécifique à votre domaine et améliorer votre raisonnement.
Amenez votre ordinateur portable et nous vous guiderons sur la mise en place de votre propre pile d’IA générative, en vous fournissant des exemples pratiques et codés pour démarrer en quelques minutes.
AI Pilot Review: The World’s First Virtual Assistant Marketing SuiteGoogle
AI Pilot Review: The World’s First Virtual Assistant Marketing Suite
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-pilot-review/
AI Pilot Review: Key Features
✅Deploy AI expert bots in Any Niche With Just A Click
✅With one keyword, generate complete funnels, websites, landing pages, and more.
✅More than 85 AI features are included in the AI pilot.
✅No setup or configuration; use your voice (like Siri) to do whatever you want.
✅You Can Use AI Pilot To Create your version of AI Pilot And Charge People For It…
✅ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
✅ZERO Limits On Features Or Usages
✅Use Our AI-powered Traffic To Get Hundreds Of Customers
✅No Complicated Setup: Get Up And Running In 2 Minutes
✅99.99% Up-Time Guaranteed
✅30 Days Money-Back Guarantee
✅ZERO Upfront Cost
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and ready exercise to reinforce your muscle-memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
What is Augmented Reality Image Trackingpavan998932
Augmented Reality (AR) Image Tracking is a technology that enables AR applications to recognize and track images in the real world, overlaying digital content onto them. This enhances the user's interaction with their environment by providing additional information and interactive elements directly tied to physical images.
Takashi Kobayashi and Hironori Washizaki, "SWEBOK Guide and Future of SE Education," First International Symposium on the Future of Software Engineering (FUSE), June 3-6, 2024, Okinawa, Japan
May Marketo Masterclass, London MUG May 22 2024.pdfAdele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Mobile App Development Company In Noida | Drona InfotechDrona Infotech
Looking for a reliable mobile app development company in Noida? Look no further than Drona Infotech. We specialize in creating customized apps for your business needs.
Visit Us For : https://www.dronainfotech.com/mobile-application-development/
Introducing Crescat - Event Management Software for Venues, Festivals and Eve...Crescat
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
E-commerce Application Development Company.pdfHornet Dynamics
Your business can reach new heights with our assistance as we design solutions that are specifically appropriate for your goals and vision. Our eCommerce application solutions can digitally coordinate all retail operations processes to meet the demands of the marketplace while maintaining business continuity.
2. Measuring Performance
We need a way to measure the performance:
o To understand how the system behaves
o To see performance improvements after doing any optimizations
There are two key performance metrics:
o Latency
o Throughput
3. What is Throughput?
Throughput measures the number of messages that a server processes during a specific time interval (e.g. per second).
Throughput is calculated using the equation:
Throughput = number of requests / time to complete the requests
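The equation above can be sketched in a few lines of Java; the request count and duration used here are purely illustrative, not measurements from the slides:

```java
public class Throughput {

    // Throughput = number of requests / time to complete the requests
    static double throughput(long requests, double seconds) {
        return requests / seconds;
    }

    public static void main(String[] args) {
        // Illustrative numbers: 5000 requests completed in 20 seconds
        System.out.println(throughput(5000, 20.0) + " requests/second"); // 250.0 requests/second
    }
}
```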
5. Tuning Java Applications
We want very high throughput and very low latency.
There is a tradeoff between throughput and latency: with more concurrent users, throughput increases, but average latency also increases.
7. Latency Distribution
When measuring latency, it’s important to look at the latency distribution: min, max, average, median, 75th percentile, 98th percentile, 99th percentile, etc.
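As a rough sketch (not part of the slides), percentiles can be computed with the nearest-rank method; real tools often interpolate instead, and the latency samples below are made up to show a longtail:

```java
import java.util.Arrays;

public class LatencyDistribution {

    // Nearest-rank percentile over a sorted copy of the samples.
    // This is one common definition; monitoring tools may interpolate.
    static double percentile(double[] samples, double p) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        // Made-up latencies (ms): mostly fast, two longtail outliers
        double[] latenciesMs = {12, 15, 11, 240, 13, 14, 16, 12, 13, 500};
        System.out.println("median = " + percentile(latenciesMs, 50)); // median = 13.0
        System.out.println("99th   = " + percentile(latenciesMs, 99)); // 99th   = 500.0
    }
}
```

Note how the median hides the outliers entirely; this is why looking at high percentiles matters.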
8. Longtail latencies
When high percentiles have values much greater than the average latency.
Source: https://engineering.linkedin.com/performance/who-moved-my-99th-percentile-latency
9. Latency Numbers Every Programmer Should Know
L1 cache reference 0.5 ns
Branch mispredict 5 ns
L2 cache reference 7 ns 14x L1 cache
Mutex lock/unlock 25 ns
Main memory reference 100 ns 20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy 3,000 ns 3 us
Send 1K bytes over 1 Gbps network 10,000 ns 10 us
Read 4K randomly from SSD* 150,000 ns 150 us ~1GB/sec SSD
Read 1 MB sequentially from memory 250,000 ns 250 us
Round trip within same datacenter 500,000 ns 500 us
Read 1 MB sequentially from SSD* 1,000,000 ns 1,000 us 1 ms ~1GB/sec SSD, 4X memory
Disk seek 10,000,000 ns 10,000 us 10 ms 20x datacenter roundtrip
Read 1 MB sequentially from disk 20,000,000 ns 20,000 us 20 ms 80x memory, 20X SSD
Send packet CA->Netherlands->CA 150,000,000 ns 150,000 us 150 ms
10. Java Garbage Collection
Java automatically allocates memory for our applications and automatically deallocates memory when certain objects are no longer used.
"Automatic Garbage Collection" is an important feature in Java.
11. Marking and Sweeping Away Garbage
GC works by first marking all used objects in the heap and then deleting unused objects.
GC also compacts the memory after deleting unreferenced objects to make new memory allocations much easier and faster.
12. GC roots
o The JVM references GC roots, which refer to the application objects in a tree structure. There are several kinds of GC roots in Java:
o Local variables
o Active Java threads
o Static variables
o JNI references
o When the application can reach these GC roots, the whole tree is reachable and GC can determine which objects are live.
13. Java Heap Structure
The Java heap is divided into generations based on object lifetime.
Following is the general structure of the Java heap. (This mostly depends on the type of collector.)
14. Young Generation
o The young generation usually has Eden and survivor spaces.
o All new objects are allocated in Eden space.
o When this fills up, a minor GC happens.
o Surviving objects are first moved to the survivor spaces.
o When objects survive several minor GCs (the tenuring threshold), they are eventually moved to the old generation.
15. Old Generation
o This stores long-surviving objects.
o When this fills up, a major GC (full GC) happens.
o A major GC takes a longer time as it has to check all live objects.
16. Permanent Generation
o This has the metadata required by the JVM.
o Classes and methods are stored here.
o This space is included in a full GC.
17. Java 8 and PermGen
In Java 8, the permanent generation is no longer a part of the heap.
The metadata has moved to an area of native memory called “Metaspace”.
There is no limit for Metaspace by default.
18. "Stop the World"
o For some events, the JVM pauses all application threads. These are called Stop-The-World (STW) pauses.
o GC events also cause STW pauses.
o We can see application stopped time with GC logs.
19. GC Logging
o There are JVM flags to log details for each GC:
o -XX:+PrintGC - Print messages at garbage collection
o -XX:+PrintGCDetails - Print more details at garbage collection
o -XX:+PrintGCTimeStamps - Print timestamps at garbage collection
o -XX:+PrintGCApplicationStoppedTime - Print the application GC stopped time
o -XX:+PrintGCApplicationConcurrentTime - Print the application GC concurrent time
o GCViewer is a great tool to view GC logs
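Besides GC logs, collection counts and accumulated pause time can also be read programmatically through the standard `GarbageCollectorMXBean` API. A minimal sketch (not from the slides):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {

    public static void main(String[] args) {
        // Each bean represents one collector in the running JVM,
        // typically one for the young and one for the old generation.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

This is handy for quick in-process checks; GC logs remain the better source for detailed pause analysis.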
20. Java Memory Usage
Init - initial amount of memory that the JVM requests from the OS for memory management during startup
Used - amount of memory currently used
Committed - amount of memory that is guaranteed to be available for use by the JVM
Max - maximum amount of memory that can be used for memory management
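These four values can be read at runtime through the standard `MemoryMXBean`; a minimal sketch (not from the slides):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapUsage {

    public static void main(String[] args) {
        // init / used / committed / max for the heap, in bytes
        // (max may be -1 if it is undefined for this JVM)
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("init      = " + heap.getInit());
        System.out.println("used      = " + heap.getUsed());
        System.out.println("committed = " + heap.getCommitted());
        System.out.println("max       = " + heap.getMax());
    }
}
```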
21. JDK Tools and Utilities
o Basic Tools (java, javac, jar)
o Security Tools (jarsigner, keytool)
o Java Web Service Tools (wsimport, wsgen)
o Java Troubleshooting, Profiling, Monitoring and Management Tools (jcmd, jconsole, jmc, jvisualvm)
22. Java Troubleshooting, Profiling, Monitoring and Management Tools
o jcmd - JVM Diagnostic Commands tool
o jconsole - A JMX-compliant graphical tool for monitoring a Java application
o jvisualvm - Provides detailed information about the Java application: CPU & memory profiling, heap dump analysis, memory leak detection, etc.
o jmc - Tools to monitor and manage Java applications without introducing performance overhead
23. Java Experimental Tools
o Monitoring Tools
o jps – JVM Process Status Tool
o jstat – JVM Statistics Monitoring Tool
o Troubleshooting Tools
o jmap - Memory Map for Java
o jhat - Heap Dump Browser
o jstack – Stack Trace for Java
jstat -gcutil <pid>
sudo jmap -heap <pid>
sudo jmap -F -dump:format=b,file=/tmp/dump.hprof <pid>
jhat /tmp/dump.hprof
24. Java Ergonomics and JVM Flags
The Java Virtual Machine can tune itself depending on the environment; this smart tuning is referred to as Ergonomics.
When tuning Java, it's important to know which default values Java Ergonomics selected for the garbage collector, heap sizes, and the runtime compiler.
25. Printing Command Line Flags
We can use "-XX:+PrintCommandLineFlags" to print the command line flags used by the JVM.
This is a useful flag to see the values selected by Java Ergonomics.
e.g.:
$ java -XX:+PrintCommandLineFlags -version
-XX:InitialHeapSize=128884992 -XX:MaxHeapSize=2062159872 -XX:+PrintCommandLineFlags
-XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseParallelGC
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
26. Printing Initial & Final JVM Flags
Use the following command to see the default values:
java -XX:+PrintFlagsInitial -version
Use the following command to see the final values:
java -XX:+PrintFlagsFinal -version
The values modified manually or by Java Ergonomics are shown with ":=".
java -XX:+PrintFlagsFinal -version | grep ':='
http://isuru-perera.blogspot.com/2015/08/java-ergonomics-and-jvm-flags.html
27. What is Profiling?
Here is what Wikipedia says:
In software engineering, profiling ("program profiling", "software profiling") is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization.
https://en.wikipedia.org/wiki/Profiling_(computer_programming)
28. What is Profiling?
Here is what Wikipedia says:
Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). Profilers may use a number of different techniques, such as event-based, statistical, instrumented, and simulation methods.
https://en.wikipedia.org/wiki/Profiling_(computer_programming)
29. Why do we need Profiling?
o Improve throughput (maximizing the transactions processed per second)
o Improve latency (minimizing the time taken for each operation)
o Find performance bottlenecks
30. Java Profiling Tools
Survey by RebelLabs in 2015:
http://pages.zeroturnaround.com/RebelLabs---All-Report-Landers_Developer-Productivity-Report-2015.html
31. Java Profiling Tools
Java VisualVM - Available in the JDK
Java Mission Control - Available in the JDK
JProfiler - A commercially licensed Java profiling tool developed by ej-technologies
32. Java Mission Control
o A set of powerful tools running on the Oracle JDK to monitor and manage Java applications
o Free for development use (Oracle Binary Code License)
o Available in the JDK since Java 7 update 40
o Supports plugins
o Two main tools:
o JMX Console
o Java Flight Recorder
33. Profiling Applications with Java VisualVM
CPU Profiling: Profile the performance of the application.
Memory Profiling: Analyze the memory usage of the application.
34. Measuring Methods for CPU Profiling
Sampling: Monitor running code externally and check which code is executed.
Instrumentation: Include measurement code in the real code.
35. Sampling vs. Instrumentation
Sampling:
o Overhead depends on the sampling interval
o Can see execution hotspots
o Can miss methods that return faster than the sampling interval
Instrumentation:
o Precise measurement of execution times
o More data to process
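To make the sampling idea concrete, here is a toy sampler (not how JFR or any real profiler is implemented): it periodically captures every thread's stack and counts the top frames. Real profilers are far more careful about overhead, and this API only samples at safepoints:

```java
import java.util.HashMap;
import java.util.Map;

public class MiniSampler {

    // Take `samples` snapshots, `intervalMs` apart, of all live threads,
    // counting how often each method appears at the top of a stack.
    public static Map<String, Integer> sample(int samples, long intervalMs)
            throws InterruptedException {
        Map<String, Integer> hot = new HashMap<>();
        for (int i = 0; i < samples; i++) {
            for (StackTraceElement[] stack : Thread.getAllStackTraces().values()) {
                if (stack.length > 0) {
                    String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                    hot.merge(top, 1, Integer::sum);
                }
            }
            Thread.sleep(intervalMs);
        }
        return hot;
    }

    public static void main(String[] args) throws InterruptedException {
        sample(10, 20).forEach((method, count) ->
                System.out.println(count + " samples in " + method));
    }
}
```

Methods that run and return between two snapshots are never seen, which is exactly the "can miss fast methods" limitation listed above.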
36. Sampling vs. Instrumentation
o Java VisualVM uses both sampling and instrumentation
o Java Flight Recorder uses sampling for hot methods
o JProfiler supports both sampling and instrumentation
37. Problems with Profiling
o Runtime overhead
o Interpretation of the results can be difficult
o Identifying the "crucial" parts of the software
o Identifying potential performance improvements
38. Java Flight Recorder (JFR)
o A profiling and event collection framework built into the Oracle JDK
o Gathers low-level information about the JVM and application behaviour with very low performance impact (less than 2%)
o Always-on profiling in production environments
o The engine was released with Java 7 update 4
o A commercial feature in the Oracle JDK
39. JFR Events
o JFR collects data about events.
o JFR collects information about three types of events:
o Instant events - Events occurring instantly
o Sample (Requestable) events - Events with a user-configurable period to provide a sample of system activity
o Duration events - Events taking some time to occur. The event has a start and end time. You can set a threshold.
40. Java Flight Recorder Architecture
JFR comprises the following components:
o JFR runtime - The recording engine inside the JVM that produces the recordings
o Flight Recorder plugin for Java Mission Control (JMC)
41. Enabling Java Flight Recorder
Since JFR is a commercial feature, we must unlock commercial features before trying to run JFR.
So, you need the following arguments:
-XX:+UnlockCommercialFeatures
-XX:+FlightRecorder
42. Dynamically enabling JFR
If you are using Java 8 update 40 (8u40) or later, you can now dynamically enable JFR.
This is useful as we don’t need to restart the server.
43. Improving the accuracy of the JFR Method Profiler
o An important feature of the JFR method profiler is that it does not require threads to be at safepoints in order for stacks to be sampled.
o Generally, stacks will only be walked at safepoints.
o The HotSpot JVM doesn’t provide metadata for non-safepoint parts of the code. Use the following to improve the accuracy:
o -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints
44. Running Java Flight Recorder
You can run multiple recordings concurrently and have different settings for each recording.
However, the JFR runtime uses the same buffers, and the resulting recording contains the union of all events for all recordings active at that particular time.
This means that we might get more than we asked for (but not less).
45. JFR Recording Types
o Time fixed recordings
o Fixed duration
o The recording will be opened automatically in JMC at the end (if the recording was started by JMC)
o Continuous recordings
o No end time
o Must be explicitly dumped
46. JFR Event Settings
o There are two event settings by default in the Oracle JDK.
o Files are in $JAVA_HOME/jre/lib/jfr:
o Continuous - default.jfc
o Profiling - profile.jfc
47. Running Java Flight Recorder
There are a few ways we can run JFR:
o Using the JFR plugin in JMC
o Using the command line
o Using the Diagnostic Command
48. Running JFR from JMC
o Right-click on the JVM and select “Start Flight Recording”
o Select the type of recording: time fixed / continuous
o Select the “Event Settings” template
o Modify the event options for the selected flight recording template (optional)
o Modify the event details (optional)
49. Running JFR from Command Line
o To produce a flight recording from the command line, you can use the “-XX:StartFlightRecording” option. E.g.:
o -XX:StartFlightRecording=delay=20s,duration=60s,name=Test,filename=recording.jfr,settings=profile
o Settings are in $JAVA_HOME/jre/lib/jfr
o Use the following to change the log level:
o -XX:FlightRecorderOptions=loglevel=info
50. Continuous recording from Command Line
o You can also start a continuous recording from the command line using -XX:FlightRecorderOptions:
o -XX:FlightRecorderOptions=defaultrecording=true,disk=true,repository=/tmp,maxage=6h,settings=default
51. The Default Recording
o Use the defaultrecording option to start a continuous recording:
o -XX:FlightRecorderOptions=defaultrecording=true
o The default recording can be dumped on exit
o Only the default recording can be used with the dumponexit and dumponexitpath parameters:
o -XX:FlightRecorderOptions=defaultrecording=true,dumponexit=true,dumponexitpath=/tmp/dumponexit.jfr
52. Running JFR using Diagnostic Commands
o The command “jcmd” can be used
o Start recording example:
o jcmd <pid> JFR.start delay=20s duration=60s name=MyRecording filename=/tmp/recording.jfr settings=profile
o Check recording:
o jcmd <pid> JFR.check
o Dump recording:
o jcmd <pid> JFR.dump filename=/tmp/dump.jfr name=MyRecording
53. Analyzing Flight Recordings
o The JFR runtime engine dumps recorded data to files with the *.jfr extension
o These binary files can be viewed in JMC
o There are tab groups showing certain aspects of the JVM and the Java application runtime, such as Memory, Threads, I/O, etc.
54. JFR Tab Groups
o General - Details of the JVM, the system, and the recording
o Memory - Information about memory & garbage collection
o Code - Information about methods, exceptions, compilations, and class loading
55. JFR Tab Groups
o Threads - Information about threads and locks
o I/O - Information about file and socket I/O
o System - Information about the environment
o Events - Information about the event types in the recording
56. Java Just-In-Time (JIT) compiler
Java code is usually compiled into platform-independent bytecode (class files).
The JVM is able to load the class files and execute the Java bytecode via the Java interpreter.
Even though this bytecode is usually interpreted, it might also be compiled into native machine code using the JVM's Just-In-Time (JIT) compiler.
57. Java Just-In-Time (JIT) compiler
Unlike a normal compiler, the JIT compiler compiles the code (bytecode) only when required.
With the JIT compiler, the JVM monitors the methods executed by the interpreter and identifies the “hot methods” for compilation. After identifying the hot method calls, the JVM compiles the bytecode into more efficient native code.
58. JITWatch
The JITWatch tool can analyze the compilation logs generated with the “-XX:+LogCompilation” flag.
The logs generated by LogCompilation are XML-based and contain a lot of information related to JIT compilation; hence these files are very large.
https://github.com/AdoptOpenJDK/jitwatch
59. Flame Graphs
Flame graphs are a visualization of profiled software, allowing the most frequent code-paths to be identified quickly and accurately.
Brendan Gregg created the open source program to generate flame graphs:
https://github.com/brendangregg/FlameGraph
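The flamegraph.pl script in that repository consumes "folded" stacks: one line per unique stack, frames joined by semicolons, followed by a sample count. A rough sketch producing that format; the stacks and counts below are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FoldedStacks {

    // Collapse (stack -> sample count) pairs into the folded text format
    // consumed by flamegraph.pl: frames joined by ';', a space, then the count.
    static String fold(Map<String[], Integer> samples) {
        StringBuilder out = new StringBuilder();
        for (Map.Entry<String[], Integer> e : samples.entrySet()) {
            out.append(String.join(";", e.getKey()))
               .append(' ').append(e.getValue()).append('\n');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Hypothetical sampled stacks (root frame first)
        Map<String[], Integer> samples = new LinkedHashMap<>();
        samples.put(new String[]{"main", "doWork", "parse"}, 30);
        samples.put(new String[]{"main", "doWork", "compute"}, 70);
        System.out.print(fold(samples));
        // main;doWork;parse 30
        // main;doWork;compute 70
    }
}
```

Piping such output to flamegraph.pl produces the SVG; wider boxes mean more samples in that code-path.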
60. Java CPU Flame Graphs
Helps to understand Java CPU usage
With flame graphs, we can see both Java and system profiles
Can profile GC as well
61. Flame Graphs with Java Flight Recordings
We can generate CPU flame graphs from a Java Flight Recording.
The program is available on GitHub:
https://github.com/chrishantha/jfr-flame-graph
The program uses the (unsupported) JMC parser.