Devoxx 2012 talk by Jaromir Hamala, C2B2 Senior Consultant.
Garbage-First (G1) is the latest garbage collector in the JVM, intended as the long-term replacement for CMS. It targets machines with large memories and multiple processors, promising low, more predictable pause times while still achieving high throughput.
The session will introduce the architecture and design of G1. The main focus of the talk will then be the performance characteristics observed under different loads, the tuning capabilities, and common pitfalls, with the aim of answering the question: can you run big heaps and still achieve low pauses?
Garbage First Garbage Collector (G1 GC) - Migration to, Expectations and Adva... - Monica Beckwith
Learn what you need to know to experience nirvana in the evaluation of G1 GC, even if you are migrating from Parallel GC to G1 or from CMS GC to G1 GC.
You also get a walk-through of some case study data.
G1 GC
Garbage First Garbage Collector (G1 GC): Current and Future Adaptability and ... - Monica Beckwith
G1 GC Presentation @ JavaOne 2013
Sneak a peek under the hood of the latest and coolest garbage collector, Garbage-First!
Dive deep into G1's adaptability and ergonomics
Discuss the future of G1's adaptability
Java Garbage Collectors – Moving to Java7 Garbage First (G1) Collector - Gurpreet Sachdeva
One of the key strengths of the JVM is automatic memory management (garbage collection). Understanding it can help in writing better applications, which becomes all the more important as enterprise server applications carry large amounts of live heap data and many parallel threads. Until recently, the main collectors were the parallel collector and the concurrent-mark-sweep (CMS) collector. This presentation introduces the various garbage collectors and compares the CMS collector against its replacement, a new implementation in Java 7: G1. G1 is characterized by a single contiguous heap which is split into same-sized regions. In fact, if your application is still running on the 1.5 or 1.6 JVM, a compelling argument for upgrading to Java 7 is to leverage G1.
Apache Spark has rocked the big data landscape, quickly becoming the largest open source big data community, with over 750 contributors from more than 200 organizations. Spark's core tenets of speed, ease of use, and a unified programming model fit neatly with the high-performance, scalable, and manageable characteristics of modern Java runtimes. In this talk we introduce the Spark programming model and describe some unique Java runtime capabilities in the JIT, fast networking, serialization techniques, and GPU off-loading that deliver the ultimate big data platform for solving business problems. We will show how solutions previously infeasible with regular Java programming become possible with a high-performance Spark core runtime, enabling you to solve problems smarter and faster.
G1 Garbage Collector: Details and Tuning - Simone Bordet
The G1 Garbage Collector is the low-pause replacement for CMS, available since JDK 7u4. This session will explore in detail how the G1 garbage collector works (from region layout to remembered sets to refinement threads), its current weaknesses, which command-line options you should enable when you use it, and advanced tuning examples extracted from real-world applications.
The Performance Engineer's Guide To (OpenJDK) HotSpot Garbage Collection - Th... - Monica Beckwith
Monica Beckwith is a JVM/GC Performance Engineer and has worked with the HotSpot VM for more than a decade. In this presentation, she will talk about GC Performance Engineering while providing GC facts and examples. She will also discuss various OpenJDK HotSpot GC algorithms and will provide advice on this topic.
The Performance Engineer's Guide to Java (HotSpot) Virtual Machine - Monica Beckwith
Monica Beckwith has worked with the Java Virtual Machine for more than a decade, not just optimizing JVM heuristics but also improving Just-in-Time (JIT) code quality for various processor architectures, as well as working with the garbage collectors and improving garbage collection for server systems.
During this talk, Monica will cover a few JIT and runtime optimizations, dive into HotSpot garbage collection, and provide an overview of the various garbage collectors available in HotSpot.
The Performance Engineer's Guide To HotSpot Just-in-Time Compilation - Monica Beckwith
Adaptive compilation and runtime in the OpenJDK HotSpot VM offer significant performance enhancements for our tools and applications in Java and other JVM languages. Understanding how they work gives developers critical information on Java HotSpot JIT compilation and runtime techniques, such as vectorization and compressed OOPs, to assist in understanding performance for both client and server applications. We will focus on the internals of OpenJDK 8, the reference implementation for Java SE 8.
Nowadays people tend to talk more about big data, the internet of things, and other buzzwords at conferences, while developers often do not pay enough attention to core topics such as garbage collection. After short discussions with many reasonably experienced Java developers, I came to the conclusion that most of them do not know how many garbage collectors there are in the latest JVM, or under what circumstances each of them should be enabled. This presentation aims to improve or refresh people's knowledge on this core topic, and to share a real use case in which it helped us resolve a production issue.
Shenandoah GC: Java Without The Garbage Collection Hiccups (Christine Flood) - Red Hat Developers
Just like a spoonful of sugar will cure your hiccups, running your JVM with -XX:+UseShenandoahGC will cure your Java garbage collection hiccups. Shenandoah GC is a new garbage collection algorithm developed for OpenJDK at Red Hat, which will produce much better pause times than the currently available algorithms without a significant decrease in throughput. In this session, we'll explain how Shenandoah works and compare it to the currently available OpenJDK garbage collectors.
This presentation was given to the system administration team to give them an idea of how GC works and what to look for when there is a bottleneck or other trouble.
HBaseCon 2015: Taming GC Pauses for Large Java Heap in HBase - HBaseCon
In this presentation, we will introduce HotSpot's Garbage First collector (G1 GC) as the most suitable collector for latency-sensitive applications running in large-memory environments. We will first discuss G1 GC internal operations and tuning opportunities, and also cover tuning flags that set desired GC pause targets, change adaptive GC thresholds, and adjust GC activities at runtime. We will provide several HBase case studies using Java heaps as large as 100 GB that show how to best tune applications to remove unpredicted, protracted GC pauses.
Software Developers Guide to Fun in the Workplace: Euphoria Despite the Despair - Holly Cummins
An in-depth look at what makes software development a roller coaster, where the highs of zero compiler warnings are quickly cancelled out by the pain of long hours, bad requirements, endless configuration, clueless managers, and a plethora of other issues that make death by a thousand cuts seem like a good idea. The speakers will answer questions such as: "Why is programming often called an art despite having its underpinnings in formal logic?" "How can I rediscover the delight I felt when I first started coding?" "What's that rush I feel when my test passes? Am I addicted to TDD?" Combining psychology, philosophy, and computer science, Dr Holly Cummins and Martijn Verburg will present a series of practical tips to help you rediscover the euphoria that you felt the very first time a metal box in front of you came to life and cried out "Hello World".
How to Become a Winner in the JVM Performance-Tuning Battle - Capgemini
Has your system been touched by JVM performance problems? If the answer is yes, this presentation is for you.
You need to be careful when tracking issues in a production system: one mistake, or too much data gathered, and your system will slow down. The root cause of the problem can be the application code or the JVM garbage collector.
This presentation describes a real production example of how to measure, tune, and fix a problem with a slow enterprise Java application. To make this job effective, open source tools are used to analyze the JVM's condition: GC, threads, and code bottlenecks. Measurement in the lab is done with open source tools such as Zorka, GCViewer, and VisualVM. To make the tuning task realistic, real production problems are simulated.
First presented at Oracle OpenWorld 2015.
http://www.capgemini.com/oracle
GARBAGE COLLECTOR. Automatic garbage collection is the process of looking at heap memory, identifying which objects are in use and which are not, and deleting the unused objects. An in-use object, or referenced object, is one to which some part of your program still maintains a pointer. An unused object, or unreferenced object, is no longer referenced by any part of your program, so the memory it uses can be reclaimed. In a programming language like C, allocating and deallocating memory is a manual process; in Java, deallocating memory is handled automatically by the garbage collector.
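The reachability rules above can be illustrated with a minimal Java sketch (the class and field names here are hypothetical, chosen only for demonstration):

```java
public class GcEligibility {
    // A static field keeps its referent reachable ("in use") for the
    // lifetime of the class, so the collector must not reclaim it.
    static byte[] retained = new byte[1024 * 1024];

    public static void main(String[] args) {
        byte[] local = new byte[1024 * 1024];

        // Dropping the only reference makes the array unreachable
        // ("unreferenced"); the collector is now free to reclaim it on a
        // future cycle. The JVM decides when, not the programmer.
        local = null;

        // Allocation pressure is what typically triggers a collection:
        // each iteration's array becomes garbage as soon as the next
        // iteration replaces the reference.
        for (int i = 0; i < 100; i++) {
            byte[] temp = new byte[1024 * 1024];
        }
        System.out.println("allocated and released 100 MB of garbage");
    }
}
```

Note that setting a reference to null only makes the object *eligible* for collection; when its memory is actually reclaimed is entirely up to the collector.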
Azul Product Manager Matt Schuetze's presentation on JVM memory details to the Philadelphia Java User Group.
This session dovetails with the March 2014 PhillyJUG deep-dive session focused on Java compiler code transformation and JVM runtime execution, which exposes the myths that Java is slow and that Java uses too much memory. In this session we will take a deeper look at Java memory management. The dreaded Out of Memory (OOM) error is one problem; garbage collector activity and spikes leading to long pauses is another. He covers the foundations of garbage collection and why Java historically gets a bad rap, even though GC provides a marvelous memory management paradigm.
Garbage collection is the most famous (and infamous) JVM mechanism, dating back to Java 1.0. Every Java developer knows about its existence, yet most of the time we wish we could ignore its behavior and assume it works perfectly. Unfortunately this is not the case, and if you are ignoring it, GC may hit you really hard... in production. Furthermore, the information that you may find on the web is often misleading. In this event we will try to demystify some of the misconceptions around GC by understanding how different GC mechanisms work and how to make the right decisions in order to make them work for you.
Are you building a high-throughput, low-latency application? Are you trying to figure out the perfect JVM heap size? Are you struggling to choose the right garbage collection algorithm and settings? Are you striving to achieve pauseless GC? Do you know the right tools and best practices to tame the GC? Do you know how to troubleshoot memory problems using GC logs? You will get complete answers to several such questions in this presentation.
An introduction to the Garbage First (G1) garbage collector for the JVM. The session covers general GC concepts, the fundamentals of G1, and how to set up and tune the JVM for G1.
Virtual machines don't have to be slow; they don't even have to be slower than running native code.
All you have to do is write your code, sit back, and let the JVM do its magic!
Learn about various JVM runtime optimizations and why it is considered one of the best VMs in the world.
Hadoop Meetup Jan 2019 - Dynamometer and a Case Study in NameNode GC - Erik Krogen
Erik Krogen of LinkedIn presents Dynamometer, a system open-sourced by LinkedIn for scale- and performance-testing HDFS. He discusses one major use case for Dynamometer, tuning NameNode GC, and covers characteristics of NameNode GC such as why it is important and how it interacts with various current and future GC algorithms.
This is taken from the Apache Hadoop Contributors Meetup on January 30, hosted by LinkedIn in Mountain View.
JVM & garbage collection tuning for low-latency applications - Quentin Ambard
G1, CMS, Shenandoah, or Zing? Heap size at 8 GB or 31 GB? Compressed pointers? Region size? What is the maximum pause time? Throughput or latency... what is the gain? MaxGCPauseMillis, G1HeapRegionSize, MaxTenuringThreshold, UnlockExperimentalVMOptions, ParallelGCThreads, InitiatingHeapOccupancyPercent, G1RSetUpdatingPauseTimePercent: which parameters have the most impact?
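Before experimenting with any of these flags, it helps to measure what the collector is actually doing. A minimal sketch using the standard java.lang.management API (collector names in the output vary with the GC chosen on the command line, e.g. "G1 Young Generation" under -XX:+UseG1GC):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class GcStats {
    public static void main(String[] args) {
        // Generate some short-lived garbage so a young collection is likely.
        StringBuilder sink = new StringBuilder();
        for (int i = 0; i < 200_000; i++) {
            sink.append("x");
            if (sink.length() > 1_000) sink.setLength(0);
        }

        // Each registered collector reports its cumulative collection
        // count and total accumulated pause time in milliseconds.
        List<GarbageCollectorMXBean> gcs =
                ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gc : gcs) {
            System.out.println(gc.getName()
                    + ": collections=" + gc.getCollectionCount()
                    + ", totalPauseMs=" + gc.getCollectionTime());
        }
    }
}
```

Comparing these counters across runs with different flag settings gives a quick first-order view of which parameters actually move pause time for your workload.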
There are at least 40 to 50 different formats of GC logs. Here, we explain the commonly used GC log formats, along with tricks, patterns, and tools to analyze them effectively.
Are you building a high-throughput, low-latency application? Are you trying to figure out the perfect JVM heap size? Are you struggling to choose the right garbage collection algorithm and settings? Are you striving to achieve pauseless GC? Do you know the right tools and best practices to tame the GC? Do you know how to troubleshoot memory problems using GC logs? You will get complete answers to several such questions in this PPT.
JVM tuning for low-latency applications & Cassandra - Quentin Ambard
G1, CMS, Shenandoah, or Zing? Heap size at 8 GB or 31 GB? Compressed pointers? Region size? What is the maximum pause time? Throughput or latency... what is the gain? MaxGCPauseMillis, G1HeapRegionSize, MaxTenuringThreshold, UnlockExperimentalVMOptions, ParallelGCThreads, InitiatingHeapOccupancyPercent, G1RSetUpdatingPauseTimePercent: which parameters have the most impact?
This session brings to your attention how several millions of dollars are wasted and what you can do to save money. Optimizing garbage collection performance not only saves money but also improves the overall customer experience.
“Show Me the Garbage!”: Garbage Collection, a Friend or a Foe - Haim Yadid
“Just leave the garbage outside and we will take care of it for you.” This is the panacea promised by the garbage collection mechanisms built into most software stacks available today. So we don't need to think about it anymore, right? Wrong! When misused, garbage collectors can fail miserably. When this happens, they slow down your application and lead to unacceptable pauses. In this talk we will go over different garbage collector approaches and understand under which conditions they function well.
Presented by Matt Brasier, C2B2 Principal Consultant, at the Oracle User Group Scotland Conference on the 10th of June 2015
Find out more about C2B2 Oracle SOA Suite services here: http://www.c2b2.co.uk/soa
Advanced queries on the Infinispan Data Grid - C2B2 Consulting
Infinispan isn't just a scalable key/value grid platform: it simplifies the execution of distributed query tasks in Map/Reduce style, and it integrates a powerful indexing engine to run full-text searches and efficiently extract information from your largest data collections.
Infinispan's search engine is built around Apache Lucene and directly exposes the Lucene API to its users to build powerful search applications which run on data sets residing in-memory.
In this talk, Navin, author of the Infinispan Query module, will demonstrate how you can leverage the strength of full-text queries to efficiently perform searches on your data.
Slides by Matt Brasier, Principal Consultant at C2B2, from the hands-on lab delivered at the JavaOne 2014 conference.
ABOUT THE HOL:
This session will demonstrate the depth and breadth of the information available via the JMX API. We will use tools that come with the JDK to peer deep into the workings of the JVM and understand how to identify and solve common performance bottlenecks and other problems. Attendees will get examples of using tools like VisualVM and jstat to interrogate the JVM, will learn how to interpret the data returned, and will learn how to add JMX instrumentation to their own applications and expose it to monitoring tools.
This will be a session that allows attendees to understand the power available to them in some of the overlooked core JVM features. The session will use a combination of slides and examples that the attendees can code along with on their own laptops, and will focus on what JMX is, how it works, and how you can expose your own application MBeans and use them to monitor the application. In my work as a Java performance consultant, I have found that JMX, and the basic JVM tooling that uses it, is not well understood by developers, so this session is about raising awareness of these tools and allowing developers to get inside the JVM and their application to understand how it works (and write better code). In my experience, once developers understand the power of JMX and VisualVM, they find it very interesting, and often the best way to demonstrate it is by allowing people to work with it. The fact that the base JDK is all that is required means that there are few prerequisites for this session, and because it is based on a low-level technology, it is of interest to people working on all aspects of Java. I think this could make a popular talk which will help developers understand the magic and power behind the Java Virtual Machine.
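Exposing your own application MBeans, as this session describes, takes only the standard JMX API. A minimal sketch (the MBean interface, class, and ObjectName below are hypothetical examples, not from the session):

```java
import java.lang.management.ManagementFactory;
import javax.management.ObjectName;

public class JmxDemo {
    // For a standard MBean, JMX exposes getters declared on an interface
    // named <ClassName>MBean as read-only attributes.
    public interface RequestStatsMBean {
        long getRequestCount();
    }

    public static class RequestStats implements RequestStatsMBean {
        private volatile long requestCount;
        public void record() { requestCount++; }
        public long getRequestCount() { return requestCount; }
    }

    public static void main(String[] args) throws Exception {
        RequestStats stats = new RequestStats();
        // Register under a domain:key=value name; tools like VisualVM and
        // JConsole will then list it under their MBeans tab.
        ObjectName name = new ObjectName("com.example:type=RequestStats");
        ManagementFactory.getPlatformMBeanServer().registerMBean(stats, name);

        stats.record();
        stats.record();

        // Read the attribute back through the MBean server, just as a
        // remote JMX client would.
        Object count = ManagementFactory.getPlatformMBeanServer()
                .getAttribute(name, "RequestCount");
        System.out.println("RequestCount=" + count);
    }
}
```

In a real application the MBean would stay registered for the process lifetime, so monitoring tools can poll the attribute while the application runs.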
Presentation by Matt Brasier, C2B2 Principal Consultant, delivered at DOAG German Oracle User Group Conference on the 20th of November 2014 in Nuremberg.
One of the most often overlooked, and yet most powerful, features of WebLogic is WLST, the WebLogic Scripting Tool.
This Jython-based library allows you to connect to and interact with a WebLogic domain: making configuration changes, deploying applications, creating new servers, and doing anything else that you might do through the console.
By scripting our domain builds and deployments, we can control our configuration and provide increased confidence in our ability to rebuild an environment in a DR scenario. With automated deployment tools like Puppet gaining traction in DevOps circles, the ability to completely script both the creation of the domain and the deployment of the application gives us great power.
This presentation will look at how WLST can be used to build a domain, deploy an application, and then provide runtime monitoring.
Hands-on Performance Workshop - The Science of Performance - C2B2 Consulting
Mike presented this hands-on workshop at JAX London 2014. Mike outlines the environment setup and discusses a performance overview, collecting data, and how to interpret the data. If you would like any more information, feel free to comment and Mike will get back to you.
Dr. Low Latency or: How I Learned to Stop Worrying about Pauses and Love the ... - C2B2 Consulting
Presented by Jaromir Hamala, C2B2 Senior Consultant, at JDays Conference on the 27th of November 2013.
Cheap RAM means we can now use more memory than an average disk held not that long ago. It's very tempting to increase the size of our JVM heap and store more data in memory. As always, there is a trade-off: large heaps lead to higher latencies due to garbage collector overhead. This session will show different strategies for accessing off-heap memory and using the advantage of large RAM without paying the performance penalty.
Presentation delivered by Matt Brasier at DOAG German Oracle User Group Conference in Nuremberg, 19-21 November 2013.
This lab will demonstrate the depth and breadth of the information available via the JMX API. We will use the JVM tools to peer deep into the workings of the JVM and understand how to identify and solve common performance bottlenecks. Attendees will get hands-on experience using tools like VisualVM and jstat to interrogate the JVM, and will learn how to interpret the data returned.
This lab will be a hands-on session that allows attendees to understand the power available to them in some of the overlooked core JVM tools (jstack, jstat, VisualVM). The session will use a combination of slides and examples that the attendees can code along with on their own laptops, and will focus primarily on how the tools can be used to identify performance bottlenecks, although we will also look at how you can expose your own application MBeans and use them to monitor the application. In my work as a Java performance consultant, I have found that JMX, and the basic JVM tooling that uses it, is not well understood by developers, so this session is about raising awareness of these tools and allowing developers to get inside the JVM and their application to understand how it works (and write better code). In my experience, once developers understand the power of JMX and VisualVM, they find it very interesting, and often the best way to demonstrate it is by allowing people to work with it. The fact that the base JDK is all that is required means that there are few prerequisites for this session, and because it is based on a low-level technology, it is of interest to people working on all aspects of Java. I think this could make a popular talk which will help developers understand the magic and power behind the Java Virtual Machine.
Oracle Coherence & WebLogic 12c Web Sockets: Delivering Real Time Push at Scale - C2B2 Consulting
Presentation delivered by Steve Millidge at DOAG German Oracle User Group Conference in Nuremberg, 19-21 November 2013.
The real-time web is coming with WebSockets in HTML5. The big question is how to deliver event-driven architectures for WebSockets at scale. This session, delivered by a member of the JSR 347 Data Grids expert group, provides insight into how combining Oracle Coherence with the new WebSockets support in WebLogic 12c can deliver enterprise-scale push to web devices. The session first provides an introduction to WebSockets and delves into typical Oracle Coherence architectures and how they deliver linear scalability and high availability. We then look at the event capabilities inherent in Oracle Coherence that, when hooked up to the new WebLogic 12c WebSockets server, can deliver Coherence grid updates in real time to HTML5 devices.
The presentation will be a mixture of animated graphical slides depicting how WebLogic WebSockets and Oracle Coherence work, combined with code snippets. We will then provide a demo of the described architecture, hosted on Amazon EC2, for delegates to browse to and interact with, showing the capabilities of WebSockets on their devices. Demos will again use Oracle Coherence and WebLogic 12c.
During this half-hour webinar, C2B2's expert middleware support team reviews real-world customer problems we see in production JBoss, WebLogic, Tomcat, or GlassFish environments. This overview will help you understand how to deal with server slowdown issues, out-of-memory errors, problematic JMS, and more.
If you have any questions to our support team, please don't hesitate to contact us - please send your questions and feedback to webinar@c2b2.co.uk.
Please note that we can only answer the questions related to the following products: http://www.c2b2.co.uk/supported_products
Fast Data: Parallel Processing on the Grid
You may think of in-memory data grids as a big cache: a key/value store accessed using put and get. However, to really reap the power and scale of modern data grids, you need to move beyond cache semantics and turn your architecture on its head. Moving the processing to the data, rather than pulling the data across the grid, massively increases parallelism and reduces network latency, leading to huge increases in throughput. In this session, we will explore why traditional cache semantics sometimes struggle to scale and how, through using parallel processing and event processing on the grid, we can rearchitect our systems for massive scalability and utilise all our grid cores for parallel processing.
Oracle SOA Suite Performance Tuning - UKOUG Application Server & Middleware SI... - C2B2 Consulting
Matt Brasier, C2B2 Head of Consulting, speaking at the UK Oracle User Group App Server & Middleware Special Interest Group event on Wednesday, the 9th of October 2013.
'Deploying with GlassFish & Docker' by Shanny Anoep, Senior Consultant at C2B2. Lightning talk delivered at the first London GlassFish User Group (GUG) event on the 18th of September 2013.
'JMS @ Data Grid? Hacking the GlassFish messaging for fun & profit' - C2B2 Consulting
'JMS @ Data Grid? Hacking the GlassFish messaging for fun & profit' by Jaromir Hamala, Senior Consultant at C2B2. Lightning talk delivered at the first London GlassFish User Group (GUG) event on the 18th of September 2013.
'New JMS features in GlassFish 4.0' by Nigel Deakin - C2B2 Consulting
Presentation by Nigel Deakin (Oracle) delivered at the first London GlassFish User Group (GUG) event on the 18th of September 2013.
GlassFish 4.0 is the application server to support the new Java EE 7 standard. One of the most significant components of Java EE 7 is JMS 2.0, which is the first revision to the JMS (Java Message Service) API for over a decade and which introduces a new and much simpler API. This talk will review the new features of JMS 2.0 and other recent messaging-related changes in GlassFish 4.0.
JUDCon 2013: JBoss Data Grid and WebSockets: Delivering Real Time Push at Scale - C2B2 Consulting
JUDCon 2013 presentation by Mark Addy, C2B2 Senior Consultant: JBoss Data Grid and WebSockets: Delivering Real Time Push at Scale.
The real-time web is coming with WebSockets in HTML5. The big question is how to deliver event-driven architectures for WebSockets at scale. This session, delivered by an experienced middleware consultant, provides insight into how combining JBoss Data Grid with WebSockets can deliver enterprise-scale push to web devices. The session first provides an introduction to WebSockets and delves into typical JBoss Data Grid architectures and how they deliver linear scalability and high availability. We then look at the event capabilities inherent in JBoss Data Grid that, when hooked up to a WebSockets server, can deliver data grid updates in real time to HTML5 mobile devices.
5. Manual Memory Management I
Not all the memory is yours!
You need to ask for it
malloc() -> sbrk()
And return it when you no longer need it
free() -> sbrk()
6. Manual Memory Management II
Asking for something is easy
Returning it is way more complicated!
(we all know it!)
8. A little bit of history
3000 B.C. – "The first landfill is developed when Knossos, Crete digs large holes for refuse. Garbage is dumped and filled with dirt at various levels."
In LISP since the end of the '50s!
In Java since the 1st version
9. Requirements on GC
High Throughput
Low Latency
Minimal Footprint
Reliable
Easy to use/tune
11. Generational Garbage Collector
Split the heap into regions
Create new objects in Young
Move mature objects to Old
Different strategies for different regions
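The generational split rests on the weak generational hypothesis: most objects die young, and only a small fraction survives long enough to be promoted. A minimal, purely illustrative Java sketch (the class and method names are mine, not from the talk) of that allocation pattern:

```java
import java.util.ArrayList;
import java.util.List;

public class GenerationalHypothesis {
    // Most allocations are temporaries that become garbage immediately
    // (collected cheaply in Young); only 1 in 100 is retained long enough
    // to be a candidate for promotion to Old.
    static List<byte[]> survivors(int iterations) {
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < iterations; i++) {
            byte[] temp = new byte[1024];      // dies young: unreachable after this iteration
            if (i % 100 == 0) {
                retained.add(new byte[1024]);  // long-lived: survives many young collections
            }
        }
        return retained;
    }

    public static void main(String[] args) {
        System.out.println(survivors(10_000).size()); // 100 survivors out of ~10,100 allocations
    }
}
```

Running this under `-verbose:gc` shows frequent cheap young collections while the retained list steadily accumulates in the old generation.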
21. How CMS works in old generation
Main phases of CMS:
Initial Mark (STW)
Concurrent Mark
Remark (STW)
Concurrent Sweep
Features:
Contiguous block of memory
No compacting!
23. What's the catch?
Fragmentation
Remark phase (STW)
Very Large Heaps
Promotion Failure
Concurrent Mode Failure
"It's working very well except when it's not." – M. Brasier
26. G1 Goals
Low Latency
Better Predictability
Easy to use & tune
27. G1 Heap Layout
Heap divided into ~2,000 regions of equal size
No physical separation between young / old generation
Each region can be either young or old
Objects moved between regions during collections
Humongous Regions
For large objects
Expensive to collect as of now!
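An allocation becomes humongous when it is at least half the region size, and it is then placed in dedicated contiguous humongous regions. A small sketch of that threshold check (helper names are mine; the actual region size is chosen by the JVM based on heap size):

```java
public class HumongousCheck {
    // In G1, an object of at least half the region size is "humongous"
    // and is allocated directly into contiguous humongous regions.
    static boolean isHumongous(long objectBytes, long regionBytes) {
        return objectBytes >= regionBytes / 2;
    }

    public static void main(String[] args) {
        long region = 1 << 20; // assume a 1 MB region for illustration
        System.out.println(isHumongous(600_000, region)); // true:  a byte[600_000] is humongous
        System.out.println(isHumongous(100_000, region)); // false: fits a normal region
    }
}
```

This is why very large arrays are a G1 gotcha: they bypass the young generation entirely and, as the slide notes, collecting them is not yet optimized.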
29. G1 Old Generation GC I.
Combination of CMS and Parallel Compacting GC
Concurrent Marking Phase
Remark still STW
1929.299: [GC remark 1929.299: [GC ref-proc, 0.0048180 secs], 0.0258670 secs]
[Times: user=0.03 sys=0.00, real=0.02 secs]
Initial Marking is part of an evacuation pause
62.583: [GC pause (young) (initial-mark), 0.09812400 secs]
Calculates statistics:
Ratio of living objects inside each region
Who references our region
30. G1 Old Generation GC II.
Regions with no living objects can be reclaimed immediately (after remark)
31. G1 Old Generation GC III.
Regions with the smallest ratio of living objects are chosen to evacuate during the next GC
53.695: [GC pause (mixed), 0.05568800 secs]
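This selection rule is where the collector gets its name: the regions that are mostly garbage are evacuated first, because they yield the most reclaimed space per byte copied. A toy sketch of that choice (the `Region` record and method names are mine, for illustration only):

```java
import java.util.Comparator;
import java.util.List;

public class RegionSelection {
    record Region(int id, double liveRatio) {}

    // "Garbage first": sort candidate regions by their ratio of live objects
    // and evacuate the emptiest ones, up to a per-pause budget.
    static List<Region> chooseCollectionSet(List<Region> regions, int max) {
        return regions.stream()
                .sorted(Comparator.comparingDouble(Region::liveRatio))
                .limit(max)
                .toList();
    }

    public static void main(String[] args) {
        List<Region> regions = List.of(
                new Region(1, 0.90), new Region(2, 0.10),
                new Region(3, 0.45), new Region(4, 0.05));
        // Regions 4 and 2 are picked: almost all their space is reclaimable garbage.
        System.out.println(chooseCollectionSet(regions, 2));
    }
}
```

The real collector additionally weighs the predicted copying cost against the pause-time target when building the collection set.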
32. G1 Old Generation GC IV.
Concurrent Marking Phase
Calculate liveness ratio
Find best regions to evacuate
Reclaiming Regions
After Remark
During evacuation pause
33. Remembered Set
Structure to track references coming in from other regions
One per region
Allows collecting the objects in a region without scanning the whole heap for references
Maintained via a write barrier
There is no free lunch!
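The idea can be sketched as a tiny data structure: for each region, record which other regions hold references into it, so an evacuation only scans those regions instead of the whole heap. This is a deliberately simplified model (the real RSet tracks cards, not whole regions, and the class is mine):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class RememberedSet {
    // For each region id, the set of other regions holding references into it.
    private final Map<Integer, Set<Integer>> intoRegion = new HashMap<>();

    // Invoked from the write barrier when a field in 'fromRegion'
    // is updated to point into 'toRegion'.
    void recordReference(int fromRegion, int toRegion) {
        if (fromRegion != toRegion) { // intra-region references need no tracking
            intoRegion.computeIfAbsent(toRegion, k -> new HashSet<>()).add(fromRegion);
        }
    }

    // When evacuating 'collectedRegion', only these regions must be scanned.
    Set<Integer> regionsToScan(int collectedRegion) {
        return intoRegion.getOrDefault(collectedRegion, Set.of());
    }

    public static void main(String[] args) {
        RememberedSet rs = new RememberedSet();
        rs.recordReference(1, 3);
        rs.recordReference(2, 3);
        rs.recordReference(3, 3); // intra-region: ignored
        System.out.println(rs.regionsToScan(3)); // only regions 1 and 2 need scanning
    }
}
```

The "no free lunch" part is visible even here: every reference store pays the barrier cost, and densely cross-referenced heaps make these sets large.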
34. Gotcha! (Part I.)
More inter-region references -> Bigger Remembered Set
Large Remembered Set -> Slow G1
Remember: Large Object -> Humongous Region -> Collecting not optimized as yet
35. G1 Tuning Fundamentals
Main goal of G1 is low latency
If you can tolerate latency, use Parallel GC
Another goal is simplicity (of tuning)
Most important tuning option:
-XX:MaxGCPauseMillis=300
Influences the maximum amount of work per collection
No hard promise
Best effort only
Contrast with a typical CMS tuning set:
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=60
-XX:CMSTriggerRatio=60
-XX:CMSWaitDuration=1500
-XX:ConcGCThreads=2
-XX:SurvivorRatio=8
-Xmn1g
….
36. Other Important Tuning Options I
-XX:InitiatingHeapOccupancyPercent=n
When to start a concurrent GC cycle
Percent of the entire heap, not just the old generation!
Too low -> Unnecessary GC overhead
Too high -> "Space Overflow" -> Full GC
37. Other Important Tuning Options II
-XX:G1OldCSetRegionLiveThresholdPercent=n
Threshold for a region to be included in a Collection Set
Too high -> More aggressive collecting
More live objects to copy
Too low -> Wasting some heap
38. Other Important Tuning Options III
-XX:G1MixedGCCountTarget=n
How many mixed GCs per concurrent cycle
Too high -> Unnecessary overhead
Too low -> Longer pauses
39. Considerations Before Tuning
Some options cause your pause time target to be ignored!
-Xmn
Fixed young generation size
40. Logging the GC Activity
Verbose GC is your friend!
-XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
Always use it! Even in production.
Very low overhead. There is no excuse!
Will help you analyze how your GC is performing
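Besides the verbose log, collector activity can also be observed programmatically through the standard management beans, which work for G1, CMS, and Parallel GC alike. A minimal sketch (nothing here is specific to this talk, just the standard `java.lang.management` API):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // One MXBean per collector, e.g. "G1 Young Generation" / "G1 Old Generation"
        // when running with -XX:+UseG1GC.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

This is handy for dashboards and alerts, but it only gives aggregate counts and times; the per-pause breakdown below still requires the GC log.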
42. G1 GC Log
1.297: [GC pause (young), 0.02828800 secs]
Young GC
Started 1.297s from the start of JVM
Total elapsed time was 0.02828800s
[Parallel Time: 24.2 ms]
Total elapsed time spent by the parallel GC threads
43. G1 GC Log
[GC Worker Start (ms): 1296.7 1302.7
Avg: 1299.7, Min: 1296.7, Max: 1302.7, Diff: 6.0]
Two threads and starting times
[Ext Root Scanning (ms): 16.5 2.9
Avg: 9.7, Min: 2.9, Max: 16.5, Diff: 13.6]
Time spent by each thread scanning the roots
44. G1 GC Log
[Update RS (ms): 0.0 0.0
Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0]
[Processed Buffers : 0 9
Sum: 9, Avg: 4, Min: 0, Max: 9, Diff: 9]
Time spent updating Remembered Sets
Update buffers are structures tracking changes in references
[Scan RS (ms): 0.0 0.1
Avg: 0.0, Min: 0.0, Max: 0.1, Diff: 0.1].
Time spent scanning areas from the Remembered Set
45. G1 GC Log
[Object Copy (ms): 7.6 15.1
Avg: 11.3, Min: 7.6, Max: 15.1, Diff: 7.5]
Time spent copying objects to other regions
Usually a significant portion of the total time
[Termination (ms): 0.0 0.0
Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0]
[Termination Attempts : 1 1
Sum: 2, Avg: 1, Min: 1, Max: 1, Diff: 0]
Time spent offering to terminate
Threads steal work from each other before terminating
46. G1 Logging Options I
-XX:+PrintAdaptiveSizePolicy
53.628: [GC pause (mixed) 53.628: [G1Ergonomics (CSet Construction) start choosing CSet, predicted
base time: 54.79 ms, remaining time: 145.21 ms, target pause time: 200.00 ms]
53.628: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 51 regions, survivors:
2 regions, predicted young region time: 9.75 ms]
53.628: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: old CSet
region num reached max, old: 27 regions, max: 27 regions]
53.628: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 51 regions, survivors: 2
regions, old: 27 regions, predicted pause time: 131.56 ms, target pause time: 200.00 ms]
53.700: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason:
still doing mixed collections, occupancy: 127926272 bytes, allocation request: 0 bytes, threshold:
125986365 bytes (45.00 %), source: end of GC]
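Pause lines like the ones above are regular enough to extract metrics from with a few lines of code, which is useful when charting pause times over a long run. A minimal sketch for the `[GC pause (…), … secs]` format shown in these slides (the class and regex are mine, and only cover this one line shape):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PauseLineParser {
    // Matches e.g. "1.297: [GC pause (young), 0.02828800 secs]"
    // and "62.583: [GC pause (young) (initial-mark), 0.09812400 secs]".
    private static final Pattern PAUSE = Pattern.compile(
            "(\\d+\\.\\d+): \\[GC pause \\(([^)]+)\\)(?: \\(([^)]+)\\))?, (\\d+\\.\\d+) secs\\]");

    static double pauseSeconds(String line) {
        Matcher m = PAUSE.matcher(line);
        if (!m.find()) throw new IllegalArgumentException("not a pause line: " + line);
        return Double.parseDouble(m.group(4)); // the total pause duration
    }

    public static void main(String[] args) {
        System.out.println(pauseSeconds("1.297: [GC pause (young), 0.02828800 secs]"));
        System.out.println(pauseSeconds("53.695: [GC pause (mixed), 0.05568800 secs]"));
    }
}
```

For anything beyond a quick script, a dedicated log analyzer is the better tool; the point is simply that the verbose log is machine-readable as well as human-readable.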
47. G1 Evacuation Failure
Happens when we do not have empty regions to evacuate to
Heap is already at maximum size
Very expensive operation!
Search for "to-space overflow" in the logs
Remedy:
Increase the heap size
-XX:InitiatingHeapOccupancyPercent=
More marking threads
48. Is G1 Production Ready?
It depends!
Well, some people use it in production
The days of G1 crashing the JVM are hopefully over
Still quite significant changes
Bugs are still being found
Bug 7181612: G1: Premature Evacuation (7143858) strikes again
Bug 7143858: G1: Back to back young GCs with the second GC having a minimally sized eden
You should probably evaluate/use G1 when everything else is failing
And you have no alternative!
49. GC Decisioning: Rules of Thumb I
Do I really need Low Latency?
No? Use Parallel GC
Do I really need a Big Heap?
Yes? Think again!
Do I really need a Big Heap?
No? Use a small heap & Parallel GC
Yes? Try CMS 1st
50. GC Decisioning: Rules of Thumb II
Is CMS performing well?
Yes? Done!
No? Tune it!
Is tuned CMS performing well?
Yes? Done!
No? Tune it more!
Is CMS performing well now?
Yes? Done!
No? Do you really need such a big heap?
51. GC Decisioning: Rules of Thumb III
Yes, I really need a Big Heap and Low Pauses!
This is the moment where you should start considering G1!
In a Nutshell:
For now treat G1 as a last resort
When other GCs are failing
Be aware of gotchas:
Very Large Objects
Large Remembered Sets
Footprint?
Test carefully before using in production!
53. G1 – Recent Commits – Conclusion?
Do not use anything before 7u4
Use 7u9 or later
Ideally the latest version
Do not use G1 from Java 6
54. What Are The Alternatives?
Workaround / Hacks:
Off Heap Allocations
DirectBuffers
JNI
Object Pooling
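Of the workarounds above, direct buffers are the easiest to show: the backing memory lives outside the Java heap, so large, long-lived data stops inflating GC pause times. A minimal sketch using the standard NIO API (the sizes here are arbitrary examples):

```java
import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // The 16 MB backing store is allocated off-heap (native memory),
        // so the GC never has to scan or copy its contents.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(16 * 1024 * 1024);

        // Absolute puts/gets: no per-element Java objects are created.
        offHeap.putLong(0, 42L);
        System.out.println(offHeap.isDirect());  // true
        System.out.println(offHeap.getLong(0));  // 42
    }
}
```

The trade-off is that memory management becomes your problem again: serialization to and from the buffer, capacity planning, and release timing (the native memory is only freed when the buffer object itself is collected), which is exactly the manual bookkeeping the earlier slides warned about.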
55. Further Resources
JavaOne 2012 G1 Talk, Charlie Hunt, Monica Beckwith
hotspot-gc-use mailing list
http://www.quora.com/What-is-the-status-of-the-implementation-of-the-G1-garbage-collector-for-the-JVM#
….