A 2015 presentation introducing users to Java profiling. The YourKit Profiler is used for concrete examples. The following topics are covered:
1) When to profile
2) Profiler sampling
3) Profiler instrumentation
4) Where to Start
5) Macro vs micro benchmarking
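The sampling/instrumentation distinction in topics 2 and 3 can be sketched in a few lines of Python. This is a toy illustration only: a real Java profiler such as YourKit works against the JVM, and all the function names below are invented.

```python
import collections
import sys
import threading
import time

def instrument(counts):
    """Instrumentation: wrap a function so every call is recorded exactly."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            counts[fn.__name__] += 1        # per-call bookkeeping: the overhead
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def sample_stacks(samples, stop, interval=0.001):
    """Sampling: periodically record the function every thread is executing."""
    while not stop.is_set():
        for frame in sys._current_frames().values():
            samples[frame.f_code.co_name] += 1
        time.sleep(interval)

counts = collections.Counter()

@instrument(counts)
def busy():
    time.sleep(0.005)                       # stand-in for real work

samples = collections.Counter()
stop = threading.Event()
sampler = threading.Thread(target=sample_stacks, args=(samples, stop))
sampler.start()
for _ in range(20):
    busy()
stop.set()
sampler.join()
# Instrumentation gives exact counts (counts["busy"] == 20); sampling
# gives a statistical picture whose totals vary from run to run.
```

Instrumentation pays a cost on every call but is exact; sampling has near-constant overhead but only ever gives an approximation, which is the trade-off the talk explores.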
LinuxCon Europe, 2014. Video: https://www.youtube.com/watch?v=SN7Z0eCn0VY . There are many performance tools nowadays for Linux, but how do they all fit together, and when do we use them? This talk summarizes the three types of performance tools: observability, benchmarking, and tuning, providing a tour of what exists and why. Advanced tools, including those based on tracepoints, kprobes, and uprobes, are also covered: perf_events, ktap, SystemTap, LTTng, and sysdig. You'll gain a good understanding of the performance tools landscape, knowing what to reach for to get the most out of your systems.
Web technologies are evolving at such a frenetic pace that it becomes almost mandatory to learn on your own. A lot of us still depend on other people to do this learning for us, and we tend to use their answers to solve our everyday problems.
Inconsistent implementations, rapidly evolving specs, questionable performance impacts and maintenance implications mean we cannot always depend on others for answers but must involve ourselves actively in the process of developing specifications for new Web technologies. But how do we go about it?
There are some simple rituals we can all do, which can have us be better-informed and also better inform the people and groups who are most directly involved in the development of new Web technologies.
Where'd all my memory go? - SCALE 12x - Joshua Miller
Insufficient memory is a regular problem for systems, and finding what is using up our memory can be tricky. In this session we look at the Linux kernel memory system: where memory is consumed, why, and what to do about it. We'll explore memory metrics through utilities like top, ps, vmstat, pmap, and slabinfo. We'll start with the basics of memory in the Linux kernel - overviewing the relevant fields in top and looking at per-process statistics in ps - but then quickly work up to more complex matters. Topics will include paging, swapping, caches, buffers, the Linux VFS, and shared memory. Throughout the presentation we'll look at sample cases which highlight particular components, the circumstances in which each component might come to use a significant portion of a system's memory, and discuss how and whether tunables should be used to influence how the kernel manages its resources.
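As a minimal taste of the arithmetic the session walks through: the sketch below parses /proc/meminfo-style output and shows how counting page cache and buffers as reclaimable changes the picture. The sample values are made up; on a real system you would read /proc/meminfo itself.

```python
# The "where did it go?" question, reduced to arithmetic: much of a busy
# system's memory is page cache and buffers, reclaimable on demand.
# Field names match /proc/meminfo; the sample values are invented.

SAMPLE_MEMINFO = """\
MemTotal:       16384000 kB
MemFree:          512000 kB
Buffers:          256000 kB
Cached:          9216000 kB
Slab:             768000 kB
"""

def parse_meminfo(text):
    """Return a {field: kilobytes} dict from /proc/meminfo-style text."""
    info = {}
    for line in text.splitlines():
        key, value = line.split(":")
        info[key] = int(value.split()[0])
    return info

info = parse_meminfo(SAMPLE_MEMINFO)
naively_used = info["MemTotal"] - info["MemFree"]   # looks alarming
reclaimable = info["Buffers"] + info["Cached"]      # cache, not "lost" memory
effectively_used = naively_used - reclaimable
```

With these sample numbers, the "naive" figure says ~97% of memory is used, while subtracting reclaimable cache leaves a far less alarming picture - exactly the distinction tools like free and top surface in different columns.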
Kernel Recipes 2019 - RCU in 2019 - Joel Fernandes - Anne Nicolas
RCU has seen many changes in the last two years. Of note are the RCU flavor consolidation and tree RCU's lock-contention improvements. There have also been improvements in static checking, fixes to scheduler deadlocks, and improvements to RCU-based linked lists. This talk starts with an introduction to RCU, then presents the recent improvements and changes in RCU's behavior.
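For readers new to RCU, a toy Python analogy of the read-copy-update idea may help: readers get a stable snapshot without locking, while writers copy, modify, and publish a new version. This is a sketch of the pattern only; it does not model grace periods or the kernel implementation.

```python
import threading

class RcuCell:
    """Toy read-copy-update: readers get a stable snapshot without
    locking; writers copy, modify, then atomically publish the new
    version. A sketch of the pattern only - real RCU also tracks
    grace periods before old versions can be freed."""

    def __init__(self, data):
        self._data = tuple(data)        # published, immutable snapshot
        self._lock = threading.Lock()   # serializes writers only

    def read(self):
        return self._data               # a single reference load; never blocks

    def update(self, fn):
        with self._lock:                # writers never block readers
            new = tuple(fn(list(self._data)))
            self._data = new            # "publish" the new version

cell = RcuCell([1, 2, 3])
snapshot = cell.read()                  # a reader holds version A
cell.update(lambda xs: xs + [4])        # a writer publishes version B
assert snapshot == (1, 2, 3)            # the old reader still sees A
assert cell.read() == (1, 2, 3, 4)      # new reads see B
```

The key property mirrored here is that readers are never blocked by an update in progress; the cost is that old and new versions coexist until the last old reader is done.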
Key recovery attacks against commercial white-box cryptography implementation... - CODE BLUE
White-box cryptography aims to protect cryptographic primitives and keys in software implementations even when the adversary has full control of the execution environment and access to the implementation of the cryptographic algorithm. It combines mathematical transformations with obfuscation techniques, so it is not just obfuscation at the data and code level but actual algorithmic obfuscation.
In a white-box implementation, cryptographic keys are mathematically transformed so that they are never revealed in plain form, even during execution of the cryptographic algorithms. With such security in place, it becomes extremely difficult for attackers to locate, modify, and extract the cryptographic keys. Although all current academic white-box implementations have been practically broken by various attacks, including table decomposition, power analysis, and fault injection, there are no published reports of successful attacks against commercial white-box implementations to date. When I assessed commercial white-box implementations to check whether they were vulnerable to previous attacks, I found that those attacks failed to retrieve a secret key protected by the commercial white-box implementation. Consequently, I modified side-channel attacks available in the academic literature and succeeded in retrieving a secret key protected by a commercial white-box cryptography implementation. To the best of my knowledge, this is the first report of recovering a secret key protected by a commercial white-box implementation. In this talk, I will share how to recover a key protected by a commercial white-box implementation and present security guidance on applying white-box cryptography to services more securely.
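To make the "algorithmic obfuscation" idea concrete, here is a deliberately toy sketch of one white-box building block: a key-dependent lookup table wrapped in input/output encodings, so the key byte itself never appears at runtime. This illustrates the principle only; it is not a secure construction and not any vendor's scheme.

```python
import random

# Toy illustration of the white-box idea on a single XOR "round":
# instead of storing the key k and computing x ^ k at runtime, we
# precompute an encoded lookup table so k never appears in memory.
# Real white-box AES composes many such encoded tables; this is a
# sketch of the principle only, not a secure construction.

rng = random.Random(42)
KEY = 0xA7                                        # secret, used at build time only

enc_in = list(range(256)); rng.shuffle(enc_in)    # random input encoding
enc_out = list(range(256)); rng.shuffle(enc_out)  # random output encoding
dec_in = [0] * 256
dec_out = [0] * 256
for i in range(256):
    dec_in[enc_in[i]] = i
    dec_out[enc_out[i]] = i

# The deployed artifact: a 256-entry table with no key in sight.
TABLE = [enc_out[dec_in[x] ^ KEY] for x in range(256)]

def whitebox_xor(x):
    """Runtime: encode the input, do one table lookup, decode the output."""
    return dec_out[TABLE[enc_in[x]]]

assert all(whitebox_xor(x) == x ^ KEY for x in range(256))
```

The attacks the talk discusses (table decomposition, differential computation analysis) work precisely by peeling such encodings back off the tables.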
Kernel Recipes 2016 - Understanding a Real-Time System (more than just a kernel) - Anne Nicolas
The PREEMPT_RT patch turns Linux into an operating system designed for hard real-time use. But it takes more than just a kernel to make sure you can meet all your requirements. This talk explains all the aspects that must be considered for a system used in a mission-critical project. Creating a real-time environment is difficult, and there is no simple solution to make sure your system can fulfill its needs. One must be vigilant about all aspects of the system to make sure there are no surprises. This talk will discuss most of the "gotchas" that come with putting together a real-time system.
You don't need to be a developer to enjoy this talk. If you are curious to know how your computer is an unpredictable mess, you should definitely come to this talk.
Steven Rostedt - Red Hat
Secrets of building a debuggable runtime: Learn how language implementors sol... - Dev_Events
Bjørn Vårdal, J9VM Software Developer, IBM, @bvaardal
New language runtimes appear all the time, but most of them die young. Failure can be attributed to different reasons, but an important factor is that a lack of support can limit the community's and industry's willingness to adopt the new language. Quicker development and improved serviceability allow emerging languages to overcome this obstacle. By building on the proven technology available in Eclipse OMR, language developers can get more than performance and stability; they also get tools that help them quickly debug their language runtime, allowing them to provide competitive serviceability.
From this presentation, you will learn how to enable Eclipse OMR's mature debugging features in your language runtime, and also how Eclipse OMR can assist with development and debugging.
Performance analysis in a multitenant cloud environment Using Hadoop Cluster ... - Orgad Kimchi
Analyzing the performance of a virtualized multitenant cloud environment can be challenging because of the layers of abstraction. This article shows how to use Oracle Solaris 11 to overcome those limitations.
For more information see:
http://www.oracle.com/technetwork/articles/servers-storage-admin/perf-analysis-multitenant-cloud-2082193.html
Talk for PerconaLive 2016 by Brendan Gregg. Video: https://www.youtube.com/watch?v=CbmEDXq7es0 . "Systems performance provides a different perspective for analysis and tuning, and can help you find performance wins for your databases, applications, and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes six important areas of Linux systems performance in 50 minutes: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events), static tracing (tracepoints), and dynamic tracing (kprobes, uprobes), and much advice about what is and isn't important to learn. This talk is aimed at everyone: DBAs, developers, operations, etc, and in any environment running Linux, bare-metal or the cloud."
Profiling PyTorch for Efficiency & Sustainability - geetachauhan
From my talk at the Data & AI Summit: the latest update on the PyTorch Profiler and how you can use it to optimize for efficiency. The talk also dives into the future and what we need to do together as an industry to move towards sustainable AI.
Talk for YOW! by Brendan Gregg. "Systems performance studies the performance of computing systems, including all physical components and the full software stack to help you find performance wins for your application and kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (ftrace, bcc/BPF, and bpftrace/BPF), advice about what is and isn't important to learn, and case studies to see how it is applied. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
OSDC 2017 - Werner Fischer - Linux performance profiling and monitoring - NETWAYS
Nowadays system administrators have great choices when it comes down to Linux performance profiling and monitoring. The challenge is to pick the appropriate tools and interpret their results correctly.
This talk is a chance to take a tour through various performance profiling and benchmarking tools, focusing on their benefit for every sysadmin.
More than 25 different tools are presented, ranging from well-known tools like strace, iostat, tcpdump, or vmstat to newer features like Linux tracepoints and perf_events. You will also learn which tools can be monitored by Icinga and which monitoring plugins are already available for that.
In the end, the goal is to gather reference points to look at whenever you are faced with performance problems.
Take the chance to close your knowledge gaps and learn how to get the most out of your system.
MeetBSDCA 2014 Performance Analysis for BSD, by Brendan Gregg. A tour of five relevant topics: observability tools, methodologies, benchmarking, profiling, and tracing. Tools summarized include pmcstat and DTrace.
OSMC 2015: Linux Performance Profiling and Monitoring by Werner Fischer - NETWAYS
Nowadays system administrators have great choices when it comes down to Linux performance profiling and monitoring. The challenge is to pick the appropriate tools and interpret their results correctly.
This talk is a chance to take a tour through various performance profiling and benchmarking tools, focusing on their benefit for every sysadmin.
More than 25 different tools are presented, ranging from well-known tools like strace, iostat, tcpdump, or vmstat to newer features like Linux tracepoints and perf_events. You will also learn which tools can be monitored by Icinga and which monitoring plugins are already available for that.
In the end, the goal is to gather reference points to look at whenever you are faced with performance problems.
Take the chance to close your knowledge gaps and learn how to get the most out of your system.
Analyzing OS X Systems Performance with the USE Method - Brendan Gregg
Talk for MacIT 2014. This talk is about systems performance on OS X, and introduces the USE Method to check for common performance bottlenecks and errors. This methodology can be used by beginners and experts alike, and begins by constructing a checklist of the questions we’d like to ask of the system, before reaching for tools to answer them. The focus is resources: CPUs, GPUs, memory capacity, network interfaces, storage devices, controllers, interconnects, as well as some software resources such as mutex locks. These areas are investigated by a wide variety of tools, including vm_stat, iostat, netstat, top, latency, the DTrace scripts in /usr/bin (which were written by Brendan), custom DTrace scripts, Instruments, and more. This is a tour of the tools needed to solve our performance needs, rather than understanding tools just because they exist. This talk will make you aware of many areas of OS X that you can investigate, which will be especially useful for the time when you need to get to the bottom of a performance issue.
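The USE Method lends itself to being written down literally as a checklist: one question per resource and axis. Below is a sketch, with an illustrative (not exhaustive) mapping to some of the OS X tools the talk covers.

```python
# The USE Method as a literal checklist: for every resource, ask about
# Utilization, Saturation, and Errors, and note a tool that can answer.
# The OS X tool mapping below is illustrative, not exhaustive.

RESOURCES = {
    "CPU":     {"utilization": "top -o cpu", "saturation": "latency",            "errors": "-"},
    "Memory":  {"utilization": "vm_stat",    "saturation": "vm_stat (pageouts)", "errors": "-"},
    "Network": {"utilization": "netstat -i", "saturation": "netstat -s (drops)", "errors": "netstat -i (errs)"},
    "Disk":    {"utilization": "iostat",     "saturation": "iostat (queue)",     "errors": "system log"},
}

def use_checklist(resources):
    """Yield one question per (resource, USE axis) pair."""
    for resource, tools in resources.items():
        for axis, tool in tools.items():
            yield f"{resource} {axis}? check: {tool}"

checklist = list(use_checklist(RESOURCES))
```

The point of the method is that the questions come first and the tools second: the checklist stays the same even when the toolset changes.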
Talk by Brendan Gregg for USENIX LISA 2019: Linux Systems Performance. Abstract: "Systems performance is an effective discipline for performance analysis and tuning, and can help you find performance wins for your applications and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas of Linux systems performance: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (Ftrace, bcc/BPF, and bpftrace/BPF), and much advice about what is and isn't important to learn. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
The objective of this article is to describe what to monitor in and around Alfresco in order to have a good understanding of how the applications are performing and to be aware of potential issues.
OSDC 2015: Georg Schönberger | Linux Performance Profiling and Monitoring - NETWAYS
Nowadays system administrators have great choices when it comes down to performance profiling and monitoring. The challenge is to pick the appropriate tools and interpret their results correctly.
This talk is a chance to take a tour through various performance profiling and benchmarking tools, focusing on their benefit for every sysadmin. The topics will range from simple application profiling through sysstat utilities to low-level tracing methods. Besides traditional Linux methods, a short glance will also be taken at MySQL and Linux containers, as they are widespread technologies.
In the end, the goal is to gather reference points to look at whenever you are faced with performance problems. Take the chance to close your knowledge gaps and learn how to get the most out of your system.
EuroBSDcon 2017 System Performance Analysis Methodologies - Brendan Gregg
Keynote by Brendan Gregg. "Traditional performance monitoring makes do with vendor-supplied metrics, often involving interpretation and inference, and with numerous blind spots. Much in the field of systems performance is still living in the past: documentation, procedures, and analysis GUIs built upon the same old metrics. Modern BSD has advanced tracers and PMC tools, providing virtually endless metrics to aid performance analysis. It's time we really used them, but the problem becomes which metrics to use, and how to navigate them quickly to locate the root cause of problems.
There's a new way to approach performance analysis that can guide you through the metrics. Instead of starting with traditional metrics and figuring out their use, you start with the questions you want answered then look for metrics to answer them. Methodologies can provide these questions, as well as a starting point for analysis and guidance for locating the root cause. They also pose questions that the existing metrics may not yet answer, which may be critical in solving the toughest problems. System methodologies include the USE method, workload characterization, drill-down analysis, off-CPU analysis, chain graphs, and more.
This talk will discuss various system performance issues, and the methodologies, tools, and processes used to solve them. Many methodologies will be discussed, from the production proven to the cutting edge, along with recommendations for their implementation on BSD systems. In general, you will learn to think differently about analyzing your systems, and make better use of the modern tools that BSD provides."
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
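FME is configured through its GUI rather than code, but the role of user parameters described above maps onto a familiar programming idea: declared, typed, validated inputs with defaults. The sketch below is a hypothetical analogy, not FME's API; every name in it is invented.

```python
# A hypothetical analogy for user parameters: declare a workflow's
# inputs (type, allowed choices, defaults), then validate user-supplied
# values before the workflow runs. None of this is FME's actual API.

PARAMETERS = {
    "SOURCE_FILE":   {"type": "file_url", "required": True},
    "OUTPUT_FORMAT": {"type": "choice",
                      "choices": ["GeoJSON", "Shapefile"],
                      "default": "GeoJSON"},
}

def resolve(params, user_values):
    """Merge user values with defaults and validate them."""
    resolved = {}
    for name, spec in params.items():
        value = user_values.get(name, spec.get("default"))
        if value is None and spec.get("required"):
            raise ValueError(f"missing required parameter: {name}")
        if spec["type"] == "choice" and value not in spec["choices"]:
            raise ValueError(f"invalid choice for {name}: {value}")
        resolved[name] = value
    return resolved

run_config = resolve(PARAMETERS, {"SOURCE_FILE": "parcels.gml"})
```

The payoff is the same as in the webinar's framing: one workflow definition becomes reusable across many runs because the varying parts are parameters, not edits.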
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: exploring how these areas are likely to mature and develop over the short and long term, and then considering how organisations can position themselves to adapt and thrive.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how does this fancy AI technology get managed from an infrastructure operations view? Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working in practice.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
"Impact of front-end architecture on development cost", Viktor Turskyi
Introduction to Java Profiling
1. INTRODUCTION TO JAVA PROFILING
Jerry Yoakum
Expedia Affiliate Network
2. AGENDA
• When to profile
• Profiler Sampling
• Profiler Instrumentation
• Where to Start
• Examples
• Micro vs Macro Benchmarking
3. WHEN TO PROFILE
• When a performance issue is unclear.
• To proactively check that an application is performing as expected.
• To turbo-charge an application?
4. “We should forget about small efficiencies, say about 97% of the time; premature optimization is the root of all evil.”
– DONALD KNUTH
The point that Knuth is trying to make is that, in the end, you should write “clean, straightforward code that is simple to read and understand.” In this context, “optimizing” is understood to mean employing algorithmic and design changes that complicate program structure but provide better performance. Those kinds of optimizations are indeed best left undone until such time as the profiling of a program shows that there is a large benefit from performing them.
6. PREMATURE OPTIMIZATIONS INCLUDE…
• Manually inlining methods.
• Writing directly in bytecode.
• Allocating public variables and using them as global memory throughout an application.
• And anything else that makes the code unduly difficult to
work with.
7. TOOLS!
• vmstat
• iostat
“Performance analysis is all about visibility—knowing what is going on inside of an application, and in the application’s environment. Visibility is all about tools. And so
performance tuning is all about tools.”
8. OVERLOADED MACHINE
• $ vmstat 1
• ‘r’ column is the run queue length
• the number of all threads that are
running or that could run if there were
an available CPU
• if the run queue length is too high for
any significant period of time, it is an
indication that the machine is
overloaded
9. VMSTAT EXAMPLE FOR A LOW USAGE SYSTEM
$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 867632 38568 165348 0 0 453 20 236 271 3 5 91 1 0
0 0 0 867632 38568 165348 0 0 0 0 161 247 0 1 99 0 0
0 0 0 867632 38568 165348 0 0 0 0 140 240 0 1 99 0 0
0 0 0 867632 38568 165348 0 0 0 0 152 255 0 1 99 0 0
1 0 0 867632 38568 165348 0 0 0 0 147 240 0 1 99 0 0
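The run-queue check can also be scripted. The sketch below (the class name RunQueue and its helper are made up for illustration) parses sample `vmstat 1` output and averages the `r` column, which you would then compare against the machine's CPU count:

```java
import java.util.Arrays;

// Illustrative helper: average the 'r' (run queue) column of `vmstat 1`
// output. A sustained average well above the CPU count suggests overload.
public class RunQueue {
    static double averageRunQueue(String vmstatOutput) {
        return Arrays.stream(vmstatOutput.split("\n"))
                .map(String::trim)
                // keep only data rows, which start with a digit
                .filter(line -> !line.isEmpty() && Character.isDigit(line.charAt(0)))
                .mapToInt(line -> Integer.parseInt(line.split("\\s+")[0]))
                .average()
                .orElse(0.0);
    }

    public static void main(String[] args) {
        String sample = String.join("\n",
            "procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----",
            " r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st",
            "12  0  82596 130020 130816 524228    0    0     0     0 2696 4644 84 12  4  0  0",
            "14  0  83288 130248 129784 522520    0    0     0     0 2644 5128 87 13  0  0  0");
        System.out.println(averageRunQueue(sample)); // 13.0 for this sample
    }
}
```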
10. VMSTAT EXAMPLE FOR A BUSY SYSTEM
$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
12 0 82596 130020 130816 524228 0 0 0 0 2696 4644 84 12 4 0 0
12 0 83288 149288 129784 517476 32 692 32 692 3722 4536 85 14 1 0 0
14 0 83288 130248 129784 522520 0 0 0 0 2644 5128 87 13 0 0 0
0 2 83288 142548 129788 521936 64 0 64 40 1653 2748 53 8 20 20 0
13 0 86720 127480 125384 519344 32 3436 32 3436 4421 4671 76 12 6 5 0
17 1 87336 141932 124548 515632 64 616 64 632 3110 4302 87 13 1 0 0
14. Examine Disk IO with iostat -xm 5 for a busy system
avg-cpu: %user %nice %system %iowait %steal %idle
16.20 0.00 83.50 0.00 0.10 0.20
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda 30.00 2.40 8.20 1.00 0.15 0.01 36.00 0.05 5.78 3.04 2.80
dm-0 0.00 0.00 0.20 3.20 0.00 0.01 8.00 0.05 35.53 4.00 81.36
dm-1 0.00 0.00 38.00 0.00 0.15 0.00 8.00 0.17 4.49 0.38 1.44
Is a device being used more than others?
15. Examine Disk IO with iostat -xm 5 for a busy system
avg-cpu: %user %nice %system %iowait %steal %idle
16.20 0.00 83.50 0.00 0.10 0.20
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda 30.00 2.40 8.20 1.0 0.15 0.01 36.00 0.05 5.78 3.04 2.80
dm-0 0.00 0.00 0.20 63.2 0.00 0.01 8.00 0.05 35.53 4.00 81.36
dm-1 0.00 0.00 38.00 0.0 0.15 0.00 8.00 0.17 4.49 0.38 1.44
Are the w/s high while the wMB/s is low?
16. Examine Disk IO with iostat -xm 5 for a busy system
avg-cpu: %user %nice %system %iowait %steal %idle
16.20 0.00 83.50 0.00 0.10 0.20
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
vda 30.00 2.40 8.20 1.0 0.15 0.01 36.00 0.05 5.78 3.04 2.80
dm-0 0.00 0.00 0.20 63.2 0.00 0.01 8.00 0.05 35.53 4.00 81.36
dm-1 0.00 0.00 38.00 0.0 0.15 0.00 8.00 0.17 4.49 0.38 1.44
Is await high for a device?
17. PROFILER SAMPLING
• Sampling-based profilers are the most common kind of profiler.
• Because of their relatively low overhead, sampling profilers introduce fewer
measurement artifacts.
• Different sampling profilers behave differently; each may be better for a
particular application.
Sampling profilers probe the program counter at regular intervals using operating system interrupts. Sampling profilers are less accurate but facilitate a near normal
execution time.
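The mechanism can be sketched in plain Java. This is an illustrative toy only — real sampling profilers use OS timers, signals, or JVMTI rather than `Thread.getStackTrace()` — but it shows the core idea: periodically capture the top stack frame and count it, so hot methods accumulate the most samples.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy sampling profiler: every few milliseconds, record the topmost stack
// frame of a target thread. Methods the thread spends time in collect the
// most samples; the program itself runs at near-normal speed.
public class ToySampler {
    static Map<String, Integer> sample(Thread target, long durationMs, long intervalMs) {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        long deadline = System.currentTimeMillis() + durationMs;
        while (System.currentTimeMillis() < deadline && target.isAlive()) {
            StackTraceElement[] frames = target.getStackTrace();
            if (frames.length > 0) {
                String top = frames[0].getClassName() + "." + frames[0].getMethodName();
                counts.merge(top, 1, Integer::sum);
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                break; // stop sampling if interrupted
            }
        }
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread busy = new Thread(() -> {
            double d = 0;
            long end = System.currentTimeMillis() + 500;
            while (System.currentTimeMillis() < end) { d += Math.sqrt(d + 1); }
        });
        busy.start();
        Map<String, Integer> counts = sample(busy, 400, 10);
        busy.join();
        System.out.println(counts); // hot frames dominate the sample counts
    }
}
```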
25. SAMPLING SAFEPOINTS
Sampling profilers in Java can only take the sample of
a thread when the thread is at a safepoint—essentially,
whenever it is allocating memory.
26. PROFILER INSTRUMENTATION
• Instrumented profilers yield more information about an application, but
can possibly have a greater effect on the application than a sampling
profiler.
• Instrumented profilers should be set up to instrument small sections of the
code—a few classes or packages. That limits their impact on the
application’s performance.
An instrumented profiler adds additional instructions to the code to gather data about what was executed, when, for how long, etc.
27. INSTRUMENTATION IMPACT
Instrumented code may change the execution profile.
For example, the JVM will inline small methods so that no method invocation is needed when the small-method code is executed. The compiler makes that decision
based on the size of the code; depending on how the code is instrumented, it may no longer be eligible to be inlined. This may cause the instrumented profiler to
overestimate the contribution of certain methods. And inlining is just one example of a decision that the compiler makes based on the layout of the code; in general, the
more the code is instrumented (changed), the more likely it is that its execution profile will change.
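What instrumentation injects can be imitated by hand. In the sketch below (the names `ManualProbe` and `con` are illustrative, not from any profiler's API), a tiny method is timed at entry and exit; the probes themselves take time, which is why instrumenting small, hot methods distorts results the most:

```java
import java.util.HashMap;
import java.util.Map;

// Manual sketch of what bytecode instrumentation injects: a timing probe at
// method entry and exit. For a tiny method, the probe cost can rival the
// method body, so instrumented timings overstate the method's real share.
public class ManualProbe {
    static final Map<String, Long> totalNanos = new HashMap<>();

    static long enter() {
        return System.nanoTime();
    }

    static void exit(String method, long startedAt) {
        totalNanos.merge(method, System.nanoTime() - startedAt, Long::sum);
    }

    static int con(int x) {            // a tiny method wrapped in probes
        long t = enter();
        int r = x + 1;
        exit("con", t);
        return r;
    }

    public static void main(String[] args) {
        int v = 0;
        for (int i = 0; i < 1_000_000; i++) { v = con(v); }
        System.out.println(v + " ; con nanos = " + totalNanos.get("con"));
    }
}
```

Adding the probes also makes `con` bigger, which can push it past the JIT's inlining threshold, compounding the distortion described above.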
28. INSTRUMENTED
main()
prog()
s()
con()
The thing to notice is that there is so much instrumentation that its cost is potentially greater than con() itself, but because that cost is attributed to con(), the method appears to have a greater impact than it actually does.
43. PROFILE THE CPU FIRST
• CPU time is the first thing to examine when looking at performance of an
application.
• The goal in optimizing code is to drive the CPU usage up (for a shorter
period of time), not down.
• Understand why CPU usage is low before diving in and attempting to tune
an application.
44. PROFILE THE CPU FIRST
In the heat of battle, it can be tough to choose your targets. I’m sympathetic to that. You see lots of garbage collections with a big heap, and you want to profile the memory right away! But I’m asking you… no, I’m begging you. For the love of Java. People. Profile the CPU. The CPU. This CPU right here! Profile the CPU first!
45. LIMIT WASTE EXAMPLE
static volatile Long value = 0L;
…
private static void waste() {
    for (Long count = 0L; count < 500_000_000; count++) {
        value += count;
    }
}
46. START LIMITWASTE WITH AGENT ATTACHED
$ java -agentpath:libyjpagent.jnilib LimitWaste
[YourKit Java Profiler 2015 build 15042]
Log file: /Users/jyoakum/.yjp/log/LimitWaste-4096.log
Press enter to continue.
51. CONTINUE PROCESSING OF LIMITWASTE
$ java -agentpath:libyjpagent.jnilib LimitWaste
[YourKit Java Profiler 2015 build 15042]
Log file: /Users/jyoakum/.yjp/log/LimitWaste-4096.log
Press enter to continue.
124999999750000000 after 7827.359 ms
Press enter to finish.
54. LIMIT WASTE EXAMPLE
static volatile Long value = 0L;
…
private static void waste() {
    for (Long count = 0L; count < 500_000_000; count++) {
        value += count;
    }
}
55. LIMIT ALLOCATION WASTE EXAMPLE
static volatile Long value = 0L;
…
private static void waste() {
    for (Long count = 0L; count < 500_000_000; count = Long.valueOf(count + 1)) {
        value = Long.valueOf(value + count);
    }
}
59. YOURKIT - PERF CHART FOR ALLOCATION
60. LIMIT ALLOCATION WASTE EXAMPLE
static volatile Long value = 0L;
…
private static void waste() {
    for (Long count = 0L; count < 500_000_000; count = Long.valueOf(count + 1)) {
        value = Long.valueOf(value + count);
    }
}
61. LIMIT ALLOCATION WASTE EXAMPLE
static volatile Long value = 0L;
…
private static void lessWaste() {
    for (long count = 0; count < 500_000_000; count++) {
        value = Long.valueOf(value + count);
    }
}
62. LIMITWASTE IMPROVED
$ java -agentpath:libyjpagent.jnilib LimitWaste
[YourKit Java Profiler 2015 build 15042]
Log file: /Users/jyoakum/.yjp/log/LimitWaste-4096.log
Press enter to continue.
124999999750000000 after 14833.461 ms
Press enter to continue.
124999999750000000 after 8551.391 ms
Press enter to finish.
63. YOURKIT - LIMITWASTE IMPROVED
64. LIMIT ALLOCATION WASTE EXAMPLE
static volatile Long value = 0L;
…
private static void lessWaste() {
    for (long count = 0; count < 500_000_000; count++) {
        value = Long.valueOf(value + count);
    }
}
65. LIMIT ALLOCATION WASTE EXAMPLE
static volatile Long value = 0L;
…
private static void haste() {
    long fastValue = 0L;
    for (long count = 0; count < 500_000_000; count++) {
        fastValue += count;
    }
    value = fastValue;
}
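The three variants can be run side by side. The sketch below is an assumed reconstruction, with the loop bound reduced from 500,000,000 to 1,000,000 so the comparison finishes quickly — the timings shown on the surrounding slides were measured with the full bound. All three compute the same sum; they differ only in how much Long boxing they perform.

```java
// Side-by-side reconstruction of the slides' three variants, with a smaller
// loop bound so it runs in milliseconds. waste() boxes on every iteration,
// lessWaste() keeps the counter primitive, haste() uses primitives throughout.
public class LimitWasteVariants {
    static final long N = 1_000_000;
    static volatile Long value = 0L;

    static void waste() {              // boxes value and count each iteration
        value = 0L;
        for (Long count = 0L; count < N; count++) { value += count; }
    }

    static void lessWaste() {          // primitive counter; still boxes value
        value = 0L;
        for (long count = 0; count < N; count++) { value = Long.valueOf(value + count); }
    }

    static void haste() {              // primitives throughout; one box at the end
        long fastValue = 0L;
        for (long count = 0; count < N; count++) { fastValue += count; }
        value = fastValue;
    }

    public static void main(String[] args) {
        for (Runnable r : new Runnable[]{LimitWasteVariants::waste,
                                         LimitWasteVariants::lessWaste,
                                         LimitWasteVariants::haste}) {
            long then = System.currentTimeMillis();
            r.run();
            System.out.println(value + " after " + (System.currentTimeMillis() - then) + " ms");
        }
    }
}
```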
66. LIMITWASTE - MAKE HASTE
$ java -agentpath:libyjpagent.jnilib LimitWaste
[YourKit Java Profiler 2015 build 15042]
Log file: /Users/jyoakum/.yjp/log/LimitWaste-4096.log
Press enter to continue.
124999999750000000 after 14833.461 ms
Press enter to continue.
124999999750000000 after 8551.391 ms
Press enter to continue.
124999999750000000 after 266.119 ms
Press enter to finish.
67. YOURKIT - LIMITWASTE - MAKE HASTE
68. YOURKIT - LIMITWASTE - MAKE HASTE
69. THREAD PROFILING
• Thread profiling is concerned with examining the different thread states.
• If threads are blocked most of the time then execution power is reduced.
70. THREAD PROFILING EXAMPLE
ExecutorService execSvc = Executors.newFixedThreadPool(200);
for (int i = 0; i < 1000; i++) {
    execSvc.execute(new SortingThread());
}
execSvc.shutdown();
execSvc.awaitTermination(5, TimeUnit.MINUTES);
71. THREAD PROFILING EXAMPLE
class SortingThread implements Runnable {
    @Override
    public void run() {
        System.out.println("starting...");
        int arraySize = 300_000;
        int[] bigArray = new int[arraySize];
        // populate the array with random numbers
        for (int i = 0; i < arraySize; i++) {
            bigArray[i] = ThreadLocalRandom.current().nextInt(50_000);
        }
        Arrays.sort(bigArray);
        System.out.println("finished!");
    }
}
72. THREAD PROFILING EXAMPLE
$ java -agentpath:libyjpagent.jnilib ThreadExample
[YourKit Java Profiler 2015 build 15042]
Log file: /Users/jyoakum/.yjp/log/ThreadExample-90362.log
Press enter to continue.
starting…
…
finished!
Complete after 9041.103 ms
Press enter to finish.
73. THREAD PROFILING EXAMPLE - YOURKIT
The key thing to notice here is that the percentage of time under run() adds up to only 56%, leaving 43% unaccounted for…
74. THREAD PROFILING EXAMPLE - YOURKIT
75. THREAD PROFILING EXAMPLE - YOURKIT
76. THREAD PROFILING EXAMPLE - JMC
• JMC (Java Mission Control)
• Low overhead - built into the JVM
• Commercial feature that requires license agreements for production use
77. THREAD PROFILING EXAMPLE - JMC
$ java -XX:+UnlockCommercialFeatures
-XX:+FlightRecorder
ThreadExample
Press enter to continue.
starting…
…
finished!
Complete after 4965.916 ms
Press enter to finish.
78. THREAD PROFILING EXAMPLE - JMC
79. THREAD PROFILING EXAMPLE - JMC
80. THREAD PROFILING EXAMPLE - JMC
81. THREAD PROFILING EXAMPLE - JMC
82. THREAD PROFILING EXAMPLE - SMALLER POOL
• Originally used a pool size of 200 threads.
• Using a pool size of 40 threads results in nearly the same run time and
some other benefits.
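For CPU-bound tasks like the sorting example, a pool much larger than the CPU count mostly creates threads that sit blocked waiting for a core. One common starting point — a rule of thumb, not a universal answer — is to size the pool from the hardware. A minimal sketch (the class name `PoolSizing` and the stand-in workload are made up):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sizing a fixed pool from the number of available cores, so CPU-bound tasks
// queue up instead of oversubscribing the CPUs with blocked threads.
public class PoolSizing {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService execSvc = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < 100; i++) {
            execSvc.execute(() -> {
                // stand-in for the CPU-bound SortingThread work
                long sum = 0;
                for (int j = 0; j < 100_000; j++) { sum += j; }
            });
        }
        execSvc.shutdown();
        execSvc.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("done with a pool of " + cores + " threads");
    }
}
```

A smaller pool also allocates fewer thread stacks and task-local buffers, which is consistent with the heap drop described on slide 84.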
83. THREAD PROFILING EXAMPLE - SMALLER POOL
Before, we had multiple threads blocked. Now we have tasks waiting for an available thread.
84. THREAD PROFILING EXAMPLE - SMALLER POOL
Before we used nearly 256 MB of heap. Now we used just over 128 MB of heap.
85. MICROBENCHMARKS
public void doTest() {
    double d;
    long then = System.currentTimeMillis();
    for (int i = 0; i < nLoops; i++) {
        d = fib(15);
    }
    long now = System.currentTimeMillis();
    System.out.println("Elapsed time: " + (now - then));
}

private double fib(int n) {
    if (n < 0) {
        throw new IllegalArgumentException("Must be > 0");
    }
    if (n == 0) { return 0.0d; }
    if (n == 1) { return 1.0d; }
    double d = fib(n - 2) + fib(n - 1);
    if (Double.isInfinite(d)) {
        throw new ArithmeticException("Overflow");
    }
    return d;
}
86. MICROBENCHMARKS MUST USE THEIR RESULTS
A smart compiler will end up executing this code:
long then = System.currentTimeMillis();
long now = System.currentTimeMillis();
System.out.println("Elapsed time: " + (now - then));
Avoid compiler optimizations:
• Read each result.
• Use volatile instance variables.
There is a way around that particular issue: ensure that each result is read, not simply written. In practice, changing the definition of d from a local variable to an instance variable (declared with the volatile keyword) will allow the performance of the method to be measured.
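Put together, one possible shape of the fixed benchmark looks like this — a sketch, not the slide's exact code; the class name `FibBench` and the volatile field name `sink` are invented for illustration:

```java
// Sketch of the fix: each fib result is written to a volatile field, so the
// JIT cannot prove the loop body is dead and eliminate it.
public class FibBench {
    static volatile double sink;       // volatile write defeats dead-code elimination

    static double fib(int n) {
        if (n < 0) throw new IllegalArgumentException("Must be >= 0");
        if (n == 0) return 0.0d;
        if (n == 1) return 1.0d;
        return fib(n - 2) + fib(n - 1);
    }

    public static void main(String[] args) {
        int nLoops = 10_000;
        long then = System.currentTimeMillis();
        for (int i = 0; i < nLoops; i++) {
            sink = fib(15);            // every result is consumed
        }
        long now = System.currentTimeMillis();
        System.out.println("Elapsed time: " + (now - then));
    }
}
```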
87. WARM-UP PERIOD
For microbenchmarks, a warm-up period is
required; otherwise, the microbenchmark
is measuring the performance of
compilation rather than the code it is
attempting to measure.
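A minimal way to apply this idea: run several rounds of the measured code and discard the early ones, so the timed rounds see JIT-compiled code rather than the interpreter and the compiler itself. The class name `WarmUp` and the round counts below are illustrative choices, not prescribed values.

```java
// Warm-up sketch: untimed rounds trigger JIT compilation of work() before
// the timed rounds run, so the measurement reflects compiled code.
public class WarmUp {
    static volatile long sink;         // consume results (see slide 86)

    static long work() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) { sum += i; }
        return sum;
    }

    public static void main(String[] args) {
        int warmupRounds = 5, measuredRounds = 5;
        for (int i = 0; i < warmupRounds; i++) { sink = work(); }   // discarded
        long then = System.nanoTime();
        for (int i = 0; i < measuredRounds; i++) { sink = work(); }
        System.out.println("avg ns/round: " + (System.nanoTime() - then) / measuredRounds);
    }
}
```

Harnesses such as JMH automate this warm-up/measure split, which is usually preferable to hand-rolled timing loops.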
88. MACROBENCHMARKS
No test can give comparable results
to examining an application in production.
The best thing to use to measure performance of an application “is the application itself, in conjunction with any external resources it uses. If the application normally checks the credentials of a user by making LDAP calls, it should be tested in that mode. Stubbing out the LDAP calls may make sense for module-level testing, but the application must be tested in its full configuration.”
89. SUMMARY
• When to profile
• Profiler Sampling
• Profiler Instrumentation
• Where to Start
• Examples
• Micro vs Macro Benchmarking
Yes, it is the same slide as the agenda slide.