Nowadays system administrators have great choices when it comes down to performance profiling and monitoring. The challenge is to pick the appropriate tools and interpret their results correctly.
This talk is a chance to take a tour through various performance profiling and benchmarking tools, focusing on their benefit for every sysadmin. The topics range from simple application profiling through sysstat utilities to low-level tracing methods. Besides traditional Linux methods, a short look at MySQL and Linux containers will also be taken, as they are widely used technologies.
At the end, the goal is to gather reference points to look at whenever you are faced with performance problems. Take the chance to close your knowledge gaps and learn how to get the most out of your system.
OSMC 2015: Linux Performance Profiling and Monitoring by Werner Fischer (NETWAYS)
Nowadays system administrators have great choices when it comes down to Linux performance profiling and monitoring. The challenge is to pick the appropriate tools and interpret their results correctly.
This talk is a chance to take a tour through various performance profiling and benchmarking tools, focusing on their benefit for every sysadmin.
More than 25 different tools are presented, ranging from well-known tools like strace, iostat, tcpdump or vmstat to newer features like Linux tracepoints or perf_events. You will also learn which of these can be monitored by Icinga and which monitoring plugins are already available for that.
At the end, the goal is to gather reference points to look at whenever you are faced with performance problems.
Take the chance to close your knowledge gaps and learn how to get the most out of your system.
OSDC 2017 - Werner Fischer - Linux performance profiling and monitoring (NETWAYS)
Nowadays system administrators have great choices when it comes down to Linux performance profiling and monitoring. The challenge is to pick the appropriate tools and interpret their results correctly.
This talk is a chance to take a tour through various performance profiling and benchmarking tools, focusing on their benefit for every sysadmin.
More than 25 different tools are presented, ranging from well-known tools like strace, iostat, tcpdump or vmstat to newer features like Linux tracepoints or perf_events. You will also learn which of these can be monitored by Icinga and which monitoring plugins are already available for that.
At the end, the goal is to gather reference points to look at whenever you are faced with performance problems.
Take the chance to close your knowledge gaps and learn how to get the most out of your system.
VMware’s Nathan Small, who works as a Staff Engineer at Global Support Services, has put together a great presentation about Advanced Root Cause Analysis. The presentation was designed to give you more insight into how a VMware Technical Support Engineer reviews logs, gathers data and performs in-depth analysis. Nathan hopes to show you the skills support engineers use every day to help determine the root cause of an issue in your environment. With this core knowledge, you will become more self-sufficient within your own environment and be able to diagnose an issue as it occurs rather than after the damage has been done.
This slide will show you how to use SOFA to do performance analysis of CPU/GPU cooperative programs, especially programs running with deep software stacks like TensorFlow, PyTorch, etc.
source code at:
https://github.com/cyliustack/sofa
One of the great challenges of monitoring any large cluster is how much data to collect and how often to collect it. Those responsible for managing the cloud infrastructure want to see everything collected centrally, which places limits on how much and how often. Developers, on the other hand, want to see as much detail as they can, at as high a frequency as reasonable, without impacting overall cloud performance.
To address what seems to be conflicting requirements, we've chosen a hybrid model at HP. Like many others, we have a centralized monitoring system that records a set of key system metrics for all servers at the granularity of 1 minute, but at the same time we do fine-grained local monitoring on each server of hundreds of metrics every second so when there are problems that need more details than are available centrally, one can go to the servers in question to see exactly what was going on at any specific time.
The tool of choice for this fine-grained monitoring is the open source tool collectl, which additionally has an extensible API. It is through this API that we've developed a Swift monitoring capability to not only capture the number of GETs, PUTs, etc. every second, but, using collectl's colmux utility, also display these in a top-like format to see exactly what all the object and/or proxy servers are doing in real time.
We've also developed a second capability that allows one to see what the virtual machines are doing on each compute node in terms of CPU, disk and network traffic. This data can also be displayed in real time with colmux.
This talk will briefly introduce the audience to collectl's capabilities but more importantly show how it's used to augment any existing centralized monitoring infrastructure.
Speakers
Mark Seger
Talk for YOW! by Brendan Gregg. "Systems performance studies the performance of computing systems, including all physical components and the full software stack, to help you find performance wins for your application and kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (ftrace, bcc/BPF, and bpftrace/BPF), advice about what is and isn't important to learn, and case studies to see how it is applied. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
Talk by Brendan Gregg for USENIX LISA 2019: Linux Systems Performance. Abstract: "Systems performance is an effective discipline for performance analysis and tuning, and can help you find performance wins for your applications and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas of Linux systems performance: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (Ftrace, bcc/BPF, and bpftrace/BPF), and much advice about what is and isn't important to learn. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
Talk for PerconaLive 2016 by Brendan Gregg. Video: https://www.youtube.com/watch?v=CbmEDXq7es0 . "Systems performance provides a different perspective for analysis and tuning, and can help you find performance wins for your databases, applications, and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes six important areas of Linux systems performance in 50 minutes: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events), static tracing (tracepoints), and dynamic tracing (kprobes, uprobes), and much advice about what is and isn't important to learn. This talk is aimed at everyone: DBAs, developers, operations, etc, and in any environment running Linux, bare-metal or the cloud."
HKG18-TR14 - Postmortem Debugging with Coresight (Linaro)
Session ID: HKG18-TR14
Session Name: HKG18-TR14 - Postmortem Debugging with Coresight
Speaker: Leo Yan
Track: Training
★ Session Summary ★
In most cases we can easily debug with the kernel's oops dump info, but sometimes we need more information about the program execution flow before the issue happens. We can rely on two tracing methods to reproduce the program execution flow: one is software tracing using the kernel's pstore method; the other relies on Coresight hardware tracing, which also avoids the extra workload introduced by the tracing itself. Coresight provides two mechanisms for postmortem debugging. One is the Coresight CPU debug module, with which we can extract CPU program counter info; this is quite straightforward for debugging CPU lockup issues. The other is Coresight panic kdump, where we connect the kernel kdump mechanism to extract Coresight tracing data, so we can reproduce the last execution flow before a panic (or even a hang, with some tweaking in the kernel). This session will go through these topics and demonstrate the debugging tools on a 96Boards HiKey in a 25-minute session.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-tr14/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-tr14.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-tr14.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Analyzing OS X Systems Performance with the USE Method (Brendan Gregg)
Talk for MacIT 2014. This talk is about systems performance on OS X, and introduces the USE Method to check for common performance bottlenecks and errors. This methodology can be used by beginners and experts alike, and begins by constructing a checklist of the questions we’d like to ask of the system, before reaching for tools to answer them. The focus is resources: CPUs, GPUs, memory capacity, network interfaces, storage devices, controllers, interconnects, as well as some software resources such as mutex locks. These areas are investigated by a wide variety of tools, including vm_stat, iostat, netstat, top, latency, the DTrace scripts in /usr/bin (which were written by Brendan), custom DTrace scripts, Instruments, and more. This is a tour of the tools needed to solve our performance needs, rather than understanding tools just because they exist. This talk will make you aware of many areas of OS X that you can investigate, which will be especially useful for the time when you need to get to the bottom of a performance issue.
How to Understand What Is Happening on the Server? / Alexander Krizhanovsky (NatSys Lab., ... (Ontico)
We start a server (a database, a web server, or something of our own) and do not get the desired RPS. We run top and see that 100% of the CPU is being consumed. What next? Where is the processor time going? Are there knobs we can turn to improve performance? And if the CPU figure is not high, where do we look next?
We will walk through several performance-problem scenarios, look at the available performance analysis tools, and go through a methodology for Linux performance optimization, answering the question of which knobs to turn and how.
A 2015 presentation introducing users to Java profiling. The YourKit Profiler is used for concrete examples. The following topics are covered:
1) When to profile
2) Profiler sampling
3) Profiler instrumentation
4) Where to Start
5) Macro vs micro benchmarking
Troubleshooting Complex Oracle Performance Problems with Tanel Poder
Troubleshooting Complex Oracle Performance Problems hacking session & presentation by Tanel Poder.
This presentation is about a complex performance issue where the initial symptoms pointed somewhere other than the root cause. Only by systematically following the troubleshooting drill-down method do we get to the root cause of the problem. This session aims to help you understand (and reason about) Oracle’s multi-process, multi-layer system behavior, preparing you for independent troubleshooting of such complex performance issues in the future.
Video recordings of this presentation are in my YouTube channel:
1) Hacking Session: https://www.youtube.com/watch?v=INQewGJMdCI
2) Presentation: https://www.youtube.com/watch?v=aaHZ8A8Ygdg
Tanel's blog and training information: https://blog.tanelpoder.com/seminar
Presented at LISA18: https://www.usenix.org/conference/lisa18/presentation/babrou
This is a technical dive into how we used eBPF to solve real-world issues uncovered during an innocent OS upgrade. We'll see how we debugged a 10x CPU increase in Kafka after a Debian upgrade and what lessons we learned. We'll go from high-level effects like increased CPU, to flame graphs showing us where the problem lies, to tracing timers and function calls in the Linux kernel.
The focus is on tools that operational engineers can use to debug performance issues in production. This particular issue happened at Cloudflare on a Kafka cluster doing 100 Gbps of ingress and many multiples of that in egress.
Slide 10: vmstat
- High-level statistics about:
  - virtual memory
  - swap/paging
  - I/O statistics
  - system interrupts and context switches
  - CPU statistics
# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 0 172 371856 137088 3125664 0 0 0 153060 7618 7059 17 9 56 17 0
3 0 172 416596 137096 3125704 0 0 0 163420 8689 7419 11 10 61 17 0
0 0 172 451716 137096 3089916 0 0 0 0 396 1848 3 1 96 0 0
0 0 172 413916 137108 3118796 0 0 0 52 502 2218 9 2 90 0 0
2 0 172 399756 137108 3118860 0 0 284884 0 14830 10941 10 13 66 12 0
1 1 172 364948 137108 3118988 0 0 310792 0 16204 12738 20 13 53 14 0
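As a quick illustration of reading these columns, here is a minimal shell sketch. The sample line is copied from the first data row of the vmstat output above; the column positions assume procps vmstat's default layout:

```shell
# Pick the run-queue (r) and blocked (b) columns out of one vmstat data line.
sample='3 0 172 371856 137088 3125664 0 0 0 153060 7618 7059 17 9 56 17 0'
r=$(echo "$sample" | awk '{print $1}')   # column 1: runnable processes
b=$(echo "$sample" | awk '{print $2}')   # column 2: uninterruptible sleep
echo "runnable=$r blocked=$b"
```

On a live system the sample line would come from something like `vmstat 1 2 | tail -1` (the second report, since the first one summarizes since boot).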
Slide 11: vmstat
- Memory statistics:
  - buff: raw disk blocks like filesystem metadata
  - cache: memory used for data information, pages with actual contents
$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 172 607760 182172 3313684 0 0 159 496 154 222 18 6 76 0 0
0 0 172 607628 182172 3313684 0 0 0 52 387 2008 4 2 95 0 0
0 0 172 607348 182172 3313684 0 0 0 0 397 2034 4 1 95 0 0
0 0 172 606448 182172 3313684 0 0 0 0 378 1896 4 2 94 0 0
$ free
total used free shared buffers cached
Mem: 8056664 7450316 606348 491820 182172 3313684
-/+ buffers/cache: 3954460 4102204
Swap: 1048572 172 1048400
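The `-/+ buffers/cache` row of `free` above can be reproduced by hand. This sketch uses the kB values copied from the `Mem:` row of that sample:

```shell
# Effectively available memory = free + buffers + cached (values in kB,
# taken from the `free` sample above).
free_kb=606348
buffers_kb=182172
cached_kb=3313684
avail_kb=$((free_kb + buffers_kb + cached_kb))
echo "available ~ ${avail_kb} kB"   # matches the "-/+ buffers/cache" free column
```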
Slide 12: vmstat
- Process-related fields:
  - r: the number of runnable processes (running or waiting for run time)
    - if high → indicator for saturation
  - b: the number of processes in uninterruptible sleep
    - mostly waiting for I/O
# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
[...]
0 1 172 404768 137088 3125664 0 0 0 167524 9029 6955 6 6 70 18 0
0 1 172 399956 137088 3125664 0 0 0 138340 8133 6165 7 7 68 19 0
$ ps -eo ppid,pid,user,stat,pcpu,comm,wchan:32 | grep ext4
[...]
7159 7161 root Ds 3.2 fio ext4_file_write
7159 7162 root Ds 3.2 fio ext4_file_write
7159 7164 root Ds 3.2 fio ext4_file_write
Kernel function process
is sleeping on
Processes doing I/O
can be in waiting state
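The same D-state processes can be found without ps by walking /proc directly. A sketch, assuming a Linux /proc (the dstate_procs helper is hypothetical):

```shell
# Hypothetical sketch: list processes in uninterruptible sleep (state D),
# i.e. the processes counted in vmstat's "b" column.
dstate_procs() {
  for p in /proc/[0-9]*; do
    # The state is the first field after the parenthesised command name
    # in /proc/<pid>/stat (stripping up to ")" avoids names with spaces).
    state=$(sed 's/^.*) //' "$p/stat" 2>/dev/null | cut -d' ' -f1)
    if [ "$state" = "D" ]; then
      printf '%s %s %s\n' "${p#/proc/}" \
        "$(cat "$p/comm" 2>/dev/null)" "$(cat "$p/wchan" 2>/dev/null)"
    fi
  done
}

dstate_procs   # usually prints nothing on an idle system
```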
18. 18
pidstat
_ How much memory is PID 8461 using?
_ Major faults require I/O operations, good indicator you need
more RAM!
# pidstat -r -p 8461 1 3
Linux 3.13.0-49-generic (X220) 2015-04-21 _x86_64_ (4 CPU)
10:09:06 UID PID minflt/s majflt/s VSZ RSS %MEM Command
10:09:07 1000 8461 8,00 0,00 2018384 786688 9,76 firefox
10:09:08 1000 8461 11,00 0,00 2018384 786688 9,76 firefox
10:09:09 1000 8461 23,00 0,00 2018448 786892 9,77 firefox
Average: 1000 8461 14,00 0,00 2018405 786756 9,77 firefox
Current used share
of physical memory
Minor and major
page faults
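The counters pidstat samples here are fields 10 (minflt) and 12 (majflt) of /proc/<pid>/stat; pidstat turns their deltas into per-second rates. A hypothetical one-shot reader:

```shell
# Hypothetical sketch: read cumulative minor/major fault counts for a PID.
pid_faults() {
  # After stripping the parenthesised comm, minflt is relative field 8
  # and majflt field 10 (fields 10 and 12 of the full stat line).
  sed 's/^.*) //' "/proc/$1/stat" |
    awk '{printf "minflt=%s majflt=%s\n", $8, $10}'
}

pid_faults $$   # faults of the current shell
```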
19. 19
iostat
_ I/O subsystem statistics
_ CPU or device utilization report
_ Without argument → summary since boot
_ Skip that with -y option
# iostat
Linux 3.13.0-48-generic (X220) 2015-04-15 _x86_64_ (4 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
16,16 0,09 4,79 0,46 0,00 78,50
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 83,80 41,64 531,43 22375057 285581196
20. 20
iostat
_ CPU util report → %iowait
_ Not fully reliable → %iowait is a form of
%idle time with outstanding I/O
# taskset 1 fio --rw=randwrite [...] &
# iostat -y -c 1 3
[…]
avg-cpu: %user %nice %system %iowait %steal %idle
17,32 0,00 6,56 13,65 0,00 62,47
# taskset 1 sh -c "while true; do true; done" &
# iostat -y -c 1 3
avg-cpu: %user %nice %system %iowait %steal %idle
35,59 0,00 7,02 0,00 0,00 57,39
http://www.percona.com/blog/2014/06/03/trust-vmstat-iowait-numbers/
21. 21
iostat
_ Extended device util report → %util
_ man iostat → … for devices serving requests in parallel, such as
RAID arrays and modern SSDs, this number does not reflect
their performance limits.
_ In theory
_ 94,4% util 23032 IOPS
_ 99,6% util 24300 IOPS
24. 24
iostat
_ avgqu-sz Avg. queue length of requests issued
_ (delta[time_in_queue] / interval) / 1000.0
_ time_in_queue Requests waiting for the device, affected by in_flight
_ await Avg. time requests being served
_ delta[read_ticks + write_ticks] / delta[read_IOs +
write_IOs]
_ ticks are also affected by in_flight
_ Therefore, serving more requests while await does
not increase is a good performance indicator
- Monitoring IO Performance using iostat and pt-diskstats
- Block layer statistics
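These counters live in /proc/diskstats (read_ticks is field 7, write_ticks field 11, in_flight field 12, time_in_queue field 14). A hypothetical sketch computing the since-boot average await for one device; iostat does the same on deltas between two samples:

```shell
# Hypothetical helper: cumulative await for a block device.
disk_await() {
  awk -v dev="$1" '$3 == dev {
    ios = $4 + $8            # completed reads + writes
    ticks = $7 + $11         # read_ticks + write_ticks (ms)
    if (ios > 0) printf "%s await=%.2f ms over %d IOs\n", dev, ticks/ios, ios
    else printf "%s: no completed IOs yet\n", dev
  }' /proc/diskstats
}

disk_await sda   # the device name is an assumption; pick one from /proc/diskstats
```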
33. 33
top
_ System summary at beginning
_ Per process metrics afterwards
_ Default sorted by CPU usage
$ top -b -n 1| head -15
top - 15:33:50 up 3 days, 19:02, 3 users, load average: 0.13, 0.51, 0.59
Tasks: 668 total, 1 running, 667 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.5%us, 0.3%sy, 0.1%ni, 98.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 132009356k total, 23457172k used, 108552184k free, 1600120k buffers
Swap: 3904444k total, 0k used, 3904444k free, 12682188k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
29276 root 20 0 6928 3488 668 S 19 0.0 22:55.72 ossec-syscheckd
1193 gschoenb 20 0 17728 1740 936 R 4 0.0 0:00.02 top
11257 root 20 0 22640 2636 1840 S 4 0.0 70:38.88 openvpn
19907 www-data 20 0 197m 61m 52m S 4 0.0 0:06.18 apache2
775 root 20 0 0 0 0 S 2 0.0 8:03.13 md3_raid10
3712 root 39 19 0 0 0 S 2 0.0 22:45.85 kipmi0
12807 root -3 0 0 0 0 S 2 0.0 6:20.30 drbd2_asender
18653 root 20 0 0 0 0 S 2 0.0 12:40.19 drbd1_receiver
1, 5 and 15 min
load average
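The load averages in top's header are simply the first three fields of /proc/loadavg (uptime reads the same file). A minimal sketch:

```shell
# Read the 1, 5 and 15 minute load averages directly from the kernel.
loadavg() {
  awk '{printf "1min=%s 5min=%s 15min=%s\n", $1, $2, $3}' /proc/loadavg
}

loadavg
```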
34. 34
top
_ Memory usage
_ VIRT The total size of virtual memory for the process
_ Also includes e.g. not-yet-mapped heap or swapped-out pages
_ RES How many pages are actually allocated and mapped into
the address space → resident
_ Also includes file-backed memory (like shared libraries, mmap)
_ Can be used concurrently by processes
_ SHR is shared or file-backed memory
_ RES – SHR = anon mem (malloc)
- https://www.linux.com/learn/tutorials/42048-uncover-the-meaning-of-tops-statistics
- http://www.linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html
$ cat /proc/17692/statm
1115764 611908 16932 26 0 848936 0
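The statm fields above are page counts: size, resident, shared, text, lib, data, dt. A hypothetical decoder showing how VIRT, RES and SHR map onto them (statm_decode is not a real tool):

```shell
# Hypothetical sketch: translate /proc/<pid>/statm pages into top's columns.
statm_decode() {
  pagesz=$(getconf PAGESIZE)
  awk -v p="$pagesz" '{printf "VIRT=%d kB RES=%d kB SHR=%d kB anon~=%d kB\n",
      $1 * p / 1024, $2 * p / 1024, $3 * p / 1024, ($2 - $3) * p / 1024}' \
    "/proc/$1/statm"
}

statm_decode self   # decode the current process
```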
35. 35
top
_ Can consume resources on its own
_ Toggle f and select fields, e.g. SWAP
_ -u lets you see the processes of a specific user
_ Toggle k to kill a PID
_ Toggle r to renice a PID
_ But
_ top can miss short-lived processes
_ high %CPU → so what?
_ Keep an eye on the tracing part
37. 37
iotop
_ Simple top-like I/O monitor
_ Which process is causing I/O
_ Filtering a specific PID is possible
# iotop -o -b
Total DISK READ : 0.00 B/s | Total DISK WRITE : 63.94 M/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 63.90 M/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND
19153 be/4 root 0.00 B/s 63.89 M/s 0.00 % 75.44 % fio --rw=randwrite --name=test
--filename=test.fio --size=300M --direct=1 --bs=4k
17715 be/4 gschoenb 0.00 B/s 46.18 K/s 0.00 % 0.00 % firefox [mozStorage #1]
# iotop -o -b
Total DISK READ : 69.02 M/s | Total DISK WRITE : 65.92 K/s
Actual DISK READ: 69.02 M/s | Actual DISK WRITE: 345.12 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND
19176 be/4 root 69.02 M/s 0.00 B/s 0.00 % 88.28 % fio --rw=read --name=test
--filename=test.fio --size=300M --direct=1 --bs=8k
Show writes, reads
and command in
realtime
38. 38
Bandwidth live usage
_ iftop
_ Per interface usage
_ nethogs
_ Per process
NetHogs version 0.8.0
PID USER PROGRAM DEV SENT RECEIVED
17692 gschoenb /usr/lib/firefox/firefox eth0 0.162 0.194 KB/sec
16585 root /usr/bin/ssh eth0 0.000 0.000 KB/sec
16611 gschoenb evolution eth0 0.000 0.000 KB/sec
? root unknown TCP 0.000 0.000 KB/sec
TOTAL 0.162 0.194 KB/sec
42. 42
Profiling
_ Create profile about usage characteristics
_ Count specific samples/events
_ Count objects
_ Next slides focus on system profiling
_ ftrace
_ perf_events and perf
_ Collecting statistics about tracepoints
_ Lines of kernel code with a defined event
43. 43
ftrace
_ Part of the Linux kernel since 2.6.27 (2008)
_ What is going on inside the kernel
_ Common task is to trace events
_ With ftrace configured, only debugfs is
required
# cat /proc/sys/kernel/ftrace_enabled
1
# mount | grep debug
none on /sys/kernel/debug type debugfs (rw)
/sys/kernel/debug/tracing# cat available_tracers
blk mmiotrace function_graph wakeup_rt wakeup function nop
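A read-only status check can be scripted around these files; the helper below is hypothetical and degrades gracefully when debugfs is not mounted or the shell lacks permission (writing to the tracing files still requires root):

```shell
# Hypothetical sketch: report whether ftrace is available on this system.
ftrace_status() {
  if [ -r /proc/sys/kernel/ftrace_enabled ]; then
    echo "ftrace_enabled: $(cat /proc/sys/kernel/ftrace_enabled)"
  else
    echo "ftrace_enabled: not readable"
  fi
  tracers=/sys/kernel/debug/tracing/available_tracers
  if [ -r "$tracers" ]; then
    echo "available_tracers: $(cat "$tracers")"
  else
    echo "available_tracers: debugfs not mounted or no permission"
  fi
}

ftrace_status
```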
45. 45
perf_events and perf
_ Used to be called performance counters for
Linux
_ A lot of updates for kernel 4.1
_ https://lkml.org/lkml/2015/4/14/264
_ CPU performance counters, tracepoints,
kprobes and uprobes
_ Packaged e.g. in linux-tools-common
# which perf
/usr/bin/perf
# dpkg -S /usr/bin/perf
linux-tools-common: /usr/bin/perf
46. 46
perf list
_ perf list
_ Shows supported events
# perf list | wc -l
1779
# perf list | grep Hardware
cpu-cycles OR cycles [Hardware event]
instructions [Hardware event]
cache-references [Hardware event]
cache-misses [Hardware event]
branch-instructions OR branches [Hardware event]
branch-misses [Hardware event]
bus-cycles [Hardware event]
stalled-cycles-frontend OR idle-cycles-frontend [Hardware event]
stalled-cycles-backend OR idle-cycles-backend [Hardware event]
ref-cycles [Hardware event]
L1-dcache-loads [Hardware cache event]
L1-dcache-load-misses [Hardware cache event]
L1-dcache-stores [Hardware cache event]
L1-dcache-store-misses [Hardware cache event]
This also includes
static tracepoints
47. 47
Raw CPU counters
_ Each CPU has its own raw counters
_ They should be documented by the hardware manufacturer
_ https://download.01.org/perfmon/
_ libpfm4 is a nice way to find raw masks
# perf list | grep rNNN
rNNN [Raw hardware event descriptor]
# git clone git://perfmon2.git.sourceforge.net/gitroot/perfmon2/libpfm4
# cd libpfm4
# make
# cd examples/
# ./showevtinfo | grep LLC | grep MISSES
Name : LLC_MISSES
[...]
# ./check_events LLC_MISSES | grep Codes
Codes : 0x53412e
# perf stat -e r53412e sleep 5
Now we collect last
level cache misses
with the raw mask
48. 48
Tracepoints
_ perf also has trace functionalities
_ Filesystem
_ Block layer
_ Syscalls
# perf list | grep -i trace | wc -l
1716
# perf stat -e 'syscalls:sys_enter_mmap' ./helloWorld.out
Hello world!
Performance counter stats for './helloWorld.out':
8 syscalls:sys_enter_mmap
0,000556961 seconds time elapsed
49. 49
perf stat
_ Get a counter summary
# perf stat python numpy-matrix.py -i matrix.in
Performance counter stats for 'python numpy-matrix.py -i matrix.in':
576,104221 task-clock (msec) # 0,930 CPUs utilized
319 context-switches # 0,554 K/sec
4 cpu-migrations # 0,007 K/sec
9.738 page-faults # 0,017 M/sec
1.743.664.199 cycles # 3,027 GHz [82,63%]
831.364.029 stalled-cycles-frontend # 47,68% frontend cycles idle [83,75%]
458.760.523 stalled-cycles-backend # 26,31% backend cycles idle [67,26%]
2.793.953.303 instructions # 1,60 insns per cycle
# 0,30 stalled cycles per insn [84,28%]
573.342.473 branches # 995,206 M/sec [83,78%]
3.586.249 branch-misses # 0,63% of all branches [82,70%]
0,619482128 seconds time elapsed
Easy to compare
performance of
different algorithms
50. 50
perf record
_ Record samples to a file
_ Can be post-processed with perf report
_ -a records on all CPUs
_ -g records call graphs
_ Install debug symbols
# perf record -a -g sleep 5
[ perf record: Woken up 4 times to write data ]
[ perf record: Captured and wrote 2.157 MB perf.data (~94254 samples) ]
Nice way to record
what's currently
running on all CPUs
53. 53
perf-tools
_ By Brendan Gregg
_ https://github.com/brendangregg/perf-tools
_ Mostly quick hacks, read Warnings!
_ Using perf_events and ftrace
_ Good examples what can be done with perf and
ftrace
_ iosnoop Shows I/O access for commands, including latency
_ cachestat Linux page cache hit/miss statistics
_ functrace Count kernel functions matching wildcards
Nice, these are simple
bash scripts!
56. 56
Flamegraph
_ Visualization of how resources are distributed
across the code
Powered by @agentzh, http://agentzh.org/misc/slides/yapc-na-2013-flame-graphs.pdf
59. 59
Linux Container
_ Lightweight "virtual machines" using features
provided by a modern Linux kernel
_ cgroups Aggregate or partition tasks and their children to
hierarchical groups to isolate resources
_ namespaces Wrap a resource in an abstraction so that it
appears to processes that they have their own isolated resource
_ Each container shares the kernel running on
the host
_ Some may refer to it as "native performance"
60. 60
Linux Container
_ cgroups are divided into subsystems, e.g.
_ cpusets
_ blkio
_ memory
Image from Boden Russel, http://de.slideshare.net/BodenRussell/realizing-linux-containerslxc
61. 61
Linux Container
_ A cgroup is created per container and subsystem
# lxc-ls --fancy
NAME STATE IPV4 IPV6 GROUPS AUTOSTART
-----------------------------------------------------
ubuntu1 RUNNING 10.0.3.119 - - NO
# lxc-info -n ubuntu1
Name: ubuntu1
State: RUNNING
PID: 7548
IP: 10.0.3.119
CPU use: 1.80 seconds
BlkIO use: 22.68 MiB
Memory use: 30.85 MiB
KMem use: 0 bytes
Link: vethC8TJUT
TX bytes: 3.33 KiB
RX bytes: 3.49 KiB
Total bytes: 6.82 KiB
62. 62
Linux Container
_ lxc-info takes cgroups into account
_ cgroups providing further info
_ memory.stat
_ memory.failcnt
_ cpuset.cpus
Value Origin
CPU use lxc-cgroup -n ubuntu1 cpuacct.usage
BlkIO use lxc-cgroup -n ubuntu1 blkio.throttle.io_service_bytes
Memory use lxc-cgroup -n ubuntu1 memory.usage_in_bytes
KMem use lxc-cgroup -n ubuntu1 memory.kmem.usage_in_bytes
Link cat /sys/class/net/veth0EP3QM/statistics/*_bytes
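The values lxc-info reports are ordinary files under the cgroup filesystem, so they can be read directly. A hypothetical sketch that handles both the cgroup v1 layout used in the slides and the newer unified (v2) hierarchy:

```shell
# Hypothetical sketch: read current memory usage from the cgroup filesystem.
cgroup_mem() {
  if [ -r /sys/fs/cgroup/memory.current ]; then
    # cgroup v2 unified hierarchy
    echo "memory.current: $(cat /sys/fs/cgroup/memory.current) bytes"
  elif [ -r /sys/fs/cgroup/memory/memory.usage_in_bytes ]; then
    # cgroup v1 memory controller
    echo "memory.usage_in_bytes: $(cat /sys/fs/cgroup/memory/memory.usage_in_bytes) bytes"
  else
    echo "no readable memory cgroup found"
  fi
}

cgroup_mem
```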
63. 63
Linux Container
_ lxc-top monitors container overall usage
_ Traditional tools do not work without lxcfs!
# lxc-top
Container CPU CPU CPU BlkIO Mem
Name Used Sys User Total Used
ubuntu1 1.94 1.11 0.84 32.22 MB 14.71 MB
ubuntu2 1.43 0.88 0.79 10.61 MB 17.88 MB
TOTAL 2 of 2 3.36 1.99 1.63 42.83 MB 32.59 MB
root@host # lxc-cgroup -n ubuntu1 memory.limit_in_bytes
33554432
root@container # free -h
total used free shared buffers cached
Mem: 489M 202M 287M 488K 26M 115M
65. 65
MySQL
_ Percona provides a lot of good tools
_ First step, generate a summary
# pt-mysql-summary
# Percona Toolkit MySQL Summary Report #######################
# Instances ##################################################
Port Data Directory Nice OOM Socket
===== ========================== ==== === ======
/var/lib/mysql 0 0 /var/lib/mysql/mysql.sock
# MySQL Executable ###########################################
Path to executable | /usr/sbin/mysqld
Has symbols | Yes
# Report On Port 3306 ########################################
User | root@localhost
Time | 2015-04-14 07:49:09 (CEST)
Hostname | mysql1
Databases | 15
Datadir | /var/lib/mysql/
[...]
66. 66
MySQL
_ Extended status also prints counters
_ Can be monitored with the pmp-check-mysql-status plugin
_ Slow Query log
_ Queries exceeding a specific runtime
_ OFF by default, runtime and log file must be defined
_ Query Cache is ignored
# mysqladmin ext | wc -l
345
# mysqladmin ext | grep Threads_running
| Threads_running | 3
# mysqladmin ext | grep Innodb_buffer_pool_pages_free
| Innodb_buffer_pool_pages_free | 12298
67. 67
MySQL
_ Easy way to log all queries → long_query_time 0
_ pt-query-digest
_ Process slow log and generate report
# pt-query-digest mysql-slow.log
[...]
# Attribute total min max avg 95% stddev median
# ============ ======= ======= ======= ======= ======= ======= =======
# Exec time 184984s 9s 419s 51s 151s 45s 42s
# Lock time 15s 0 3s 4ms 0 71ms 0
# Rows sent 500.05M 0 2.65M 139.79k 1.69M 491.22k 3.89
# Rows examine 3.23G 0 234.22M 923.81k 2.49M 5.53M 440.37k
# Query size 128.45M 6 2.75M 35.91k 68.96k 136.30k 10.29k
# Profile
# Rank Query ID Response time Calls R/Call V/M Item
# ==== ================== ================ ===== ======== ===== ==========
# 1 0x7A8EB8C13A4A8435 29885.0000 16.2% 305 97.9836 22.08 SELECT
# 2 0xA45C5FB6D066119B 26077.0000 14.1% 369 70.6694 22.49 SELECT
# 3 0x67A347A2812914DF 13737.0000 7.4% 397 34.6020 14.53 SELECT
# 4 0xD7A9797E81785092 11855.0000 6.4% 121 97.9752 22.05 SELECT
68. 68
MySQL – innotop
_ Live analysis of SQL queries
_ Sort by execution time
_ Not only for InnoDB
69. Thanks for your attention!
_ gschoenberger@thomas-krenn.com
_ @devtux_at
77. # btt -i sda.blktrace.0
==================== All Devices ====================
ALL MIN AVG MAX N
--------------- ------------- ------------- ------------- -----------
Q2Q 0.000016944 0.000022114 0.000042534 6
Q2G 0.000000694 0.000001430 0.000005342 7
G2I 0.000000314 0.000000725 0.000002793 7
I2D 0.000000375 0.000000906 0.000003652 7
D2C 0.000992471 0.001018423 0.001048992 5
Q2C 0.000993887 0.001022085 0.001060779 5
[...]
D2C Driver and device time – the average time from when the actual
IO was issued to the driver until it is completed (completion trace)
back to the block IO layer.
Q2C Measures the times for the complete life cycle of IOs during
the run.
78. 78
MySQL pt-query-digest
# Query 1: 0.00 QPS, 0.01x concurrency, ID 0x67A347A2812914DF at byte 61989638
# This item is included in the report because it matches --limit.
# Scores: V/M = 172.18
# Time range: 2012-05-23 00:00:26 to 2015-04-17 00:10:33
# Attribute pct total min max avg 95% stddev median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count 12 11267
# Exec time 29 462888s 3s 2629s 41s 130s 84s 17s
# Lock time 0 531ms 0 599us 47us 93us 28us 38us
# Rows sent 99 82.85G 306 63.71M 7.53M 46.53M 13.84M 915.49k
# Rows examine 38 82.85G 306 63.71M 7.53M 46.53M 13.84M 915.49k
# Query size 0 683.69k 47 79 62.14 72.65 7.79 56.92
# String:
# Databases XXXXX (10159/90%)... 1 more
# Hosts localhost (9524/84%), XXXXX (1743/15%)
# Users XXXXX (9738/86%), XXXXX (1529/13%)
# Query_time distribution
# 100us
# 1ms
# 10ms
# 100ms
# 1s #################
# 10s+ ################################################################
79. 79
MySQL
_ Performance Schema
_ Enabled by default in 5.6; the older profiling commands are
deprecated as of 5.6.7
_ A structured way in SQL to get timing information
_ Runtime and query execution statistics
_ The sys schema (ps_helper) provides a more
comfortable way
> select * from schema_table_statistics
where table_schema='sbtest' limit 1\G
*************************** 1. row ***************************
table_schema: sbtest
table_name: sbtest
rows_fetched: 158764154
[...]