iostat is a tool for monitoring input/output performance on Linux systems. It provides reports on CPU utilization and device utilization, including metrics like read/write operations per second, throughput, and latency for block devices. The tool can display statistics since system startup or for a specified time interval, and supports standard or extended output formats that add metrics such as request queue lengths and service times.
2. About Me
• Ben Mildren
• Team Technical Lead at Pythian
• Over 10 years experience as a production DBA.
• Experience of MySQL, SQL Server, and MongoDB.
Email: mildren@pythian.com
LinkedIn: benmildren
Twitter: @productiondba
Slideshare: www.slideshare.net/benmildren
3. Why Pythian?
• Recognized Leader:
– Global industry-leader in remote database administration services and consulting for Oracle, Oracle Applications, MySQL and Microsoft SQL Server
– Work with over 250 multinational companies such as Forbes.com, Fox Sports, Nordion and Western Union to help manage their complex IT deployments
• Expertise:
– Pythian’s data experts are the elite in their field. We have the highest concentration of Oracle ACEs on staff (10, including 2 ACE Directors) and 2 Microsoft MVPs.
– Pythian holds 7 Specializations under the Oracle Platinum Partner program, including Oracle Exadata, Oracle GoldenGate & Oracle RAC
• Global Reach & Scalability:
– Around-the-clock global remote support for DBA and consulting, systems administration, special projects or emergency response
5. Monitoring IO on Linux
• There are a number of tools available to monitor IO on Linux. This presentation looks at two tools, iostat and pt-diskstats. These tools aim to provide an overview of block device IOPS, throughput, and latency.
• You can dive deeper:
– Tools such as iotop and atop can be used to expose process-level IO performance by gathering data from /proc/[process]/io.
– blktrace can be used with blkparse, either independently or by using btrace. Output can also be analysed using seekwatcher.
– Tools such as bonnie/bonnie++, iometer, iozone, and ORION can be used to benchmark a block device.
6. iostat
• iostat is part of the sysstat utilities, which are maintained by Sebastien Godard.
• sysstat is open source software written in C, available under the GNU General Public License, version 2.
• Other utilities in the sysstat package include mpstat, pidstat, sar, nfsiostat, and cifsiostat.
• sysstat is likely available from your favourite repo, but the latest version can be found here:
http://sebastien.godard.pagesperso-orange.fr/download.html
• If you have an old (or very old) version installed, it is recommended you install the latest stable version.
7. iostat
• At the time of writing the current stable version is 10.0.5.
• Note version 10 removes support for kernels older than 2.6.
• The version you have installed can be found with the -V option.
[ben@lab ~]$ iostat -V
sysstat version 10.0.3
(C) Sebastien Godard (sysstat <at> orange.fr)
8. iostat
• By default iostat produces two reports: the CPU Utilization report and the Device Utilization report.
• The default invocation shows the statistics since system start up.
[ben@lab ~]$ iostat
Linux 3.8.4-102.fc17.x86_64 (lab.mysqlhome) 04/10/2013 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
17.56 2.93 3.62 2.42 0.00 73.47
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 8.51 100.17 44.62 890987 396917
dm-0 4.87 78.76 6.80 700565 60488
dm-1 7.40 20.74 37.45 184505 333128
dm-2 0.14 0.19 0.37 1693 3300
9. iostat
• Inclusion of the CPU and Device Utilization reports can be controlled with the -c and -d options.
• The Network Filesystem report (-n) was deprecated in version 10 and replaced with the nfsiostat and cifsiostat utilities.
[ben@lab ~]$ iostat -c
Linux 3.8.4-102.fc17.x86_64 (lab.mysqlhome) 04/10/2013 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
17.01 2.66 3.69 2.35 0.00 74.29
[ben@lab ~]$ iostat -d
Linux 3.8.4-102.fc17.x86_64 (lab.mysqlhome) 04/10/2013 _x86_64_ (2 CPU)
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 8.28 94.60 45.60 926915 446773
dm-0 4.65 75.00 6.34 734861 62104
dm-1 7.28 19.00 38.75 186129 379644
dm-2 0.15 0.17 0.51 1701 5024
11. iostat
• Whilst reviewing the stats since system start up can be useful, more often you will want to review current activity.
• Current activity can be displayed by specifying an interval measured in seconds.
• Output will continue until interrupted or a specified count has been reached.
[ben@lab ~]$ iostat -dx [interval] [count]
[ben@lab ~]$ iostat -dx 3
The above command will display extended device statistics since system start up in the first report, and the deltas for each 3-second interval in subsequent reports until interrupted.
[ben@lab ~]$ iostat -dx 3 3
The above command will display extended device statistics since system start up in the first report, and the deltas for each 3-second interval in two further reports.
13. iostat
• Since version 7.1.3 the report can be made more readable by including the registered device mapper names using the -N option.
• Using -N can skew the columns on each line, so it can be useful to also specify the -h option to keep the report easily readable.
• Adding -k or -m will display throughput in kB or MB per second respectively.
(If the POSIXLY_CORRECT environment variable is not set, the data read/written is displayed in kB by default.)
• Adding the -t option will include a timestamp with each report.
• Adding the -z option will exclude any inactive devices from the individual reports.
• In version 10.1.3 (currently the development version), adding the -y option suppresses the first report showing statistics since system start up.
15. iostat
• In version 10.0.5, the devices in the Device Utilization report can be grouped using the -g option.
• It is possible to specify multiple groups by supplying the -g option multiple times.
• It's probably easier not to use the -N option, as the grouped devices would then have to be referenced by their registered device mapper names.
• Using the -T option will display the group totals only.
17. iostat
• The Device Utilization report can be produced at a partition level using the -p option.
• Historically partition statistics were restricted when moving from the 2.4 kernel to the 2.6 kernel; the enhanced statistics were made available again from the 2.6.25 kernel.
• The -p option was mutually exclusive with the -x option up until version 8.1.8.
• The partition-focused report can be limited to specific devices but not specific partitions.
• Currently the group options (-g and -T) don't work well with the partition-focused report.
19. pt-diskstats
• pt-diskstats is part of the Percona Toolkit.
• pt-diskstats is open source software written in Perl, and is available under the GNU General Public License, version 2.
• Other utilities in the Percona Toolkit include pt-stalk, pt-table-checksum, pt-table-sync, pt-query-digest, and pt-summary.
• Percona Toolkit can be downloaded from
http://www.percona.com/software/percona-toolkit/
or more simply from the command line:
wget percona.com/get/percona-toolkit.tar.gz
• Tools can also be downloaded individually:
wget percona.com/get/TOOL
e.g. wget percona.com/get/pt-diskstats
20. pt-diskstats
• At the time of writing the current stable version of Percona Toolkit is version 2.1.1.
• Full documentation of pt-diskstats can be found here:
http://www.percona.com/doc/percona-toolkit/2.1/pt-diskstats.html
• The version of pt-diskstats you have installed can be found with the --version option.
[ben@lab ~]$ ./pt-diskstats --version
pt-diskstats 2.2.1
23. pt-diskstats
• Displayed columns can be restricted using a Perl regex with the --columns-regex option.
• Displayed devices can be restricted using a Perl regex with the --devices-regex option.
• Adding the --show-timestamps option will include a timestamp with each report.
• The output can be grouped by sample or by disk using the --group-by option.
• Samples (of /proc/diskstats) can be saved for later analysis using the --save-samples option.
• Contrary to the documentation, --version-check is enabled by default; if you wear a tin foil hat, disable it using --noversion-check.
24. The interlude: A primer in Linux IO
Read http://kernel.dk/blk-mq.pdf for recent developments
25. A primer in Linux IO
• Applications in the user space make requests to the VFS.
• The request is passed to the block layer if the page is not in the
page cache or the request is made using direct IO.
• Requests in the block layer can be split if the request is across
multiple devices, or remapped from a device partition to the
underlying block device.
• Requests are handled by the IO scheduler on a per block device
basis. Dependent on the scheduler, the request could be merged to
the front or back of existing requests (all schedulers), or sorted in
the request queue (cfq & deadline). The anticipatory scheduler was
removed from the kernel in version 2.6.33.
• Statistics are calculated at the block layer.
26. A primer in Linux IO
• To understand what acceptable performance is, there's no substitute for understanding the block device hardware.
• A block device is an abstraction: it could be a 10-disk array with a hardware RAID controller, or a LUN exposed from a SAN; even if it's a single disk, the expected performance could vary greatly depending on its specification.
• Expected IOPS, latency and throughput can be gathered via benchmarks or estimated using calculations:
– http://www.techish.net/hardware/iops-calculator-and-raid-calculators-estimators/
– http://www.wmarow.com/strcalc/
• As a rough estimate you can expect:
75-100 IOPS from a 7200 rpm disk
125-150 IOPS from a 10k rpm disk
175-200 IOPS from a 15k rpm disk
1000s of IOPS from an SSD
27. Block layer disk statistics
• From the 2.6 kernel, statistics are held for all block devices and partitions in /proc/diskstats.
• /proc/diskstats lists each block device's major number, minor number, and name, as well as a statistic set of 11 counters.
• Prior to 2.6.25 the statistic set for partitions was only made up of 4 counters, and the counters weren't consistent with the underlying block device statistics.
• Statistics are also held for individual devices and partitions in sysfs.
• /sys/block/[dev]/stat holds the statistic set for the device.
• /sys/block/[dev]/[partition]/stat holds the statistic set for the device partition.
30. Block layer disk statistics
Field 1 – read_IOs: Total number of reads completed (requests)
Field 2 – read_merges: Total number of reads merged (requests)
Field 3 – read_sectors: Total number of sectors read (sectors)
Field 4 – read_ticks: Total time spent reading (milliseconds)
Field 5 – write_IOs: Total number of writes completed (requests)
Field 6 – write_merges: Total number of writes merged (requests)
Field 7 – write_sectors: Total number of sectors written (sectors)
Field 8 – write_ticks: Total time spent writing (milliseconds)
Field 9 – in_flight: The number of I/Os currently in flight. It does not include I/O requests that are in the queue but not yet issued to the device driver. (requests)
Field 10 – io_ticks: The time during which the device has had I/O requests queued. (milliseconds)
Field 11 – time_in_queue: The number of I/Os in progress (field 9) times the number of milliseconds spent doing I/O since the last update of this field. (milliseconds)
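The field list above maps directly onto the columns of a /proc/diskstats line. A minimal parser sketch in Python (the sample line and its values are hypothetical, chosen to echo the /dev/sda1 example in the speaker notes):

```python
# Parse one /proc/diskstats line into named counters.
# Layout (2.6.25+): major, minor, device name, then the 11 statistic fields.
FIELDS = ["read_IOs", "read_merges", "read_sectors", "read_ticks",
          "write_IOs", "write_merges", "write_sectors", "write_ticks",
          "in_flight", "io_ticks", "time_in_queue"]

def parse_diskstats_line(line):
    """Return (device_name, {field: value}) for one diskstats line.
    Slicing parts[3:14] ignores any extra fields newer kernels append."""
    parts = line.split()
    name = parts[2]
    stats = dict(zip(FIELDS, map(int, parts[3:14])))
    return name, stats

# Hypothetical sample line in the /proc/diskstats format:
sample = "8 0 sda 426 243 3386 210 958 620 12624 715 0 640 925"
name, stats = parse_diskstats_line(sample)
print(name, stats["read_IOs"], stats["io_ticks"])  # sda 426 640
```

In practice you would read the real file with `open("/proc/diskstats")` and parse each line the same way.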
41. iostat - extended statistics
• svctm (milliseconds)
(delta[IO_ticks(f10)] * HZ / interval) /
((delta[read_IOs(f1) + write_IOs(f5)] * HZ) / interval)
(i.e. delta[IO_ticks] / delta[IOs]; or 0.0 if tput = 0)
* HZ = ticks per second (1000 on most systems).
** This field will be removed in a future sysstat version.
• %util (percent)
((delta[IO_ticks(f10)] / interval) / 10) / devices
* devices = 1 or the number of devices in the group (-g option).
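These per-interval deltas can be sketched in Python. This is a simplified model, not sysstat's implementation: the helper name and sample values are invented, and sysstat's HZ/jiffy bookkeeping is glossed over by working directly in seconds and milliseconds:

```python
# Sketch: derive await, svctm and %util from two hypothetical snapshots of
# the /proc/diskstats counters, taken interval_s seconds apart.

def extended_stats(prev, curr, interval_s):
    """prev/curr: dicts with read_IOs, write_IOs, read_ticks, write_ticks,
    io_ticks (ms). Returns (await_ms, svctm_ms, util_pct)."""
    d = {k: curr[k] - prev[k] for k in prev}
    ios = d["read_IOs"] + d["write_IOs"]
    # await: total queue + service time per completed IO
    await_ms = (d["read_ticks"] + d["write_ticks"]) / ios if ios else 0.0
    # svctm: time the device was busy per completed IO
    svctm_ms = d["io_ticks"] / ios if ios else 0.0
    # %util: share of the interval during which the device had IO in flight
    util_pct = d["io_ticks"] * 100.0 / (interval_s * 1000.0)
    return await_ms, svctm_ms, util_pct

prev = dict(read_IOs=1000, write_IOs=500, read_ticks=4000,
            write_ticks=2000, io_ticks=3000)
curr = dict(read_IOs=1200, write_IOs=600, read_ticks=4900,
            write_ticks=2300, io_ticks=3600)
print(extended_stats(prev, curr, 1.0))  # (4.0, 2.0, 60.0)
```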
56. RAID Controller CLIs
• Adaptec
arcconf ?
• Dell
omreport ?
• HP Smart Array
hpacucli
Type "help" for a list of supported commands.
Type "exit" to close the console.
• LSI
MegaCli ?
• Oracle
raidconfig ?
57. RAID Controller CLIs
• In this case it's a Dell server and controller
omreport storage controller
Controller PERC 6/i Integrated (Embedded)
Controllers
ID : 0
Status : Ok
Name : PERC 6/i Integrated
...
State : Ready
Firmware Version : 6.2.00013
...
Driver Version : 00.00.04.17RH1
...
Number of Connectors : 2
...
Cache Memory Size : 256 MB
...
58. RAID Controller CLIs
• /dev/sdb is RAID5 SAS HDDs
[root@316402db6 srvadmin]# omreport storage vdisk
List of Virtual Disks in the System
...
ID : 1
Status : Ok
...
Layout : RAID5
Size : 836.63 GB (898319253504 bytes)
Device Name : /dev/sdb
Bus Protocol : SAS
Media : HDD
Read Policy : No Read Ahead
Write Policy : Write Back
...
Stripe Element Size : 64 KB
Disk Cache Policy : Disabled
59. RAID Controller CLIs
• /dev/sdb is RAID5 SAS HDDs
[root@316402db6 srvadmin]# omreport storage pdisk controller=0 vdisk=1
List of Physical Disks on Controller PERC 6/i Integrated (Embedded)
Controller PERC 6/i Integrated (Embedded)
ID : 0:0:2
...
State : Online
Failure Predicted : No
...
Bus Protocol : SAS
Media : HDD
...
Capacity : 136.13 GB (146163105792 bytes)
...
Vendor ID : DELL
Product ID : ST3146356SS
...
61. Calculating IOPS
• http://www.techrepublic.com/blog/datacenter/calculate-iops-in-a-storage-array/2182
"To calculate the IOPS range, use this formula: Average IOPS: Divide 1 by the sum of the average latency in ms and the average seek time in ms (1 / (average latency in ms + average seek time in ms))."
• Using gathered data:
1 / (2ms + ((3.4ms + 3.9ms)/2))
1 / (0.002 + ((0.0034 + 0.0039)/2)) = 177 IOPS (single disk)
• This is in line with the earlier rough estimate of 175-200 IOPS from a 15k rpm disk!
Average read/write seek times are the only measures that should differ per model; rotational latency should be constant for a given rotation speed, so the estimates should be reasonable.
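As a quick check of the arithmetic (figures are the gathered values from the slide):

```python
# Average IOPS ≈ 1 / (avg rotational latency + avg seek time), all in seconds.
rotational_latency_s = 0.002            # 15k rpm -> ~2 ms average rotational latency
avg_seek_s = (0.0034 + 0.0039) / 2      # mean of read/write seek times
iops = 1 / (rotational_latency_s + avg_seek_s)
print(round(iops))  # 177
```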
62. Calculating multi disk array IOPS
• http://www.techrepublic.com/blog/datacenter/calculate-iops-in-a-storage-array/2182
"(Total Workload IOPS * Percentage of workload that is read operations) + (Total Workload IOPS * Percentage of workload that is write operations * RAID IO Penalty)"
• Using gathered data:
((4 * 177) * 0.9) + (((4 * 177) * 0.1) / 4) = 654.9 IOPS
• This doesn't account for the controller cache, so it should be a worst case limit.
Picture credit also http://www.techrepublic.com/blog/datacenter/calculate-iops-in-a-storage-array/2182
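The worked example can be checked in a few lines (the penalty of 4 is the RAID5 write penalty used on the slide):

```python
# Usable IOPS for a 4-disk RAID5 array with a 90% read / 10% write workload.
# Reads pass straight through; each logical write costs raid_penalty disk IOs.
disks, disk_iops = 4, 177
read_pct, write_pct, raid_penalty = 0.9, 0.1, 4
raw = disks * disk_iops
usable = raw * read_pct + (raw * write_pct) / raid_penalty
print(round(usable, 1))  # 654.9
```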
63. Calculating throughput
• Formula to calculate maximum throughput:
MB/s = IOPS * KB per IO / 1024
• Using gathered data for IOPS, and estimates for KB per IO:
654.9 * 8 / 1024 = 5.1 MB/s
654.9 * 16 / 1024 = 10.2 MB/s
654.9 * 64 / 1024 = 40.9 MB/s
654.9 * 128 / 1024 = 81.9 MB/s
• Where KB per IO falls on this scale depends on how random vs sequential the IO is.
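The table of estimates works out as below (a quick sketch using the slide's figures; 654.9 IOPS is the array limit calculated on the previous slide):

```python
# Maximum throughput at a given IO size: MB/s = IOPS * KB_per_IO / 1024
iops = 654.9
throughput = {kb: round(iops * kb / 1024, 1) for kb in (8, 16, 64, 128)}
print(throughput)
```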
64. An example using iostat
[ben.mildren@316403db7 ~]$ iostat -dx 5
Linux 2.6.18-194.17.1.el5 (xxxx) 04/25/2013
....
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 4.60 0.00 16.80 0.00 85.60 10.19 0.06 3.29 0.00 3.29 0.06 0.10
sdb 7607.00 5.20 253.60 23.40 31467.20 114.40 228.03 0.86 3.09 2.93 4.86 2.68 74.10
dm-0 0.00 0.00 0.00 0.40 0.00 1.60 8.00 0.00 4.00 0.00 4.00 2.00 0.08
dm-1 0.00 0.00 0.00 28.20 0.00 112.80 8.00 0.11 4.07 0.00 4.07 0.04 0.12
dm-2 0.00 0.00 374.80 0.00 31441.60 0.00 167.78 0.79 2.11 2.11 0.00 1.98 74.18
dm-3 0.00 0.00 7860.40 0.40 31441.60 1.60 8.00 23.21 2.95 2.95 4.00 0.09 74.16
dm-4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
....
Observations:
• ~275 IOPS, well below the estimated limit of ~650 IOPS.
• Average queue size (avgqu-sz) is low.
• At ~30 MB/s (rkB/s + wkB/s), throughput will not be saturating the controller; rearranging the throughput formula we can see the IOs average about 114 KB each, which leans towards a more sequential workload.
• Latency (read/write average wait time) is in line with drive manufacturer expectations.
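The ~114 KB-per-IO observation comes from rearranging the throughput formula against the sdb row (figures copied from the output above):

```python
# KB per IO = (rkB/s + wkB/s) / (r/s + w/s), using the sdb row.
r_s, w_s = 253.60, 23.40
rkb_s, wkb_s = 31467.20, 114.40
kb_per_io = (rkb_s + wkb_s) / (r_s + w_s)
print(round(kb_per_io))  # 114
```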
65. An example using iostat
[ben.mildren@316403db7 ~]$ iostat -dx 5
Linux 2.6.18-194.17.1.el5 (xxxx) 04/25/2013
....
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 4.60 0.00 16.80 0.00 85.60 10.19 0.06 3.29 0.00 3.29 0.06 0.10
sdb 7607.00 5.20 253.60 23.40 31467.20 114.40 228.03 0.86 3.09 2.93 4.86 2.68 74.10
dm-0 0.00 0.00 0.00 0.40 0.00 1.60 8.00 0.00 4.00 0.00 4.00 2.00 0.08
dm-1 0.00 0.00 0.00 28.20 0.00 112.80 8.00 0.11 4.07 0.00 4.07 0.04 0.12
dm-2 0.00 0.00 374.80 0.00 31441.60 0.00 167.78 0.79 2.11 2.11 0.00 1.98 74.18
dm-3 0.00 0.00 7860.40 0.40 31441.60 1.60 8.00 23.21 2.95 2.95 4.00 0.09 74.16
dm-4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
....
Observations:
• Average request size (avgrq-sz) for sdb is 228.03 sectors, i.e. ~114 KB (matching the throughput calc), so the load is probably not being generated by MySQL.
• High number of merges; not a bad thing, but you might want to investigate why.
(In this case they were caused by an LVM snapshot being read during a backup.)
66. Thank you – Q&A
To contact us
sales@pythian.com
1-877-PYTHIAN
To follow us
http://www.pythian.com/blog
http://www.facebook.com/pages/The-Pythian-Group/163902527671
@pythian
http://www.linkedin.com/company/pythian
Editor's Notes
Calls are made to the VFS, which provides abstraction from the file systems.
The page cache is managed using familiar concepts such as LRU and dirty page percent. Since 2.6.32 pages are flushed from the cache using bdi_flush. The page cache can be bypassed with direct IO.
In 2.4 the buffers were king. In 2.6 we have buffers, buffer heads, and bios.
The "generic block layer" maps the request to the underlying device, splitting the request if required.
IO schedulers manage the request queue, where possible merging requests to the front and back of existing requests, and sorting requests.
noop – the most basic scheduler; merges but doesn't sort. Good for SSDs.
deadline – maintains 3 queues: a read FIFO, a write FIFO, and a time-based queue. Tries to promote read response time.
cfq – complete fair queuing; per-process queues served in round robin. The default scheduler.
as – anticipatory; introduces delay. Removed from the kernel in 2.6.33.
Field 1: The number of reads completed. This is the number of physical reads done by the underlying disk, not the number of reads that applications made from the block device. This means that 426 actual reads have completed successfully to the disk on which /dev/sda1 resides. Reads are not counted until they complete.
Field 2: The number of reads merged because they were adjacent. In the sample, 243 reads were merged. This means that /dev/sda1 actually received 669 logical reads, but sent only 426 physical reads to the underlying physical device.
Field 3: The number of sectors read successfully. The 426 physical reads to the disk read 3386 sectors. Sectors are 512 bytes, so a total of about 1.65MB have been read from /dev/sda1.
Field 4: The number of milliseconds spent reading. This counts only reads that have completed, not reads that are in progress. It counts the time spent from when requests are placed on the queue until they complete, not the time that the underlying disk spends servicing the requests. That is, it measures the total response time seen by applications, not disk response times.
Fields 5-8: Ditto for fields 1-4, but for writes.
Field 9: The number of I/Os currently in progress, that is, they've been scheduled by the queue scheduler and issued to the disk (submitted to the underlying disk's queue), but not yet completed. There are bugs in some kernels that cause this number, and thus fields 10 and 11, to be wrong sometimes.
Field 10: The total number of milliseconds spent doing I/Os. This is not the total response time seen by the applications; it is the total amount of time during which at least one I/O was in progress. If one I/O is issued at time 100, another comes in at 101, and both of them complete at 102, then this field increments by 2, not 3.
Field 11: This field counts the total response time of all I/Os. In contrast to field 10, it counts double when two I/Os overlap. In our previous example, this field would increment by 3, not 2.
common.h
...
/*
 * Macros used to display statistics values.
 *
 * NB: Define SP_VALUE() to normalize to %;
 * HZ is 1024 on IA64 and % should be normalized to 100.
 */
#define S_VALUE(m,n,p) (((double) ((n) - (m))) / (p) * HZ)
#define SP_VALUE(m,n,p) (((double) ((n) - (m))) / (p) * 100)
...
/* Number of ticks per second */
#define HZ hz
extern unsigned int hz;