The document discusses the booting process of a computer. It involves the following key steps:
1. When the computer is turned on, the BIOS is loaded into RAM from the ROM chip and performs POST to check hardware.
2. POST checks for hardware errors and address conflicts.
3. The boot record is loaded into memory and contains instructions for loading the operating system.
4. The operating system is then loaded into RAM from storage, completing the booting process.
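The four steps above can be sketched as a simple ordered pipeline. This is a toy model for illustration only; the stage names and the single pass/fail check are assumptions, not a real firmware interface.

```python
# Toy model of the PC boot sequence described above.
# Stage names and the POST pass/fail check are illustrative only.

def boot(hardware_ok=True):
    """Run the boot stages in order; return the log of completed stages."""
    log = []

    # 1. BIOS is copied from ROM into RAM and takes control.
    log.append("BIOS loaded from ROM")

    # 2. POST checks hardware; a failure halts the boot.
    if not hardware_ok:
        log.append("POST failed")
        return log
    log.append("POST passed")

    # 3. The boot record is read into memory.
    log.append("boot record loaded")

    # 4. The boot record's instructions load the OS from storage.
    log.append("operating system loaded")
    return log

print(boot())       # all four stages run
print(boot(False))  # boot halts after the POST check
```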
This document discusses the initial configuration of routers and switches. It covers what routers do, their booting process and memory components, interface types and cabling, connecting to routers, and basic router configuration practices like command modes, securing access, setting the hostname and banner, adding IP addresses to interfaces, and testing connections.
This study guide is intended to provide those pursuing the CCNA certification with a framework of what concepts need to be studied. This is not a comprehensive document containing all the secrets of the CCNA, nor is it a “braindump” of questions and answers.
I sincerely hope that this document provides some assistance and clarity in your studies.
The document discusses Cisco router components, boot sequence, configuration register, backing up and restoring the IOS and configuration, Cisco Discovery Protocol, Telnet, resolving hostnames, and troubleshooting tools. It provides details on the router boot process, configuration register settings, using CDP to view neighbor information, setting Telnet passwords to access routers remotely, building a host table or using DNS for name resolution, and basic network connectivity tools like ping and trace.
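The configuration register mentioned above is a 16-bit value whose low bits select boot behavior. A minimal decoder for the commonly documented fields might look like this; it is an illustrative helper, not a Cisco tool, and covers only the boot field and the "ignore NVRAM" bit used in password recovery.

```python
# Decode the boot-related bits of a Cisco configuration register.
# Field meanings follow commonly documented conventions; this sketch
# covers only the boot field (bits 0-3) and bit 6.

def decode_confreg(value):
    boot_field = value & 0x000F          # lowest 4 bits select boot behavior
    ignore_nvram = bool(value & 0x0040)  # bit 6: skip startup-config (password recovery)
    if boot_field == 0x0:
        boot = "ROM monitor (ROMmon)"
    elif boot_field == 0x1:
        boot = "boot image from ROM"
    else:
        boot = "load IOS normally (flash / boot system commands)"
    return {"boot": boot, "ignore_nvram": ignore_nvram}

print(decode_confreg(0x2102))  # default register: normal boot, NVRAM used
print(decode_confreg(0x2142))  # password recovery: startup-config ignored
```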
The document discusses optimizing Linux performance at various levels. It covers recompiling the kernel to optimize settings and monitoring CPU, memory, I/O, and network usage. Specific tools like vmstat, iostat, top, and htop are recommended. Optimization includes adjusting settings like the I/O scheduler, disk cache policy, and TCP parameters. At boot, the kernel is loaded and decompressed, initializes essential subsystems, and then probes hardware and loads additional modules.
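The memory figures that tools like vmstat and top report come from /proc/meminfo. The sketch below parses a meminfo-style snapshot; the text is a hardcoded sample (the numbers are made up) so it runs on any platform.

```python
# Parse a /proc/meminfo-style snapshot to estimate memory utilization,
# the same numbers tools like vmstat and top report. The sample values
# are invented; on Linux you could read the real file instead.

SAMPLE = """\
MemTotal:       16384000 kB
MemFree:         2048000 kB
MemAvailable:    8192000 kB
Buffers:          512000 kB
Cached:          4096000 kB
"""

def meminfo(text):
    fields = {}
    for line in text.splitlines():
        key, rest = line.split(":")
        fields[key] = int(rest.split()[0])  # value in kB
    return fields

m = meminfo(SAMPLE)
# MemAvailable already accounts for reclaimable page cache.
used_pct = 100 * (m["MemTotal"] - m["MemAvailable"]) / m["MemTotal"]
print(f"memory in use (excluding reclaimable cache): {used_pct:.0f}%")
```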
This chapter discusses motherboards, their components, and how to install and troubleshoot them. It covers the different types of motherboards and their components, including the CPU. It provides step-by-step instructions for building a computer, installing the motherboard, and setting configuration jumpers and switches. Troubleshooting tips include checking for errors during POST and substituting known good components.
This document provides an index and overview of topics to be covered in a course on embedded systems. The index lists subjects like computer system components, microprocessors, memory, I/O, and microcontrollers. Diagrams are included to illustrate the basic structure of a computer system and a microprocessor. The document promotes sharing code in a GitHub repository and discusses differences between microcontrollers, microprocessors, and system on a chip (SOC) devices.
The document discusses processor specifications, including speed and the width of internal registers, data buses, and address buses. It describes how early processors lacked cache memory, which was introduced once clock speeds reached 16 MHz. It details the evolution of L1 and L2 cache, which moved from external chips onto the processor die for better speed. Tables list specifications for Intel and other processor families, noting clock speeds and cache sizes.
Processors come in various types and technologies. They have characteristics like speed, physical dimensions, voltage management, and internal and external architectures. Processors perform functions like running software, providing reliability, managing energy consumption and cooling, and supporting motherboards. Modern processors often have multiple independent processor cores, known as multicore processors, and can perform tasks simultaneously through technologies like hyper-threading. They are grouped into families whose members share an architecture but differ in clock speed. Common recent processor examples include Intel i3, i5, and i7, as well as Intel Xeon chips. i5 and i7 processors offer advantages over i3, such as more cache and more threads, through technologies like Turbo Boost and Hyper-Threading.
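The idea of performing tasks simultaneously on a multicore (or hyper-threaded) CPU can be shown with Python's standard library alone. This is a generic sketch: the core count varies by machine, and the squared-number "work" stands in for any independent task.

```python
# Minimal illustration of dispatching independent tasks concurrently.
# os.cpu_count() reports logical CPUs, which on a hyper-threaded chip
# is typically twice the physical core count.

import os
from concurrent.futures import ThreadPoolExecutor

print("logical CPUs reported:", os.cpu_count())

def work(n):
    return n * n  # placeholder for any independent task

# Fan the tasks out across a pool of workers and collect results in order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(8)))

print(results)
```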
A motherboard houses the CPU and allows components to communicate. Its primary roles are to select a CPU and motherboard that meet needs and provide room for expansion. The motherboard contains key components like the CPU, chipset, and BIOS that manage basic functions. Configuration is stored in CMOS memory and involves settings like voltages, ports and passwords to control access. Proper cooling of high-powered CPUs is important for performance and longevity.
This document summarizes an Italian-language presentation on monitoring and tuning I/O performance on Linux. It discusses key topics such as I/O monitoring tools (iostat, iotop, and blktrace), tuning techniques like dirty page writeback and filesystem options, and ensuring the reliability of data writes through proper synchronization and error handling. The presentation provides an overview of the Linux I/O subsystem and dives into specific tools and parameters for optimizing I/O.
The document discusses the main components of a computer's motherboard. It describes how the motherboard contains the processor, memory chips, and input/output chips. It also discusses the expansion slots that connect peripherals to the motherboard and how the motherboard facilitates data transfer between components. Booting processes are initialized by the motherboard reading instructions from ROM to test hardware and load the operating system.
The document discusses the main components of a computer's motherboard. It describes how the motherboard contains the processor, memory chips, and input/output chips. It also discusses the expansion slots that connect peripherals to the motherboard and how the motherboard facilitates data transfer between components. Booting processes are initialized by the motherboard reading configuration settings from memory chips.
This document provides information about computer hardware and software. It lists the main hardware components of a computer including the monitor, printer, speakers, CPU case, keyboard, mouse, and components inside the CPU case like the motherboard, CPU, RAM, hard disk, and sound card. It also discusses different types of hard disks and provides specifications for SSDs and HDDs. The document then covers system software like operating systems, listing Windows, MacOS, Linux, and mobile operating systems. It defines utility software and provides examples. Finally, it introduces application software.
PC Hardware & Booting
This document discusses the boot process of a PC from power-on to loading the operating system. It covers:
1) The different address types in a PC like memory addresses, I/O addresses, and memory-mapped I/O addresses.
2) The evolution of x86 CPUs from 8088 to 80386 to modern 64-bit processors.
3) The boot process from power-on reset to BIOS initialization, loading the master boot record, running the bootloader, and finally loading the operating system.
4) Multiprocessor booting where one CPU is designated as the boot processor that triggers other application processors.
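The 8088-era addressing mentioned in points 1 and 2 can be made concrete: in real mode, a 16-bit segment and 16-bit offset combine into a 20-bit physical address. The two example addresses below (the BIOS entry point and the conventional boot-sector load address) are well-known x86 conventions.

```python
# Real-mode x86 (8088-era) address translation: physical address =
# segment * 16 + offset, giving a 20-bit (1 MB) address space.

def real_mode_address(segment, offset):
    return (segment << 4) + offset

# The classic reset/BIOS entry point F000:FFF0 maps to 0xFFFF0,
# just under the 1 MB boundary.
print(hex(real_mode_address(0xF000, 0xFFF0)))

# The boot sector (master boot record) is conventionally loaded at 0000:7C00.
print(hex(real_mode_address(0x0000, 0x7C00)))
```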
The document introduces Samsung's eMMC memory technology. It summarizes the key features and enhancements of eMMC versions 4.4, 4.41, and 4.5, including improved performance, security, and reliability. Some notable additions in eMMC 4.5 are higher data transfer rates up to 200MHz SDR mode, packed commands to boost I/O performance, cache functionality to reduce write latency, and sanitize feature to securely purge all unused data at once. Sample availability timelines for eMMC 4.5 chips with and without 200MHz support are also provided.
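The 200 MHz SDR figure translates directly into an interface bandwidth ceiling. The back-of-the-envelope formula below is a generic bus calculation, not from the Samsung material; real devices are limited by NAND speeds, not just the interface clock.

```python
# Rough eMMC bus throughput: one transfer per clock edge across the
# bus width, doubled for DDR modes. Illustrative ceiling only.

def bus_throughput_mb_s(clock_mhz, bus_bits=8, ddr=False):
    per_edge = clock_mhz * bus_bits / 8   # MB/s with one transfer per clock
    return per_edge * (2 if ddr else 1)

# eMMC 4.5 HS200: 200 MHz SDR on an 8-bit bus -> 200 MB/s.
print(bus_throughput_mb_s(200))
# The same clock in a DDR mode would double that ceiling.
print(bus_throughput_mb_s(200, ddr=True))
```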
Tuned is a tool that dynamically tunes Linux system settings for optimal performance based on usage profiles. It was created to allow systems to be tuned for peak performance during high workload times while saving power during idle periods. Tuned includes several predefined profiles that adjust settings for common use cases like high throughput, low latency, or power saving. New in Red Hat Enterprise Linux 7, Tuned is installed by default and will automatically select an initial profile based on system type. It also includes additional tuning capabilities and monitoring plugins for even more dynamic optimization of subsystems.
The Cisco IOS is the operating system that controls the routing and switching functions of Cisco networking devices. It allows routers and switches to function by running configuration files that contain parameters controlling traffic flow. The IOS software is stored in flash memory and loads the startup configuration from NVRAM on bootup. Setting a hostname, banner, and passwords are among the first configuration tasks for a new router.
The content of the exams is proprietary.[4] Cisco and its learning partners offer a variety of different training methods,[5] including books published by Cisco Press, and online and classroom courses available under the title "Interconnecting Cisco Network Devices."
Cisco IOS is the operating system that controls routing and switching functions on Cisco networking devices. It allows routers and switches to function by running configuration files that control traffic flow. Understanding Cisco IOS is essential for network administrators to properly configure and manage Cisco devices on their networks.
This document discusses the Universal Asynchronous Receiver/Transmitter (UART) communication protocol. It explains that UART is an asynchronous serial communication protocol commonly used for communication between microcontrollers and peripheral devices. It operates by framing data bits with start and stop bits, typically over two data wires (transmit and receive). The document covers UART fundamentals like frame formatting, transmission and reception of data, baud rate calculation, and differences between UART and Universal Synchronous/Asynchronous Receiver/Transmitter (USART). It also provides block diagrams of the UART module and examples of configuring baud rates on microcontrollers.
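The baud rate calculation mentioned above usually follows the common 16x-oversampling formula, divisor = F_clk / (16 x baud). The sketch below uses that generic formula; register names and layouts vary by microcontroller, so no specific chip is assumed.

```python
# Generic UART baud-rate divisor math with 16x oversampling.
# divisor = F_clk / (16 * baud); the rounded integer divisor introduces
# a small rate error that must stay within the receiver's tolerance.

def baud_divisor(f_clk_hz, baud):
    divisor = round(f_clk_hz / (16 * baud))
    actual = f_clk_hz / (16 * divisor)
    error_pct = 100 * (actual - baud) / baud
    return divisor, actual, error_pct

# A 16 MHz clock targeting 9600 baud: divisor 104, error about +0.16%.
div, actual, err = baud_divisor(16_000_000, 9600)
print(f"divisor={div}, actual={actual:.0f} baud, error={err:+.2f}%")
```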
This document discusses the different types of memory used in routers and how they function. RAM is used to run the IOS and hold routing tables and packet buffers. ROM contains the bootstrap and POST programs to start the router. NVRAM stores the configuration register and configuration files. Flash stores the IOS by default. The configuration register controls how the router boots. CDP is used to collect information about directly connected neighboring devices.
This presentation will discuss cloud computing at Cisco Canada, including an overview of cloud computing, Cisco's cloud strategy, the Unified Data Center, Cisco solutions, a cloud case study, and advances in technology and platforms.
GoGrid 3.0 Webinar: Complex Infrastructure Made Easy - Learn About the GoGrid... — GoGrid Cloud Hosting
This presentation "Complex Infrastructure Made Easy - Learn About the GoGrid 3.0 Release & Our New East Coast Datacenter" is primarily for current GoGrid customers. The Webinar was recorded on 6/28/10. Presenters were Mario Olivarez & Justin Kitagawa of GoGrid and Ryan Holland of TrendMicro.
The webinar is available here: http://www.vimeo.com/13335979
Topics:
- Overview: Complex Infrastructure Made Easy - GoGrid's unique value proposition
- GoGrid 3.0: The latest version of the GoGrid platform - East Coast demo
- GoGrid Security: New services from Cisco, Fortinet, TrendMicro & Sentrigo
- PCI Compliance: PCI compliance in the GoGrid Cloud - Demo with TrendMicro
- Questions & Answers
The document discusses the evolution and future direction of Citrix Receiver. It describes how Receiver has evolved from Updater in 2008 to take on more "broker" functions like discovery of apps/services, connectivity to the enterprise, and providing access to common value-added services. It highlights key trends driving this evolution, like mobility, BYOD, and new devices/platforms. Going forward, Receiver will focus on providing a unified experience across all devices and platforms, including a new Windows 8/WinRT version with metro-style interface. The summary emphasizes how Receiver acts as a "normalizer" to provide access and manage interactions across a diverse environment of devices, operating systems, and application types.
Guide: Citrix Presentation Server™ - Client for Java Administrator's Guide — xKinAnx
This document provides an administrator's guide for Citrix Presentation Server Client for Java version 9.x. It includes instructions for deploying, configuring, and optimizing the client. The guide covers topics such as unpacking and deploying the client using sample HTML files, initial and advanced configuration options, integrating security solutions, and techniques for improving client performance. It also discusses limitations of the client for different platforms.
Cisco's Unified Computing System (UCS) integrates computing, networking, storage access, and management into a single cohesive system designed to reduce total cost of ownership and increase agility. Key elements include UCS blade and rack servers, fabric interconnect switches, virtualized networking and storage access, and embedded management through UCS Manager. The unified architecture allows for policy-based provisioning and virtualization of system resources, providing efficient scalability and mobility for workloads.
Engage for Success: IBM Spectrum Accelerate 2 — xKinAnx
IBM Spectrum Accelerate is software that extends the capabilities of IBM's XIV storage system, such as consistent, tuning-free performance, to new delivery models. It provides enterprise storage capabilities deployed in minutes instead of months. Spectrum Accelerate runs the proven XIV software on commodity x86 servers and storage, providing similar features and functions to an XIV system. It offers benefits like business agility, flexibility, simplified acquisition and deployment, and lower administration and training costs.
Presentation: Data Center Transformation - Cisco's Virtualization and Cloud Jo... — xKinAnx
The document discusses Cisco's journey towards virtualization and cloud computing. It describes Cisco's global data center strategy, which includes building a new world-class data center in Allen, Texas and developing a cloud strategy and services called CITEIS. CITEIS provides infrastructure as a service using Cisco UCS, Nexus switches, and other Cisco technologies to enable automated, self-service provisioning and elastic infrastructure capacity.
The document discusses best practices for deploying MongoDB including sizing hardware with sufficient memory, CPU and I/O; using an appropriate operating system and filesystem; installing and upgrading MongoDB; ensuring durability with replication and backups; implementing security, monitoring performance with tools, and considerations for deploying on Amazon EC2.
MeetBSDCA 2014 Performance Analysis for BSD, by Brendan Gregg. A tour of five relevant topics: observability tools, methodologies, benchmarking, profiling, and tracing. Tools summarized include pmcstat and DTrace.
Shak larry-jeder-perf-and-tuning-summit14-part2-final — Tommy Lee
This document provides an overview of performance analysis and tuning techniques in Red Hat Enterprise Linux (RHEL). It discusses the tuned profile packages and how they optimize systems for different workloads. Specific topics covered include disk I/O tuning, memory tuning, network performance tuning, and power management techniques. A variety of Linux performance analysis tools are also introduced, including tuned, turbostat, netsniff-ng, and Performance Co-Pilot.
1. The document discusses the structure and concepts of operating systems. It covers topics like what an operating system is, the history of operating systems, computer hardware components, different types of operating systems, and operating system concepts.
2. It describes the major components of an operating system including process management, memory management, file management, I/O management, protection and security, and operating system services.
3. It also discusses operating system structures like monolithic systems, layered systems, microkernels, and the client-server model. Examples are provided for each type of structure.
Analyzing OS X Systems Performance with the USE Method — Brendan Gregg
Talk for MacIT 2014. This talk is about systems performance on OS X, and introduces the USE Method to check for common performance bottlenecks and errors. This methodology can be used by beginners and experts alike, and begins by constructing a checklist of the questions we’d like to ask of the system, before reaching for tools to answer them. The focus is resources: CPUs, GPUs, memory capacity, network interfaces, storage devices, controllers, interconnects, as well as some software resources such as mutex locks. These areas are investigated by a wide variety of tools, including vm_stat, iostat, netstat, top, latency, the DTrace scripts in /usr/bin (which were written by Brendan), custom DTrace scripts, Instruments, and more. This is a tour of the tools needed to solve our performance needs, rather than understanding tools just because they exist. This talk will make you aware of many areas of OS X that you can investigate, which will be especially useful for the time when you need to get to the bottom of a performance issue.
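The USE Method's core step, building a checklist of questions before reaching for tools, is mechanical enough to sketch in code. The resource list below follows the talk's summary; the generator itself is a generic illustration, not part of the talk.

```python
# Build a USE Method checklist: for every resource, ask about
# Utilization, Saturation, and Errors. Resources taken from the
# talk summary above; software resources like locks could be added.

RESOURCES = ["CPUs", "GPUs", "memory capacity", "network interfaces",
             "storage devices", "controllers", "interconnects"]

def use_checklist(resources):
    questions = []
    for resource in resources:
        for metric in ("utilization", "saturation", "errors"):
            questions.append(f"{resource}: {metric}?")
    return questions

checklist = use_checklist(RESOURCES)
print(len(checklist), "questions, starting with:", checklist[0])
```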
The document discusses diagnosing and mitigating MySQL performance issues. It describes using various operating system monitoring tools like vmstat, iostat, and top to analyze CPU, memory, disk, and network utilization. It also discusses using MySQL-specific tools like the MySQL command line, mysqladmin, mysqlbinlog, and external tools to diagnose issues like high load, I/O wait, or slow queries by examining metrics like queries, connections, storage engine statistics, and InnoDB logs and data written. The agenda covers identifying system and MySQL-specific bottlenecks by verifying OS metrics and running diagnostics on the database, storage engines, configuration, and queries.
The document discusses the key components of computer hardware. It describes peripheral devices that expand a computer's functionality, such as keyboards, mice, and printers. The core internal components of a computer system unit are then outlined, including the motherboard, processor, memory, hard disks, optical drives, and expansion slots. Basic components of the motherboard like the chipset and BIOS are also summarized.
The document describes Linux containerization and virtualization technologies including containers, control groups (cgroups), namespaces, and backups. It discusses:
1) How cgroups isolate and limit system resources for containers through mechanisms like cpuset, cpuacct, cpu, memory, blkio, and freezer.
2) How namespaces isolate processes by ID, mounting, networking, IPC, and other resources to separate environments for containers.
3) The new backup system which uses thin provisioning and snapshotting to efficiently backup container environments to backup servers and restore individual accounts or full servers as needed.
The document provides guidance on deploying MongoDB including sizing hardware, installing and upgrading MongoDB, configuration considerations for EC2, security, backups, durability, scaling out, and monitoring. Key aspects discussed are profiling and indexing queries for performance, allocating sufficient memory, CPU and disk I/O, using 64-bit OSes, ext4/XFS filesystems, upgrading to even version numbers, and replicating for high availability and backups.
Talk for PerconaLive 2016 by Brendan Gregg. Video: https://www.youtube.com/watch?v=CbmEDXq7es0 . "Systems performance provides a different perspective for analysis and tuning, and can help you find performance wins for your databases, applications, and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes six important areas of Linux systems performance in 50 minutes: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events), static tracing (tracepoints), and dynamic tracing (kprobes, uprobes), and much advice about what is and isn't important to learn. This talk is aimed at everyone: DBAs, developers, operations, etc, and in any environment running Linux, bare-metal or the cloud."
The document discusses memory usage in Linux systems. It begins by describing the boot process and how the kernel loads into memory. It then explains that Linux uses virtual memory, where each process has its own virtual address space. Physical memory is limited by factors like CPU architecture and motherboard configuration. When an executable runs, not all of its code is loaded; dynamic libraries it depends on are mapped into its address space as well, significantly increasing its memory usage compared to the executable size alone.
linux monitoring and performance tunning iman darabi
howto monitor linux server? what metrics are important when monitor server? what is related between metrics and monitoring tools? what are basic linux server optimization ? howto optimize ?
Shak larry-jeder-perf-and-tuning-summit14-part1-finalTommy Lee
This document provides an overview and agenda for a performance analysis and tuning presentation focusing on Red Hat Enterprise Linux evolution, NUMA scheduling improvements, and use of cgroups/containers for resource management. Key points include how RHEL has incorporated features like tuned profiles, transparent hugepages, automatic NUMA balancing, and how cgroups can guarantee quality of service and enable dynamic resource allocation for multi-application environments. Performance results are shown for databases and SPEC benchmarks utilizing these features.
This document provides information on monitoring Linux system resources and performance. It discusses tools like vmstat, sar, iostat for monitoring CPU usage, memory usage, I/O usage, and other metrics. It also covers Linux processes, memory management, and block device monitoring.
This document discusses disk storage topics including disk interfaces, components, performance, reliability, partitions, RAID, and failures/backups. It provides 7 slides on disk interfaces such as SAS, SATA, Fibre Channel, and iSCSI. Another 7 slides cover disk components, performance factors like seek time and throughput, caching, and measuring performance. The remaining slides discuss reliability issues, partitions, extended partitions, GUID tables, reasons for partitioning, and an overview of RAID.
On X86 systems, using an Unbreakable Enterprise Kernel (UEK) is recommended over other enterprise distributions as it provides better hardware support, security patches, and testing from the larger Linux community. Key configuration recommendations include enabling maximum CPU performance in BIOS, using memory types validated by Oracle, ensuring proper NUMA and CPU frequency settings, and installing only Oracle-validated packages to avoid issues. Monitoring tools like top, iostat, sar and ksar help identify any CPU, memory, disk or I/O bottlenecks.
This document provides an overview of monitoring Linux system performance. It discusses how to interpret output from tools like vmstat and top to analyze CPU utilization, context switching, and run queue length. These metrics can help identify performance bottlenecks by indicating whether a system is CPU-bound, I/O-bound, or experiencing high load. The document also defines important CPU terminology like user time, system time, interrupts, and priorities to understand how the Linux scheduler manages threads.
Extreme Linux Performance Monitoring and TuningMilind Koyande
This document provides an introduction to monitoring Linux system performance. It discusses determining the type of application running and establishing a baseline of typical system usage. Key CPU concepts are then outlined such as hardware interrupts, soft interrupts, real-time threads and kernel/user threads. Context switches between threads and the thread scheduling queue are also introduced. The goal is to understand typical system behavior and identify any bottlenecks.
Unit 2 processor&memory-organisationPavithra S
This document discusses processor and memory organization for embedded systems. It describes the structural units of a processor like the MAR, MDR, buses, BIU, IR, ID, CU, ALU, PC, and caches. It covers memory devices like ROM, RAM, SRAM, DRAM, and flash memory. It provides case studies on selecting a processor based on features like clock speed, performance needs, and power efficiency. The document aims to help with selecting appropriate processors and memory for different types of embedded systems.
Presentation aix performance tuning
AIX Performance Tuning
Jaqui Lynch
Senior Systems Engineer
Mainline Information Systems
Userblue Session 6601
http://www.circle4.com/papers/s6601jla.pdf
Agenda
• AIX v5.2 versus AIX v5.3
• 32 bit versus 64 bit
• JFS versus JFS2
• DIO and CIO
• AIX Performance Tips
New in AIX 5.2
• P5 support
• JFS2
• Large Page support (16mb)
• Dynamic LPAR
• Small Memory Mode
– Better granularity in assignment of memory to LPARs
• CuOD
• xProfiler
• New Performance commands
– vmo, ioo, schedo replace schedtune and vmtune
• AIX 5.1 Status
– Will not run on p5 hardware
– Withdrawn from marketing end April 2005
– Support withdrawn April 2006
AIX 5.3
• New in 5.3
– With Power5 hardware
• SMT
• Virtual Ethernet
• With APV
– Shared Ethernet
– Virtual SCSI Adapter
– Micropartitioning
– PLM
AIX 5.3
• New in 5.3
– JFS2 Updates
• Improved journaling
• Extent based allocation
• 1TB filesystems and files, with a potential of 4PB
• Accounting
• Filesystem shrink
• Striped Columns
– Can extend striped LV if a disk fills up
• 1024 disk scalable volume group
– 1024 PVs, 4096 LVs, 2M PPs per VG
• Quotas
• Each VG now has its own tunable pbuf pool
– Use lvmo command
AIX 5.3
• New in 5.3
– NFSv4 Changes
• ACLs
– NIM enhancements
• Security
• Highly available NIM
• Post install configuration of Etherchannel and Virtual IP
– SUMA patch tool
– Last version to support 32 bit kernel
– MP kernel even on a UP
– Most commands changed to support LPAR stats
32 bit versus 64 bit
• 32 bit
– Up to 96GB memory
– Uses JFS for rootvg
– Runs on 32 or 64 bit hardware
– Hardware all defaults to 32 bit
– JFS is optimized for 32 bit
– 5.3 is last version of AIX with 32 bit kernel
• 64 bit
– Allows > 96GB memory
– Current max is 256GB (arch is 16TB)
– Uses JFS2 for rootvg
– Supports 32 and 64 bit apps
– JFS2 is optimized for 64 bit
JFS versus JFS2
• JFS
– 2GB file max unless BF (big file enabled)
– Can use with DIO
– Optimized for 32 bit
– Runs on 32 bit or 64 bit
• JFS2
– Optimized for 64 bit
– Required for CIO
– Allows larger file sizes
– Runs on 32 bit or 64 bit
DIO and CIO
• DIO
– Direct I/O
– Around since AIX v5.1
– Used with JFS
– CIO is built on it
– Effectively bypasses filesystem caching to bring data directly into application buffers
– Does not like compressed JFS or BF (large file enabled) filesystems
– Reduces CPU and eliminates overhead copying data twice
– Reads are synchronous
– Bypasses filesystem readahead
– Inode locks still used
DIO and CIO
• CIO
– Concurrent I/O
– Only available in JFS2
– Allows performance close to raw devices
– Use for Oracle data and logs, not for binaries
– No system buffer caching
– Designed for apps (such as RDBs) that enforce serialization at the app level
– Allows non-use of inode locks
– Implies DIO as well
– Not all apps benefit from CIO and DIO – some are better with filesystem caching
Performance Tuning
• CPU
– vmstat, ps, nmon
• Network
– netstat, nfsstat, no, nfso
• I/O
– iostat, filemon, ioo, lvmo
• Memory
– lsps, svmon, vmstat, vmo, ioo
CPU Time
• Real
– Wallclock time
• User State
– Time running a users program
– Includes library calls
– Affected by optimization and inefficient code
• System
– Time spent in system state on behalf of user
– Kernel calls and all I/O routines
– Affected by blocking I/O transfers
• I/O and Network
– Time spent moving data and servicing I/O requests
Tools
• vmstat – for processor and memory
• nmon
– http://www-106.ibm.com/developerworks/eserver/articles/analyze_aix/
• nmon analyzer
– http://www-106.ibm.com/developerworks/eserver/articles/nmon_analyser/
• sar
– sar -A -o filename 2 30 >/dev/null
• ps
– ps gv | head -n 1 >filename
– ps gv | egrep -v "RSS" | sort +6b -7 -n -r >>filename
– ps –ef
– ps aux
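The `sort +6b -7` form in the pipeline above is the obsolete zero-based key syntax; POSIX sort spells the same key `-k7,7`. A minimal sketch of the idea on made-up rows laid out like `ps gv` output (PID, TTY, STAT, TIME, PGIN, SIZE, RSS, ...), so field 7 is RSS:

```shell
# Hypothetical sample rows in the "ps gv" column order: field 7 is RSS.
rows='101 - A 0:01 10 2048 5120 cmdA
202 - A 0:09 2 8192 16384 cmdB
303 - A 0:02 5 1024 2560 cmdC'
# Descending numeric sort on RSS -- the modern spelling of "sort +6b -7 -n -r"
sorted=$(printf '%s\n' "$rows" | sort -k7,7 -n -r)
echo "$sorted"
```

This is why the slide's first `ps gv | head -n 1` keeps the header line and the second pipeline filters it back out with `egrep -v "RSS"` before sorting.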
vmstat
IGNORE FIRST LINE - average since boot
Run vmstat over an interval (e.g. vmstat 2 30)
System configuration: lcpu=24 mem=102656MB ent=0
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
1 1 18451085 7721230 0 0 0 0 0 0 1540 30214 10549 4 5 88 4 1.36 11.3
1 1 18437534 7734781 0 0 0 0 0 0 1684 30499 11484 5 5 87 4 1.40 11.6
56 1 18637043 7533530 0 0 0 0 0 0 4298 24564 9866 98 2 0 0 12.00 100.0
57 1 18643753 7526811 0 0 0 0 0 0 3867 25124 9130 98 2 0 0 12.00 100.0
pc = physical processors consumed
ec = entitled capacity consumed
fre should be between minfree and maxfree
fr:sr ratio 1783:2949 means that for every 1783 pages freed, 2949 pages had to be examined.
To get a 60 second average try: vmstat 60 2
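The saturation signature in the last two sample rows (r well above lcpu, id at 0, ec pegged) can be picked out mechanically. A sketch over the sample rows above, assuming the 19-column layout shown (r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec); lcpu=24 comes from the sample's configuration line:

```shell
# Sketch: flag vmstat rows whose run queue (r, field 1) exceeds the
# logical CPU count while idle (id, field 16) is zero -- CPU-bound rows.
flags=$(awk -v lcpu=24 'NF == 19 {
    state = ($1 > lcpu && $16 == 0) ? "CPU-saturated" : "ok"
    printf "r=%s id=%s ec=%s -> %s\n", $1, $16, $19, state
}' <<'EOF'
1 1 18451085 7721230 0 0 0 0 0 0 1540 30214 10549 4 5 88 4 1.36 11.3
56 1 18637043 7533530 0 0 0 0 0 0 4298 24564 9866 98 2 0 0 12.00 100.0
57 1 18643753 7526811 0 0 0 0 0 0 3867 25124 9130 98 2 0 0 12.00 100.0
EOF
)
echo "$flags"
```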
vmstat -I
vmstat -I
System configuration: lcpu=24 mem=102656MB ent=0
kthr memory page faults cpu
-------- ----------- ------------------------ ------------ -----------------------
r b p avm fre fi fo pi po fr sr in sy cs us sy id wa pc ec
4 7 0 18437529 7734646 492 521 0 0 0 0 2839 34733 15805 7 1 87 6 1.01 8.4
System configuration: lcpu=24 mem=102656MB ent=0
kthr memory page faults cpu
-------- ----------- ------------------------ ------------ -----------------------
r b p avm fre fi fo pi po fr sr in sy cs us sy id wa pc ec
4 7 0 18644053 7526373 472 499 0 0 0 0 2836 34826 15801 7 1 99 99 1.06 8.8
vmstat -s
774400476 total address trans. faults
557108234 page ins
589044152 page outs
0 paging space page ins
0 paging space page outs
0 total reclaims
272954956 zero filled pages faults
45692 executable filled pages faults
0 pages examined by clock
0 revolutions of the clock hand
(Compare 2 snapshots 60 seconds apart)
0 pages freed by the clock
394231320 backtracks
0 free frame waits
0 extend XPT waits
407266032 pending I/O waits
1146102449 start I/Os
812431131 iodones
18592774929 cpu context switches
3338223399 device interrupts
4351388859 software interrupts
1207868395 decrementer interrupts
699614 mpc-sent interrupts
699595 mpc-receive interrupts
92841048 phantom interrupts
0 traps
40977828143 syscalls
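Since every vmstat -s counter is cumulative since boot, the interesting number is the change between two snapshots. A hedged sketch of the comparison (the snapshot files and values below are illustrative; in practice run vmstat -s twice, 60 seconds apart):

```shell
# Delta between two 'vmstat -s' snapshots.
# In practice: vmstat -s > /tmp/snap1; sleep 60; vmstat -s > /tmp/snap2
cat > /tmp/snap1 <<'EOF'
    557108234 page ins
    589044152 page outs
EOF
cat > /tmp/snap2 <<'EOF'
    557109000 page ins
    589050000 page outs
EOF
# Key each counter by its name, then print new value minus old value.
awk 'NR == FNR { k = ""; for (i = 2; i <= NF; i++) k = k " " $i; a[k] = $1; next }
     { k = ""; for (i = 2; i <= NF; i++) k = k " " $i;
       printf "%12d%s\n", $1 - a[k], k }' /tmp/snap1 /tmp/snap2
# prints the per-interval deltas: 766 page ins, 5848 page outs
```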
vmstat -v
• 26279936 memory pages
• 25220934 lruable pages
• 7508669 free pages
• 4 memory pools
• 3829840 pinned pages
• 80.0 maxpin percentage
• 20.0 minperm percentage
• 80.0 maxperm percentage
• 0.3 numperm percentage
• 89337 file pages
• 0.0 compressed percentage
• 0 compressed pages
• 0.1 numclient percentage
• 80.0 maxclient percentage
• 28905 client pages
• 0 remote pageouts scheduled
• 280354 pending disk I/Os blocked with no pbuf
• 0 paging space I/Os blocked with no psbuf
• 2938 filesystem I/Os blocked with no fsbuf
• 7911578 client filesystem I/Os blocked with no fsbuf
• 0 external pager filesystem I/Os blocked with no fsbuf
• Totals since boot so look at 2 snapshots 60 seconds apart
lparstat
lparstat -h
System Configuration: type=shared mode=Uncapped smt=On lcpu=4 mem=512 ent=5.0
%user %sys %wait %idle physc %entc lbusy app vcsw phint %hypv hcalls
0.0 0.5 0.0 99.5 0.00 1.0 0.0 - 1524 0 0.5 1542
16.0 76.3 0.0 7.7 0.30 100.0 90.5 - 321 1 0.9 259
Physc – physical processors consumed
Lbusy – logical processor utilization for system and user
Phint – phantom interrupts to other partitions
http://publib.boulder.ibm.com/infocenter/pseries/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds3/lparstat.htm
lparstat -H
lparstat -H
Gives info per Hypervisor call type as follows:
Number of calls
Time spent on this type of call
Hypervisor time spent on this type of call
Average call time
Max call time
http://publib.boulder.ibm.com/infocenter/pseries/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds3/lparstat.htm
mpstat
mpstat -s
System configuration: lcpu=4 ent=0.5
        Proc1             Proc0
        0.27%            49.63%
   cpu0    cpu2      cpu1    cpu3
  0.17%   0.10%     3.14%  46.49%
The output above shows how utilization is distributed across the SMT threads of each physical processor
Network
• Buffers
– Mbufs
• Kernel buffer using pinned memory
• thewall is max memory for mbufs
– TCP and UDP receive and send buffers
– Ethernet adapter attributes
– no and nfso commands
– nfsstat
– rfc1323 and nfs_rfc1323
netstat
• netstat -i
– Shows input and output packets and errors for
each adapter
– Also shows collisions
• netstat -ss
– Shows summary info such as udp packets dropped
due to no socket
• netstat -m
– Memory information
• netstat -v
– Statistical information on all adapters
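Flagging interfaces with input or output errors from netstat -i output can be automated. A sketch assuming the usual Name/Mtu/Network/Address/Ipkts/Ierrs/Opkts/Oerrs/Coll column layout (the sample output is fabricated for illustration; real AIX output may print extra address lines per interface):

```shell
cat > /tmp/netstat_i.out <<'EOF'
Name  Mtu   Network     Address         Ipkts  Ierrs    Opkts  Oerrs  Coll
en0   1500  10.1.2      10.1.2.3      1843020      0  1531201      0     0
en1   1500  10.1.3      10.1.3.3       990221    152   873110     17     0
EOF
# Print any interface with a nonzero input or output error count.
awk 'NR > 1 && ($6 > 0 || $8 > 0) { print $1, "Ierrs=" $6, "Oerrs=" $8 }' /tmp/netstat_i.out
# prints: en1 Ierrs=152 Oerrs=17
```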
Network tuneables
• no -a
• Using no
– rfc1323 = 1
– sb_max=1310720
– tcp_sendspace=262144
– tcp_recvspace=262144
– udp_sendspace=65536
– udp_recvspace=655360
• Using nfso
– nfso -a
– nfs_rfc1323=1
– nfs_socketsize=60000
• Do a web search on “nagle effect”
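The values above would be applied with the no and nfso commands; a sketch of the AIX commands (the -p persistence flag is available on AIX 5.2 and later - treat the flags and values as assumptions to validate against your AIX level and workload):

```shell
# Apply the network tuneables listed above (AIX only; run as root).
# -p also records the change so it persists across reboots.
no -p -o rfc1323=1
no -p -o sb_max=1310720
no -p -o tcp_sendspace=262144
no -p -o tcp_recvspace=262144
no -p -o udp_sendspace=65536
no -p -o udp_recvspace=655360
nfso -p -o nfs_rfc1323=1
nfso -p -o nfs_socketsize=60000
```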
nfsstat
• Client and Server NFS Info
• nfsstat -cn or -r or -s
– Retransmissions due to errors
• Retrans>5% is bad
– Badcalls
– Timeouts
– Waits
– Reads
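The "retrans > 5% is bad" rule is just retransmissions as a percentage of total client RPC calls. A sketch of the arithmetic (the call and retrans counts below are made-up sample values, not real nfsstat output):

```shell
# Retransmission rate from nfsstat -cr client RPC counters.
calls=1284560     # sample: total client RPC calls
retrans=80210     # sample: retransmissions
pct=$(awk -v c="$calls" -v r="$retrans" 'BEGIN { printf "%.1f", 100 * r / c }')
echo "retrans rate: ${pct}%"
# prints: retrans rate: 6.2% - above the 5% threshold, so investigate the network
```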
Memory and I/O problems
• iostat
– Look for overloaded disks and adapters
• vmstat
• vmo and ioo (replace vmtune)
• sar
• fileplace and filemon
• Asynchronous I/O
• Paging
• svmon
– svmon -G >filename
• nmon
• Check error logs
I/O Tuneables 1/3
• minperm
– default is 20%
• maxperm
– default is 80%
– This is a soft limit
– Affects JFS filesystems
• numperm
– This is what percent of real memory is currently being used for caching
files - if it is high reduce maxperm to 30 to 50%
• strict_maxperm
– Used to avoid double caching – be extra careful!!!!!
• Reducing maxperm stops file caching from stealing memory from running programs
• maxrandwrt is random write behind
– default is 0 – try 32
• numclust is sequential write behind
I/O Tuneables 2/3
• maxclient
– default is 80%
– Must be less than or equal to maxperm
– Affects NFS and JFS2
– Hard limit by default
• numclient
– This is what percent of real memory is currently being used for caching
JFS2 filesystems and NFS
• strict_maxclient
– Available at AIX 5.2 ML4
– By default it is set to a hard limit
– Change to a soft limit
• J2_maxRandomWrite is random write behind
– default is 0 – try 32
• J2_nPagesPerWriteBehindCluster
– Default is 32
• J2_nRandomCluster is sequential write behind
I/O Tuneables 3/3
• minpgahead, maxpgahead, J2_minPageReadAhead &
J2_maxPageReadAhead
– Defaults: min = 2, max = 8
– maxfree - minfree >= maxpgahead
• lvm_bufcnt
– Buffers for raw I/O. Default is 9
– Increase if doing large raw I/Os (raw LVs, not JFS)
• numfsbufs
– Helps write performance for large write sizes
• hd_pbuf_cnt
– Pinned buffers to hold JFS I/O requests
– Increase if large sequential I/Os to stop I/Os bottlenecking at the
LVM
– One pbuf is used per sequential I/O request regardless of the
number of pages
– With AIX v5.3 each VG gets its own set of pbufs
– Prior to AIX 5.3 it was a system wide setting
• sync_release_ilock
lvmo
• lvmo output
• vgname = rootvg
• pv_pbuf_count = 256
• total_vg_pbufs = 512
• max_vg_pbuf_count = 8192
• pervg_blocked_io_count = 0
• global_pbuf_count = 512
• global_blocked_io_count = 46
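A nonzero blocked-I/O counter in lvmo output means I/Os had to queue waiting for pbufs, which is the cue to raise pv_pbuf_count. A sketch that flags this from captured output (the sample file mirrors the values above):

```shell
cat > /tmp/lvmo.out <<'EOF'
vgname = rootvg
pv_pbuf_count = 256
total_vg_pbufs = 512
max_vg_pbuf_count = 8192
pervg_blocked_io_count = 0
global_pbuf_count = 512
global_blocked_io_count = 46
EOF
# Flag any nonzero blocked-I/O counter.
awk -F' = ' '/blocked_io_count/ && $2 > 0 \
    { print $1 " is " $2 " - consider raising pv_pbuf_count" }' /tmp/lvmo.out
# prints: global_blocked_io_count is 46 - consider raising pv_pbuf_count
```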
lsps -a
(similar to pstat)
• Ensure all page datasets the same size
although hd6 can be bigger - ensure more
page space than memory
• Only includes pages allocated (default)
• Use lsps -s to get all pages (includes pages reserved
via early allocation (PSALLOC=early))
• Use multiple page datasets on multiple disks
lsps output
lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
paging05 hdisk9 pagvg01 2072MB 1 yes yes lv
paging04 hdisk5 vgpaging01 504MB 1 yes yes lv
paging02 hdisk4 vgpaging02 168MB 1 yes yes lv
paging01 hdisk3 vgpaging03 168MB 1 yes yes lv
paging00 hdisk2 vgpaging04 168MB 1 yes yes lv
hd6 hdisk0 rootvg 512MB 1 yes yes lv
lsps -s
Total Paging Space Percent Used
3592MB 1%
Bad Layout above
Should be balanced
Make hd6 the biggest by one lp or the same size as the others
svmon -G
size inuse free pin virtual
memory 26279936 18778708 7501792 3830899 18669057
pg space 7995392 53026
work pers clnt lpage
pin 3830890 0 0 0
in use 18669611 80204 28893 0
In GB Equates to:
size inuse free pin virtual
memory 100.25 71.64 28.62 14.61 71.22
pg space 30.50 0.20
work pers clnt lpage
pin 14.61 0 0 0
in use 71.22 0.31 0.15 0
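The GB rows are simply the 4 KB page counts converted (pages * 4 / 1048576). A one-liner reproducing the conversion for the memory size/inuse/free figures above:

```shell
# svmon counts 4 KB pages; convert to GB: pages * 4 / 1048576.
for pages in 26279936 18778708 7501792; do
  awk -v p="$pages" 'BEGIN { printf "%d pages = %.2f GB\n", p, p * 4 / 1048576 }'
done
# prints 100.25 GB, 71.64 GB and 28.62 GB - matching the table above
```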
I/O Pacing
• Set the high value (maxpout) to (4*n)+1, i.e. a multiple of 4, plus 1
• Limits the number of outstanding I/Os
against an individual file
• minpout – minimum
• maxpout – maximum
• If a process reaches maxpout it is
suspended from creating I/O until its
outstanding requests fall back to minpout
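The high-water mark follows the (4*n)+1 rule because pacing is accounted in 4-page clusters, so maxpout sits one above a multiple of 4. A small sketch of picking a valid pair (n=8 is an arbitrary example; the chdev line is AIX-only and shown as a comment - verify the attribute names against your AIX level):

```shell
# maxpout must be one more than a multiple of 4: (4*n)+1.
n=8
maxpout=$((4 * n + 1))        # 33
minpout=$((4 * (n - 2)))      # 24 - resume threshold, below maxpout
echo "minpout=$minpout maxpout=$maxpout"
# prints: minpout=24 maxpout=33
# On AIX the pair would be applied system-wide with:
#   chdev -l sys0 -a maxpout=33 -a minpout=24
```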
Other
• Mirroring of disks
– lslv to check # copies
– Mirror write consistency
• Mapping of backend disks
– Don’t software-mirror if the backend disks are already RAIDed
• RIO and adapter Limits
• Logical volume scheduling policy
– Parallel
– Parallel/sequential
– Parallel/roundrobin
– Sequential
• Async I/O
Async I/O
Total number of AIOs in use
pstat -a | grep aios | wc -l
4205
AIO max possible requests
lsattr -El aio0 -a maxreqs
maxreqs 4096 Maximum number of REQUESTS True
AIO maxservers
lsattr -El aio0 -a maxservers
maxservers 320 MAXIMUM number of servers per cpu True
NB – maxservers is a per processor setting in AIX 5.3
Other tools
• filemon
– filemon -v -o filename -O all
– sleep 30
– trcstop
• pstat to check async I/O
– pstat -a | grep aio | wc -l
• perfpmr to build performance info for
IBM
– /usr/bin/perfpmr.sh 300
Striping
• Spread data across drives
• Improves r/w performance of large
sequential files
• Can mirror stripe sets
• Stripe width = # of disks striped across
• Stripe size
– Set to 64kb for best sequential I/O
throughput
– Any power of 2 from 4kb to 128kb
General
Recommendations
• Different hot LVs on separate physical volumes
• Stripe hot LV across disks to parallelize
• Mirror read intensive data
• Ensure LVs are contiguous
– Use lslv and look at in-band % and distrib
– reorgvg if needed to reorg LVs
• Writeverify=no
• minpgahead=2, maxpgahead=16 for 64kb stripe size
• Increase maxfree if you adjust maxpgahead
• Tweak minperm, maxperm and maxrandwrt
• Tweak lvm_bufcnt if doing a lot of large raw I/Os
• If JFS2 tweak j2 versions of above fields
• Clean out inittab and rc.tcpip for things that should not start
References
• IBM Redbooks
– SG24-7940 – Advanced Power Virtualization on IBM p5 servers –
Introduction and Basic Configuration
– The Complete Partitioning Guide for IBM eServer pSeries Servers
– pSeries – LPAR Planning Redpiece
– Logical Partition Security in the IBM eServer pSeries 690
– Technical Overview Redbooks for p520, p550 and p570, etc
– SG24-7039 - Partitioning Implementation on p5 and Openpower
Servers
• eServer Magazine and AiXtra
– http://www.eservercomputing.com/
• Search on Jaqui AND Lynch
• Articles on Tuning and Virtualization
• Find more on Mainline at:
– http://mainline.com/ebrochure
Questions?