With each new generation of mainframes, IBM increases the capacity of its machines. But do you really know how the system architecture of all these processors affects performance and impacts your capacity planning? This presentation opens that discussion and presents a real-world case showing how the internal components of the system architecture shape the guidelines of the capacity planning, availability, and performance disciplines in mainframe environments.
The document discusses changes in z/VM 6.3 to support large logical partition (LPAR) workloads. Key changes include implementing HiperDispatch to improve processor efficiency through affinity-aware dispatching and vertical CPU management. Memory support was increased from 256GB to 1TB per z/VM system. Other improvements include enhanced dump support for larger environments and tools for studying monitor data to understand workload behavior.
This document provides an overview of concepts and challenges related to capacity planning in a Parallel Sysplex environment. It discusses z/OS, the Coupling Facility, connectivity options, and references. Key factors that impact performance are identified for the Coupling Facility and z/OS CECs, including processor speed and type, workload characteristics, connectivity configuration, memory size, and distance between systems. Topologies, duplexing, and configuration options are also reviewed. Metrics from RMF reports on subchannels and path busy times are presented.
This document discusses userspace storage systems as an alternative to kernel-based storage for petascale workloads. It outlines several userspace filesystems, block storage systems, and object storage systems used in practice. Common languages used include C, Python, Java, and Golang. Interfaces to the kernel include FUSE, UIO, DPDK and libvma. Challenges include balancing performance, scalability, and complexity across unified, self-managing systems. Specific examples covered are NFS-Ganesha, GlusterFS, HDFS, NBD, tgt, and caching systems like Tachyon and Redis.
These slides are a series of "best practices" for running on the Cray XT line of supercomputers. This talk was presented at the HPCMP meeting at SDSC on 11/5/2009
Kernel Recipes 2018 - XDP: a new fast and programmable network layer - Jesper... (Anne Nicolas)
This talk will introduce XDP (eXpress Data Path), and explain how this is essentially a new (programmable) network layer in-front of the existing network stack. Then it will dive into the details of the new XDP redirect feature, which goes beyond forwarding packets out other NIC devices.
The eXpress Data Path (XDP) has been gradually integrated into the Linux kernel over several releases. XDP offers fast and programmable packet processing in kernel context. The operating system kernel itself provides a safe execution environment for custom packet processing applications, in form of eBPF programs, executed in device driver context. XDP provides a fully integrated solution working in concert with the kernel’s networking stack. Applications are written in higher level languages such as C and compiled via LLVM into eBPF bytecode which the kernel statically analyses for safety, and JIT translates into native instructions. This is an alternative approach, compared to kernel bypass mechanisms (like DPDK and netmap).
Gluster Cloud Night in Tokyo 2013 -- Tips for getting started (Keisuke Takahashi)
The document discusses using deployment automation tools like Capistrano to simplify the installation of GlusterFS across multiple nodes. It recommends copying the commands output by a deployment tool rather than requiring operations teams to learn how to use the tools. It then provides details on Capistrano and a Capistrano plugin called capistrano-glusterfs that facilitates automated deployment of GlusterFS. Tasks are defined for common operations like preparing nodes, installing dependencies, building GlusterFS, and configuring the cluster.
1. DPDK achieves high throughput packet processing on commodity hardware by reducing kernel overhead through techniques like polling, huge pages, and userspace drivers.
2. In Linux, packet processing involves expensive operations like system calls, interrupts, and data copying between kernel and userspace. DPDK avoids these by doing all packet processing in userspace.
3. DPDK uses techniques like isolating cores for packet I/O threads, lockless ring buffers, and NUMA awareness to further optimize performance. It can achieve throughput of over 14 million packets per second on 10GbE interfaces.
This document discusses improving debugging of the Linux kernel using the open source debugger GDB. It begins with an overview of existing ways to debug the kernel using GDB, including via KGDB, Qemu, and JTAG probes like OpenOCD. It then discusses the concept of adding "Linux awareness" to GDB, which would allow it to better understand kernel concepts like threads and modules. Finally, it outlines three approaches to implementing this awareness: via a GDB scripting extension, in the GDB stub, or with a C extension to GDB. The overall goal is to make GDB a more full-featured and useful tool for kernel debugging.
Here are some useful GDB commands for debugging:
- break <function> - Set a breakpoint at a function
- break <file:line> - Set a breakpoint at a line in a file
- run - Start program execution
- next/n - Step over to next line, stepping over function calls
- step/s - Step into function calls
- finish - Step out of current function
- print/p <variable> - Print value of a variable
- backtrace/bt - Print the call stack
- info breakpoints (or i b) - List breakpoints
- delete <breakpoint#> - Delete a breakpoint
- layout src - Switch layout to source code view
- layout asm - Switch layout to assembly view
The document discusses algorithms used in the DPDK libraries for fast lookups. It describes the characteristics and usage of the hash, LPM, and ACL libraries. The hash library uses cuckoo hashing for tables like FDB and host tables. The LPM library uses a modified DIR-24-8-BASIC algorithm for IPv4 and IPv6 route tables. The ACL library classifies entries using techniques like scalar, SSE, and AVX2 based on multi-bit tries. Examples of lookups and inserts are provided for each library.
Do Theoretical FLOPs Matter For Real Application's Performance? KAUST 2012 (Joshua Mora)
The document discusses how theoretical FLOPs per clock do not necessarily correlate with real application performance. It uses an AMD processor called "Fangio" that has its floating point capability capped to 2 FLOPs/clock compared to 4 FLOPs/clock normally. Despite having only half the theoretical FLOPs, Fangio delivers similar performance to the normal processor on many applications. This shows that FLOPs alone do not determine performance, and that code vectorization and algorithm design are also important factors.
Kernel Recipes 2019 - XDP closer integration with network stack (Anne Nicolas)
XDP (eXpress Data Path) is the new programmable in-kernel fast-path, which is placed as a layer before the existing Linux kernel network stack (netstack).
We claim XDP is not kernel bypass: it sits as a layer in front of the netstack and can easily fall through to it. In reality, though, it can easily be (ab)used to create a kernel-bypass situation in which none of the kernel facilities (BPF helpers and in-kernel tables) are used. The main disadvantage of kernel bypass is the need to re-implement everything, even basic building blocks like routing tables and ARP protocol handling.
It is part of the concept, and of the speed gain, that XDP allows users to avoid calling parts of the kernel code. Users have the freedom to bypass the kernel and re-implement everything, but the kernel should expose more in-kernel tables via BPF helpers, so that users can leverage other parts of the open source ecosystem, such as routing daemons.
This talk is about how XDP can work in concert with the netstack, with proposals on how to take this even further. Crazy ideas, like using XDP frames to move SKB allocation out of driver code, will also be proposed.
Kernel Recipes 2016 - entry_*.S: A carefree stroll through kernel entry code (Anne Nicolas)
I have always wondered what happens when we enter the kernel from userspace: what preparations the hardware makes when the userspace-to-kernel-space switch instructions are executed (and on the way back), and what the kernel does when it executes a system call. It also does a bunch of things before executing the actual syscall, so I try to look at those too.
This talk is an attempt to demystify some aspects of the cryptic x86 entry code in arch/x86/entry/, written in assembly: how it all fits with the software-visible architecture of x86, what hardware features are being used, and how.
The hope is to get more people excited about this funky piece of the kernel, and maybe to share the fun we're having.
Borislav Petkov, SUSE
The document discusses the Dalvik virtual machine (VM) used in Android. It begins by explaining what a VM is and the basic parts that make up a VM. It then discusses the differences between stack-based and register-based VMs, noting that Dalvik uses a register-based architecture. The document explains that Dalvik was chosen for Android because it executes instructions more efficiently than Java VM and requires less memory. It also discusses just-in-time (JIT) compilation techniques used to improve performance of interpreted code. Specifically, Dalvik uses a trace JIT that compiles short sequences of instructions to optimize mobile performance.
1. The document describes an MMAP failure occurring occasionally with a DPDK secondary application.
2. Address Space Layout Randomization (ASLR) can interfere with shared memory mappings between primary and secondary DPDK processes. Disabling ASLR may resolve MMAP failures.
3. Providing a fixed base virtual address with the "--base-virtaddr" option can ensure primary and secondary applications mmap shared memory at the same locations if ASLR is enabled.
Distributed Stream Processing in the real [Perl] world (Satoshi Tagomori)
This document discusses distributed stream processing. It defines stream processing as continuously processing increasing data in real-time rather than waiting for batches. This allows for very low latency analysis. It describes features needed for stream processing like one-by-one and burst processing, buffering, load balancing, and distribution across nodes. It also discusses frameworks for stream processing like Apache Kafka, Twitter Storm, and Fluentd. Finally, it covers implementations for stream processing in Perl, including the fluent-agent and fluent-agent-lite libraries.
This document discusses various Linux debugging tools including:
1. Hardware inspection (SIMD capabilities, caches, firmware, NUMA memory, interrupts) with tools like lstopo, ethtool, lspci, and lshw.
2. Using GDB for debugging with features like breakpoints, disassembly, and core file generation.
3. Tools like strace, ltrace, nm, objdump, and readelf for system call tracing, library call tracing, symbol tables, and object file analysis.
4. Techniques like LD_PRELOAD, ulimit, and perf for custom debugging and performance analysis.
Kernel Recipes 2016 - Speeding up development by setting up a kernel build farm (Anne Nicolas)
Building a full kernel takes time but is often necessary during development or when backporting patches. The nature of the kernel makes it easy to distribute its build on multiple cheap machines. This presentation will explain how to set up a build farm based on cost, size, and performance.
Willy Tarreau, HAProxy
Trip down the GPU lane with Machine Learning (Renaldas Zioma)
What a Machine Learning professional should know about GPUs!
Brief outline of the deck:
* GPU architecture explained with simple images
* memory bandwidth cheat-sheets for common hardware configurations,
* overview of GPU programming model
* under-the-hood peek at the main building block of ML: matrix multiplication
* effect of mini-batch size on performance
Originally I gave this talk at the internal Machine Learning Workshop in Unity Seattle
HIGH QUALITY pdf slides: http://bit.ly/2iQxm7X (on Dropbox)
CC-4005, Performance analysis of 3D Finite Difference computational stencils ... (AMD Developer Central)
The document discusses performance analysis of 3D finite difference computational stencils on Seamicro fabric compute systems. It provides an overview of the hardware including chassis, compute cards, storage cards, and 3D torus fabric topology. It then describes the software stack and various microbenchmarks performed, including CPU, memory, network and storage benchmarks. It also describes modeling of 3D Laplace's equation using an 8th order finite difference scheme and its discretization over a 25 point stencil for computation on the system.
BPF (Berkeley Packet Filter) allows for safe dynamic program injection into the Linux kernel. It provides an in-kernel virtual machine and instruction set for running custom programs. The BPF infrastructure includes a verifier that checks programs for safety, helper functions to access kernel APIs, and maps for inter-process communication. BPF has become a core kernel subsystem and is used for applications like XDP, tracing, networking, and more.
1. The document describes Glacier, a component library and compiler for implementing continuous queries on FPGAs.
2. Glacier includes common streaming operators as well as specialized building blocks for the FPGA context. It can implement a variety of streaming queries by composing these components.
3. The paper evaluates the performance of queries implemented on an FPGA using Glacier, finding they can process over 1 million tuples per second directly from the network interface.
DockerCon 2017 - Cilium - Network and Application Security with BPF and XDP (Thomas Graf)
This talk will start with a deep dive and hands-on examples of BPF, possibly the most promising low-level technology for addressing challenges in application and network security, tracing, and visibility. We will discuss how BPF evolved from a simple bytecode language for filtering raw sockets for tcpdump into a JITable virtual machine capable of universally extending and instrumenting both the Linux kernel and userspace applications. The introduction is followed by a concrete example of how the Cilium open source project applies BPF to solve networking, security, and load balancing for highly distributed applications. We will discuss and demonstrate how Cilium, with the help of BPF, can be combined with distributed system orchestration such as Docker to simplify security, operations, and troubleshooting of distributed applications.
The document discusses PROSE (Partitioned Reliable Operating System Environment), an approach that runs applications in specialized kernel partitions for finer control over system resources and improved reliability. It aims to simplify development of specialized kernels and enable resource sharing across partitions. The approach is evaluated using IBM's research hypervisor rHype, which shows PROSE can reduce noise and provide more deterministic performance than Linux. Future work focuses on running larger commercial workloads and further performance/noise experiments.
USENIX Vault'19: Performance analysis in Linux storage stack with BPF (Taeung Song)
The document discusses BPF (Berkeley Packet Filter) and how it allows running custom code in the Linux kernel. It explains that BPF programs are written in C, compiled to BPF bytecode, and loaded into the kernel via the bpf() syscall. A key part of the process is the BPF verifier, which checks the safety of programs before injection by analyzing control flow and simulating execution.
DPDK Summit - 08 Sept 2014 - 6WIND - High Perf Networking Leveraging the DPDK... (Jim St. Leger)
Thomas Monjalon, 6WIND, presents on where/how to use DPDK, the DPDK ecosystem, and the DPDK.org community.
Thomas is the community maintainer of DPDK.org.
The 7 Deadly Sins of Packet Processing - Venky Venkatesan and Bruce Richardson (harryvanhaaren)
The document summarizes seven deadly sins of packet processing that can negatively impact performance:
1) Unpredictable branches that confuse the branch predictor. Code should guide the compiler on likely/unlikely cases.
2) Incorrect prefetching that pulls in unnecessary data or data needed by other cores, adding overhead. Hardware prefetchers often help but can also share cache lines inadvertently.
3) Per-packet operations like memory I/O and atomics that have overhead magnified at the per-packet level.
This presentation gives a quick introduction to Slurm, the scheduler that runs programs (scripts) on HPC systems. It is targeted at an audience new to Lawrencium, or at those who want to learn a few more things about troubleshooting their jobs.
Here are some useful GDB commands for debugging:
- break <function> - Set a breakpoint at a function
- break <file:line> - Set a breakpoint at a line in a file
- run - Start program execution
- next/n - Step over to next line, stepping over function calls
- step/s - Step into function calls
- finish - Step out of current function
- print/p <variable> - Print value of a variable
- backtrace/bt - Print the call stack
- info breakpoints/ib - List breakpoints
- delete <breakpoint#> - Delete a breakpoint
- layout src - Switch layout to source code view
- layout asm - Switch layout
The document discusses algorithms used in the DPDK libraries for fast lookups. It describes the characteristics and usage of the hash, LPM, and ACL libraries. The hash library uses cuckoo hashing for tables like FDB and host tables. The LPM library uses a modified DIR-24-8-BASIC algorithm for IPv4 and IPv6 route tables. The ACL library classifies entries using techniques like scalar, SSE, and AVX2 based on multi-bit tries. Examples of lookups and inserts are provided for each library.
Do Theoretical Flo Ps Matter For Real Application’S Performance Kaust 2012Joshua Mora
The document discusses how theoretical FLOPs per clock do not necessarily correlate with real application performance. It uses an AMD processor called "Fangio" that has its floating point capability capped to 2 FLOPs/clock compared to 4 FLOPs/clock normally. Despite having only half the theoretical FLOPs, Fangio delivers similar performance to the normal processor on many applications. This shows that FLOPs alone do not determine performance, and that code vectorization and algorithm design are also important factors.
Kernel Recipes 2019 - XDP closer integration with network stackAnne Nicolas
XDP (eXpress Data Path) is the new programmable in-kernel fast-path, which is placed as a layer before the existing Linux kernel network stack (netstack).
We claim XDP is not kernel-bypass, as it is a layer before and it can easily fall-through to netstack. Reality is that it can easily be (ab)used to create a kernel-bypass situation, where non of the kernel facilities are used (in form of BPF-helpers and in-kernel tables). The main disadvantage with kernel-bypass, is the need to re-implement everything, even basic building blocks, like routing tables and ARP protocol handling.
It is part of the concept and speed gain, that XDP allows users to avoid calling part of the kernel code. Users have the freedom to do kernel-bypass and re-implement everything, but the kernel should provide access to more in-kernel tables, via BPF-helpers, such that users can leverage other parts of the Open Source ecosystem, like router daemons etc.
This talk is about how XDP can work in-concert with netstack, and proposal on how we can take this even-further. Crazy ideas like using XDP frames to move SKB allocation out of driver code, will also be proposed.
Kernel Recipes 2016 - entry_*.S: A carefree stroll through kernel entry codeAnne Nicolas
I have always wondered what happens when we enter the kernel from userspace: what preparations does the hardware meet when the userspace to kernel space switch instructions are executed and back, and what does the kernel do when it executes a system call. There are also a bunch of things it does before it executes the actual syscall so I try to look at those too.
This talk is an attempt to demystify some of the aspects of the cryptic x86 entry code in arch/x86/entry/ written in assembly and how does that all fit with software-visible architecture of x86, what hardware features are being used and how.
With the hope to get more people excited about this funky piece of the kernel and maybe have the same fun we’re having.
Borislav Petkov, SUSE
The document discusses the Dalvik virtual machine (VM) used in Android. It begins by explaining what a VM is and the basic parts that make up a VM. It then discusses the differences between stack-based and register-based VMs, noting that Dalvik uses a register-based architecture. The document explains that Dalvik was chosen for Android because it executes instructions more efficiently than Java VM and requires less memory. It also discusses just-in-time (JIT) compilation techniques used to improve performance of interpreted code. Specifically, Dalvik uses a trace JIT that compiles short sequences of instructions to optimize mobile performance.
1. The document describes an MMAP failure occurring occasionally with a DPDK secondary application.
2. Address Space Layout Randomization (ASLR) can interfere with shared memory mappings between primary and secondary DPDK processes. Disabling ASLR may resolve MMAP failures.
3. Providing a fixed base virtual address with the "--base-virtaddr" option can ensure primary and secondary applications mmap shared memory at the same locations if ASLR is enabled.
Distributed Stream Processing in the real [Perl] worldSATOSHI TAGOMORI
This document discusses distributed stream processing. It defines stream processing as continuously processing increasing data in real-time rather than waiting for batches. This allows for very low latency analysis. It describes features needed for stream processing like one-by-one and burst processing, buffering, load balancing, and distribution across nodes. It also discusses frameworks for stream processing like Apache Kafka, Twitter Storm, and Fluentd. Finally, it covers implementations for stream processing in Perl, including the fluent-agent and fluent-agent-lite libraries.
This document discusses various Linux debugging tools including:
1. SIMD, cache monitoring, firmware checks, NUMA memory, interrupts using tools like lstopo, ethtool, lspci, and lshw.
2. Using GDB for debugging with features like breakpoints, disassembly, and core file generation.
3. Tools like strace, ltrace, nm, objdump, and readelf for system call tracing, library call tracing, symbol tables, and object file analysis.
4. Techniques like LD_PRELOAD, ulimit, and perf for custom debugging and performance analysis.
Kernel Recipes 2016 - Speeding up development by setting up a kernel build farmAnne Nicolas
Building a full kernel takes time but is often necessary during development or when backporting patches. The nature of the kernel makes it easy to distribute its build on multiple cheap machines. This presentation will explain how to set up a build farm based on cost, size, and performance.
Willy Tarreau, HaProxy
Trip down the GPU lane with Machine LearningRenaldas Zioma
What Machine Learning professional should know about GPU!
Brief outline of the deck:
* GPU architecture explained with simple images
* memory bandwidth cheat-sheats for common hardware configuration,
* overview of GPU programming model
* under the hood peek at the main building block of ML - matrix multiplication
* effect of mini-batch size on performance
Originally I gave this talk at the internal Machine Learning Workshop in Unity Seattle
HIGH QUALITY pdf slides: http://bit.ly/2iQxm7X (on Dropbox)
CC-4005, Performance analysis of 3D Finite Difference computational stencils ...AMD Developer Central
The document discusses performance analysis of 3D finite difference computational stencils on Seamicro fabric compute systems. It provides an overview of the hardware including chassis, compute cards, storage cards, and 3D torus fabric topology. It then describes the software stack and various microbenchmarks performed, including CPU, memory, network and storage benchmarks. It also describes modeling of 3D Laplace's equation using an 8th order finite difference scheme and its discretization over a 25 point stencil for computation on the system.
BPF (Berkeley Packet Filter) allows for safe dynamic program injection into the Linux kernel. It provides an in-kernel virtual machine and instruction set for running custom programs. The BPF infrastructure includes a verifier that checks programs for safety, helper functions to access kernel APIs, and maps for inter-process communication. BPF has become a core kernel subsystem and is used for applications like XDP, tracing, networking, and more.
1. The document describes Glacier, a component library and compiler for implementing continuous queries on FPGAs.
2. Glacier includes common streaming operators as well as specialized building blocks for the FPGA context. It can implement a variety of streaming queries by composing these components.
3. The paper evaluates the performance of queries implemented on an FPGA using Glacier, finding they can process over 1 million tuples per second directly from the network interface.
DockerCon 2017 - Cilium - Network and Application Security with BPF and XDPThomas Graf
This talk will start with a deep dive and hands on examples of BPF, possibly the most promising low level technology to address challenges in application and network security, tracing, and visibility. We will discuss how BPF evolved from a simple bytecode language to filter raw sockets for tcpdump to the a JITable virtual machine capable of universally extending and instrumenting both the Linux kernel and user space applications. The introduction is followed by a concrete example of how the Cilium open source project applies BPF to solve networking, security, and load balancing for highly distributed applications. We will discuss and demonstrate how Cilium with the help of BPF can be combined with distributed system orchestration such as Docker to simplify security, operations, and troubleshooting of distributed applications.
The document discusses PROSE (Partitioned Reliable Operating System Environment), an approach that runs applications in specialized kernel partitions for finer control over system resources and improved reliability. It aims to simplify development of specialized kernels and enable resource sharing across partitions. The approach is evaluated using IBM's research hypervisor rHype, which shows PROSE can reduce noise and provide more deterministic performance than Linux. Future work focuses on running larger commercial workloads and further performance/noise experiments.
USENIX Vault'19: Performance analysis in Linux storage stack with BPFTaeung Song
The document discusses BPF (Berkeley Packet Filter) and how it allows running custom code in the Linux kernel. It explains that BPF programs are written in C, compiled to BPF bytecode, loaded into the kernel via the BPF syscall. A key part of the process is the BPF verifier, which checks the safety of programs before injection by analyzing control flow and simulating execution.
DPDK Summit - 08 Sept 2014 - 6WIND - High Perf Networking Leveraging the DPDK... - Jim St. Leger
Thomas Monjalon, 6WIND, presents on where/how to use DPDK, the DPDK ecosystem, and the DPDK.org community.
Thomas is the community maintainer of DPDK.org.
The 7 Deadly Sins of Packet Processing - Venky Venkatesan and Bruce Richardson - harryvanhaaren
The document summarizes seven deadly sins of packet processing that can negatively impact performance:
1) Unpredictable branches that confuse the branch predictor. Code should guide the compiler on likely/unlikely cases.
2) Incorrect prefetching that pulls in unnecessary data or data needed by other cores, adding overhead. Hardware prefetchers often help but can also share cache lines inadvertently.
3) Per-packet operations like memory I/O and atomics that have overhead magnified at the per-packet level.
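Sin 3 in particular suggests amortizing shared-state updates across a batch rather than paying the cost once per packet. A minimal sketch, with invented names, contrasts the two approaches:

```python
# Sketch of amortizing per-packet overhead: instead of taking a lock and
# touching shared state for every packet, accumulate locally and update
# shared counters once per batch. Names are illustrative, not from the talk.
import threading

stats_lock = threading.Lock()
stats = {"packets": 0, "bytes": 0}

def process_per_packet(packets):
    for p in packets:
        with stats_lock:              # per-packet lock/atomic cost
            stats["packets"] += 1
            stats["bytes"] += len(p)

def process_batched(packets):
    n, total = 0, 0
    for p in packets:                 # touch only local variables
        n += 1
        total += len(p)
    with stats_lock:                  # one shared-state update per batch
        stats["packets"] += n
        stats["bytes"] += total

process_batched([b"ab", b"cde"])      # two packets, five bytes total
```

The same pattern applies to memory-mapped I/O and atomics: any fixed cost paid per packet is multiplied by millions of packets per second, so moving it to per-batch granularity pays off quickly.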
This presentation gives a quick introduction to Slurm, the scheduler that runs programs (scripts) in HPC. It is targeted at audiences who are new to Lawrencium or who want to learn a few more things about troubleshooting their jobs.
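A minimal Slurm batch script, as a hedged example (the partition and account names are placeholders for whatever your site uses), might look like:

```shell
#!/bin/bash
#SBATCH --job-name=hello          # job name shown by squeue
#SBATCH --partition=lr4           # cluster partition (placeholder name)
#SBATCH --account=my_account      # charge account (placeholder)
#SBATCH --ntasks=1                # number of tasks
#SBATCH --time=00:05:00           # wall-clock limit hh:mm:ss

echo "Running on $(hostname)"
```

Submit it with `sbatch job.sh` and inspect the queue with `squeue -u $USER`.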
The document discusses strategies for improving application performance on POWER9 processors using IBM XL and open source compilers. It reviews key POWER9 features and outlines common bottlenecks like branches, register spills, and memory issues. It provides guidelines on using compiler options and coding practices to address these bottlenecks, such as unrolling loops, inlining functions, and prefetching data. Tools like perf are also described for analyzing performance bottlenecks.
Security researchers have limited options when it comes to debuggers and dynamic binary instrumentation tools for ARM-based devices. Hardware-based solutions can be expensive or destructive, while software tools are often restricted to user mode. Presented at REcon 2016, this presentation explores a common but often ignored feature of the ARM debug architecture in search of other options. Digging deeper into this hardware component reveals many interesting use-cases for researchers ranging from debugging and instrumentation to building a novel rootkit.
Learnings from the Field. Lessons from Working with Dozens of Small & Large D... - HostedbyConfluent
- Upgrades should be done often to get bug fixes and improvements, following the upgrade guide carefully. Start with a healthy cluster and upgrade components outward from Zookeeper to Kafka brokers to clients. Don't rush the process or have any unresolved partition reassignments.
- Collect JMX metrics to monitor the cluster as outages can be prolonged without visibility. The Kafka defaults are suitable for single node deployments but replication factor, threads, and broker configuration should be tuned for larger clusters.
- Quotas like replication throttling and bandwidth/request limits per client or topic should be used to protect the cluster and clients. Log files should separate each component and be retained for a few days. Consider multiple clusters by SLA
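As an illustration of the quota advice above, replication throttling can be set through Kafka's dynamic broker configuration; the values below are examples only, not recommendations:

```properties
# Dynamic broker configs for replication throttling (bytes/sec; example values)
leader.replication.throttled.rate=10485760
follower.replication.throttled.rate=10485760
# Per-client default produce/consume quotas are set similarly via
# kafka-configs.sh, e.g. producer_byte_rate=1048576, consumer_byte_rate=2097152
```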
The document describes an IBM workshop on CAPI and OpenCAPI technologies. It provides an overview of FPGA acceleration using SNAP, including how SNAP simplifies FPGA programming using a C/C++ based approach. Examples of use cases for FPGA acceleration like video processing and machine learning inference are also presented.
Netronome's half-day tutorial on host data plane acceleration at ACM SIGCOMM 2018 introduced attendees to models for host data plane acceleration and provided an in-depth understanding of SmartNIC deployment models at hyperscale cloud vendors and telecom service providers.
Presenter Bios
Jakub Kicinski is a long term Linux kernel contributor, who has been leading the kernel team at Netronome for the last two years. Jakub’s major contributions include the creation of BPF hardware offload mechanisms in the kernel and bpftool user space utility, as well as work on the Linux kernel side of OVS offload.
David Beckett is a Software Engineer at Netronome with a strong technical background in computer networks, including academic research on DDoS. David has expertise in Linux architecture and computer programming. He holds a Master's degree in Electrical and Electronic Engineering from Queen's University Belfast, where he continues as a PhD student studying emerging application-layer DDoS threats.
Вячеслав Блинов «Java Garbage Collection: A Performance Impact» - Anna Shymchenko
This document discusses Java garbage collection and provides an overview of common GC algorithms, their performance impacts, and basic tuning strategies. It describes how the generational heap is divided and explains that GC pauses can significantly impact performance. Different algorithms like the serial, parallel, CMS and G1 collectors are introduced along with considerations for choosing a collector based on heap size, CPU usage, and pause requirements. Guidelines are provided for sizing the heap and generations as well as enabling adaptive sizing.
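A hedged example of basic tuning along these lines: choosing the G1 collector with a pause-time goal, and fixing the heap bounds to avoid resize pauses (all sizes and values are placeholders, not recommendations):

```shell
# Illustrative JVM flags: G1 with a pause-time goal, equal min/max heap,
# and GC logging enabled (JDK 9+ unified logging)
java -Xms4g -Xmx4g \
     -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
     -Xlog:gc \
     -jar app.jar
```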
2009-01-28 DOI NBC Red Hat on System z Performance Considerations - Shawn Wells
Presented with the U.S. Department of the Interior, National Business Center. DOI NBC offered a for-fee Linux on System z to the U.S. Government. This presentation steps through performance management considerations, including: FCP/SCSI single path vs multipath LMV; filesystem striping; crypto express2 accelerator (CEX2A) SSL handshakes; cryptographic performance (WebSEAL SSL Access); and CMM1 & CMMA.
Experiences building a distributed shared log on RADOS - Noah Watkins - Ceph Community
This document summarizes Noah Watkins' presentation on building a distributed shared log using Ceph. The key points are:
1) Noah discusses how shared logs are challenging to scale due to the need to funnel all writes through a total ordering engine. This bottlenecks performance.
2) CORFU is introduced as a shared log design that decouples I/O from ordering by striping the log across flash devices and using a sequencer to assign positions.
3) Noah then explains how the components of CORFU can be mapped onto Ceph, using RADOS object classes, librados, and striping policies to implement the shared log without requiring custom hardware interfaces.
4) ZLog is presented
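The CORFU idea of decoupling ordering from I/O can be sketched as a toy in Python (in-memory dicts stand in for flash devices, and all names are illustrative; real CORFU clients talk to the sequencer and storage devices over the network):

```python
# Toy CORFU-style shared log: a sequencer hands out globally ordered
# positions without doing any I/O, and writes are striped across
# "devices" by position. Illustration only.
import itertools

class Sequencer:
    def __init__(self):
        self._counter = itertools.count()
    def next_position(self):
        return next(self._counter)          # total order, no data movement

class StripedLog:
    def __init__(self, sequencer, num_devices):
        self.seq = sequencer
        self.devices = [dict() for _ in range(num_devices)]
    def append(self, data):
        pos = self.seq.next_position()      # ordering decided here
        dev = self.devices[pos % len(self.devices)]
        dev[pos] = data                     # I/O happens independently
        return pos
    def read(self, pos):
        return self.devices[pos % len(self.devices)].get(pos)

log = StripedLog(Sequencer(), num_devices=2)
positions = [log.append(b) for b in (b"a", b"b", b"c")]
```

Because the sequencer only assigns positions, the data path scales with the number of devices rather than funneling through a single ordering engine.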
Trying and evaluating the new features of GlusterFS 3.5 - Keisuke Takahashi
My presentation in LinuxCon/CloudOpen Japan 2014.
Only a few days have passed since GlusterFS 3.5 was released, so feel free to correct me if you find any mistakes or misunderstandings. Thanks.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/07/efficiently-map-ai-and-vision-applications-onto-multi-core-ai-processors-using-cevas-parallel-processing-framework-a-presentation-from-ceva/
Rami Drucker, Machine Learning Software Architect at CEVA, presents the “Efficiently Map AI and Vision Applications onto Multi-core AI Processors Using CEVA’s Parallel Processing Framework” tutorial at the May 2023 Embedded Vision Summit.
Next-generation AI and computer vision applications for autonomous vehicles, cameras, drones and robots require higher-than-ever computing power. Often, the most efficient way to deliver high performance (especially in cost- and power-constrained applications) is to use multi-core processors. But developers must then map their applications onto the multiple cores in an efficient manner, which can be difficult. To address this challenge and streamline application development, CEVA has introduced the Architecture Planner tool as a new element in CEVA’s comprehensive AI SDK.
In this talk, Drucker shows how the Architecture Planner tool analyzes the network model and the processor configuration (number of cores, memory sizes), then automatically maps the workload onto the multiple cores in an efficient manner. He explains key techniques used by the tool, including symmetrical and asymmetrical multi-processing, partition by sub-graphs, batch partitioning and pipeline partitioning.
Testing Persistent Storage Performance in Kubernetes with Sherlock - ScyllaDB
Understanding your Kubernetes storage capabilities is important for running a proper cluster in production. In this session I will demonstrate how to use Sherlock, an open source platform written to test persistent NVMe/TCP storage in Kubernetes, either via synthetic workloads or via a variety of databases, all easily done and summarized to give you an estimate of the IOPS, latency, and throughput your storage can provide to the Kubernetes cluster.
LPAR2RRD is a free performance monitoring and capacity planning tool.
The presentation is from COMON, IBM's annual meeting with its customers in the Czech Republic.
Ceph at Work in Bloomberg: Object Store, RBD and OpenStack - Red_Hat_Storage
Bloomberg's Chris Jones and Chris Morgan joined Red Hat Storage Day New York on 1/19/16 to explain how Red Hat Ceph Storage helps the financial giant tackle its data storage challenges.
The document discusses administering parallel execution in Oracle databases. It describes how parallel query uses slave processes to perform work across instances, and how the placement of slaves can be controlled using services or parallel instance groups. It provides an example execution plan showing how slaves perform different tasks like scanning and sorting. It also covers best practices, new features in Oracle 11g like parallel statement queueing, and how parallel DML works.
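As a hedged illustration of requesting parallel slaves for a single statement (the table name is a placeholder), a PARALLEL hint sets the degree for one scan:

```sql
-- Ask for degree 4 on this scan; the slave processes doing the work
-- can then be observed in v$px_session.
SELECT /*+ PARALLEL(t 4) */ COUNT(*)
FROM   big_table t;
```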
Optimizing Servers for High-Throughput and Low-Latency at Dropbox - ScyllaDB
I'm going to discuss the efficiency/performance optimizations of different layers of the system. Starting from the lowest levels like hardware and drivers: these tunings can be applied to pretty much any high-load server. Then we’ll move to Linux kernel and its TCP/IP stack: these are the knobs you want to try on any of your TCP-heavy boxes. Finally, we’ll discuss library and application-level tunings, which are mostly applicable to HTTP servers in general and nginx/envoy specifically.
For each potential area of optimization I’ll try to give some background on latency/throughput tradeoffs (if any), monitoring guidelines, and, finally, suggest tunings for different workloads.
Also, I'll cover more theoretical approaches to performance analysis and the newly developed tooling like `bpftrace` and new `perf` features.
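A few of the kernel-level knobs alluded to, as illustrative examples only (values depend entirely on the workload; measure before and after changing them):

```shell
# Illustrative Linux TCP/IP tunings for high-throughput servers
sysctl -w net.core.somaxconn=1024                 # deeper accept backlog
sysctl -w net.ipv4.tcp_congestion_control=bbr     # BBR congestion control
sysctl -w net.core.rmem_max=16777216              # max socket receive buffer
sysctl -w net.core.wmem_max=16777216              # max socket send buffer
```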
This document discusses capacity planning tools LPAR2RRD and STOR2RRD for monitoring IBM Power Systems. It provides an introduction to the tools, how they can be used to monitor CPU, memory, networking and storage capacity and utilization. Specific features of LPAR2RRD are described like monitoring CPU usage of logical partitions and supporting IBM virtualization technologies. The business model is also briefly covered, noting the tools are free but support subscriptions provide additional features.
This document summarizes a presentation on improvements to RMF's Parallel Sysplex instrumentation over recent years. Some key points covered include:
1) Structure-level CPU reporting in SMF 74-4 allows for capacity planning at the individual structure level and examining CPU consumption of different structures.
2) Enhancements help match CPU data between SMF 70-1 and 74-4 to get a complete picture of Coupling Facility CPU usage.
3) Additional instrumentation provides useful information on topics like structure duplexing performance, XCF traffic patterns, and Coupling Facility link details.
Snap ML is a machine learning framework for fast training of generalized linear models (GLMs) that can scale to large datasets. It uses multi-level parallelism across nodes and GPUs. Snap ML implementations include snap-ml-local for single nodes, snap-ml-mpi for multi-node HPC environments, and snap-ml-spark for Apache Spark clusters. Experimental results show Snap ML can train a logistic regression model on a 3TB Criteo dataset within 1.5 minutes using 16 GPUs.
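Snap ML itself exposes a scikit-learn-like API with GPU and multi-node parallelism; the underlying GLM training step it accelerates can be sketched in plain NumPy (toy data, no parallelism, batch gradient descent rather than Snap ML's actual solver):

```python
# Minimal GLM training sketch: logistic regression fit with batch
# gradient descent on a tiny, invented dataset.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # gradient of the average log-loss with respect to the weights
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

# Tiny linearly separable example: first column is the intercept term
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = fit_logistic(X, y)
preds = (sigmoid(X @ w) > 0.5).astype(int)
```

Frameworks like Snap ML scale this same objective across GPUs and nodes, which is how training on a 3 TB dataset becomes feasible in minutes.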
Similar to Como obter o melhor do Z por Gustavo Fernandes Araujo (Itau Unibanco) (20)
Presentation given at the June 26, 2019 meeting of the Atlassian User Group São Paulo. It demonstrates how the REST interface of the Atlassian platform products can serve as an alternative to apps and to manual tasks performed through the web interface. It closes with an example in which choosing the REST interface saved time and money and got the job done.
The document discusses modern enterprise computing technologies for business workloads. It describes how flexible infrastructure, non-disruptive scalability, business continuity, and operational efficiency enable modern corporate applications and freedom through open standards. Security for sensitive data is essential in this environment.
This document discusses novelties in z/OS 2.4 and z14 GA2, including enhancements to application development and cloud computing. It introduces buzzwords from the past like OO and ERP that are now outdated, and new buzzwords like cloud, analytics, and microservices. Specific z/OS and hardware enhancements described include 25GbE for OSA and RoCE, crypto enhancements, dynamic I/O configuration for standalone CFs, asynchronous cache cross-invalidation, and HMC enhancements. The document ends with a debate between the presenters on whether new application development approaches will deliver continuous availability, data integrity and performance in production environments.
In the DevOps era, the operation of cloud services is increasingly automated to meet emerging business demands that require fast response to change and the ability to scale. Automations such as CI/CD (Continuous Integration and Continuous Delivery) largely cover scenarios where manual deployment work must be reduced or eliminated altogether through automated steps mediated by a robot agent. On the other hand, a minimum of infrastructure is required as a prerequisite, which forces teams to invest time and effort in creating these environments; in some cases the complexity is multiplied by the use of distinct cloud computing services, i.e., multi-cloud. "Infrastructure as Code" is an emerging topic that treats infrastructure as versioned code, a project asset, with the goal not only of reducing operational effort but also of sharing knowledge and engaging team members. This presentation introduces "Infrastructure as Code" and its potential for multi-cloud scenarios.
Marcus Vinicius Bittencourt is a Data Platform and SQL Server specialist with four MVP awards. The document discusses cybercrime and security breaches, and demonstrates how an attacker can gain access to a corporate network and exploit SQL Server security flaws to escalate privileges.
The document describes a Brazilian bank's journey to define its cloud strategy. The process involved technical and strategic analyses of the applications, cloud providers, and migration scenarios, with the goal of identifying the applications best suited to the cloud and the best migration paths. The resulting strategy prioritizes the cloud for new projects and estimates that up to 40% of existing applications could migrate to the cloud.
The document provides an overview of the field of Data Science, discussing how technology is transforming professions and the need for continuous upskilling. It also explains key Data Science concepts such as descriptive, diagnostic, predictive, and prescriptive analytics and how they are applied using algorithms and machine learning.
The document discusses several new capabilities and enhancements being introduced in z/OS V2.4, including z/OS Container Extensions to enable running Linux containers alongside z/OS applications, 25GbE support for OSA and RoCE, asynchronous cache cross-invalidation to improve performance, and policies to simplify customizing JES2 without assembler exits. It also mentions continued efforts to drive pervasive encryption and support for additional data analytics capabilities.
We usually pick a baseline measure, such as the peak of the hourly average or the peak of a given period, to show the month-by-month evolution of past processor consumption and to project future consumption up to a given date. This measure alone, however, may not explain the growth seen in certain months. The proposal here is to use the evolution of the average daily consumption profile, analyzing the changes from one month to the next and across years, which can be applied to a partition, a machine, or a Sysplex. With this view it is possible to identify growth trends by time of day more quickly and to adjust consumption so as to minimize recurring peaks.
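The average-daily-profile approach can be sketched as follows (hour buckets and sample values are invented for illustration):

```python
# Collapse a month of hourly CPU-consumption samples into a 24-point
# average day, then diff two months hour by hour to see where growth
# concentrates. All data here is made up.
from collections import defaultdict

def daily_profile(samples):
    """samples: list of (hour_of_day, consumption) over a whole month."""
    totals, counts = defaultdict(float), defaultdict(int)
    for hour, value in samples:
        totals[hour] += value
        counts[hour] += 1
    return {h: totals[h] / counts[h] for h in totals}

def profile_delta(month_a, month_b):
    """Hour-by-hour growth from month_a to month_b."""
    pa, pb = daily_profile(month_a), daily_profile(month_b)
    return {h: pb[h] - pa[h] for h in pa if h in pb}

jan = [(9, 100.0), (9, 120.0), (14, 200.0)]
feb = [(9, 110.0), (9, 130.0), (14, 260.0)]
delta = profile_delta(jan, feb)
```

Here the delta immediately shows that growth is concentrated at 14:00, something a single monthly-peak number would hide.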
The document discusses good programming practices to improve code quality and performance. It covers topics such as comments, variable names, indentation, testing, and the optimized use of loops, arrays, and functions. It argues that following these recommended practices makes it possible to develop software that is more efficient and easier to maintain.
The document discusses asset tokenization and new lines of business. It covers how companies such as Microsoft are betting on asset tokenization using blockchain and how insurers see major opportunities in storing and covering cryptocurrencies. It also mentions how tokenization can help fight global poverty and ocean pollution.
The Holy Grail of IoT is the ability to easily distribute intelligence between the cloud and the devices (the edge). Discover how edge innovations will help you find and certify secure hardware, profit from these modules, and build edge-compatible IoT solutions. Also see how to develop, build, and deploy scalable, repeatable solutions leveraging innovations in Vision, Voice, IoT Edge, and Cognitive Services to improve IoT solutions.
The document discusses the Eccox Application for Parallel Testing (APT) solution from Eccox Technology. In three sentences: APT provides isolated test environments on the mainframe to enable parallel testing, cloning resources such as databases and files. This lets multiple users run tests simultaneously without conflicts, reducing infrastructure and man-hour costs. The solution also generates synthetic test data to support isolated test scenarios.
The document describes how Banco de Brasília implemented a dynamic balancing solution to improve capacity and performance management of its IBM z/OS mainframe environment. The solution optimized resource usage and reduced software costs while deferring hardware upgrades, with a return on investment in five months.
This talk will cover the basics [hence the 1.01 in the title!] of electricity and electronics, focusing on: conductors and insulators, relays, thermionic valves [rectification and amplification started there...], flip-flops, semiconductor crystal doping, diodes, transistors, CMOS, SRAM, and DRAM. The use of these basic components in sequential and combinational circuits will be a subject for future study.
The document discusses pervasive encryption on IBM Z, which enables transparent, bulk encryption of data at rest to simplify data protection and regulatory compliance. Pervasive encryption automatically encrypts all data at rest using system-managed keys, transparently to applications. This protects data at several levels, including VSAM, DB2, IMS, and logs, among others, without impacting performance.
In the new IBM z mainframes, the CPU chip technology has grown more complex, especially by incorporating layers of cache memory. A new term was introduced, Relative Nest Intensity (RNI), indicating the level of activity in the memory hierarchy. The most performance-sensitive aspect of the memory hierarchy is the distribution of activity across the shared caches and memory: the higher the RNI, the deeper into the memory hierarchy the processor must travel to retrieve a workload's instructions and data. We will discuss how performance tuning in CICS can reduce the influence of RNI.
The document discusses Infrastructure as Code (IaC) and how it can be used to automate cloud infrastructure in a secure and consistent way. It explains the benefits of IaC, such as increasing team productivity and enabling continuous change and incremental improvement. It also covers challenges such as configuration drift, and popular IaC tools such as Terraform.
Inspired by the European GDPR (General Data Protection Regulation), which was put into practice by the community at the end of May 2018, the LGPD, already in force in Brazil with an implementation deadline of August 2020, aims to strengthen the legal protection of individuals' personal data and mitigate abuses of these powerful and valuable assets. In this presentation we will cover a method for implementing the LGPD in Brazilian companies and the main points of compliance with its requirements.
This document describes the internal components of the z14 processor. It discusses the instruction fetch, parsing, decode, and issue stages, as well as the fixed-point, load/store, vector, and floating-point execution units. The document also provides diagrams illustrating the flow of data and instructions among these components.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI - Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! - SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
20 Comprehensive Checklist of Designing and Developing a Website - Pixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and I will share these foundational concepts to build on:
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
20240609 QFM020 Irresponsible AI Reading List May 2024
How To Get The Most from IBM Z, by Gustavo Fernandes Araujo (Itau Unibanco)
Copying or disclosure prohibited without written permission from CMG Brasil.
Gustavo Fernandes Araujo
Capacity and Performance Team
ITAU UNIBANCO BANK
How To Get The Most from IBM Z
System Design
Real User Experience
01 Introduction
02 Tools
03 Capacity Planning Evaluation
04 Conclusions
ABOUT ME_
2012 – Graduation in Materials Engineering – University of Sao Paulo
2013 – 2015 – Intellectual Property Consultant
2015 – now – Mainframe Capacity and Performance Analyst at ITAU UNIBANCO
> Data center migrations
> Technology migrations through z generations
> WLM analysis
> Performance analysis
2018 – Postgraduate studies in Data Analysis and Data Mining – FIA
PRESENTATIONS_
2017 – Planning and Performance Study in the Consolidation of Mainframe CECs
> May, CMG IMPACT, Sao Paulo, Brazil – Best Paper CMG Brazil
> August, IBM STU, Sao Paulo, Brazil
> November, CMG IMPACT, New Orleans, USA
2018 – Mainframe Performance Review
> May, CMG IMPACT, Sao Paulo, Brazil
2019 – How To Get The Most from IBM Z System Design – Real User Experience
> February, SHARE, Phoenix, USA
> May, CMG IMPACT, Sao Paulo, Brazil
2019 – Real Cases Performance Evaluation of Z Generations
> February, SHARE, Phoenix, USA
ABOUT ITAU UNIBANCO_
> 49.7 M retail clients
> 32.4 M credit card accounts
> 28.1 M debit card accounts
> 100,335 employees
> 4,940 bank agencies and banking service posts
> 48,476 ATMs
OBJECTIVE_
Present and discuss the Cross Drawer effect in the Mainframe and how it can drive the Capacity Planning of your company.
Cross Drawer_
[Diagram: LPAR 1, LPAR 2, LPAR 3, and CF 1 spread across Drawers 1–4]
> Cross Drawer occurs when PR/SM is required to dispatch the logical processors (GCPs + zIIPs) of an LPAR across more than one Drawer.
> The limit on the number of physical processors per Drawer depends on the hardware model.
> A loss of performance is expected when an LPAR is allocated across more than one Drawer, due to the more intense use of shared caches and central memory.
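As a rough illustration of the first bullet, a minimal sketch of the single-drawer test. The function name and the "fits in the largest drawer" rule are illustrative assumptions for this presentation's examples, not the real PR/SM placement algorithm:

```python
# Hypothetical sketch: does an LPAR's set of logical processors (LCPs)
# force a Cross Drawer placement? Assumption: Cross Drawer happens when
# no single drawer has enough physical PUs to hold all of the LPAR's LCPs.

def crosses_drawer(lpar_lcps: int, drawer_pus: list[int]) -> bool:
    """True when the LPAR cannot fit inside any one drawer."""
    return lpar_lcps > max(drawer_pus)

# z14 drawer sizes used later in the presentation: 42 / 43 / 43 / 42 PUs
z14_drawers = [42, 43, 43, 42]
print(crosses_drawer(44, z14_drawers))  # True  -> Cross Drawer
print(crosses_drawer(30, z14_drawers))  # False -> fits in one drawer
```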
Cross Drawer Inside z14_
[Diagram: z14 drawer internals — two CP logical clusters (CP chips with SC chips), memory DIMMs attached to each CP, and the A-Bus interconnect between clusters]
z13 and z14 Capacity Comparison_
> Maximum capacity per CEC: z13 = 111,556 MIPS; z14 = 146,462 MIPS (+31% scalability)
> Maximum capacity of a single Drawer: z13 = 36 PUs, 37,973 MIPS; z14 = 43 PUs, 49,210 MIPS (+30% scalability)
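The scalability percentages can be checked with a few lines of arithmetic (MIPS values taken from the slide):

```python
# Verifying the z13 -> z14 growth figures quoted in the presentation.
z13_cec, z14_cec = 111_556, 146_462        # maximum MIPS per CEC
z13_drawer, z14_drawer = 37_973, 49_210    # maximum MIPS per single drawer

cec_growth = (z14_cec / z13_cec - 1) * 100
drawer_growth = (z14_drawer / z13_drawer - 1) * 100
print(f"CEC:    +{cec_growth:.0f}%")     # +31%
print(f"Drawer: +{drawer_growth:.0f}%")  # +30%
```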
Logical Processors Allocation_
If all my LPARs have fewer logical processors than the number of physical processors in the drawer, can the Cross Drawer event still occur?
Logical Processors Allocation_
Scenario 1 – LPAR 1 with 44 LCPs (GCP + zIIP) in a z14 (Drawers with 42, 43, 43, and 42 PUs): the number of logical processors of the LPAR is higher than the number of processors of the Drawer. PR/SM places 43 LCPs (VH) in one Drawer and 1 LCP (VH) in another: LCP Cross Drawer.
Scenario 2 – LPAR 2 with 30 logical GCPs in a z14: the number of logical processors of the LPAR is smaller than the number of processors of the Drawer, all LCPs are VH, and there is no other LPAR in the CEC. All 30 LCPs (VH) fit in one Drawer: no Cross Drawer.
Logical Processors Allocation_
Scenario 3 – LPAR 1: 20 LCPs VH + 2 VM at 30%; LPAR 2: 25 LCPs VH + 2 VM at 70%: the number of logical processors of each LPAR is smaller than the number of processors of the Drawer, but the LPARs share 2 Vertical Medium processors and the total does not fit in one Drawer: Vertical Medium LCP Cross Drawer.
Scenario 4 – LPAR 1: 10 LCPs VH + 2 VM at 30%; LPAR 2: 15 LCPs VH + 2 VM at 70%: the number of logical processors of each LPAR is smaller than the number of processors of the Drawer, the LPARs share 2 Vertical Medium processors, and the total fits in one Drawer: no Cross Drawer.
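The four scenarios can be summarized with a toy fitting rule. The assumption here (for illustration only, not the real PR/SM heuristics) is that Cross Drawer is avoided when the Vertical High LCPs of the LPARs involved, plus any shared Vertical Medium processors, fit in the largest z14 drawer:

```python
# Illustrative sketch of the four placement scenarios on a z14,
# whose largest drawer in these examples has 43 PUs.
DRAWER_PUS = 43

def fits_one_drawer(vh_lcps: list[int], shared_vm: int = 0) -> bool:
    """Do all VH LCPs plus shared VM processors fit in one drawer?"""
    return sum(vh_lcps) + shared_vm <= DRAWER_PUS

print(fits_one_drawer([44]))          # Scenario 1: False -> LCP Cross Drawer
print(fits_one_drawer([30]))          # Scenario 2: True  -> no Cross Drawer
print(fits_one_drawer([20, 25], 2))   # Scenario 3: False -> VM Cross Drawer
print(fits_one_drawer([10, 15], 2))   # Scenario 4: True  -> no Cross Drawer
```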
01 Introduction
02 Tools
03 Capacity Planning Evaluation
04 Conclusions
Tools_
How do you know the PR/SM placement of the logical processors?
Triggers_
Triggers that make PR/SM change the processor allocation:
> Configuration changes
- number of physical processors of the CEC
- number of logical processors of the LPAR
- weight of the logical processors of the LPAR
> Vary online/offline in z/OS
> IPL
> Soft Capping
Tools_
> HMC View (LPAR Resource Assignment Task) – z14
> LPAR Dump – z13 and z14
> WLM Topology Report – IBM as-is tool
HMC - LPAR Resource Assignment Task_
[Screenshot: HMC View on z14 showing the processor assignment of CF1, LPAR1, LPAR2, and LPAR3, with the CEC serial number]
HMC - LPAR Resource Assignment Task_
> The operator can visualize the allocation of the processors by himself in the z14 HMC View.
> Nodes 1 and 2, 3 and 4, 5 and 6, and 7 and 8 are in the same Drawer.
> The method is documented in the TechDoc "IBM Z: Accessing the LPAR Resource Assignment Task"
(https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102754)
HMC - LPAR Resource Assignment Task_
LPAR Dump (z13 and z14)
> The operator must generate the LPAR Dump and send it to the IBM Lab for analysis.
> The IBM Lab processes the dump and sends it formatted to IBM Support.
> It provides more information than the LPAR Resource Assignment Task of the HMC on z14.
> It is always used in problem determination.
01 Introduction
02 Tools
03 Capacity Planning Evaluation
04 Conclusions
Cross Drawer Impact_
Do you know the impact of Cross Drawer in your mainframe environment? And how may it drive the Capacity Planning?
Performance Metrics_
Traditional metrics:
> CPU/Execution
> CPI (Cycles Per Instruction)
> L1MP (Level 1 cache Miss Percentage)
> RNI (Relative Nest Intensity)
> Performance Index
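For readers less familiar with these counters, here is a hedged sketch of how two of them are derived from cycle and instruction totals (as reported, for example, in SMF hardware-counter records). The input numbers are invented for illustration:

```python
# Sketch of two of the traditional metrics listed above.

def cpi(cycles: int, instructions: int) -> float:
    """Cycles Per Instruction: average processor cycles spent per instruction."""
    return cycles / instructions

def l1mp(l1_misses: int, instructions: int) -> float:
    """Level 1 cache Miss Percentage: L1 misses per 100 instructions."""
    return 100.0 * l1_misses / instructions

print(cpi(5_000_000, 2_000_000))  # 2.5 cycles per instruction
print(l1mp(80_000, 2_000_000))    # 4.0 (% of instructions missing L1)
```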
Timeline of Analysis_
ITAU UNIBANCO: configurations with 6 processors in Cross Drawer.
Performance Evaluation_
> No performance difference was identified between LPAR A (without Cross Drawer) and LPAR B (with Cross Drawer) in the Transaction Manager performance.
> On the other hand, the captured ratio of LPAR B was 5 p.p. worse than that of LPAR A, indicating a bigger overhead in the LPAR with Cross Drawer.
Timeline of Analysis_
IBM Resiliency Review: study the impact of Cross Drawer versus the creation of new LPARs; use of the Vertical High processor configuration.
IBM Resiliency Review_
Recommendations
> Study the impact of the Cross Drawer configuration versus adding new LPARs.
> Contain LPARs to a single z System Drawer with a Vertical High processor configuration as much as possible, for best efficiency.
Benefits
> More effective use of CPU cycles.
> Lower and more predictable transaction response time.
> Lower cost to process a transaction.
Timeline of Analysis_
ITAU UNIBANCO: configurations with 6 processors in Cross Drawer showed significant performance loss and an increase in the response time of transactions.
Performance Evaluation_
Analysis
> Significant performance loss and an increase in transaction response time were identified in the LPAR with 6 CPs in Cross Drawer.
> The captured ratio deteriorated abruptly, from 85% to 66%, in a 20-minute average interval.
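A minimal sketch of how a captured ratio is computed: it is the share of the LPAR's CPU time that is attributed ("captured") to workloads, the remainder being uncaptured system overhead. The CPU-time figures below are invented, chosen only to mirror the 85% to 66% drop reported above:

```python
# Captured ratio sketch with illustrative (made-up) CPU-time values.

def captured_ratio(captured_cpu_s: float, total_cpu_s: float) -> float:
    """Percentage of the LPAR's total CPU time captured by workloads."""
    return 100.0 * captured_cpu_s / total_cpu_s

print(captured_ratio(850.0, 1000.0))  # 85.0 -> before
print(captured_ratio(660.0, 1000.0))  # 66.0 -> after, with 6 CPs in Cross Drawer
```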
Timeline of Analysis_
IBM: the Cross Drawer for ITAU Unibanco should always be avoided.
IBM Statements_
The IBM analysis concluded that, for ITAU UNIBANCO's current environment, the Cross Drawer should always be avoided!
Driver to Capacity Planning_
If the LPARs of your data center can only grow up to the capacity limit imposed by the Drawer size, this fact should make you reevaluate your Capacity Planning.
Driver to Capacity Planning_
For illustrative purposes, the following examples compare the capacity of a scenario with 6 CPs in Cross Drawer and a scenario with no Cross Drawer. All the following examples evaluate capacity with MIPS values from the IBM LSPR tables.
Driver to Capacity Planning_
> Maximum LPAR capacity in z13: 43,115 MIPS (6 CPs in Cross Drawer) vs 37,972 MIPS (no Cross Drawer): -11%
> Maximum LPAR capacity in z14: 54,881 MIPS (6 CPs in Cross Drawer) vs 49,210 MIPS (no Cross Drawer): -10%
Driver to Capacity Planning_
EXAMPLE 1: Reevaluate the capacity based on usage
Driver to Capacity Planning_
LPAR 1 (z13), using 35,000 MIPS:
> With 6 CPs in Cross Drawer: 81% of the total possible capacity (43,115 MIPS)
> With no CPs in Cross Drawer: 92% of the total possible capacity (37,972 MIPS)
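The two utilization figures follow directly from the slide's numbers:

```python
# Example 1: LPAR 1 on z13 using 35,000 MIPS, compared against the two
# maximum-capacity figures from the presentation.
usage = 35_000
max_cross_drawer = 43_115    # 6 CPs in Cross Drawer
max_single_drawer = 37_972   # no Cross Drawer

print(f"{100 * usage / max_cross_drawer:.0f}%")   # 81%
print(f"{100 * usage / max_single_drawer:.0f}%")  # 92%
```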
Alternative A – Reduce the consumption_
LPAR 1 (z13), new use of 32,000 MIPS (no CPs in Cross Drawer)
Options should be evaluated to reduce the consumption of the LPAR, or to redistribute the consumption among other LPARs. For example:
> Performance improvements;
> Migration of workloads among LPARs.
Alternative B – Create new LPARs_
The decision to create new LPARs should take into consideration:
> The performance loss with the Cross Drawer;
> The growth expectation for the LPAR.
On the other hand, don't forget to:
> Evaluate the use of other resources (central memory, channels);
> Evaluate the effort, costs, and subsystem specificities;
> Consider the PR/SM overhead with a second LPAR in the CEC.
Driver to Capacity Planning_
EXAMPLE 2: Reevaluate the capacity based on High Availability/Contingency
Reevaluate the Capacity based on High Availability/Contingency_
LPAR 3 (z14): use of 20,000 MIPS. LPAR 4 (z14): use of 20,000 MIPS.
In a situation of High Availability or Contingency, one LPAR in a different CEC will receive the workload of the other LPAR and process the workload of the two LPARs.
Reevaluate the Capacity based on High Availability/Contingency_
With the combined workload of 40,000 MIPS on one LPAR:
> With 6 CPs in Cross Drawer: 93% of the total possible capacity (43,115 MIPS)
> With no CPs in Cross Drawer: 105% of the total possible capacity (37,972 MIPS)
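The contingency arithmetic from the slide, showing that the no-Cross-Drawer configuration would be over capacity once one LPAR absorbs both workloads:

```python
# Example 2: one LPAR absorbs both 20,000 MIPS workloads in a contingency.
combined = 20_000 + 20_000
max_cross_drawer = 43_115    # 6 CPs in Cross Drawer
max_single_drawer = 37_972   # no Cross Drawer

print(f"{100 * combined / max_cross_drawer:.0f}%")   # 93%
print(f"{100 * combined / max_single_drawer:.0f}%")  # 105% -> over capacity
```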
Alternative A – Reduce the consumption_
LPAR 3 (z14): new use of 17,000 MIPS. LPAR 4 (z14): new use of 17,000 MIPS (no CPs in Cross Drawer).
Options should be evaluated to reduce the consumption of the LPARs, or to redistribute the consumption among other LPARs. For example:
> Performance improvements;
> Migration of workloads among LPARs;
> Reevaluate the Contingency/High Availability strategy.
Alternative B – Create new LPARs_
The decision to create new LPARs (LPAR 5 and LPAR 6) should take into consideration:
> The performance loss with the Cross Drawer;
> The growth expectation for the LPARs.
On the other hand, don't forget to:
> Evaluate the use of other resources (central memory, channels);
> Evaluate the effort, costs, and subsystem specificities;
> Consider the PR/SM overhead with a second LPAR in the CEC.
01 Introduction
02 Tools
03 Capacity Planning Evaluation
04 Conclusions
Conclusions_
> The Cross Drawer impact on performance should be evaluated for each specific data center. In general, there will be a performance improvement when processing happens inside one Drawer.
> The limit on processing an LPAR inside just one Drawer may drive decisions to create more LPARs and split the workload among them.
> The driver may be based on the usage of the LPAR and its growth expectation, or take into consideration a Contingency or High Availability situation.
SPECIAL THANKS TO
CAPACITY PLANNING AND PERFORMANCE TEAM
MAINFRAME SUPPORT TEAMS
ITAU UNIBANCO
CAROLINA SOUZA JOAQUIM
IBM SPECIALIST
THANKS FOR YOUR ATTENTION
GUSTAVO-FERNANDES.ARAUJO@ITAU-UNIBANCO.COM.BR