The IBM Smart Analytics Optimizer works by offloading CPU-intensive query processing from DB2 for z/OS to specialized hardware. It defines logical data marts containing related tables and loads them into compressed, memory-resident formats on the accelerator. This provides an order of magnitude performance improvement for queries involving the accelerated tables. The optimizer is transparent to applications and preserves DB2's qualities of service while improving price/performance.
Imex Research Virtualization Executive Summary on Slideshare (M. R. Pamidi, Ph.D.)
This document discusses the drivers for data center virtualization. It notes that legacy systems are difficult to manage, have poor reliability and high management costs. Server utilization rates are often only 5-10% leading to high power and cooling costs. Virtualization can help improve utilization rates and reduce costs by consolidating multiple physical servers onto fewer physical servers. The document outlines how virtualization can be implemented at different levels including the processor, operating system, network and storage. It also discusses how a next generation virtualized data center may integrate automation, standardization, and cloud compatibility.
Striving for excellence is a human trait shared by many, as we all try to be the best that we can in at least one area under our control. Achieving excellence is a little harder to accomplish; it requires an amount of hard work and dedication that only a select few are willing to deliver. Improving on excellence, on the other hand, requires that rare individual who sets his sights on being the best in the world at whatever he attempts and continues to work harder than everyone else, even after he has arrived at the pinnacle of his quest. Individuals like Olympic athletes Michael Phelps and Usain Bolt each set world records (in swimming and track), yet each continues to train even harder to break their own records and reap the rewards of these continuing efforts.
This same quality of continuing to improve on success is an essential requirement for every enterprise data center looking to improve upon the performance of its IT infrastructure, ensure the security and reliability of its environment, and continue to lower the total cost of ownership (TCO) of that infrastructure in the face of increasing demands. The deployment of new applications on new servers and the continuing explosion of data, which tends to be doubling every 12-to-18 months, are putting a strain on the budgets of every enterprise data center around the globe. Programs are being implemented to consolidate and virtualize both servers and storage to reduce the TCO and preserve valuable resources, both human and natural. By reducing the number of physical servers populating the data center, the CIO can reduce the number of systems administrators required to drive the IT infrastructure, as well as reducing the amount of energy necessary to power the data center, and the amount of floor space required to house it. These last two points are especially critical as enterprise data centers approach maximum capacity in both of these categories. In fact, if either is exceeded, the enterprise may be forced to build out a brand new data center at a cost of millions of dollars.
z/OS Small Enhancements - Episode 2014A (Marna Walle)
This presentation covers small enhancements from older z/OS releases. You might have missed little functions that are helpful, but you never knew existed! The content of each of these z/OS Small Enhancements changes every half year (Episode A and Episode B each year).
z/OS Small Enhancements - Episode 2013A (Marna Walle)
This presentation covers small enhancements from older z/OS releases. You might have missed little functions that are helpful, but you never knew existed! The content of each of these z/OS Small Enhancements changes every half year (Episode A and Episode B each year).
Next-Gen Data Center Virtualization: Studies in Implementation (IMEX Research)
This document discusses next-generation data center virtualization through the implementation of virtualization at various levels including the operating system, server, network, and storage. It notes how virtualization can help address challenges with current enterprise IT infrastructures like scalability issues, difficult management, questionable reliability, and high management costs. The document also outlines a vision for next-generation highly virtualized and automated data centers that integrate grids, services, and autonomics in addition to virtualization.
The document provides information about an upcoming webcast on enhancements in z/OS Version 2.1. It begins with disclaimers and contact information for the presenters. It then provides the webcast URL and dates. The remainder of the document outlines key new capabilities in z/OS 2.1 related to performance, scale, availability, security, data serving, and management. These are aimed at helping customers drive business value, achieve superior economics, improve performance and scale, and increase customer satisfaction.
This document provides an overview of the IBM zEnterprise EC12 and BC12 hardware, including their key specifications and features. It describes the EC12 and BC12 systems and chips, I/O management and features, and includes a customer example from Algar Telecom that consolidated over 90 servers onto a single zEnterprise 196 system through virtualization, improving efficiency, reducing costs and maintenance efforts.
Architecting Next Generation Enterprise Network Storage (IMEX Research)
The document discusses strategies for architecting next generation enterprise network storage. It covers managing strategic storage plans, data protection techniques like mirroring and replication, lowering total cost of ownership through tiered storage and standardization, and emerging technologies like continuous data protection and information classification. The overall goal is to provide resilient, cost-effective storage that aligns with business needs and data life cycles.
System z Technology Summit: Streamlining Utilities (Surekha Parekh)
Most DB2 applications are global non-stop, requiring almost 100% accessibility. Availability demands reduce the amount of time available to perform necessary routine tasks, such as utility maintenance on the underlying data and objects stored in DB2 for z/OS that support critical business applications. In addition, companies are looking for ways to streamline DB2 utility processing to maximize system and personnel resources. How valuable would it be to maximize your use of IBM DB2 Utilities Suite for z/OS for both DB2 9 and DB2 10? What if you could establish DB2 utility practices at a company level and know that they would be monitored and adhered to? Do you want to reduce your batch window during utility sort processing to improve availability and performance? How important would it be to run utilities only on objects when and if it's necessary? The answers to these questions and more will be revealed in this session.
These solutions from IBM help customers achieve the highest levels of availability and service. Flash Express strengthens performance during critical times. IBM zAware uses analytics to diagnose issues faster through its self-learning capabilities. Together, they enable exceptional availability and service quality.
z/OS Small Enhancements - Episode 2014B (Marna Walle)
This presentation covers small enhancements from older z/OS releases. You might have missed little functions that are helpful, but you never knew existed! The content of each of these z/OS Small Enhancements changes every half year (Episode A and Episode B each year).
z/OS Small Enhancements - Episode 2015A (Marna Walle)
This presentation covers small enhancements from older z/OS releases. You might have missed little functions that are helpful, but you never knew existed! The content of each of these z/OS Small Enhancements changes every half year (Episode A and Episode B each year).
The document provides an overview of IBM i 7.1 including:
- IBM i 7.1 will deliver major new capabilities for workload optimization, integration with DB2, and resiliency.
- IBM i offers lower total cost of ownership than x86 systems, with costs averaging 41% less than x86/Windows and 47% less than x86/Linux.
- IBM i 7.1 announcement highlights include improvements to workload optimization using SSDs, integration with DB2 including XML and encryption support, high availability using PowerHA, and enhanced systems management capabilities.
z/OS Small Enhancements - Episode 2015B (Marna Walle)
This presentation covers small enhancements from older z/OS releases. You might have missed little functions that are helpful, but you never knew existed! The content of each of these z/OS Small Enhancements changes every half year (Episode A and Episode B each year).
Next generation data centers are moving towards more automated, dense, and efficient infrastructure using blade servers and IP-based networks. Blade servers provide significant cost savings over traditional rack-mounted servers through higher density, lower administration costs, and reduced power and space requirements. Management software is key to provisioning and monitoring resources in these highly virtualized environments. Standards around fabrics, management protocols, and form factors will drive further adoption of blade servers across applications ranging from web infrastructure to high performance computing.
BCBS Minnesota reduced costs by consolidating 140 servers onto a single IBM System z system running Linux virtual servers. This cut TCO over 5 years and reduced server provisioning times by 99% while allowing 97% faster disaster recovery. Running applications in Linux virtual servers on System z provided better performance, reliability, and cost efficiency than their previous Windows/Intel environment.
IBM Z and LinuxONE virtualization technology allows customers to create virtual resources like processors, memory, I/O and networking to help reduce hardware costs and support new workloads. The virtualization is designed into the hardware, firmware and software layers. z/VM is the hypervisor that provides the software virtualization layer and extends hardware capabilities. z/VM helps customers run private clouds more efficiently and respond more quickly by virtualizing resources and allowing dynamic configuration changes without restarts. Customers can interact with the z/VM community through sponsor programs and mailing lists.
Next-Gen Data Center: Improving TCO & ROI in Data Centers Through Virtualizat... (IMEX Research)
This document discusses improving data center efficiency and ROI through virtualization and blade servers. It notes that virtualization allows better utilization of servers and storage, improving scalability and manageability while lowering costs. Adopting blade servers allows for higher density and power/cooling efficiency. The document recommends these strategies to address challenges of rising IT costs, inefficient infrastructure utilization, and improving alignment between IT and business goals.
The document discusses IBM System z processors and how their capabilities have required changes in how CPU management is approached, focusing on features introduced in recent years like zAAP, zIIP, defined capacity limits, blocked workloads, and z10 HiperDispatch which optimizes cache usage by consistently dispatching work to the same physical CPU. It also provides guidance on how to evolve CPU reporting to account for these new capabilities and their instrumentation in SMF records and RMF.
New system information panels in SDSF on z/OS 2.1 and 2.2 allow users to easily view system configuration information such as:
- System parameters (SYS)
- Link list data sets (LNK)
- Link pack data sets (LPA)
- APF authorized libraries (APF)
- Page data sets (PAG)
- Parmlib data sets (PARM)
The SDSFAUX address space must be started to access these panels, which show consolidated data and allow searching within data sets.
Tools for developing and monitoring SQL in DB2 for z/OS (Surekha Parekh)
Building optimal applications against DB2 for z/OS is often difficult. Because most of the real issues relate to maintenance and changes in the surrounding parameters, it's necessary to define a methodology that enables performance monitoring as well as optimization of existing code.
By knowing more about how the SQL in your shop performs, a lot can be done to anticipate future problems, such as identifying performance bottlenecks before they impact users and add to IT costs. By building a performance history table, where you monitor performance as time passes and as changes in the environment occur, it's possible to be proactive and optimize before the costs turn red.
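The performance-history idea above can be sketched in a few lines of Java. This is a minimal illustration of the pattern only (statement IDs, the record/baseline methods, and the 20% regression threshold used below are invented for this sketch, not part of any DB2 monitoring API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical performance-history tracker: record elapsed times per SQL
// statement over time, then flag statements whose latest run regresses
// against their historical baseline.
public class QueryHistory {
    private final Map<String, List<Double>> history = new HashMap<>();

    // Record one observed elapsed time (ms) for a statement.
    public void record(String stmtId, double elapsedMs) {
        history.computeIfAbsent(stmtId, k -> new ArrayList<>()).add(elapsedMs);
    }

    // Baseline = average of all observations except the latest one.
    public double baseline(String stmtId) {
        List<Double> h = history.get(stmtId);
        double sum = 0;
        for (int i = 0; i < h.size() - 1; i++) sum += h.get(i);
        return sum / (h.size() - 1);
    }

    // Flag a regression when the latest run exceeds baseline by thresholdPct.
    public boolean isRegression(String stmtId, double thresholdPct) {
        List<Double> h = history.get(stmtId);
        if (h == null || h.size() < 2) return false;
        double latest = h.get(h.size() - 1);
        return latest > baseline(stmtId) * (1 + thresholdPct / 100.0);
    }

    public static void main(String[] args) {
        QueryHistory qh = new QueryHistory();
        qh.record("Q1", 100); qh.record("Q1", 105); qh.record("Q1", 180);
        System.out.println("Q1 regression: " + qh.isRegression("Q1", 20));
    }
}
```

In a real shop the observations would come from accounting traces or monitor tables rather than in-memory lists, but the proactive pattern is the same: compare each new observation against accumulated history instead of waiting for users to complain.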
Move up to POWER7 and IBM i 7, IBM Power Event (IBM Danmark)
Benefits of Version 6.1 and 7.1 for IBM i customers
Version 6.1 has been on the market for many years, and Version 7.1 for a good two years. Gain insight into what you can bring to your business by upgrading now.
Erik Rex, Consulting IT Specialist, IBM
This document discusses DB2 backup and recovery. It covers logging, different backup types including full, incremental, and delta backups. It also discusses performing backups offline and online. The document describes how to check backup history and image consistency. Recovery types like crash, version, and roll-forward recovery are explained. Commands for restarting, restoring, and recovering databases are provided. The appendix includes links for more information on backup, restore, and roll-forward commands.
DB2 LUW Security introduces new auditing features in DB2 9.5 that make auditing more flexible and granular. Key points include:
- Database auditing now has separate instance and database levels for more flexibility and separation of duties.
- New auditing categories like EXECUTE allow auditing just SQL statements instead of entire operation contexts.
- Audit policies are used at the database level instead of the old db2audit commands. Policies are created and assigned to objects by SECADMs.
- Instance level auditing is still done with db2audit commands by SYSADMs, while database level uses stored procedures delegated by SECADMs.
- The new
Using Release(deallocate) and Painful Lessons to be learned on DB2 locking (John Campbell)
This document discusses thread reuse using the RELEASE(DEALLOCATE) bind option in DB2, considerations for lock avoidance, and lessons learned on DB2 locking. It provides primers on thread reuse, the RELEASE bind option, lock avoidance techniques like commit log sequence numbers and possibly uncommitted bits, and the ramifications of lock avoidance for SQL. It recommends using programming techniques to avoid data currency exposures when using lock avoidance, and outlines how to identify packages that can safely be rebound with CURRENTDATA(NO).
DB2 10 Universal Table Space - 2012-03-18 - no template (Willie Favero)
DB2 introduced universal table spaces in version 9 to address the need for a table space type that provides both partitioned and segmented organization. Universal table spaces allow tables to be larger than 64GB, provide inter-partition parallelism, and support fast insert and delete operations while avoiding the overhead of partitioning by a ROWID column.
Batch applications are programs that run without human intervention to process large amounts of data. They are characterized by non-interactive processing, large input sizes, and long transaction times. Common examples of batch processing include payroll, billing, reporting, and analytics jobs. A batch application is made up of jobs that contain steps. Each step uses a reader to input data, a processor to apply logic, and a writer to output results. Batch applications are well-suited for data-intensive tasks like ETL and can take advantage of off-peak hours. Frameworks like Java Batch and Spring Batch provide models and APIs for developing batch jobs in Java/JEE environments.
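The reader-processor-writer step structure described above can be sketched in plain Java. This is a minimal model of the chunk-oriented pattern only, not the JSR 352 or Spring Batch API; class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

// Minimal model of one chunk-oriented batch step:
// a reader supplies items, a processor transforms them,
// and a writer receives them in fixed-size chunks.
public class ChunkStep<I, O> {
    private final Iterator<I> reader;                 // supplies input items
    private final Function<I, O> processor;           // applies business logic
    private final List<O> writer = new ArrayList<>(); // collects written output

    public ChunkStep(Iterator<I> reader, Function<I, O> processor) {
        this.reader = reader;
        this.processor = processor;
    }

    // Run the step: read until input is exhausted, process each item,
    // and write whenever a full chunk has accumulated.
    public List<O> run(int chunkSize) {
        List<O> chunk = new ArrayList<>();
        while (reader.hasNext()) {
            chunk.add(processor.apply(reader.next()));
            if (chunk.size() == chunkSize) { // chunk boundary: flush the chunk
                writer.addAll(chunk);        // (a real container would also
                chunk.clear();               //  take a restart checkpoint here)
            }
        }
        writer.addAll(chunk);                // flush the final partial chunk
        return writer;
    }
}
```

A usage example: `new ChunkStep<>(List.of(1, 2, 3, 4, 5).iterator(), x -> x * 10).run(2)` processes five items in chunks of two. Real batch containers add restart checkpoints, transactions, and skip/retry handling at the chunk boundaries marked above.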
Java EE 7 Batch Processing in the Real World (Roberto Cortez)
This talk will explore one of the newest APIs in Java EE 7: JSR 352, Batch Applications for the Java Platform. Batch processing is found in nearly every industry whenever you need to execute a non-interactive, bulk-oriented, long-running task; a few examples are financial transactions, billing, inventory management, and report generation. JSR 352 specifies a common set of requirements that every batch application usually needs, such as checkpointing, parallelization, splitting, and logging. It also provides a job specification language and several interfaces that allow you to implement your business logic and interact with the batch container. We are going to live-code a real-life example batch application, starting with a simple task and then evolving it with the advanced APIs until we have a fully parallel, checkpointing reader-processor-writer batch. By the end of the session, attendees should be able to understand the use cases for JSR 352, when to apply it, and how to develop a full Java EE batch application.
The optimizer is the component of the DB2 SQL compiler responsible for selecting an optimal access plan for an SQL statement. The optimizer works by calculating the execution cost of many alternative access plans and then choosing the one with the lowest estimated cost. Understanding how the optimizer works, and knowing how to influence its behaviour, can lead to improved query performance and better resource usage.
This presentation was created for the workshop delivered at the CASCON 2011 conference. Its aim is to introduce basic optimizer and related concepts, and to serve as a starting point for further study of the optimizer techniques.
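The cost-based selection described above reduces to a simple idea: enumerate candidate access plans, estimate a cost for each, and keep the cheapest. The toy sketch below illustrates just that idea; the plan descriptions and cost numbers are invented, and a real optimizer derives its estimates from catalog statistics such as cardinality and index selectivity:

```java
import java.util.Comparator;
import java.util.List;

// Toy illustration of cost-based access-plan selection:
// estimate a cost for each candidate plan and keep the cheapest.
public class OptimizerSketch {

    record Plan(String description, double estimatedCost) {}

    static Plan choosePlan(List<Plan> candidates) {
        return candidates.stream()
                .min(Comparator.comparingDouble(Plan::estimatedCost))
                .orElseThrow();
    }

    public static void main(String[] args) {
        // Hypothetical alternatives for one SQL statement.
        List<Plan> candidates = List.of(
                new Plan("table scan + sort", 1200.0),
                new Plan("index scan (avoids sort)", 300.0),
                new Plan("index-only access", 150.0));

        System.out.println(choosePlan(candidates).description());
        // prints: index-only access
    }
}
```

This also shows why stale statistics hurt: if the cost estimates are wrong, the minimum-cost plan the optimizer picks may not be the plan that actually runs fastest.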
Linux on z13 and Simultaneous Multithreading - Sebastien Llaurency - NRB
The new IBM z13 hardware implements simultaneous multithreading in its architecture. This is of direct use to Linux running on it, potentially doubling the processing capacity of each IFL (Integrated Facility for Linux) engine. Other newly announced z13 capabilities also improve the efficiency of Linux on z Systems.
Learn about Workload Management Update for z/OS 1.10 and 1.11. For more information on IBM System z, visit http://ibm.co/PNo9Cb.
This document discusses memory topics related to IBM System z, including:
- Paging subsystem design recommendations to avoid paging and allow full system dumps.
- Enhancements in z/OS R12 to improve dumping performance.
- Benefits of 1MB large pages for TLB coverage and various product exploitations.
- New z/OS R10 64-bit common area and RMF support for monitoring it.
- Considerations for coupling facility memory allocation for structures, dumps, and white space.
MongoDB Linux Porting, Performance Measurements and Scaling Advantage usi... - MongoDB
MongoDB has been ported to Linux on z Systems. MongoDB performance benefits from the superior single-thread performance of the System z processor and system design. The goal of the presentation is to demonstrate the value of running MongoDB on Linux on z Systems by comparing the scaling behavior of MongoDB sharding on x86 and on the mainframe. The presentation will give details on performance numbers and the scaling behavior of MongoDB on z Systems versus Intel-based servers. It will also sketch how MongoDB sharding on Linux on z Systems can be dockerized to simplify setup.
This presentation shares the latest news and announcements about and around z/VSE. It discusses the z/VSE V6.2 announcement, future enhancements, pricing, Statements of Direction, and more.
The document discusses several topics related to comparing the performance and capacity of different computing systems. It introduces the concept of workload factor which allows comparing the capacity of systems to process the same workload despite architectural differences. Several industry standard benchmarks are described but they are noted to not always match real customer workloads. Real workloads place more stress on system interconnect and cache performance than most benchmarks.
IBM Wave for z/VM is a solution that provides simplified management of z/VM environments through a graphical user interface. It fully abstracts physical and virtual resources, allowing Linux administrators to manage Linux virtual servers on z Systems without knowledge of the underlying z/VM infrastructure. Key features include provisioning of virtual servers, networks, and storage; monitoring and automation of tasks; and delegation of administration. The solution aims to make z/VM administration more intuitive and efficient for managing Linux virtualization at scale.
DB2 Web Query for i is a web-based query and reporting tool that provides a modernized interface for accessing data in DB2 for i. Version 2.1 introduces simplified packaging and new features like improved mobility support, integrated report scheduling, and a consolidated development tool called InfoAssist. The application integration extension allows integrating DB2 Web Query reports and analytics into other applications through a simple URL interface to improve business intelligence capabilities.
OpenStack and z/VM – What is it and how do I get it? - Anderson Bassani
The document discusses OpenStack and how to get it running on z/VM. It provides an overview of OpenStack, describing what it is and who it is for. It then covers specifics of the z/VM OpenStack implementation, including supported features in Nova, Neutron and Cinder. Finally, it outlines the steps to install the z/VM OpenStack appliance, including requirements, downloading the necessary files, and configuring directories.
Stephan Hummel – IT-Tage 2015 – DB2 In-Memory - Eine Technologie nicht nur fü... - Informatik Aktuell
This document discusses DB2 In-Memory Acceleration, a technology from IBM that improves performance for analytic workloads. DB2 In-Memory Acceleration uses columnar storage and encoding to compress and analyze data more efficiently. It allows data to be queried much faster using techniques like parallel processing, data skipping, and CPU acceleration. The document provides guidance on sizing DB2 In-Memory Acceleration and describes how it can be used to improve performance of both analytic queries and transactions by creating column-based shadow tables of row-based operational data.
Unisanta - Overview of IBM System z Server Hardware - Anderson Bassani
Presentation delivered at Universidade Santa Cecília, in the city of Santos, São Paulo, on 03/09/2014. Presented to Information Systems and Computer Science students.
IMS 13 IMS Tools IMS V13 Migration Workshop - IMS UG May 2014 Sydney & Melbo... - Robert Hain
Together, the IBM IMS Tools Solution Packs and IMS 13 deliver simplification, automation and intelligence, with all the tools needed to support IMS databases now in one package. It doesn’t make sense to run reorganization utilities if your databases do not need to be reorganized. Now you can quickly and easily improve IMS application performance, IMS resource utilization and deliver higher system availability with the end-to-end analysis of IMS transactions. Comprehensive performance reporting and easier interactive analysis determine what happened, what needs fixing and how to fix it – all part of the intelligence and automation of the IMS Tools Performance Solution Pack.
This document discusses SAP solutions running on IBM Power Systems and IBM i. It provides an overview of IBM Power7 technology and how it improves performance, scalability, and efficiency for SAP workloads compared to previous Power6 systems. It also describes IBM i Solution Editions for the Power 720 and Power 740 Express models, which include optimized pricing and services to simplify deploying SAP on IBM i.
z/OS Small Enhancements - Episode 2016A - Marna Walle
This presentation covers small enhancements from older z/OS releases. You might have missed little functions that are helpful, but you never knew existed! The content of each of these z/OS Small Enhancements changes every half year (Episode A and Episode B each year).
This presentation provides an overview about the various networking options with z/VSE. It discusses the new z/VSE Networking Appliance (VNA) as well as support for VLAN and Layer 2. In addition, it will cover new networking solutions like the z/VSE Fast Path to Linux on System z (LFP) in a z/VM or LPAR environment as well as z/VSE's z/VM IP Assist function (VIA). Besides this, it also covers some IPv6 basics and how you can make use of it.
Communications Server provides TCP/IP and SNA connectivity and services on z/OS. It combines the prior VTAM and TCP/IP products and provides common networking functions. Applications can access networks using SNA APIs, sockets APIs, or standard TCP/IP applications. Communications Server supports both SNA and TCP/IP protocols and their integration.
This presentation provides a step-by-step description of the z/VSE base installation. Since migration to z/VSE V6 requires a base install, the steps to prepare and execute a tape-less base installation are explained. Hints & tips about data migration after the base install is complete are also covered.
The document provides an overview of IBM z Systems and how it enables digital transformation through hybrid cloud infrastructure, rapid application creation, real-time insight, and combating cyber threats. It discusses how the new IBM z13s delivers more performance, scale, and capabilities to fuel innovation with a secure hybrid cloud. The z13s is designed to perform in the open digital era through improvements like increased throughput, faster analytics processing, encryption functions, data compression, and memory management. It also discusses how z Systems provides an optimized platform to accelerate time to value for organizations in the API economy.
Similar to Smart analytic optimizer how it works (20)
The document provides an overview of social networking and discusses whether it is right for individuals, professionals, and companies. It discusses popular social networks like Twitter and LinkedIn, and covers topics like establishing an online presence, networking, and guidelines for appropriate social media use. The presentation aims to help attendees understand how to get started with social networking and determine how it can benefit them personally and professionally.
First of three presentations on social networking. All three are very similar, but each takes a slightly different approach to explaining social networking.
One of three variations of social networking / social media presentations. All three are very similar, but each takes a slightly different approach to explaining social networking.
A First Look at the DB2 10 DSNZPARM Changes - Willie Favero
This document discusses changes to DB2 subsystem parameter module (DSNZPARM) in DB2 10. It provides information on DSNZPARM macros, how parameters can be changed through installation panels or dynamically using -SET SYSPARM command, and differences between hidden, opaque and visible parameters. The document also introduces new documentation for opaque parameters and explains how to display current DSNZPARM settings using sample program DSN8ED7.
Tips from my personal experience preparing an abstract for a conference, preparing a presentation for a conference, and, most importantly, how you speak (or present) at a conference. In fact, most of this is about the speaking portion of the experience.
An Intro to Tuning Your SQL on DB2 for z/OS - Willie Favero
This document provides an introduction to SQL tuning for a DB2 for z/OS environment. It was presented on March 1, 2011 by Willie Favero from IBM's Data Warehouse on System z Swat Team. The presentation covers various techniques for optimizing SQL queries and access paths in DB2 for z/OS, with the goal of improving query performance. It addresses topics such as monitoring wait times, buffer pool usage, checkpointing, WLM policies, sort techniques, and disk I/O optimization. The overall aim is to help database administrators understand how to analyze and "tune" queries to reduce response times and meet business performance objectives.
Why computers are cool (high school audience) - 2010-12-01 - Willie Favero
The document discusses career opportunities in computer science and technology. It notes that computer science degrees can lead to a variety of jobs at companies like IBM, including opportunities in hardware, software, services, and sales/marketing. It also highlights the growing demand for technology professionals and lists some local colleges that offer computer science programs to prepare students for these in-demand careers.
This document provides information about a presentation on DB2 for z/OS data and index compression given by Willie Favero. It includes disclaimers about the information provided, lists IBM trademarks, and outlines objectives to describe DB2 compression fundamentals, how data and index compression are implemented in DB2, and how to determine if compression achieves expected disk savings. It also references the history of data compression techniques including the Lempel-Ziv algorithms from 1977 that DB2 compression is based on.
This document outlines a joint effort between IBM's Poughkeepsie Lab and Silicon Valley Lab to benchmark a 50TB data warehouse on System z and establish best practices for managing large data warehouses on the System z platform. It discusses using workload manager to handle mixed transactional and analytic workloads, implementation considerations, and references several other IBM Redbooks publications related to enterprise data warehousing with DB2 on System z.
Parallelism was first introduced to DB2 way back in 1993 with DB2 Version 3, and it has been enhanced with every release since. In applications like data warehousing and business intelligence, it's almost a necessity. Yet a surprisingly large number of customers continue to avoid parallelism when it could offer significant elapsed-time improvements. With this presentation, we'll try to debunk the myths that surround using DB2 parallelism. We will discuss a little of parallelism's history, how parallelism works, the parameters in DB2 that control parallelism and the effects of the values chosen for those parameters, and how to get the greatest benefit when parallelism is put to use: the dos & don'ts necessary for you to get the most out of DB2's parallelism. We will also discuss the latest enhancements to parallelism and how parallelism can take advantage of zIIP specialty engines.
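As a loose analogy for how parallelism trades the same total work for shorter elapsed time (plain Java, not DB2 internals), consider splitting one large aggregation into ranges that are processed concurrently:

```java
import java.util.stream.LongStream;

// Loose analogy: one "query" sums a large range of values.
// A serial scan and a parallel scan produce the same answer;
// the parallel version spreads sub-ranges across CPU cores,
// akin to running with a degree of parallelism greater than 1.
public class ParallelScanSketch {

    static long serialSum(long n) {
        return LongStream.rangeClosed(1, n).sum();
    }

    static long parallelSum(long n) {
        // The runtime partitions the range into chunks that are
        // summed on multiple threads and then combined.
        return LongStream.rangeClosed(1, n).parallel().sum();
    }

    public static void main(String[] args) {
        long n = 10_000_000L;
        System.out.println(serialSum(n) == parallelSum(n));  // true: same result
    }
}
```

The analogy also hints at the costs: partitioning and combining add overhead, so parallelism pays off on large scans, not on small ones, which mirrors why the optimizer applies a degree of parallelism selectively.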
Full-RAG: A modern architecture for hyper-personalization - Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect personal devices and information.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
HCL Notes and Domino License Cost Reduction in the World of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar, with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Building Production Ready Search Pipelines with Spark and Milvus - Zilliz
Spark is a widely used ETL tool for processing, indexing, and ingesting data into a serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data into vector representations, and push the vectors to the Milvus vector database for search serving.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf - Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack - shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence - IndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Best 20 SEO Techniques To Improve Website Visibility In SERP - Pixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! - SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
GraphRAG for Life Science to increase LLM accuracy - Tomaz Bratanic
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Infrastructure Challenges in Scaling RAG with Custom AI models - Zilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Communications Mining Series - Zero to Hero - Session 1 - DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
What do a Lego brick and the XZ backdoor have in common? - Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to immerse yourself in a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia association, where she was involved in several events, migrations, and training activities related to LibreOffice. Previously she worked on LibreOffice migrations and training courses for various public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not following her passion for computers and Geeko, she cultivates her curiosity about astronomy (the origin of her nickname, deneb_alpha).