This is a "One day Seminar -ODS " . The objectives of this ODS are to focus on key areas
• System address space CPU, EDM pools, data set activity, logging, lock/latch contention, DBM1 virtual and real storage, buffer pools and GBP, …
• Identify the key performance indicators to be monitored
• Provide rules-of-thumb to be applied
• Typically expressed as a range, e.g. X-Y
• If < X or > Y, further investigation and tuning are needed - RED
• If in between, it is a boundary condition - AMBER
• Investigate with more detailed tracing and analysis when time is available
• Provide tuning advice for common problems
This document discusses various DB2 database objects and utilities. It provides descriptions of storage groups, databases, tablespaces, tables, indexes, views, and the utilities for unload, load, reorganization, running statistics, and copy. It includes examples of creating and using these objects and utilities.
The document describes various DB2 online utilities including UNLOAD, LOAD, REBUILD INDEX, COPY, RECOVER, RUNSTATS, MODIFY RECOVERY, QUIESCE, and REORG. These utilities perform functions like unloading and loading data, rebuilding indexes, taking image copies of data, recovering data to a prior point in time, updating catalog statistics, and reorganizing tablespaces.
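For illustration, a minimal sketch of a full image copy run through the standard DSNUPROC utility procedure; the subsystem name DSN1 and the data set, database and tablespace names are hypothetical:
//COPYSTEP EXEC DSNUPROC,SYSTEM=DSN1,UID='COPY01'
//SYSCOPY  DD DSN=USERID.IMGCOPY.TS1,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(50,50))
//SYSIN    DD *
  COPY TABLESPACE DBNAME1.TSNAME1 FULL YES SHRLEVEL REFERENCE
/*
The SYSCOPY DD receives the image copy; SHRLEVEL REFERENCE produces a consistent copy by allowing readers but no updaters while the copy runs.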
Best practices for DB2 for z/OS log-based recovery - Florence Dubois
The need to perform a DB2 log-based recovery of multiple objects is a very rare event, but statistically it is more frequent than a true disaster recovery event (flood, fire, etc.). Taking regular backups is necessary but far from sufficient for anything beyond minor application recovery. If not prepared, practiced and optimised, it can lead to extended application service downtime - possibly many hours to several days. This presentation will provide many hints and tips on how to plan, design intelligently, stress-test and optimise DB2 log-based recovery.
DB2 runs in five address spaces, each performing essential functions:
- DSNMSTR controls connections to other systems and performs logging, recovery, and system management.
- DSNDBM1 supports data definition, manipulation, and retrieval.
- IRLMPROC controls concurrent data access and maintains integrity through locking.
- DSNDIST enables remote access to distributed databases.
- DSNSPAS provides an isolated environment to execute stored procedures.
DB2 for z/OS - Starter's guide to memory monitoring and control - Florence Dubois
DB2 for z/OS makes more and more use of REAL memory to improve performance and reduce cost. But if you don't carefully budget and monitor the use of REAL memory on your system, you could be putting your applications at risk. This presentation will go back to the basics and answer the most common questions about REAL memory management, including: how does DB2 use virtual and REAL memory? how do you build a budget based on system settings and buffer pool sizes? how do you size the LFAREA? what are the key performance indicators, and how do I know I am running 'safely'? what can be done to protect the system?
JCL (Job Control Language) is used on IBM mainframes to instruct the operating system how to run batch jobs and start subsystems. It acts as an interface between application programming and the MVS Operating System. JCL is used for compiling and executing batch programs, controlling jobs, allocating files, sorting files, and more. JCL uses statements like JOB, EXEC, and DD to identify the job, specify execution parameters, and define file allocations respectively.
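As a minimal sketch of these three statement types working together (all data set and program names are hypothetical), a one-step job that runs a program against an input file and writes a new output file:
//MYJOB    JOB (ACCT),'SAMPLE JOB',CLASS=A,MSGCLASS=X
//STEP010  EXEC PGM=MYPROG
//STEPLIB  DD DSN=USERID.LOAD.LIBRARY,DISP=SHR
//INFILE   DD DSN=USERID.INPUT.DATA,DISP=SHR
//OUTFILE  DD DSN=USERID.OUTPUT.DATA,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,5)),
//            DCB=(RECFM=FB,LRECL=80)
//SYSOUT   DD SYSOUT=*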
The document provides an overview of Job Control Language (JCL) used to communicate with the IBM mainframe operating system. It describes the key components of JCL including JOB, EXEC and DD statements. JOB statements name a job and supply accounting/scheduling information. EXEC statements call programs for execution and can invoke cataloged procedures. DD statements define resources like input/output files used by the job. The document outlines the format, fields and common parameters used in each JCL statement type.
This document provides an overview and agenda for a presentation on tips and techniques for DB2 for z/OS. The presentation covers various topics including performance management, EDM pool tuning, SQL and application tuning, and data integrity. It emphasizes the importance of understanding access paths, managing commits, regular rebinding, and choosing appropriate data types and lengths.
The document discusses DB2's use of storage on the mainframe. It notes that DB2 uses VSAM data sets to store tablespaces, indexes, and other objects. These data sets can be managed by DB2 storage groups or SMS. Storage groups are lists of volumes where data sets are placed. The document recommends letting DB2 manage data sets using storage groups for less administrative work, but with less control, or defining your own data sets for more control but more work. It also provides details on where to find storage-related information in the DB2 catalog.
This document discusses the relationship between DB2 and storage management. It describes how DB2 uses storage through tablespaces, indexes, and other objects that are stored on disk as VSAM data sets. It also discusses how DB2 interacts with DFSMS to manage data sets and how storage groups and SMS can be used to simplify storage administration for DB2 objects. While DB2 provides storage management features, there is still a gap between DBA and storage administration that tools can help address.
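As a hedged sketch of the storage-group approach described above, a CREATE STOGROUP statement can be run in batch through the DSNTIAD dynamic-SQL sample program; the subsystem name, run library, volume serials and VCAT name are all hypothetical:
//STOGRP   EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DSN1)
  RUN PROGRAM(DSNTIAD) PLAN(DSNTIAD) -
      LIB('DSN1.RUNLIB.LOAD')
  END
//SYSIN    DD *
  CREATE STOGROUP SGDATA01
         VOLUMES (VOL001, VOL002)
         VCAT DSNCAT;
/*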
IEBCOPY is a utility that can copy members between PDS datasets. It allows copying all members, selected members, or excluding specific members. It can also replace or rename members during the copy process. The utility can also compress a PDS dataset to free unused space. Example JCL is provided showing how to copy all members, select specific members, exclude members, replace and rename members, and compress a dataset.
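A minimal IEBCOPY sketch (data set and member names hypothetical) that copies two selected members from one PDS to another; compressing a PDS in place is the same COPY statement with INDD and OUTDD pointing at the same data set:
//COPYPDS  EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//INDD     DD DSN=USERID.SOURCE.PDS,DISP=SHR
//OUTDD    DD DSN=USERID.TARGET.PDS,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=OUTDD,INDD=INDD
  SELECT MEMBER=(MEM1,MEM2)
/*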
Contains information about DSNZPARM, the load module that holds the DB2 configuration parameters: the different types of zPARMs, and a way to update them dynamically.
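As a sketch of that dynamic update path (the subsystem name DSN1 is hypothetical, and DSNZPARM here stands for whatever load module name your site uses), the -SET SYSPARM command can be issued from a TSO batch step:
//SETZPRM  EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DSN1)
  -SET SYSPARM LOAD(DSNZPARM)
  END
/*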
This document provides an overview of using DB2 on IBM mainframe systems. It discusses logging into TSO, allocating datasets for DB2 use, using the SPUFI tool to interactively execute SQL statements against DB2, and some key DB2 concepts like logical unit of work and the different views that programs and the system have of the DB2 environment.
This slide deck covers the basic concepts of ISPF. It gives simple, easy steps for gaining knowledge of the Interactive System Productivity Facility. If you like it, please send feedback to anilbharti85@gmail.com. Thanks very much.
A K Bharti
Mainframe JCL EXEC and DD statements - part 3 - janaki ram
EXEC STATEMENT (EXEC)
The EXEC statement is used to identify the program name or procedure name to be executed.
A maximum of 255 EXEC statements can be coded in a job.
The EXEC statement has two kinds of parameters:
Positional parameters: PGM, PROC
Keyword parameters: TIME, COND, REGION, PARM
If neither is mentioned, PROC is assumed by default.
PGM
This is a positional parameter that must be coded after EXEC, separated by one blank. It names the program (or procedure) to be executed.
Syntax
//STEPNAME EXEC PGM=REPORT
(or)
//STEPNAME EXEC PROC=procedure-name
PARM
It is a keyword parameter, mainly used for passing data to the program:
To pass input to an application program
To invoke compiler options
It is coded at the step level on the EXEC statement, after the PGM parameter.
This parameter allows a maximum of 100 characters.
To receive PARM data, the COBOL program must be coded with a "PROCEDURE DIVISION USING parameter" header.
These parameters must be declared in the LINKAGE SECTION.
DATA DEFINITION STATEMENT (DD STMT)
It is used to identify the files (input and output) used in the JCL.
The DD name acts as a bridge between the COBOL program and the execution JCL.
The DD statement has two kinds of parameters:
Positional parameters: *, DATA, DUMMY
Keyword parameters: DSN, DISP, SPACE, UNIT, DCB, VOLUME
*
This positional parameter at the DD level is used with SYSIN to pass data to COBOL programs; this is known as in-stream data. Any number of records can be passed to the program.
In-stream data is used to pass values from the JCL to the COBOL program dynamically.
To accept the values, the COBOL program must have matching ACCEPT verbs.
Syntax
//SYSIN DD * ------- Entry of in stream data
100
200
/* ---------------------- End of in stream data
DATA
DATA works like *, but it also allows records beginning with // to be passed to the program as data. Note that /* still ends the stream unless the DLM parameter defines a different delimiter, so to pass /* as data, code DLM as shown below.
Syntax
//SYSIN DD DATA,DLM='##'
100
/*
200
##
DUMMY
The data set is treated as empty: any read immediately returns end-of-file, and any records written to it are discarded.
Syntax
//SYSIN DD DUMMY
NOTE
There is no input to the application; all input files are treated as end-of-file.
DSN
Through DSN we can refer to a temporary or permanent data set.
It names the physical data set where the records will be stored.
DISP
The DISP parameter identifies the status of the data set and its disposition at normal and abnormal step termination.
DISP=(STATUS,NORMAL-TERMINATION,ABNORMAL-TERMINATION)
STATUS: NEW, OLD, SHR, MOD
NORMAL/ABNORMAL TERMINATION: DELETE, KEEP, CATLG, UNCATLG
NEW - creates the data set for the first time.
OLD - the data set already exists and is allocated exclusively; no other user can access it until it is released by the current user. If the data set does not exist, the step fails with a JCL error.
SHR - the data set can be accessed by multiple users at the same time.
MOD - appends new records after the existing records.
NOTE - If the data set does not exist, MOD is treated as NEW and records are written to a newly created data set.
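Putting the DISP pieces together, a hedged example (data set name hypothetical) that creates and catalogs a new data set on normal termination but deletes it if the step abends:
//OUTFILE  DD DSN=USERID.TEST.OUTPUT,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)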
This PPT file helps with basic interview questions, especially for the database domain. For more questions, please log in to www.rekruitin.com.
By ReKruiTIn.com
Top JCL interview questions and answers - job interview tips - jcltutorial
You'll likely be asked difficult questions during the interview. Preparing the list of likely questions in advance will help you easily transition from question to question.
The document discusses implementing a Parallel Sysplex which couples multiple z/OS systems together using hardware and software services. Key steps include defining coupling facility structures, configuring XCF signalling paths using CTCs or a coupling facility, formatting and configuring sysplex couple data sets, and defining CFRM policies to manage coupling facility resources.
This document provides an overview and instructions for using BMC MainView software to monitor DB2 system and application performance. It outlines the MainView easy menu interface and describes how to view various DB2 performance metrics such as storage usage, logging, locking, threads, SQL activity and more. Drill-downs and filtering options are demonstrated to get more detailed information on specific topics like buffer pools, page sets, exceptions and traced threads.
The document provides an overview of utilities used in the IBM Z/OS mainframe operating system. It discusses the objectives and agenda of a training course on IBM utilities. The first session covers the introduction and types of utilities, including dataset utilities, system utilities, and access method services. Common dataset utilities like IEFBR14, IEBGENER, IEBCOPY, and SORT are introduced. The document provides examples of using IEFBR14 to create and delete datasets, and examples of using IEBCOPY and IEBGENER to copy datasets and work with partitioned dataset members.
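As a minimal sketch (data set names hypothetical): IEFBR14 does nothing itself, but the DD statements on its step allocate and delete data sets as a side effect:
//STEP1    EXEC PGM=IEFBR14
//NEWDS    DD DSN=USERID.NEW.DATASET,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(1,1))
//OLDDS    DD DSN=USERID.OLD.DATASET,DISP=(OLD,DELETE,DELETE)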
The document provides an overview of the DB2 database on mainframe systems. It discusses prerequisites for DB2 including mainframe concepts, COBOL, file handling, and VSAM. It then covers topics like database introduction, relational concepts, data definition language, SQL, DB2 objects, and more. The last section lists additional topics to be covered, including more on SQL statements, functions, complex queries, DML statements, dynamic SQL, and DB2 objects like indexes and views.
Here are the key steps to reorganize an HDAM IMS database:
1. Back up the database before starting the reorganization process. This provides a recovery point in case of errors.
2. Run the DBDGEN utility to generate a new DBD for the database. This will incorporate any schema changes.
3. Run the HD Reorganization Unload (DFSURGU0) utility to unload all segments from the database to a sequential file.
4. Run the HD Reorganization Reload (DFSURGL0) utility to reload the segments from the sequential file into a new database with the new DBD. This rebuilds the database in a more efficient structure.
5. Run
Upgrade to z/OS V2.5 - Planning and Tech Actions - Marna Walle
This is a critical presentation for those that are upgrading to z/OS V2.5 from z/OS V2.4. Using this presentation, you can see the planning activities and technical upgrade actions.
DB2 for z/OS Real Storage Monitoring, Control and Planning - John Campbell
Just added another hot DB2 topic around DB2 for z/OS Real Storage Monitoring, Control and Planning - Check it out and make sure your system runs safely
DB2 11 for z/OS Migration Planning and Early Customer Experiences - John Campbell
This extensive presentation provides help and guidance to help DB2 for z/OS customers migrate as quickly as possible, but safely, to V11. The material provides additional planning information and shares customer experiences and best practices.
Planning and executing a DB2 11 for z/OS Migration, by Ian Cook - Surekha Parekh
This document discusses planning and executing a migration from DB2 10 to DB2 11 for z/OS. It begins with an overview of the DB2 11 Early Support Program (ESP) feedback, which was positive regarding performance, quality, and reliability. The presentation then covers key aspects of developing a migration project plan, including assembling a project team, identifying technical considerations, and creating a test plan. It emphasizes early elimination of risks and issues. Sample project frameworks are provided to help structure planning and testing across sandbox, development, and production environments. Attendees are advised to contact software vendors to coordinate DB2 version requirements.
Using RELEASE(DEALLOCATE) and Painful Lessons to be learned on DB2 locking - John Campbell
This document discusses thread reuse using the RELEASE(DEALLOCATE) bind option in DB2, considerations for lock avoidance, and lessons learned on DB2 locking. It provides primers on thread reuse, the RELEASE bind option, lock avoidance techniques like commit log sequence numbers and possibly uncommitted bits, and the ramifications of lock avoidance for SQL. It recommends using programming techniques to avoid data currency exposures when using lock avoidance, and outlines how to identify packages that can safely be rebound with CURRENTDATA(NO).
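As a hedged sketch (subsystem, collection and package names hypothetical), a package can be rebound with RELEASE(DEALLOCATE) and CURRENTDATA(NO) from a TSO batch step:
//REBIND   EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DSN1)
  REBIND PACKAGE(COLLID1.PKGNAME1) -
         RELEASE(DEALLOCATE) -
         CURRENTDATA(NO)
  END
/*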
Best Practices For Optimizing DB2 Performance - Datavail
DB2 performance tuning and optimization is a complex issue comprising multiple sub-disciplines and levels of expertise. Mastering all of the nuances can take an entire career. Deploying standard best practices can minimize the effort to achieve efficient DB2 applications and databases.
This white paper outlines the most important aspects and ingredients of successful DB2 for z/OS performance management. It offers multiple guidelines and tips for improving performance within the three major performance tuning categories required of every DB2 implementation: the application, the database and the system.
The Five R's: There Can be no DB2 Performance Improvement Without Them! - Craig Mullins
We know that BIND and REBIND are important components in assuring optimal application performance. It is the bind process that determines exactly how your DB2 data is accessed in your application programs. But binding requires statistics for the optimizer to use... and if the data is disorganized, even current stats might not help... and you have to make sure that you check on the results of binding... and... well, let's just say this short presentation examines all of these issues and more.
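As a hedged sketch of the statistics piece of that cycle (subsystem, database and tablespace names hypothetical), RUNSTATS can be run through the DSNUPROC utility procedure so that a subsequent REBIND sees current statistics:
//STATS    EXEC DSNUPROC,SYSTEM=DSN1,UID='STATS01'
//SYSIN    DD *
  RUNSTATS TABLESPACE DBNAME1.TSNAME1
           TABLE(ALL) INDEX(ALL)
/*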
IBM DB2 Analytics Accelerator Trends & Directions, by Namik Hrle - Surekha Parekh
IBM DB2 Analytics Accelerator has drawn lots of attention from DB2 for z/OS users. In many respects it presents itself as just another DB2 access path (but what a powerful one!), and its deep integration into DB2, as well as its application transparency, makes it one of the most exciting DB2 enhancements in years. The IBM DB2 Analytics Accelerator complements DB2 by adding industry-leading performance for data-intensive complex queries, thanks to being powered by the Netezza engine, and turns DB2 into the ultimate database management system that delivers the best of both worlds: transactional as well as analytical workloads. This presentation brings the latest news from the IDAA development and shows the trends and directions in which this technology is developing.
SQL In The City - Understanding and Controlling Transaction Logs by Nigel Peter Sammy.
- Relational DBMS Basics
- Introduction to Transaction Logs
- The Architecture
- Recovery Models
- Managing the Transaction Logs
- Red Gate Tools
An AMIS Overview of Oracle Database 12c (12.1) - Marco Gralike
Presentation used by Lucas Jellema and Marco Gralike during the AMIS Oracle Database 12c Launch event on Monday the 15th of July 2013 (much thanks to Tom Kyte, Oracle, for being allowed to use some of his material)
A First Look at the DB2 10 DSNZPARM Changes - Willie Favero
This document discusses changes to the DB2 subsystem parameter module (DSNZPARM) in DB2 10. It provides information on the DSNZPARM macros, how parameters can be changed through the installation panels or dynamically using the -SET SYSPARM command, and the differences between hidden, opaque and visible parameters. The document also introduces new documentation for opaque parameters and explains how to display the current DSNZPARM settings using sample program DSN8ED7.
Learning to administer and use DB2 for z/OS in an effective and efficient manner can be a laborious task. Join us as the Senior DBA teaches the novice DBA the Tao (or the way) of DB2.
Key Note Session, IDUG DB2 Seminar, 16th April, London - Julian Stuhler, Trito... - Surekha Parekh
This document discusses technology themes for DB2 in 2014 and beyond, including cost reduction, high availability, in-memory computing, skills availability, database commoditization, and big data. It outlines current capabilities and future directions for DB2 on both z/OS and LUW platforms, emphasizing ongoing focus on reducing costs while improving availability, performance and analytics capabilities through techniques like in-memory computing and integration with big data technologies. The future of DB2 skills and the changing IT landscape are also addressed.
This document discusses techniques for understanding a customer's DB2 environment using readily available system data before speaking with DB2 specialists. It covers analyzing CPU usage, memory usage, I/O, coupling facility usage, XCF traffic, stored procedures, applications, workload manager configuration, DDF rules, and restart patterns using SMF records and other data to detect issues and understand normal behavior. The goal is to "bridge the gap in perspectives between DB2 and system performance specialists."
This document discusses tuning DB2 in a Solaris environment. It provides background on the presenters, Tom Bauch from IBM and Jignesh Shah from Sun Microsystems. The agenda covers general considerations, memory usage and bottlenecks, disk I/O considerations and bottlenecks, and tuning DB2 V8.1 specifically in Solaris 9. It discusses supported Solaris versions, kernel settings, required patches, installation methods, and the configuration wizard. Specific topics covered in more depth include the Data Partitioning Feature, DB2 Enterprise Server Edition, and analyzing and addressing potential memory bottlenecks.
IMS 12 Workbench data visualization - IMS UG May 2014 Sydney & Melbourne - Robert Hain
Analyzing problems with transactions on z/OS can feel like measuring a strand of cotton when your starting point is a shirt: you need to dissect individual aspects of the transaction without losing the overall picture of how they fit together. That means knowing where and how to get logs for various subsystems, relating these logs together, and finally interpreting the combined output.
IBM Transaction Analysis Workbench for z/OS is a tool that provides a coherent picture of a transaction across subsystems - including IMS, DB2, CICS, WebSphere MQ, and z/OS itself - helping you to pinpoint the source of problems. We demonstrate a step-by-step proof-of-concept model for visually interacting with composite log data to help identify and resolve problems involving multiple subsystems.
DB2 for z/OS and DASD-based Disaster Recovery - Blowing away the myths - Florence Dubois
Is your Disaster Recovery solution based on DASD replication functions? In most cases, all you will need to do is a normal restart of DB2 for z/OS. But this assumes the DASD copy is consistent. Otherwise, it is guaranteed data corruption that will have to be fixed up, possibly several weeks or months after the event. This presentation will tell you everything you need to know about the Copy Services for IBM System z and what is required to ensure data consistency. It will address the most common myths and misconceptions about these DASD replication solutions. It will also provide hints and tips on how to tune for fast DB2 restart and how to optimise GRECP/LPL recovery.
We4IT LCTY 2013 - infra-man - Domino run faster - We4IT Group
The document discusses optimizing performance for IBM Lotus Domino. It recommends using 64-bit hardware and operating systems to allow Domino to utilize more memory. Transaction logging and separating disks for data, transaction logs, and indexes are also advised. The document provides tips for configuring hardware, operating systems, and Domino server settings to improve performance.
DB2 10 memory management - UK DB2 User Group June 2013 - Laura Hood
DB2 10 provides significant enhancements to memory management that allow for much greater scalability. Key changes include moving most objects above the 2GB bar, enabling larger buffer pools through 1MB page support, and enhanced real storage monitoring. Migrating to DB2 10 requires ensuring sufficient real storage is available, monitoring real storage usage, and addressing other limiting factors before taking advantage of new features to further scale vertically.
This document discusses zIIP capacity planning for IBM mainframes. It notes that zIIP capacity planning is important given enhancements that allow more workloads to run on zIIPs. It provides guidelines for doing zIIP capacity planning properly through instrumentation and measuring zIIP usage at the address space level. It also discusses factors to consider like LPAR configuration and new software that can exploit zIIPs.
CollabSphere2018 - Virtual, Faster, Better! How to virtualize IBM Notes V10 - Christoph Adler
This document provides tips for optimizing IBM Notes in virtual environments. It recommends switching to local multi-user installations of Notes to avoid network latency from storing user data on network drives. It also suggests using roaming to synchronize the latest user configuration across virtual servers. Other optimizations include sharing the jvm.shareclasses file, increasing JVM memory allocation, and pre-building workspace folders to reduce file I/O and speed up the Notes client startup time in virtual sessions.
How Nyherji Manages High Availability TSM Environments using FlashCopy Manager - IBM Danmark
This document discusses Nyherji's use of IBM Tivoli Storage Manager (TSM) and FlashCopy Manager (FCM) to create high availability backup environments. Some key points:
- Nyherji manages around 50 TSM servers backing up 5-5,000 TB of data across various operating systems and hardware. They have transitioned to using deduplication and FCM where possible.
- Their goals are to have recovery time objectives (RTO) of less than 1 hour for important data and less than 6 hours for TSM servers. Solutions need to be cost effective.
- For VMware backups, they use FCM to take daily incremental and weekly full backups, achieving much faster
DB2 10 memory management - UK DB2 User Group June 2013 - Carol Davis-Mann
DB2 10 for z/OS includes major enhancements to memory management that allow most DB2 storage objects to reside above the 2GB bar, providing up to a 10x increase in threads per subsystem. This reduces a key scalability limitation. To take advantage of these virtual storage improvements, additional real memory is required, typically a 10-30% increase over DB2 9 requirements. Customers should also monitor and manage real storage usage with new DB2 10 functions to avoid paging issues. The virtual storage changes along with other DB2 10 capabilities could allow for reduced DB2 subsystem counts and improved performance.
Presentation - DB2 best practices for optimal performance - solarisyougood
This document summarizes best practices for optimizing DB2 performance on various platforms. It discusses sizing workloads based on factors like concurrent users and response time objectives. Guidelines are provided for selecting CPUs, memory, disks and platforms. The document reviews physical database design best practices like choosing a page size and tablespace design. It also discusses index design, compression techniques, and benchmark results showing DB2's high performance.
Reduce planned database downtime with Oracle technology - Kirill Loifman
How do you design an Oracle database system to minimize planned interruptions? That depends on the requirements, goals, SLAs, etc. The presentation follows a top-down approach: first we describe the major types of planned maintenance and prioritize them, and then, based on the system availability requirements, find the best cost-effective techniques to address them. A bit of planning, strategy and of course modern database and OS techniques, including the latest Oracle 12c features.
Connect2014: BP105 A Performance Boost for your Notes Client - Franziska Tanner
This document provides an overview of performance boosting techniques for IBM Lotus Notes clients. It discusses factors that can slow down Notes client startup and performance, such as outdated hardware, large data directories, and old ODS versions of databases. It also presents methods for improving startup speed and performance, like upgrading ODS versions, reducing unnecessary files in the data directory, enabling TCP/IP port compression, and standardizing client configurations using policies. While policies can help optimize many settings, they have limitations like depending on a properly configured client and not providing full customization.
AdminCamp 2018 - IBM Notes V10 Performance Boost - Christoph Adler
Giving IBM Notes better performance does not have to be complicated. In a version already updated for IBM Notes V10 (Beta 2), Christoph Adler shows you what needs to be configured to achieve the best possible performance. Along the way, topics such as client clocking, ODS, network latencies and increased application performance are covered, as well as best practices for location and connection documents and why the catalog.nsf is so important. Improve your IBM Notes 10 (Beta 2) installation to make users happy (again) - because "happy users == happy admins".
DB2 Design for High Availability and Scalability - Surekha Parekh
Are you overwhelmed by the growing amount of data in your environment? Are you maximizing application availability? As the number of tables with billions of rows continues to grow, so do the management challenges. In this session, we will discuss the challenges and solutions for optimum availability and performance, with techniques to efficiently and effectively manage very large amounts of data.
This presentation discusses managing the performance of address spaces in a z/OS system. It notes that typical systems have hundreds to thousands of diverse address spaces across LPARs. The presentation centers around SMF Type 30 records, discussing when to rely on common instrumentation for all address spaces versus using specific data for certain address spaces like CICS or data set records. It covers treating each address space as a "black box" initially, then distinguishing between long-running address spaces like CICS and DB2 versus batch jobs. Timestamp analysis of records is recommended to analyze steps in batch jobs.
The document discusses using Dell EMC Isilon all-flash storage for SAS GRID workloads. It describes a test of the Isilon F810 node with hardware-accelerated compression using a multi-user SAS analytics workload. The testing focused on performance, scalability, compression benefits, deduplication savings, and cost when running the workload on an Isilon cluster with up to 12 grid nodes and comparing results with and without enabling various compression options.
The Forefront of the Development for NVDIMM on Linux Kernel - Yasunori Goto
This is a talk for Open Source Summit Japan 2020.
--------------------------
NVDIMM (Non-Volatile DIMM) is a most interesting device because it has the characteristics of both memory and storage. To support NVDIMM, the Linux kernel provides three access methods for users: Storage (Sector) mode, Filesystem DAX (= Direct Access) mode, and Device DAX mode. Of these three methods, Filesystem DAX is the most anticipated, because applications can write data to the NVDIMM area directly, and it is easier to use than Device DAX mode. Some software already uses it with official support. However, Filesystem DAX is still "experimental" in the upstream community due to some difficult issues. In this session, Yasunori Goto will talk about the forefront of NVDIMM development, and Ruan Shiyang will talk about his challenge, with the latest status from CLK2019.
Similar to DB2 10 & 11 for z/OS System Performance Monitoring and Optimisation
Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You... - Aggregage
This webinar will explore cutting-edge, less familiar but powerful experimentation methodologies which address well-known limitations of standard A/B Testing. Designed for data and product leaders, this session aims to inspire the embrace of innovative approaches and provide insights into the frontiers of experimentation!
End-to-end pipeline agility - Berlin Buzzwords 2024 - Lars Albertsson
We describe how we achieve high change agility in data engineering by eliminating the fear of breaking downstream data pipelines through end-to-end pipeline testing, and by using schema metaprogramming to safely eliminate boilerplate involved in changes that affect whole pipelines.
A quick poll on agility in changing pipelines from end to end indicated a huge span in capabilities. For the question "How long does it take for all downstream pipelines to be adapted to an upstream change?", the median response was 6 months, but some respondents could do it in less than a day. When quantitative data engineering differences between the best and worst are measured, the span is often 100x-1000x, sometimes even more.
A long time ago, we suffered at Spotify from fear of changing pipelines due to not knowing what the impact might be downstream. We made plans for a technical solution to test pipelines end-to-end to mitigate that fear, but the effort failed for cultural reasons. We eventually solved this challenge, but in a different context. In this presentation we will describe how we test full pipelines effectively by manipulating workflow orchestration, which enables us to make changes in pipelines without fear of breaking downstream.
Making schema changes that affect many jobs also involves a lot of toil and boilerplate. Using schema-on-read mitigates some of it, but has drawbacks since it makes it more difficult to detect errors early. We will describe how we have rejected this tradeoff by applying schema metaprogramming, eliminating boilerplate but keeping the protection of static typing, thereby further improving agility to quickly modify data pipelines without fear.
Global Situational Awareness of A.I. and where it's headed - vikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we're lucky, we'll be in an all-out race with the CCP; if we're unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
The Building Blocks of QuestDB, a Time Series Database - javier ramirez
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review a history of some of the changes we have gone through over the past two years to deal with late and unordered data, non-blocking writes, read replicas, and faster batch ingestion.
Learn SQL from basic queries to advanced queries - manishkhaire30
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
#DataAnalysis #SQL #LearningSQL #DataInsights #DataScience #Analytics
The Ipsos AI Monitor 2024 Report - Social Samosa
According to Ipsos AI Monitor's 2024 report, 65% Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
Predictably Improve Your B2B Tech Company's Performance by Leveraging Data - Kiwi Creative
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
DB2 10 & 11 for z/OS System Performance Monitoring and Optimisation
#IDUG
DB2 10/11 for z/OS System Performance Monitoring and Optimisation
John Campbell, Florence Dubois
IBM DB2 for z/OS Development
One-day education seminar
Tuesday, September 9, 2014 – 09:30 AM - 04:30 PM | Platform: DB2 for z/OS