The source for the CAGR figure was an IBM presentation citing “IBM and Industry Studies”.
DB2 uses the bootstrap data set (BSDS) to manage recovery and other DB2 subsystemwide information. The BSDS contains information needed to restart and to recover DB2 from any abnormal circumstance. For example, all log data sets (active and archive) are automatically recorded within the BSDS. While DB2 is active, the BSDS is open and is updated.
The I/O supervisor enqueues this request on a device number for the channel subsystem. The channel program consists of standard commands, described in the ECKD disk architecture, that specify the I/O request to the control unit. The control unit executes these commands, propagating them to the logical volumes and physical devices, and manages data delivery back to the channel subsystem. The channel subsystem manages the transfer of channel commands and data through links to the control unit. This linking can be complex and can involve ESCON Directors, channel extenders, and even telecommunication devices for remote I/O. There are two views of a disk control unit. Physically, it is a storage server to which the disk drives are attached and to which the channel links from the hosts are connected; the storage server contains all the facilities needed to perform the I/O operations. Logically, the disk control unit is an aggregate of subunits, known as logical control units (LCUs) or control unit images, that perform the I/O operations.
A base table exists with the data set name *.I0001.*. The table is cloned, and the clone’s data set is initially named *.I0002.*. After an exchange, the base objects are named *.I0002.* and the clone objects are named *.I0001.*. Each time an exchange happens, the instance numbers that represent the base and clone objects swap, which immediately changes the data that the base and clone tables and indexes present.
OK… so where can you find DB2-related storage information? How about the DB2 Catalog?
• STOSPACE
• RUNSTATS
• Real-time Statistics
Installations that are SMS managed can define a STOGROUP with VOLUMES('*'). This specification means that SMS assigns the volumes for the table and index spaces in that STOGROUP. To do this, SMS uses ACS routines to assign a Storage Class, a Management Class, and a Storage Group to the table or index space. SMS Storage Groups:
• Cannot share a volume.
• Cannot share data sets.
• Must contain whole volumes.
• Must contain volumes of the same device geometry.
• Can contain multi-volume data sets.
• Must contain a VTOC and a VVDS.
Volume separation is easy when you have hundreds of volumes available. But this separation is good only if your volumes have separate access paths. Path separation is important to achieve high parallel data transfer rates. Without DFSMS, the user is responsible for distributing DB2 data sets among disks. This process needs to be reviewed periodically, either when the workload changes, or when the storage server configuration changes. With DFSMS, the user can distribute the DFSMS Storage Groups among storage servers with the purpose of optimizing access parallelism. Another purpose could be managing availability for disaster recovery planning. This can be combined with the previous purpose by letting DFSMS automatically fill in these Storage Groups with data sets, by applying policies defined in the automatic class selection routines.
Many customers do not assign any Data Class for their DB2 data sets. Storage Administrators can set up many different Data Classes based on user requirements. Some of the most common reasons to use a Data Class in a DB2 environment include:
• Enabling EF and/or EA
• Bypassing the 255-extent rule for data sets
• Bypassing the 5-extent rule for data set allocations
• Reducing space requirements when no volume meets the space requirement
• VSAM and/or sequential striping (striping also requires SDR in the Storage Class)
• Allocating data sets with common DCB and/or space characteristics
• Specifying additional volumes for DB2 or utility data sets
• Specifying different data set types, such as PDSE, large format, and so on
In the past, Storage Administrators had to spend a significant amount of time planning Storage Class requirements. The requirements involved such things as the speed of the storage devices, the amount of cache available, and whether the disk box allowed for concurrent copy or FlashCopy, as well as other issues relating to the disk box itself. With today’s technology, most of the Storage Class information is no longer required. The three most common options for a DB2 environment are:
• Enabling Guaranteed Space
• Enabling striping (striping also requires an associated Data Class with EF enabled)
• Enabling the use of multi-tiered Storage Groups
Multi-tiered Storage Groups are specified in the Storage Group definition; however, you must still enable them in the Storage Class.
Management Class can be used for a variety of functions:
• Expiring data sets
• Specifying whether a user or Data Class can specify a retention period
• Partial release of data sets (for DB2 LDSes, partial release applies to data sets with EF enabled, but without Guaranteed Space)
• Migrating to Level 1 data sets not used for a specified time
• Migrating to Level 2 data sets not used for a specified time
• Migrating from Level 1 to Level 2 data sets not used for a specified time
• Specifying whether migration should be automatic, by command, or both
• Migrating based on the number of GDG generations
• Determining the action for rolled-off GDG objects
Storage Group definitions contain or control:
• Volsers (a volume can belong to only one Storage Group)
• Whether DFSMShsm migration, dump, or incremental backup should be enabled
• Whether the Storage Group is an overflow Storage Group
• Whether the Storage Group is an extend Storage Group (an overflow Storage Group can also be an extend Storage Group)
• A Copy Pool backup Storage Group name
• HIGH and LOW threshold values for the Storage Group
• A Break Point Value (BPV) for EAV devices
The DS8000 is the most popular disk system for the IBM System z mainframe platform. It is built for workloads such as Online Transaction Processing (OLTP) and large databases, and it supports ESCON and FICON host attachment and high-speed 15K RPM Fibre Channel drives.
All the latest disk storage arrays still emulate the 3390 track architecture, because that is what z/OS understands. You may well have heard of your site using 3390 Model 3 disks. The model number refers to the old triple-capacity 3390 models, which could hold just under 3 GB of data. In fact, modern arrays can emulate almost any volume configuration, up to an architectural limit of 65,520 cylinders. Recently, z/OS 1.10 added support for Extended Address Volumes (EAVs), which are supported by IBM DS8000 arrays and by other vendors. The EAV architecture supports a huge maximum theoretical volume size, although in the initial implementation the limit is 262,668 cylinders per volume, or approximately 223 GB.
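As a sanity check on the figures above, the quoted capacities follow directly from the standard 3390 geometry of 15 tracks per cylinder and 56,664 bytes per track. The sketch below is illustrative arithmetic only, not anything z/OS itself exposes:

```python
# Worked check of the volume sizes quoted above, using standard
# 3390 geometry: 15 tracks per cylinder, 56,664 bytes per track.
BYTES_PER_TRACK = 56_664
TRACKS_PER_CYL = 15
BYTES_PER_CYL = BYTES_PER_TRACK * TRACKS_PER_CYL  # 849,960 bytes/cylinder

def volume_gb(cylinders: int) -> float:
    """Capacity in decimal gigabytes for a 3390-geometry volume."""
    return cylinders * BYTES_PER_CYL / 1_000_000_000

print(volume_gb(3_339))    # 3390 Model 3: ~2.84 GB ("just under 3 GB")
print(volume_gb(65_520))   # pre-EAV architectural limit: ~55.7 GB
print(volume_gb(262_668))  # initial EAV limit: ~223.3 GB
```

The 3390-3 cylinder count (3,339) is the classic Model 3 geometry; the other two counts come straight from the paragraph above.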
RAID (Redundant Array of Independent Disks) is a technology that combines multiple disk devices into an array that is perceived by the system as a single disk drive. There are many levels of RAID technology that deliver different degrees of fault tolerance and performance.
• RAID 0 - data striping without parity
• RAID 1 - dual copy
• RAID 2 - synchronized access with separate error correction disks
• RAID 3 - synchronized access with fixed parity disk
• RAID 4 - independent access with fixed parity disk
• RAID 5 - independent access with floating parity
• RAID 6 - dual redundancy with floating parity
• RAID 10 (DS8000 and some ESS) - RAID 0 + RAID 1, no parity
Parity is additional data, “internal” to the RAID subsystem, that enables a RAID device to regenerate complete data when a portion of the data is missing. Parity works on the principle that you can sum the individual bits that make up a data block or byte across the separate disk drives to arrive at an odd or even sum.
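The odd/even-sum principle is simply exclusive OR. A minimal sketch (illustrative only; real RAID controllers do this in hardware, stripe by stripe) of how a lost member's data can be rebuilt from the surviving members plus parity:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data drives in one stripe, plus one parity drive (RAID 3/4/5 style).
data = [b"DB2 ", b"on  ", b"z/OS"]
parity = xor_blocks(data)

# Simulate losing drive 1 and regenerating it from parity + survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt)  # b'on  '
```

Because XOR-ing any block with itself yields zero, XOR-ing the survivors with the parity block leaves exactly the missing block, which is why a single-parity array survives one drive failure.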
From a conceptual point of view, the functions of cache and buffer pools are similar.
• From a storage perspective, DB2 data is typically considered “unfriendly” because of the relatively low reuse of data in cache.
• DB2 uses the data residing in the buffer pool when it is available, and may not require the data in disk cache at all. Reading from cache is many times faster than reading from disk: there is no need to go to disk, find the data, and bring it back through cache.
• Just because your buffer pool casts out data does not mean that the data is no longer retained in cache. Newer disk controllers have very large cache sizes and can retain data for longer periods.
FICON – FIBRE CONNECTIVITY
MIDAW (Modified Indirect Data Address Word) is essentially a technical improvement to channel programming. It allows Media Manager to fully exploit the track-level command operations of z/Architecture:
• Reduces the number of control words required for an I/O
• For an EF data set, reduces from 24 control words to 1
• Using MIDAWs reduces the EF data set performance penalty
• Requires a z9 processor or above
• Requires z/OS 1.7 (retrofitted to 1.6 with an APAR)
Prior to MIDAWs, the maximum log throughput using the DS8000 and FICON Express 2 was 84 MBps, and striping the log increased the bandwidth only slightly. Nobody should ever allocate their logs as EF with a single stripe, because there is no advantage to doing so. (DB2 does not support Extended Addressability for log data sets.) Nevertheless, Figure 12 on page 13 shows how a single-stripe EF log would perform, so that we can compare two stripes to one stripe. Given a log with two stripes, Figure 12 on page 13 shows that MIDAWs increased the log bandwidth by 31%, reaching 116 MBps with two stripes.
Figure 6 shows the I/O response times of DB2 prefetch I/Os with 4 KB pages, with and without MIDAWs, for both EF and non-EF data sets. Prior to MIDAWs, the response time for EF data sets was 1.9 ms, versus 1.2 ms for non-EF data sets. MIDAWs did not improve the response time for non-EF data sets, but they did lower the response time of EF data sets, so that EF and non-EF data sets performed identically. The response time of non-EF data sets remained at 1.2 ms. MIDAWs reduce the number of CCWs per track for EF data sets from 24 to 1. It would be easy to presume that reducing the number of CCWs is the only contributing factor to improving performance. However, MIDAWs also reduce the number of CCWs per track for non-EF data sets from 12 to 1, yet the non-EF response time does not change. On the other hand, the channel utilization for non-EF data sets was reduced by half, from 51.5% to 26.2%, as shown in Figure 7.
Rule of thumb (ROT) – keep each data set to fewer than 10 extents
If the table space or index space has a SECQTY setting greater than zero, the primary space allocation of each subsequent data set is the larger of the SECQTY setting and the value derived from a sliding scale algorithm.
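The effect of this rule can be sketched as follows. The sliding-scale values themselves are DB2-internal (the allocation grows with each extent up to a cap of 127 or 559 cylinders, depending on the maximum data set size); the `sliding_scale` function below is a purely hypothetical stand-in for that growth curve, used only to show the "larger of the two" logic:

```python
def sliding_scale(extent_number: int, cap_cyls: int = 559) -> int:
    """HYPOTHETICAL stand-in for DB2's sliding scale: the allocation
    (in cylinders) grows with each extent, up to a cap. The real curve
    is DB2-internal; only the cap and the monotonic growth are modeled."""
    return min(extent_number, cap_cyls)

def next_allocation(secqty_cyls: int, extent_number: int) -> int:
    # DB2 uses the LARGER of the user's SECQTY and the sliding-scale value.
    return max(secqty_cyls, sliding_scale(extent_number))

print(next_allocation(secqty_cyls=5, extent_number=2))    # SECQTY wins: 5
print(next_allocation(secqty_cyls=5, extent_number=100))  # sliding scale wins: 100
```

The practical consequence is the one the text describes: a small SECQTY cannot starve a growing object, because later extents are sized by the sliding scale instead.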
All this sliding scale information is in Chapter 3 of the DBA Guide
Beware of running REORG with manual allocations and multiple extents! You get better results if the volume is not fragmented.
The clone is created in the same table space as the base table, but in a different VSAM data set. A page set normally has the format catname.DSNDBx.dbname.spname.I000y.A001. It is the I000y portion of the page set name that we are interested in. The base table will have I0001 in the VSAM name, while the clone will have I0002. The SQL EXCHANGE statement flips the VSAM data sets.
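The renaming that EXCHANGE performs on the instance qualifier can be illustrated with a small simulation. The data set names below are invented for the example; only the I0001/I0002 flip mirrors what DB2 actually does:

```python
def flip_instance(page_set: str) -> str:
    """Swap the I0001/I0002 instance qualifier in a page set name,
    mimicking the effect of the SQL EXCHANGE statement."""
    parts = page_set.split(".")
    idx = parts.index("I0001") if "I0001" in parts else parts.index("I0002")
    parts[idx] = "I0002" if parts[idx] == "I0001" else "I0001"
    return ".".join(parts)

base = "DB2CAT.DSNDBC.MYDB.MYTS.I0001.A001"   # hypothetical base page set
clone = "DB2CAT.DSNDBC.MYDB.MYTS.I0002.A001"  # its clone

print(flip_instance(base))   # DB2CAT.DSNDBC.MYDB.MYTS.I0002.A001
print(flip_instance(clone))  # DB2CAT.DSNDBC.MYDB.MYTS.I0001.A001
```

Because only the instance qualifier swaps, the exchange is effectively instantaneous: no data is moved, yet the base table now presents what was previously the clone's data.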
Define the data classes for your table space data sets and index data sets. Code the SMS automatic class selection (ACS) routines to assign indexes to one SMS storage class and to assign table spaces to a different SMS storage class.
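ACS routines are written in SMS's own filter language, but the selection logic amounts to pattern matching on data set names. The toy Python illustration below shows that kind of rule table; the class names and the naming convention (index space names beginning with X) are invented for the example, since a real routine would use whatever conventions your site defines:

```python
import fnmatch

# Toy stand-in for an SMS ACS storage-class routine. Real ACS routines
# are written in the ACS language; the rules and class names below are
# ASSUMPTIONS, based on a hypothetical site convention where index
# space names start with 'X'. First matching rule wins, as in ACS.
RULES = [
    ("*.DSNDB?.*.X*.I000?.A*", "SCINDEX"),  # index spaces -> one class
    ("*.DSNDB?.*.*.I000?.A*",  "SCTABLE"),  # table spaces -> another
]

def storage_class(dsname: str) -> str:
    for pattern, sclass in RULES:
        if fnmatch.fnmatch(dsname, pattern):
            return sclass
    return "SCSTD"  # default class for everything else

print(storage_class("DB2CAT.DSNDBD.MYDB.XPART1.I0001.A001"))  # SCINDEX
print(storage_class("DB2CAT.DSNDBD.MYDB.MYTS.I0001.A001"))    # SCTABLE
```

Splitting indexes and table spaces into different Storage Classes in this way is what lets the Storage Administrator steer each to different Storage Groups, as the text describes.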
Note: The EXTENTS column in the RTS is updated only when an update or an applicable utility is run against the object. A simple start after extent reduction, or a read via SELECT, does not update the EXTENTS column (the same issue as with the catalog).
48 Disk storage access with DB2 for z/OS