The document discusses database layout recommendations for SAP installations using DB2 UDB. It recommends striping all database objects like tablespaces and file systems across all available storage devices. This distributes I/O load and avoids potential hotspots. It also discusses maintaining balanced containers within tablespaces and techniques for growing the database size over time. The examples are based on an IBM Enterprise Storage Server but the concepts can apply to other hardware.
The document discusses parallel databases and their architectures. It introduces parallel databases as systems that seek to improve performance through parallelizing operations like loading data, building indexes, and evaluating queries using multiple CPUs and disks. It describes three main architectures for parallel databases: shared memory, shared disk, and shared nothing. The shared nothing architecture provides linear scale-up and speed-up but is more difficult to program. The document also discusses measuring performance improvements from parallelization through speed-up and scale-up.
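The speed-up and scale-up metrics mentioned above have standard definitions: speed-up compares elapsed time for a fixed workload on a small versus a large system, while scale-up compares a small problem on a small system against a proportionally larger problem on a proportionally larger system. A minimal sketch of the arithmetic (the function names and numbers are illustrative):

```python
def speedup(time_small_system: float, time_large_system: float) -> float:
    """Speed-up for a fixed workload: >1 means the larger system is faster.
    Linear speed-up on N times the resources would give a value of N."""
    return time_small_system / time_large_system

def scaleup(time_small_problem: float, time_large_problem: float) -> float:
    """Scale-up: small problem on a small system vs. an N-times-larger
    problem on an N-times-larger system. A value of 1.0 is linear scale-up."""
    return time_small_problem / time_large_problem

# A 100-second job finishing in 25 seconds on 4 CPUs:
print(speedup(100.0, 25.0))   # 4.0 -> linear speed-up
# A 4x larger job on 4x the hardware taking 110 instead of 100 seconds:
print(scaleup(100.0, 110.0))  # ~0.91 -> slightly sub-linear scale-up
```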
IBM DB2 Analytics Accelerator high availability and disaster recovery - bupbechanhgmail
This document discusses high availability (HA) and disaster recovery (DR) strategies for IBM DB2 Analytics Accelerator. It describes built-in HA capabilities of the accelerator like redundant Netezza performance server hosts, S-Blades, networking, and disk arrays. It also discusses ways to integrate the accelerator into existing HA architectures, including workload balancing across multiple accelerators, maintaining consistent data across accelerators, and using incremental updates. The goal is to provide the desired recovery time objectives (RTOs) and recovery point objectives (RPOs) for analytical query workloads processed by the accelerator.
This document defines the file sharing and failover cluster design for a customer. It was decided to use a centralized file cluster with Microsoft failover clustering to reduce support costs. Storage will be provisioned across multiple LUNs for different file types and services. A physical failover cluster will be used due to virtualization support limitations. LUN sizes are specified for various file types to support 500 users initially with room to scale.
The document discusses how databases are stored on disks and managed in files. It describes how disks provide secondary storage that is cheaper than RAM but slower to access. The buffer manager brings frequently used disk pages into memory to allow for faster data access. Files are used to organize records on disks and different file structures like heap files and indexes are used to support data insertion, deletion, and retrieval.
Coerced Cache Eviction: Dealing with Misbehaving Disks through Discreet-Mode Journaling - vchidambaram
"Coerced Cache Eviction: Dealing with Misbehaving Disks through Discreet-Mode Journaling", presented at DSN 2011. For more details check out http://pages.cs.wisc.edu/~vijayc/cce.htm
A new multi-tiered solid state disk using SLC/MLC combined flash memory - ijcseit
Storing digital information and ensuring accurate, steady, and uninterrupted access to that data are fundamental challenges for enterprise-class organizations and companies. In recent years, new types of storage systems such as solid state disks (SSDs) have been introduced. Unlike hard disks, which have a mechanical structure, SSDs are based on flash memory and thus have an electronic structure. An SSD generally consists of a number of flash memory chips, some volatile-memory buffers, and an embedded microprocessor, interconnected by a port. The microprocessor runs a small file system called the flash translation layer (FTL), which controls and schedules the buffers, data transfers, and all flash memory tasks. SSDs have several advantages over hard disks, such as high speed, low energy consumption, lower heat and noise, resistance to physical damage, and smaller size; however, disadvantages such as limited endurance and high price remain challenging. This study combines the two common technologies used in manufacturing SSDs - SLC and MLC chips - in a single SSD to reduce the side effects of current SSDs. Such a multi-layer SSD is regarded as an efficient solution in this field.
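The hybrid SLC/MLC idea in that abstract can be illustrated with a toy placement policy: write-hot pages go to the fast, high-endurance SLC tier and cold pages to the dense, cheaper MLC tier. The class, threshold, and promotion rule below are illustrative assumptions, not the paper's actual FTL design:

```python
class HybridSSD:
    """Toy model of an SLC/MLC combined SSD: hot pages to SLC, cold to MLC."""
    def __init__(self, hot_threshold: int = 3):
        self.hot_threshold = hot_threshold  # writes before a page counts as hot
        self.write_counts = {}              # page id -> observed write count
        self.slc, self.mlc = {}, {}         # the two flash tiers

    def write(self, page: int, data: bytes) -> str:
        self.write_counts[page] = self.write_counts.get(page, 0) + 1
        if self.write_counts[page] >= self.hot_threshold:
            self.mlc.pop(page, None)   # promote: move the page to the SLC tier
            self.slc[page] = data
            return "slc"
        self.slc.pop(page, None)
        self.mlc[page] = data
        return "mlc"

ssd = HybridSSD()
for _ in range(3):
    tier = ssd.write(7, b"hot data")
print(tier)                    # "slc": page 7 became hot and was promoted
print(ssd.write(9, b"cold"))   # "mlc": a rarely-written page stays in MLC
```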
Ch 1-final-file organization from Korth - Rupali Rana
This document summarizes key concepts about file organization from the textbook "Database System Concepts". It discusses different types of physical storage media like main memory, disks, tapes and their properties. It describes the storage hierarchy with primary, secondary and tertiary storage. It also covers file systems, file organization techniques for fixed and variable length records, file access methods, and techniques for handling record deletion in files with fixed-length records.
This document discusses memory-based database management systems (MDBMS). Key points include:
- An MDBMS stores the database in main memory rather than disk storage for faster access speed. However, data is transient and could be lost if power is lost.
- MDBMS are well-suited for applications with frequent data reads, shared databases with many users, or where performance is critical. They are less suitable when data persistence is required.
- Sybase implemented an MDBMS that uses memory as a virtual disk volume, retaining the SQL interface. Transactions are stored in a transfer table then committed to the original disk-based database.
This presentation discusses SQL Server data compression. It defines compression, explains how it works, the different types (row and page), and objects that can be compressed. It provides guidance on determining if compression is worthwhile, when to use it, and how to implement it. The presentation highlights best practices like testing compression first and compressing during maintenance windows. It also reviews advantages like disk space savings and performance gains, and disadvantages like additional CPU usage. An example of successfully compressing a 570GB database to 160GB is provided.
The document discusses the database environment and advantages of a database management system (DBMS). It describes how a DBMS provides a central repository of shared data that applications can access. This reduces data redundancy, improves data sharing and integrity, and increases development productivity compared to file-based data storage. The document provides examples of database applications from personal to enterprise-wide and outlines the typical components involved, from CASE tools to end users.
This document discusses database system applications and the advantages of database systems over traditional file processing systems. It provides examples of common database applications in various industries like banking, retail, healthcare, education, and telecommunications. It also outlines some of the key disadvantages of file processing systems like data redundancy, difficulty in data access, isolation of data, integrity issues, and security problems. The document introduces fundamental database concepts like data definition language, data manipulation language, data models, database schema, and relational databases. It provides a high-level overview of SQL as the most widely used database language. Finally, it lists an assignment for student groups to summarize the material and propose sample database tables for various popular websites and applications.
Database management system by Neeraj Bhandari (Surkhet, Nepal) - Neeraj Bhandari
A database is an organized collection of structured data stored electronically in a computer system. A database management system (DBMS) is a complex software system used to create, manage, and properly maintain large and complex databases. A DBMS provides logical and physical views of the data and allows different external views for different users. It also provides languages to define, manipulate, and control access to the data.
DB2 10 Universal Table Space - 2012-03-18 - no template - Willie Favero
DB2 introduced universal table spaces in version 9 to address the need for a table space type that provides both partitioned and segmented organization. Universal table spaces allow tables to be larger than 64GB, provide inter-partition parallelism, and support fast insert and delete operations while avoiding the overhead of partitioning by a ROWID column.
The document discusses various aspects of disk management in computer systems, including disk structure, disk scheduling, disk formatting, boot blocks, bad block recovery, swap space management, and the file system and I/O management in Windows 2000. Specifically, it covers topics like logical vs physical disk addressing, seek and rotational latency, improving access time through scheduling, low-level vs logical formatting, bootstrapping from disk, handling defective sectors, allocating and managing virtual memory using swap space, and the role of the kernel, virtual memory manager, and I/O manager in Windows 2000.
The document discusses using the Oracle Database Configuration Assistant (DBCA) to create Oracle databases. It covers planning database design, choosing character sets, using DBCA to create a database including configuring files and memory, creating database design templates, and performing additional tasks with DBCA like deleting databases. The end summarizes how to create a database, generate scripts, manage templates, and perform additional DBCA tasks.
1) SSD provides significantly higher performance than spinning disks by using flash memory instead of spinning platters to store data.
2) There are several form factors for SSD including drives that replace spinning disks, PCIe cards, and memory appliances with SSD DIMMs.
3) The best locations to implement SSD are where they can provide global acceleration benefits across many applications, such as in a storage array controller or memory appliance connected to a storage controller.
XD planning guide - storage best practices - Nuno Alves
This document provides guidelines for planning storage infrastructure for Citrix XenDesktop environments. It discusses organizational requirements like alignment with IT strategy and high availability needs. Technical requirements covered include performance needs like typical I/O rates and functional requirements like supported protocols. The document recommends avoiding bottlenecks, choosing appropriate RAID levels based on read/write ratios, validating storage performance, and involving storage vendors in planning.
This white paper discusses key parameters for calculating the usable life of Netlist solid state drives (SSDs). It covers topics like NAND flash basics, wear leveling, read-disturb effects, garbage collection, TRIM commands, endurance, input/output operations per second (IOPS), write amplification, data compression levels, and formulas for calculating SSD usable life based on these factors. Examples are provided to demonstrate how to calculate usable life for specific SSDs and usage scenarios.
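The usable-life parameters in that white paper combine through standard endurance arithmetic: drive lifetime is the total raw write endurance (capacity times rated P/E cycles) divided by the effective NAND write rate after write amplification and compression. The formula shape is standard; the specific values and parameter names below are illustrative, not taken from the paper:

```python
def ssd_life_years(capacity_gb, pe_cycles, host_writes_gb_per_day,
                   write_amplification, compression_ratio=1.0):
    """Estimated SSD usable life in years.
    Total NAND endurance = capacity * P/E cycles; daily NAND wear =
    host writes * write amplification / compression ratio."""
    total_endurance_gb = capacity_gb * pe_cycles
    nand_writes_per_day = (host_writes_gb_per_day * write_amplification
                           / compression_ratio)
    return total_endurance_gb / nand_writes_per_day / 365.0

# 200 GB MLC drive, 3000 P/E cycles, 100 GB of host writes/day, WA = 3:
print(round(ssd_life_years(200, 3000, 100, 3.0), 1))  # ~5.5 years
```

Note how write amplification divides directly into lifetime: halving it (e.g. via TRIM and better garbage collection, as the paper discusses) doubles the drive's usable life.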
Geek Sync | Need for Speed: In-Memory Databases in Oracle and SQL Server - IDERA Software
You can watch the replay for this Geek Sync webcast in the IDERA Resource Center: http://ow.ly/S6MG50A5ok5
Microsoft introduced IN-MEMORY OLTP, widely referred to as “Hekaton” in SQL Server 2014. Hekaton allows for the creation of fully transactionally consistent memory-resident tables designed for high concurrency and no blocking. With SQL 2016, many of the original restrictions and limitations of this feature have been reduced. IDERA’s Vicky Harp will give an overview of this feature, including how to compile T-SQL code into machine code for an even greater performance boost.
There’s also been a lot of buzz about Oracle 12c’s new IN-MEMORY COLUMN STORE. Oracle ACE Bert Scalzo will cover this new feature, how it works, its benefits, scripts to measure and monitor it, and more. He will also touch on performance observations from benchmarking this new feature against more traditional SGA memory allocations plus Oracle 11g R2’s Database Smart Flash Cache. All findings, scripts, and conclusions from this exercise will be shared. In addition, two very popular database benchmarking tools will be highlighted.
This document provides an in-depth look at solid state drive (SSD) performance in the IBM DS8000 storage system. It discusses SSD performance best practices, such as placing hot data on SSDs for applications requiring low response times. It also covers selecting which data is best suited for SSDs versus HDDs on the DS8000, which now supports SSDs as a high performance tier along with 15K RPM and 7K RPM HDDs. Tools for analyzing I/O patterns on AIX and System z servers are also described to help identify hot data candidates for migration to SSDs.
The document discusses Windows 7 enhancements that improve the performance and endurance of solid-state drives (SSDs). It outlines how Windows 7 identifies SSDs differently from HDDs to optimize defragmentation and trim features. It also discusses the importance of aligning the NTFS partition to the SSD geometry. Proposed Windows 7 logo requirements related to SSDs are presented. Challenges like varying SSD performance and ensuring data retention are discussed.
General Information About Information Technologies - techgajanan
The document provides definitions for various information technology terms from A-D, including:
- ADSL, AGP, ATA, attachments, AVI, bandwidth, binary, BIOS, bitmap, blog, Bluetooth, browser, cache, CMOS, codec, cookie, CPU, cursor, data, database, defragmentation, desktop, DDR, DIMM, directory, disk drive, DLL, and DMA. It provides brief explanations of each term.
This document discusses distributed databases and client-server architectures. It covers topics such as distributed database concepts, data fragmentation and replication techniques, types of distributed database systems, query processing, concurrency control, and Oracle's implementation of distributed databases.
assignment
1. Internal components are the devices that are inside the main computer tower. These devices include the Central Processing Unit (CPU), the motherboard, and the modem.
Computer hardware is the physical part of a computer, as distinguished from the computer software that executes or runs on the hardware. The hardware of a computer is infrequently changed, while software and data are modified frequently. The term "soft" refers to what is readily created, modified, or erased, unlike the physical components within the computer, which are "hard".
Inside Computer
Motherboard
The motherboard is the "body" or mainframe of the computer, through which all other components interface. It is the central circuit board making up a complex electronic system. A motherboard provides the electrical connections by which the other components of the system communicate. The motherboard includes many components such as the central processing unit (CPU), random access memory (RAM), firmware, and internal and external buses.
Central Processing Unit
The Central Processing Unit (CPU; sometimes just called the processor) is a machine that can execute computer programs. It is sometimes referred to as the "brain" of the computer.
There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and writeback. The first step, fetch, involves retrieving an instruction from program memory. In the decode step, the instruction is broken up into parts that have significance to other portions of the CPU. During the execute step, various portions of the CPU such as the arithmetic logic unit (ALU) and the floating point unit (FPU) are connected so they can perform the desired operation. The final step, writeback, simply "writes back" the results of the execute step to some form of memory.
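The four-step cycle described above can be sketched as a tiny interpreter loop: each iteration fetches an encoded instruction, decodes it into an opcode and operands, executes it, and writes the result back to a register file. The three-instruction format here is purely illustrative, not any real machine's ISA:

```python
# Toy CPU: each instruction is (opcode, dest_register, src1, src2).
program = [
    ("LOAD", 0, 5, None),   # r0 = 5 (immediate)
    ("LOAD", 1, 7, None),   # r1 = 7 (immediate)
    ("ADD",  2, 0, 1),      # r2 = r0 + r1
]
registers = [0] * 4
pc = 0  # program counter

while pc < len(program):
    instr = program[pc]                  # fetch: read instruction memory
    opcode, dest, src1, src2 = instr     # decode: split into fields
    if opcode == "LOAD":                 # execute: ALU / data-path work
        result = src1                    # immediate value
    elif opcode == "ADD":
        result = registers[src1] + registers[src2]
    registers[dest] = result             # writeback: store result
    pc += 1

print(registers)  # [5, 7, 12, 0]
```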
Random Access Memory
Random access memory (RAM) is fast-access memory that is cleared when the computer is powered down. RAM attaches directly to the motherboard and is used to store programs that are currently running. RAM is a set of integrated circuits that allow the stored data to be accessed in any order (which is why it is called random). There are many different types of RAM; distinctions between them include writable vs. read-only, static vs. dynamic, and volatile vs. non-volatile.
Firmware
Firmware is loaded from read-only memory (ROM) and run by the Basic Input/Output System (BIOS). It is a computer program that is embedded in a hardware device, for example a microcontroller. As its name suggests, firmware sits somewhere between hardware and software. Like software, it is a computer program executed by a microprocessor or a microcontroller. But it is also tightly linked to a piece of hardware and has little meaning outside of it. Most devices attached to modern systems are special-purpose computers in their own right, running their own software. Some of these devices store that software ("firmware") in a ROM within the device itself.
Power Supply
The power supply, as its name might suggest, is the device that supplies power to all the components in the computer. Its case holds a transformer, voltage control, and (usually) a cooling fan. The power supply converts about 100-120 volts of AC power to low-voltage DC power for the internal components to use. The most common computer power supplies are built to conform to the ATX form factor, which enables different power supplies to be interchangeable with different components inside the computer. ATX power supplies are also designed to turn on and off using a signal from the motherboard, and provide support for modern functions such as standby mode.
Removable Media Devices
If you are putting something in your computer and taking it out, it is most likely a form of removable media. There are many different removable media devices. The most popular are probably CD and DVD drives, which almost every computer these days has at least one of. There are also some newer disc drives such as Blu-ray.
This document summarizes the activities of the IDEMA Long Data Sector Committee, which was formed in 2000 to address compatibility issues between increasing HDD areal density and maintaining data integrity at the existing 512-byte sector format. The committee worked with Microsoft to gain support for sector sizes up to 4096 bytes in Windows Vista. Transitioning to a long sector format requires backward compatibility and changes to software like BIOS, OS installers, and device drivers. Larger ECC block sizes provide benefits like increased error correction efficiency. Storage systems also require adaptations to support drives with different sector sizes.
Managing a large chain of hotels and an ERP database comprises core areas such as HRMS and PIP. HRMS (Human Resource Management System) includes areas such as soft joining, promotion, transfer, confirmation, leave and attendance, and exit. Through PIP (Payroll Information Portal), employees can view their individual salary details and submit investment declarations, reimbursement claims, and CTC structuring. Managing a large chain of hotels and an ERP database in the AWS cloud involves continuous monitoring of areas such as resource usage performance and optimization techniques relating to the use of PL/SQL. High availability (HA) of data is accomplished through backup and recovery mechanisms, and security of the data through encryption and decryption mechanisms.
WSC NetApp storage for Windows challenges and solutions - Accenture
NetApp storage solutions can help address key challenges that organizations face with Windows storage environments. These include having separate "islands" of storage for different applications, which leads to inefficient administration and utilization. NetApp provides a consolidated storage system that can store all Windows data more efficiently and simply. It reduces management costs through features like simplified administration and improved backup and recovery. NetApp storage also improves scalability and availability through technologies like clustering and replication. Organizations have been able to significantly reduce their Windows storage costs, by as much as 50%, by adopting NetApp solutions.
The document discusses database design and NoSQL databases like Couchbase. It covers topics such as data structures, the differences between relational and non-relational databases, handling conflicts in Couchbase, and optimizing performance in Couchbase by using efficient document structures and SDK methods. Effective document structures and database configuration can improve the read and write efficiency of Couchbase applications.
assignment
1.Internal components are the devices that are inside the main computer tower. These devices include the Central Processing Unit (CPU), Motherboard and the modem.
Computer Hardware is the physical part of a computer, as distinguished from thecomputer softwarethat executes or runs on the hardware. The hardware of a computer isinfrequently changed, while software and data are modified frequently. The term "soft" refers to readily created, modified, or erased. Theseare unlike the physical components within the computer which are "hard".
Inside Computer
Motherboard
The motherboard is the "body" or mainframe of the computer, through which all other componentsinterface. It is thecentral circuit board making up a complex electronic system. A motherboard provides the electrical connections by which the other components of the systemcommunicate. The mother board includes many components such as: centralprocessing unit (CPU), random access memory (RAM), firmware, and internal and external buses.
Motherboard
Central Processing Unit
The Central Processing Unit (CPU; sometimes just called processor) is amachine that can executecomputer programs It is sometimes referred to as the "brain" of the computer.
CPU Diagram
There are four steps that nearly all CPUs use in their operation:fetch, decode, execute, and writeback. The firststep, fetch, involves retrieving an instruction from program memory. In thedecode step, the instruction is broken up into parts that have significance toother portions of theCpu. During the execute step various portions of the CPU such as the arithmeticlogic unit (ALU) and thefloating point unit (FPU) are connected so they can perform the desired operation. The final step, writeback, simply "writes back" the results of the execute step to some form of memory.
Random Access Memory
Random access memory (RAM) is fast-access memory that is cleared when the computer is power-down. RAM attaches directly to the motherboard, and is used to store programs that are currently running. RAM is a set of integrated circuits that allow the stored data to be accessed in any order (why it is called random). There are many different types of RAM. Distinctions between these different types include: writable vs. read-only, static vs. dynamic, volatile vs. non-volatile, etc.
RAM
Firmware
Firmware is loaded from the Read only memory (ROM) run from the BasicInput-Output System (BIOS). It is a computer program that is embedded in a hardware device, for example a microcontroller. As it name suggests, firmware is somewhere between hardware and software. Like software, it is a computer program which is executed by a microprocessor or a microcontroller. But it is also tightly linked to a piece of hardware, and has little meaning outside of it. Most devices attached to modern systems are special-purpose computers intheir own right, running their own software. Some of these devices store that software ("firmware") in a ROM within the device itself
Power Supply
The power supply as its name might suggest is the device that supplies power to all the components in the computer. Its case holds a transformer, voltage control, and (usually) a cooling fan. The power supply converts about 100-120 volts of AC power to low-voltage DC power for the internal components to use. The most common computer power supplies are built to conform with the form factor. This enables different power supplies to be interchangable with different components inside the computer. ATX power supplies also are designed to turn on and off using a signal from the motherboard, and provide support for modern functions such as standby mode.
Removable Media Devices
If your putting something in your computer and taking it out is most likely a form of removable media. There are many different removable media devices. The most popular are probably CD and DVD drives which almost every computer these days has at least one of. There are some new disc drives such as Bl
This document summarizes the activities of the IDEMA Long Data Sector Committee, which was formed in 2000 to address compatibility issues between increasing HDD areal density and maintaining data integrity at the existing 512-byte sector format. The committee worked with Microsoft to gain support for sector sizes up to 4096 bytes in Windows Vista. Transitioning to a long sector format requires backward compatibility and changes to software like BIOS, OS installers, and device drivers. Larger ECC block sizes provide benefits like increased error correction efficiency. Storage systems also require adaptations to support drives with different sector sizes.
Managing large chain of Hotels and ERP database comprises of core areas such as HRMS & PIP.HRMS (Human Resource Management System), which further includes areas such as Soft Joining, Promotion, Transfer, Confirmation, Leave Attendance and Exit, etc. PIP (Payroll Information Portal), wherein employees can view their individual Salary details, submit investment declaration, Reimbursement claim & CTC structuring, etc. Management of Large Chain of Hotels and ERP Database in AWS Cloud involves continuous monitoring with regards to the areas such as Performance of resource usages and optimization techniques relating to the use of PL/SQL. High Availability (HA) of data is accomplished through the Backup and Recovery mechanism and security of the data by Encryption & Decryption mechanism.
WSC Net App storage for windows challenges and solutionsAccenture
NetApp storage solutions can help address key challenges that organizations face with Windows storage environments. These include having separate "islands" of storage for different applications, which leads to inefficient administration and utilization. NetApp provides a consolidated storage system that can store all Windows data more efficiently and simply. It reduces management costs through features like simplified administration and improved backup and recovery. NetApp storage also improves scalability and availability through technologies like clustering and replication. Organizations have been able to significantly reduce their Windows storage costs, by as much as 50%, by adopting NetApp solutions.
The document discusses database design and NoSQL databases like Couchbase. It covers topics such as data structures, the differences between relational and non-relational databases, handling conflicts in Couchbase, and optimizing performance in Couchbase by using efficient document structures and SDK methods. Effective document structures and database configuration can improve the read and write efficiency of Couchbase applications.
Whitepaper: Running Oracle e-Business Suite Database on Oracle Database Appli...Maris Elsins
This is the whitepaper for my Collaborate 13 presentation with the same title. It describes how Pythian completed a migration project of eBS R12 database top ODA (Oracle Appliance Kit v2.2).
This document discusses the relationship between DB2 and storage management on IBM mainframes. It begins by describing how DBAs and storage administrators typically have different focuses, with DBAs more focused on database objects and storage administrators focused on overall storage capacity. It then discusses how DB2 uses storage, including for tablespaces, indexes, logs, and backups. It also covers DB2's integration with DFSMS for storage management capabilities like storage groups, data placement, and space management. Finally, it discusses how modern storage architectures have reduced the importance of careful data set placement that was previously recommended for database performance.
The document provides information about database administration including:
1. It discusses different database management system (DBMS) architectures like enterprise, departmental, personal, mobile, and cloud.
2. It describes factors to consider when choosing a DBMS like operating system support, organization type, benchmarks, scalability, tools availability, technicians availability, and cost of ownership.
3. It outlines the Oracle database installation process including hardware and software requirements, available installation options, and tools for database administration.
The document provides information about database administration including:
1. It discusses different database management system (DBMS) architectures like enterprise, departmental, personal, mobile, and cloud.
2. It describes factors to consider when choosing a DBMS like operating system support, organization type, benchmarks, scalability, tools availability, technicians availability, and cost of ownership.
3. It outlines the Oracle database installation process including hardware and software requirements, available installation options, and tools for database administration.
The document discusses DB2 architecture and concepts. It explains that each DB2 installation has a Database Administration Server (DAS) that provides remote administration support. It also discusses the DB2 Profile Registry, which stores configurable settings. The document then covers the instance concept, noting that an instance is a set of processes, disk, and memory allocations that provide database services and can contain one or more databases.
Best Practices for Deploying Hadoop (BigInsights) in the CloudLeons Petražickis
This document provides best practices for optimizing the performance of InfoSphere BigInsights and InfoSphere Streams when deployed in the cloud. It discusses optimizing disk performance by choosing cloud providers and instances with good disk I/O, partitioning and formatting disks correctly, and configuring HDFS to use multiple data directories. It also discusses optimizing Java performance by correctly configuring JVM memory and optimizing MapReduce performance by setting appropriate values for map and reduce tasks based on machine resources.
Pass chapter meeting - november - partitioning for database availability - ch...Charley Hanania
Charley Hanania discusses logically partitioning databases to improve performance and availability. Logically partitioning involves separating database objects into different filegroups and files based on criticality and usage. This allows placing high performance objects on faster storage. It also enables partial database availability so that core functions can still operate if a disk fails without affecting unrelated objects. The presentation provides examples to incorporate logical partitioning into application database designs for better performance, management, and disaster recovery.
Tech days 2011 - database design patterns for keeping your database applicati...Charley Hanania
Charley Hanania presented on database design patterns for keeping database applications available and performing well. He discussed logically partitioning databases for clarity, performance, and availability. This includes separating objects into schemas and filegroups based on criticality and performance needs. Partial database availability can be achieved such that failures of individual filegroups do not affect unrelated objects or subsystems.
The document summarizes techniques for optimizing database performance across different platforms as a high performance DBA. It discusses strategies for storage management, performance management, and capacity management. Embarcadero products like Performance Center and DBArtisan with Space Analyst are presented as tools to help automate monitoring and diagnosis of storage issues and performance bottlenecks across databases.
In this slides we describe about the databases of Amazon Web Services and what are there features and there other functionality and uses in real time scenario and types of database available in Amazon web services.
Efficient and scalable multitenant placement approach for in memory database ...CSITiaesprime
Of late Multitenant model with In-Memory database has become prominent area for research. The paper has used advantages of multitenancy to reduce the cost for hardware, labor and make availability of storage by sharing database memory and file execution. The purpose of this paper is to give overview of proposed Supple architecture for implementing in-memory database backend and multitenancy, applicable in public and private cloud settings. Backend in memory database uses column-oriented approach with dictionary based compression technique. We used dedicated sample benchmark for the workload processing and also adopt the SLA penalty model. In particular, we present two approximation algorithms, multi-tenant placement (MTP) and best-fit greedy to show the quality of tenant placement. The experimental results show that MTP algorithm is scalable and efficient in comparison with best-fit greedy algorithm over proposed architecture.
The document provides information about I/O systems and a case study, including details about disk structure, disk scheduling algorithms, disk management techniques, direct memory access, swap space management, RAID structure, disk attachment methods, and features of the Windows 2000 and MS-DOS operating systems. Key points covered include how disks are addressed as logical blocks, techniques for minimizing seek time and maximizing disk bandwidth, common disk scheduling algorithms like SSTF and SCAN, and how swap space is allocated and managed in different operating systems.
Presentation db2 best practices for optimal performancesolarisyougood
This document summarizes best practices for optimizing DB2 performance on various platforms. It discusses sizing workloads based on factors like concurrent users and response time objectives. Guidelines are provided for selecting CPUs, memory, disks and platforms. The document reviews physical database design best practices like choosing a page size and tablespace design. It also discusses index design, compression techniques, and benchmark results showing DB2's high performance.
Demartek Lenovo Storage S3200 i a mixed workload environment_2016-01Lenovo Data Center
This document evaluates the Lenovo S3200 storage array's ability to support multiple workloads simultaneously. Testing showed that while an all-HDD configuration met performance requirements, one application suffered high latency. Enabling SSD caching or tiering significantly improved performance for that application specifically, reducing latency by 70% and increasing bandwidth by up to 7x, without impacting other applications. The Lenovo S3200 is suitable for consolidating diverse workloads due to its flexibility to configure HDDs with SSDs for optimized performance tailored to each use case.
CS 542 Putting it all together -- Storage ManagementJ Singh
The document provides an overview and plan for a lecture on database management systems. Key points include:
- By the second break, the lecture will cover storage hierarchies, secondary storage management, and system catalogs.
- After the second break, the topics will include data modeling and storage hierarchies.
- Storage hierarchies involve multiple storage levels from main memory to disk and beyond. The cost and performance of each level differs.
- Techniques like caching aim to keep frequently used data in faster storage levels like memory.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Database Layout for SAP Installations with DB2 UDB for Unix and Windows
Contents

1 Disclaimer
2 Abstract
3 Version History
4 Introduction
5 Understanding UDB Architectural Constraints
  5.1 Table Space Size Limitation
  5.2 Balanced Containers
6 Design Aspects of the Layout
  6.1 Concept
  6.2 Example of Layout
  6.3 Page Size
  6.4 Isolating Tables from Standard Table Spaces
  6.5 Container Size and Number of Containers
  6.6 Logging
  6.7 File Systems: Temporary Table Spaces etc.
  6.8 Performance Considerations
7 Growing the Database/Log Space
8 Miscellaneous
  8.1 EEE Specific
  8.2 AIX Specific
  8.3 IBM ESS Specific
9 Summary
10 References
29.03.2001 Page 2
1 Disclaimer
No guarantee is given for any information presented here. Recommendations are generally derived from
SAP’s and customers’ experience. They may or may not apply to specific customer installations.
2 Abstract
This paper summarizes SAP-specific aspects of the physical design of a DB2 UDB database. Primarily, the
placement and layout of DB2 UDB objects (table spaces and directories) on storage devices is discussed.
Recommendations for strategic decisions on configuring the table spaces are given and explained. The fundamental concept is to stripe all kinds of data across all available devices. This approach has recently been proposed for state-of-the-art storage systems and for other database management systems. Its advantage is that the full bandwidth of the entire I/O subsystem is available to each individual unit (table space, directory) of the database.
The examples in this paper are based on an IBM ESS (Shark) storage system. Comments regarding different hardware configurations are welcome, as is feedback on any other aspect of this paper.
3 Version History
Date Author Comment
January 31st, 2001 Jens.Claussen@sap.com first draft for internal review
February 15th, 2001 Jens.Claussen@sap.com new layout, completed
March 28th, 2001 Jens.Claussen@sap.com feedback porting team, TCC, ..
4 Introduction
Customer experience has shown that if SAP installations suffer from bad database I/O performance, this
problem is often caused by an inappropriate placement of database objects on disks. In some cases,
although more than one hundred (logical) disks are available, most of the I/O activity is concentrated on a
few disks. Carefully planning the database storage layout for today's and tomorrow's needs helps to avoid these problems. This paper summarizes design aspects for SAP installations based on a DB2 UDB database.
Primarily, the placement and layout of DB2 UDB objects (table spaces and directories) on storage devices is
discussed. Recommendations for strategic decisions on configuring the table spaces are given and
explained. The fundamental concept is to distribute all kinds of data across all devices available. This has
been proposed recently for state of the art storage systems and for other database management systems
(see references [2], [3]). The advantage of this approach is that the full bandwidth of the whole I/O
subsystem is available to each individual unit (table space, directory) of the database.
Former database layouts often tried to separate table spaces onto different devices, e.g., separating data
and index table spaces and putting each data table space onto different drives. The consequence was that
the database often ended in hotspots on devices that carried a single table space. Manual intervention was
necessary in each case to alleviate the problem. As time went by, changing application profiles or data
growth led to other hotspots or restructuring demands. The goal of the proposed layout is to avoid these
problems. Hotspots will rarely occur since each type of data is striped across multiple devices. Consequently,
no manual intervention will be necessary to correct the layout. This concept must also be implemented when
growing single table spaces to cope with an increasing amount of data. Another goal of the database layout
is to minimize planned downtime for database reconfiguration tasks like restructuring table spaces.
The examples in this paper are based on an IBM Enterprise Storage Server (Shark) storage system [9].
Comments regarding different hardware configurations are welcome.
This paper does not try to compete with other DB2 UDB specific documentation like DB2 manuals (e.g., [6]),
IBM redbooks [7] and other material about database administration and tuning. Instead, only supplemental
information that is specific to SAP systems is given.
In the remainder of this paper, the terms DB2 UDB, UDB, and DB2 always refer to the Unix and Windows
release of DB2 UDB Enterprise Edition (EE) Version 7 (fixpak 2). Other members of the DB2 family like DB2
UDB for OS/390 (zSeries) and DB2 UDB for AS/400 (iSeries) are not covered here. Furthermore, most SAP
applications built upon the 4.6 basis release are covered.
5 Understanding UDB Architectural Constraints
DB2 UDB suffers from two limitations that have an important impact on the freedom to design the database
layout: the size limit for table spaces and the data balancing for containers within a table space. The
proposed layout takes these two limitations into account.
5.1 Table Space Size Limitation
The architecture of DB2 limits the size of each database managed (DMS) table space to 2^24 data pages. For
the standard page size of 4kB, this means that a table space cannot be larger than 64GB. This upper limit
must be considered when sizing the database and designing the layout of the database.
There are several ways to alleviate the impact of this limitation:
1. Use system managed (SMS) table spaces
SMS table spaces do not restrict the size of a table space. Instead, the size of each table cannot
exceed 2^24 pages. SAP, however, does not recommend using SMS for regular table
spaces for performance reasons. SMS is only recommended for temporary table spaces.
2. Isolate large tables from the standard SAP table spaces
By default, all tables delivered by SAP are collected into a small number of table spaces (e.g., 13
data and 13 index table spaces for R/3). If one table space reaches its maximum size, some of the
largest tables within this table space can be moved to different, newly created table spaces.
Depending on the estimated table sizes, either multiple tables are moved to a single new table
space, or each table is moved to a different new table space. Section 6.4 discusses the isolation of
tables in more detail.
3. Use the Enterprise – Extended Edition (EEE) of DB2
The Enterprise – Extended Edition of DB2 allows a database to be spread across multiple hosts and
multiple partitions to be installed on a single host. The limitation of 2^24 pages per table space refers to a
single partition of the database. Consequently, if a table space is distributed across multiple
partitions (nodes) of the EEE instance either on a single host or on multiple hosts, the limit is raised
to n * 2^24 pages per table space (n is the number of partitions on which the table space is stored).
SAP, however, currently supports EEE only for a few mySAP components.
4. Use larger database page sizes
The 2^24 page limitation yields a total amount of usable space of 64GB for the standard page size of
4kB. DB2 permits choosing the page size individually for each table space. Possible page sizes are
4kB (default), 8kB, 16kB, and 32kB. By choosing larger pages for table spaces, the total size limit for
a table space can be raised up to 512GB for 32kB pages without exceeding the total number of 2^24
pages. There are, however, several restrictions in choosing the page size. Details are covered in
Section 6.3.
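The relationship between page size and the table space size limit can be illustrated with a quick calculation (this is illustrative arithmetic, not DB2 code; the 2^24-page limit is per DMS table space and per partition):

```python
# Illustrative calculation: a DMS table space is limited to 2^24 pages,
# so its maximum size grows linearly with the chosen page size.
MAX_PAGES = 2 ** 24  # architectural limit per DMS table space (per partition)

def max_tablespace_bytes(page_size_kb: int) -> int:
    """Upper bound on a DMS table space for a given page size in kB."""
    return MAX_PAGES * page_size_kb * 1024

for kb in (4, 8, 16, 32):
    gb = max_tablespace_bytes(kb) // 1024 ** 3
    print(f"{kb:2d} kB pages -> {gb} GB per table space")
```

This reproduces the limits cited in the text: 64GB for 4kB pages up to 512GB for 32kB pages.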
5.2 Balanced Containers
Figure 1: Table Space with Unbalanced Containers (extents 1-24 are balanced between Containers 1-3 as long as space is available in all containers; the unbalanced extents 25-27 in the extra space of the larger Container 3 are a potential hotspot)
If a DB2 table space consists of multiple containers of the same size, the DBMS tries to evenly distribute the
data and thereby I/O load across all containers. This is achieved by allocating extents for database objects
alternating by a round robin strategy from the different containers. If the containers do not have the same
size, only parts of the data are balanced. In the example shown in Figure 1, container 3 carries more data
extents and represents therefore a potential hotspot. To re-establish a balanced container layout, the table
space needs to be recreated, e.g., by a redirected restore. The extend option of the alter tablespace
command allows individual containers to be extended. One might assume that extending containers could be used
to enlarge the smaller containers, yielding again containers of equal size and a well-balanced table space. This
way, however, the space maps inside the table space (and thereby the extents) are not rebalanced and
remain skewed.
The recommended container layout is to start with several containers of the same size and to keep them
balanced by either adding only containers of the same size or by extending all containers simultaneously by
the same amount.
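The effect of unequal container sizes can be sketched with a toy model of round-robin extent allocation (this is a simplification for illustration, not DB2's actual space management):

```python
# Toy model: extents are allocated round robin across containers; once the
# smaller containers are full, all further extents land in the larger
# container -- the potential hotspot shown in Figure 1.
def allocate_extents(capacities, n_extents):
    """Return per-container extent counts after round-robin allocation."""
    used = [0] * len(capacities)
    for _ in range(n_extents):
        # next target: the least-filled container that still has free extents
        candidates = [i for i, cap in enumerate(capacities) if used[i] < cap]
        if not candidates:
            raise RuntimeError("table space full")
        target = min(candidates, key=lambda i: used[i])
        used[target] += 1
    return used

print(allocate_extents([8, 8, 11], 27))  # last 3 extents pile up in container 3
print(allocate_extents([9, 9, 9], 27))   # equal sizes stay balanced
```

With equal capacities the load stays even; with one larger container, the overflow concentrates on it.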
Figure 2: Rebalance after Adding a new Container (a fourth container of the same size is added to a balanced three-container table space; after the rebalance, the extents are distributed round robin across all four containers)
Figure 2 shows the scenario if a new container with the same size as the existing containers is added to a
balanced table space (alter tablespace ... add ...): The complete table space is rebalanced such that
afterwards all containers show the same utilization. Of course, the rebalancing process poses some
additional load onto the database host. If multiple containers have to be added to a table space, it makes
sense to add all containers within the same command so that only a single rebalance process is needed.
Starting with Version 7, DB2 offers an alternative way of adding space to an existing table space and
keeping it balanced: The command alter tablespace extend (or resize) allows to enlarge the
balanced containers of a table space (see Figure 3). This way all containers scale by the same amount of
storage space. This extension requires, however, room for growing the already existing underlying file
systems or raw devices of the containers.
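The redistribution sketched in Figure 2 can be modeled as a simple round-robin reassignment of the existing extents over the new container count (an illustration of the balancing idea, not DB2's internal algorithm):

```python
# Sketch of the rebalance in Figure 2: 19 extents, initially on three
# containers, are redistributed round robin across four containers after a
# fourth container of equal size is added.
def rebalance(n_extents, n_containers):
    """Extent numbers held by each container after a round-robin rebalance."""
    layout = [[] for _ in range(n_containers)]
    for extent in range(1, n_extents + 1):
        layout[(extent - 1) % n_containers].append(extent)
    return layout

for i, extents in enumerate(rebalance(19, 4), start=1):
    print(f"Container {i}: {extents}")
```

Each container ends up with 4-5 of the 19 extents, i.e., the same utilization everywhere.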
Figure 3: Extending Existing Containers (the same amount of space is added to each of the existing containers)
6 Design Aspects of the Layout
The new layout takes the changing hardware and software equipment at customer sites into account.
Currently, most new customer installations start with a storage area network (SAN) environment based on
advanced storage systems as disk subsystems. Since many UDB customers use RS/6000 AIX machines as
database hosts and the IBM Enterprise Storage Server (ESS) as disk subsystem, the examples in the
following are based on this scenario. They should be easily transferable to other hardware configurations.
Furthermore, Section 8 covers some technical
details that are specific to AIX and ESS.
6.1 Concept
The central idea is to stripe all data objects across all available devices. This is the opposite of placing each
table space on a set of devices of its own. There are several advantages of choosing a large number of thin
fragments of a table space instead of a few thick fragments: First, there always tend to be hotspots on a
subset of the table spaces, sometimes even on a single table space. By striping these table spaces across
all devices, the cumulative performance of all devices and device adapters is available to each individual
table space. Second, the hotspot table space is not always known in advance or may change with changing
workload on the system. If all table spaces or at least all frequently used table spaces are striped at
maximum width, no care must be taken to identify the heavily loaded table spaces. There is hardly any
additional cost of striping table spaces across all devices, so all table spaces can be striped. Third, the
amount of space allocated on each single volume contained within the stripe for a single table space is
minimized if the number of volumes is maximized. As a matter of fact, the seek time for placing the disk
head(s) on the desired track increases (although not linearly) if the zone covered by the data grows.
Consequently, the seek time is minimized if the data zone on each single disk is minimized (assuming
adjacent data blocks for a single table space on the disks). On the other hand, of course, the seek time for
switching between different kinds of data placed on the same disk is added. In summary, the proposed
layout concept should yield at least as good seek times as other approaches that concentrate each kind of
data on a few disks.
Figure 4: Concept of the Layout: Stripe Everything Across all Devices (UDB table spaces such as PSAPSTABD, PSAPBTABD, and ZSAPMARCD as well as file systems such as /db2/SID/log_dir and /db2/SID/log_archive are each striped across all RAID 5 volumes; the volumes connect via device adapters to the control unit and cache of the storage system, which attaches to hosts via host adapters, e.g., SCSI or Fibre Channel)
6.2 Example of Layout
Figure 4 shows parts of an example layout on an IBM ESS storage system [9]. The vertical boxes denote
RAID 5 volumes (“ranks”) consisting of several physical disks. The ranks are connected via device adapters
to the control unit that contains, e.g., cache and NVRAM. The control unit attaches to host systems via SCSI
or fiber channel adapters.
Each UDB table space and each file system is striped across all RAID 5 volumes. For each table space, the
cumulative bandwidth of all device adapters (adapters between RAID volumes and the storage system
central processing unit) is available to the table space. If the host adapters are configured accordingly, also
the cumulative bandwidth of all host adapters is available to the individual table space.
Certainly, the disk and adapter bandwidth needs to be shared among all table spaces. If there is substantial
load on multiple table spaces, only a fraction of the bandwidth is available to each single table space.
However, as long as there is no severe preference with respect to performance for any kind of data, this is
still the optimum configuration since bandwidth is “allocated” automatically and dynamically on demand to
the table spaces.
There is special interest in placing the online log volumes on the disks with maximum performance since
minimum disk response time for online log operations is performance-critical for all database modification
operations. For this reason, placing the stripe for the online log in the middle of the disk is proposed. This on
average yields minimum seek time from all head positions. Particular alternatives for log volumes are
discussed in Section 6.6.
6.3 Page Size
As already mentioned in Section 5.1, there is a size limit of 64GB for table spaces with the default page size
of 4kB. This limit can be raised up to 128GB, 256GB, or 512GB by means of increasing the page size to 8kB,
16kB, or 32kB, respectively. Furthermore, performance experiments have shown that larger pages also
increase the performance for certain kinds of queries (the largest gains have been observed for queries
involving heavy sequential I/O, as is typical, e.g., for OLAP workloads). The side
effects of using page sizes other than 4kB are discussed in the following.
First of all, there is no way to switch the whole database to larger pages. At least the catalog table space
(SYSCATSPACE) has to remain at 4kB. Next, each page size used requires a distinct buffer pool with
corresponding page size. Consequently, for each newly introduced page size a new buffer pool must be
allocated. If temporary table spaces are used for table reorganization, then also for each page size a
corresponding temporary table space is required.
DB2 limits the number of data rows on a single page to 255. For large page sizes and small rows the
capacity is limited by the maximum number of rows per page and not by the number of bytes usable per
page. This may lead to pages filled only partially and, as a consequence, to wasted storage and buffer pool
space. Figure 5 shows the fill rates of different page sizes for the same row size. Assume a data page is
filled with data rows of 50 byte length including all overhead. Then, a 4kB page can carry about 80 data
rows. An 8kB page has room for 162 rows. A four times larger page of 32kB has room for about 650 rows.
Due to the upper limit on the number of rows, however, the page can only contain 255 rows. This means that,
because of the small rows, about 60% of a 32kB page is wasted.
On the other hand, DB2 is not able to split large data rows between multiple pages. Very large data rows
may therefore require larger page sizes. Furthermore, on average half of a row size per page is unused
since no more rows fit on the page. The amount of space wasted due to this page alignment grows with
larger row sizes. Larger page sizes reduce this unused space: Doubling the page size yields only half as
many pages for the same amount of storage and consequently half as many times half a row size unused
space.
Table 1 lists for each page size the maximum table space size, the maximum data row length [6], and the
minimum average data row length (excluding all overhead) that would still yield an approximately 100% filled
page. For example, using 16kB pages a table with data rows consisting of a single char(53) column would
almost completely fill 16kB pages, but shorter rows would not.
Page Size   Max. Table Space Size   Max. Row Length   Min. avg. Row Length
4 kB        64 GB                   4,005 B           5 B
8 kB        128 GB                  8,101 B           21 B
16 kB       256 GB                  16,293 B          53 B
32 kB       512 GB                  32,677 B          118 B
Table 1: Some Parameters Influenced by the Page Size
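The arithmetic behind Figure 5 and the last column of Table 1 can be verified with a short calculation (the usable bytes per page are taken from the Max. Row Length column; the per-row overhead of about 10 bytes is an assumption, not a documented DB2 figure):

```python
# Rough arithmetic behind Figure 5 and Table 1 (illustrative only).
MAX_ROWS_PER_PAGE = 255
USABLE_BYTES = {4: 4005, 8: 8101, 16: 16293, 32: 32677}  # from Table 1
ROW_OVERHEAD = 10  # assumed bookkeeping bytes per row

def rows_per_page(page_kb, row_bytes):
    """Rows that fit on a page, capped at the 255-row limit."""
    return min(MAX_ROWS_PER_PAGE, USABLE_BYTES[page_kb] // row_bytes)

def min_avg_row_length(page_kb):
    """Smallest net row size that still fills a page almost completely."""
    return USABLE_BYTES[page_kb] // MAX_ROWS_PER_PAGE - ROW_OVERHEAD

print(rows_per_page(4, 50))   # a 4kB page carries 80 rows of 50 bytes
print(rows_per_page(32, 50))  # a 32kB page is capped at 255 rows
waste = 1 - rows_per_page(32, 50) * 50 / (32 * 1024)
print(f"wasted on a 32kB page: {waste:.0%}")
print([min_avg_row_length(kb) for kb in (4, 8, 16, 32)])
```

With the assumed overhead, the computed minimum average row lengths match Table 1 (5, 21, 53, and 118 bytes), and the wasted fraction of a 32kB page with 50-byte rows comes out at roughly 60%.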
There is no unambiguous insight that larger pages generally yield better database performance. Experiments
have shown that especially I/O-intensive queries perform better with larger database pages. On the other
hand, more buffer pool memory is required to keep the same number of pages for random I/O workload on
larger pages. That is, larger pages waste more buffer pool space with unwanted rows. Furthermore, the
optimizer support for page sizes other than 4kB is suboptimal. For this reason, the probability that the
optimizer chooses a suboptimal access plan (e.g., not using the most adequate join method) is a little higher
on pages larger than 4kB.
For index table spaces, there is an additional advantage of larger pages: Since larger pages can carry more
index key entries, the reduced total number of index pages may save an index level, such that one
I/O access is saved per index access.
For ease of administration, it is recommended to choose only one additional page size apart from 4kB. This
minimizes the number of buffer pools and simplifies the optimizer’s work. Different sizes for data and index
pages are possible. Only a single temporary table space is required for the database. When reorganizing
tables and using temporary table spaces for the reorganization, however, a temporary table space with the
same page size as the table space that carries the table to be reorganized must be available. Figure 6 shows
an example configuration with 4kB and 16kB pages. Here, the table spaces PSAPSTABD, PSAPSTABI,
PSAPBTABD, PSAPBTABI are moved to 16kB pages. The required 16kB buffer pool is named SAP16BP. If
reorganization of at least one of the four 16kB table spaces is supposed to use a temporary table space, the
16kB temporary table space (named ZSAPTEMP16 in the figure) is required. Since temporary table spaces
should use SMS, multiple temporary table spaces could share the same file system in order to minimize
space requirements.
More information about the use of different page sizes is given, for example, in the DB2 Administration Guide
([6], Chapter 8 “Physical Database Design“).
Figure 5: Effect of Different Page Sizes on the Fill Rate of a Page (fill rate of a single data page for different page sizes assuming rows of 50 byte length including all overhead: a 4kB page holds 80 rows, an 8kB page 162 rows, while 16kB and 32kB pages are both capped at 255 rows)
Figure 6: Table Spaces and Buffer Pools with two Different Page Sizes (the 4kB table spaces SYSCATSPACE, PSAPTEMP, PSAPES.. use the buffer pool IBMDEFAULTBP; the 16kB regular table spaces PSAPBTABD, PSAPBTABI, PSAPSTABD, PSAPSTABI and the 16kB temporary table space ZSAPTEMP16 use the buffer pool SAP16BP)
6.4 Isolating Tables from Standard Table Spaces
A complementary approach to overcome the 64GB table space size limitation is the isolation of large tables.
It can be applied instead of using different page sizes or in combination with larger pages. If a table space
reaches its size limit, the major space consumption is commonly due to a few large tables within the table
space. Moving some of these tables to a newly created table space solves the problem. Depending on the
current and expected sizes of the tables, it may be beneficial to migrate either each table to its own new
table space or to move a group of tables from one table space to a new one. SAP Note #136702 [4] explains
the procedure of moving tables between table spaces. Note that moving tables in general requires downtime.
Furthermore, isolating tables and remaining at 4kB pages does not raise the 64GB size limit for single tables.
6.5 Container Size and Number of Containers
For each DMS table space one or more table space containers (operating system files or raw devices) must
be allocated. For containers larger than 2GB, the operating system must be large file enabled. If the
operating system does not pose a limit on the file size, there is a choice of setting the number of containers
for each table space, selecting between raw devices and file containers, and deciding where to put the
containers on the storage layout. These decisions will be discussed in the following.
Typically raw device containers yield higher I/O throughput than file containers, since the additional layer of
the file system buffer cache is circumvented by raw devices. So from a performance point of view, raw
devices should generally be preferred to files. On the other hand, file containers are sometimes considered
to be easier to handle from an administration point of view. Furthermore, there are situations where the file
system buffer cache can serve I/O requests that otherwise would go to disk. In particular, table spaces for
long data should use file containers since long data is not cached in the buffer pool. Reference [7] discusses
this aspect in more detail.
Performance experiments with raw devices under AIX have shown that the I/O performance of DB2 is
substantially higher if multiple containers are used for a table space rather than a single one, even if
DB2_PARALLEL_IO is enabled for the table space. Derived from this, multiple containers for each table
space with significant traffic are recommended. It may even be useful to allocate multiple containers on the
same RAID5 volume. In order to keep the containers balanced, all containers should have the same size.
Since DB2 applies a striping approach on an extent basis, the individual containers should not be striped once
again across multiple volumes. Instead, each container should be kept locally on a single volume (disk). A
clear one-to-one mapping between containers and disk/RAID volumes also simplifies administration.
Figure 7 shows example layouts of two table spaces. The first table space consists of a single container that
is striped across all available RAID volumes (vertical boxes). While striping the table space across all
volumes is recommended, using a single container yields suboptimal performance. The preferred solution is
shown for the second table space: It is also striped across all RAID volumes, but it is divided into twelve
containers. Two containers are mapped to each RAID volume. (Recommendations about the number of
containers to map to a RAID volume vary. But more than one may be useful for larger containers.) When
using file containers, put them into multiple file systems – at least one per RAID device.
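The recommended multi-container configuration of Figure 7 amounts to a simple mapping of equal-sized containers onto volumes. A hypothetical helper (the names `container_layout`, `containerNN`, and `volumeN` are invented for illustration, not DB2 tooling) could enumerate it:

```python
# Hypothetical helper: place an equal number of same-sized containers on
# each RAID volume, as in the recommended configuration of Figure 7
# (twelve containers, two per volume, on six volumes).
def container_layout(n_volumes, containers_per_volume):
    """Map invented container names to invented volume names."""
    layout = {}
    for v in range(1, n_volumes + 1):
        for c in range(containers_per_volume):
            name = f"container{(v - 1) * containers_per_volume + c + 1:02d}"
            layout[name] = f"volume{v}"
    return layout

layout = container_layout(n_volumes=6, containers_per_volume=2)
print(len(layout))            # 12 containers in total
print(layout["container01"])  # first container on the first volume
print(layout["container12"])  # last container on the last volume
```

Keeping the mapping this regular preserves balance and makes administration predictable.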
Figure 7: Layout of a Single DMS Table Space (a single-container table space striped across all volumes is not recommended; a multi-container table space with two containers per RAID volume is the recommended configuration)
6.6 Logging
The general recommendation is to use the fastest disks (to be more concrete: disks with minimum response
time) for online log files.
One approach for the placement of the online log files has already been presented in Figure 4 (repeated in
Figure 8(a)): The file system for the online log is striped across all devices, just as any other file system and
table space. While this is a good configuration for performance, customers may require different layouts for
security reasons. Separating data volumes, online log, and archive log is recommended, for example, in the
installation guide. Additional mirroring of the online log may also be desired.
Figure 8: Three Different Approaches to Place Log Files on the System ((a) the table spaces PSAPSTABD, PSAPBTABD, ZSAPMARCD as well as /db2/SID/log_dir and /db2/SID/log_archive are all striped across all volumes; (b) /db2/SID/log_dir is placed on a physically separate volume; (c) /db2/SID/log_dir and /db2/SID/log_archive are each mirrored while sharing the volumes with the data table spaces)
In addition to the already known variant (a), Figure 8 shows two more feasible approaches to place online
and archive log. In configuration (b), the online log is placed on a physically separate volume. Assuming that
each RAID volume is driven by a distinct RAID adapter, strong isolation of the online log is achieved. With
respect to performance, however, this layout may be suboptimal.
Configuration (c) in Figure 8 shows an alternative using mirroring. Both online and archive log are replicated
by a mirror established on logical volume manager basis. However, the logs still share the same physical
volumes as the data table spaces.
6.7 File Systems: Temporary Table Spaces etc.
The same basic recommendation that is given for DB2 table spaces also holds for file systems. That is,
stripe all file systems that carry substantial I/O traffic across multiple devices. The two file systems for
logging (online log and archive log) were already covered in the previous subsection. The most prominent file
systems not yet covered are file systems for SMS table spaces, in particular SMS temporary table spaces.
File systems for temporary table spaces should also be striped across multiple devices. If multiple
containers (directories) are used for an SMS temporary table space, it may be beneficial to distribute them
across multiple file systems in order to avoid lock contention on a single file system.
6.8 Performance Considerations
This paper is not intended to cover parameter settings or other detailed DB2 tuning. Several tuning details
are given, e.g., in [6] and [7]. In the following, only a few areas of configuration for performance optimization
are enumerated:
The DB2 registry variable DB2_STRIPED_CONTAINERS should always be set to ON before
creating any table space that resides on striped devices. It enables the alignment of device stripes with DB2
extents. The extent size, fixed at table space creation time, should be equal to or a multiple of the RAID
strip size (the strip size is the amount of data placed consecutively on a single device within the RAID
stripe set). Furthermore, the variable DB2_PARALLEL_IO should be active at least
for table spaces that consist of containers striped across multiple devices (as may occur, e.g., for
containers residing on a RAID5 rank of an IBM ESS).
Table space parameters in general (extent size, prefetch size, transfer rate, access time) should be
considered.
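The extent/strip alignment rule is just a divisibility check; a small sketch makes it concrete (the helper name and the example sizes are invented for illustration, not taken from DB2 or ESS documentation):

```python
# Simple alignment check: an extent should cover a whole number of RAID
# strips, i.e., extent size in bytes must be a multiple of the strip size.
def extent_is_aligned(extent_pages, page_kb, strip_kb):
    """True if the extent size is a multiple of the RAID strip size."""
    return (extent_pages * page_kb) % strip_kb == 0

# Example: with 64kB strips and 16kB pages, extents of 4, 8, 12, ... pages
# align, whereas an extent of 6 pages (96kB) straddles a strip boundary.
print(extent_is_aligned(extent_pages=4, page_kb=16, strip_kb=64))
print(extent_is_aligned(extent_pages=6, page_kb=16, strip_kb=64))
```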
The different kinds of parallelism (cf. Chapter 4 in [6]) also give room for tuning. For example, setting the
number of asynchronous ioservers (prefetchers) and iocleaners to reasonable values is recommended.
Other common tasks like maintaining sufficiently current database statistics and performing reorganizations if
necessary are also mandatory and cannot be replaced by choosing the right database layout.
7 Growing the Database/Log Space
Typically SAP installations reach a point where the preconfigured database space is no longer sufficient and
needs to be extended. For performance and scalability reasons, it is crucial to keep the initial layout as
proposed in the previous sections in mind and to continue the concept of striping across all devices.
If a DB2 table space reaches the state of being almost filled, it either has to be enlarged, or – if this is not
possible – it has to be complemented by an additional table space. In the latter case, the new table space
should be created according to all criteria discussed before and individual, typically quite large tables are
moved from the filled table space to the newly created table space, as described in Section 5.1.
If the table space has not yet reached its 2^24 pages size limit and it can be enlarged, then there are two
approaches to add space to the table space: adding containers or extending the existing containers. Both
variants were introduced in Section 5.2; here they are considered under the aspect of load balancing
across devices.
The best solution for adding containers is to always add a full stripe set of containers to the table space. The
new containers should have the same size as the existing ones to enable DB2 to evenly rebalance the data.
If a full stripe set is not an acceptable solution, care should anyway be taken not to create hotspots. This can
be achieved, e.g., by distributing the newly added space at least among a few devices. This is shown in the
upper part of Figure 9: The table space initially consists of twelve containers. When enlarging the table
space, three new containers are added, distributed among three devices. After adding the containers
(simultaneously), the rebalancing takes place. The next extension of the table space should add another
three containers, placed on the remaining three devices (indicated by hollow boxes in the figure) to complete
the stripe set.
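The partial stripe set strategy from the upper part of Figure 9 can be sketched as placing each new container on the currently least-loaded device, so that successive extensions complete the stripe set (an illustration of the placement idea, not an administration tool):

```python
# Sketch of the partial stripe set in Figure 9: new containers go to the
# devices that carry the fewest containers so far, so the next extension
# completes the stripe across all devices.
def place_new_containers(per_device, n_new):
    """Given current container counts per device, choose devices for n_new
    containers and return (chosen device indices, updated counts)."""
    counts = list(per_device)
    chosen = []
    for _ in range(n_new):
        device = counts.index(min(counts))  # least-loaded device first
        counts[device] += 1
        chosen.append(device)
    return chosen, counts

# Six devices with two containers each; add three containers now and
# three more in the next extension.
chosen1, counts = place_new_containers([2] * 6, 3)
chosen2, counts = place_new_containers(counts, 3)
print(sorted(chosen1 + chosen2))  # every device was used once
print(counts)                     # load is balanced again
```

After the second extension, every device carries the same number of containers, i.e., the stripe set is complete.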
The second alternative for enlarging a table space is to extend all containers simultaneously. This is
sketched in the bottom part of Figure 9: Each container of the table space grows by a certain amount.
Extending the table space works only if all containers have the same size. The major advantage of extending
the containers as opposed to adding containers is that the well-balanced placement of data is retained
without rebalancing. Extending requires, however, that all operating system volumes carrying containers (file
systems and raw devices) have to provide enough space for the extension. If the volumes are also extended,
e.g., via a logical volume manager, care must be taken to also physically keep all storage locations for each
volume adjacent. It does not help to use space from a single new device to extend all existing volumes that
carry containers.
Figure 9: Several Feasible Approaches to Enlarge a Table Space (top: containers are added to the initially allocated table space; bottom: each container of the initially allocated table space is extended)
8 Miscellaneous
This section gathers detailed comments on specific operation system and storage platforms, supplemented
by a few comments on EEE.
8.1 EEE Specific
DB2 UDB EEE is currently only supported for some mySAP components (see SAPNet and release notes for
details). Reference [5] and the installation guide describe some items to consider when planning and
operating a multi-partition EEE installation for BW.
8.2 AIX Specific
The AIX logical volume manager (LVM) groups physical volumes (hdisks, either physical disks attached to
the host or virtual disks provided by a storage system) into volume groups. Logical volumes are allocated
within a single volume group. File systems are placed either implicitly or explicitly (by first allocating the
logical volume and then creating a file system on top of it) on top of a logical volume. Raw devices (e.g., for
UDB table space containers) are placed directly into logical volumes.
Logical volumes cannot belong to more than one volume group. In order to allow the striping of file systems
across all available devices, the physical volumes should be grouped into volume groups in such a way that
physical volumes from many different disk adapters and RAID volumes meet in a single volume group.
Referring to the ESS example, a volume group should cross all ranks available. In general, having as few
volume groups as possible yields maximum flexibility for logical volume layout. If single logical volumes are
allocated for table space containers, the volume group layout is less important.
There are multiple alternatives for allocating logical volumes in the AIX logical volume manager. First, the
range of physical volumes can be chosen between minimum (try to concentrate the logical volume on as few
physical volumes as possible) and maximum (try to evenly distribute the logical volume across all physical
volumes). The latter alternative distributes the physical partitions (chunks of 8-32MB) by a round robin
strategy between the physical volumes. This alternative is recommended at least for the IBM ESS. Derived
from the unit of storage allocation (physical partitions with size of 8-32MB) this is also called “PP striping“.
Alternatively, the LVM also offers striping in units of 4-128kB. This is not recommended since DB2 already
performs extent-based striping at this granularity. Another option for allocating logical volumes is to restrict
them to single physical volumes within a volume group. This is useful when allocating space for single UDB
containers.
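As a hedged sketch (volume group and logical volume names are invented for illustration), the inter-physical-volume allocation range is selected with the -e flag of mklv: -e x requests the maximum range (PP striping), while -e m with a named hdisk keeps the logical volume on a single physical volume:

```shell
# PP striping: spread 64 physical partitions round robin
# across all physical volumes of the volume group
mklv -y lvdata01 -e x sapvg 64

# Restrict a logical volume to one physical volume,
# e.g., for a single UDB table space container
mklv -y lvcont01 -e m sapvg 32 hdisk2
```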
In order to get access to all LVM options for logical volumes, file systems should be created in a two-step
approach: first allocate the logical volume and choose the appropriate options for it, then create the file
system on top of the existing logical volume. Letting the administration tool create the logical volume
automatically hides some of these options. Furthermore, when creating file systems, care should be taken to
use "large file enabled" file systems wherever needed (e.g., for temporary SMS table spaces).
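The two-step approach can be sketched as follows (logical volume name and mount point are examples only, not prescribed values): first create the logical volume with the desired options, then create a large-file-enabled JFS file system on top of it:

```shell
# Step 1: allocate the logical volume with explicit LVM options
# (-e x for PP striping, -t jfs to mark it for a JFS file system)
mklv -y lvtemp01 -e x -t jfs sapvg 64

# Step 2: create a large-file-enabled file system on the existing
# logical volume (bf=true enables large files in JFS)
crfs -v jfs -d lvtemp01 -m /db2/C11/saptemp1 -a bf=true
mount /db2/C11/saptemp1
```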
8.3 IBM ESS Specific
The IBM Enterprise Storage Server (ESS) has already been mentioned multiple times in the paper and it has
been used for the configuration examples. There is plenty of information on the IBM ESS web site [9]. In the
following, a few additional recommendations are collected:
Try to use as many ranks as possible for each SAP system. Each rank has only a limited bandwidth, so
using many ranks makes the cumulative bandwidth of all ranks available to the system. When installing
multiple SAP systems on an ESS, all systems should generally be distributed across all ranks. An exception
is the case where the performance of a single critical SAP system must not be affected by any other system.
Each rank is primarily assigned to one of the two clusters of the ESS. Try to balance the number of used
ranks between Cluster 1 and 2. This way, the resources (e.g., cache and NVRAM) are equally exploited on
both clusters. Similarly, try to use ranks from as many device adapter pairs and SSA loops as possible. This
again yields the cumulative bandwidth of all resources.
The size of the volumes allocated on the ESS does not influence performance. To limit the number of
physical volumes to administer on the host side (and thereby the number of volume groups), the ESS
volumes should be chosen large enough. ESS volumes that belong to the same kind of data (e.g., a single
UDB table space) should be assigned to as many SCSI/FC host adapters as possible; once again, this
offers the cumulative bandwidth of the host adapters to a single table space. The flexibility of assigning ESS
volumes to host systems, host adapters, and finally volume groups increases with smaller ESS volumes. A
reasonable volume size is 8-25 GB; a volume size of 16 GB or more should be preferred to simplify
administration.
The Subsystem Device Driver (SDD, formerly DPO) can be used for load balancing and failover purposes. It
offers the option to use more than one SCSI/FC path from an ESS volume to the host system, so the failure
of one path (e.g., a SCSI cable or host adapter failure) keeps the volume available. If the allocation of ESS
volumes to host adapters is well balanced, the SDD does not yield performance gains (but still increases
availability).
The ESS offers copy services for local and remote volume copies. IBM offers a split mirror backup solution
for UDB that employs these copy services and allows low-impact backup and recovery. The proposed data
distribution across all ranks (more precisely, across all ESS logical subsystems) is also optimal for the
application of ESS local copies (FlashCopy), since the source and target volumes of a FlashCopy pair must
be located on the same logical subsystem. A forthcoming white paper from IBM describes these solutions in
detail.
9 Summary
This paper discusses multiple aspects to consider when designing the storage layout of an SAP installation
based on DB2 UDB. Several alternatives are described to circumvent the table space size limitation; the
most prominent are choosing large page sizes and isolating tables into different table spaces. As the
fundamental concept for getting maximum performance from the whole installation, striping across all
available devices is proposed, and details on how to implement this concept are pointed out for the different
kinds of DB2 objects. In summary, this paper should help to avoid many pitfalls of an SAP installation with
DB2 UDB.
10 References
1. SAP Bluebook “Fundamentals of Database Layout”.
http://ency.wdf.sap-ag.de:1080/Bluebooks/BME048.02-FundamentalsOfDatabaseLayout.pdf
2. “Database Layout for R/3 Installations under Oracle” http://service.sap.com/atg or
https://www005.sap-ag.de/~sapidb/011000358700003877782000E.pdf
3. “Database Layout for SAP Installations with Informix” http://service.sap.com/atg or
https://www005.sap-ag.de/~sapidb/011000358700005530252000E
4. SAP Note 136702: “Moving tables to other DB2 table spaces” http://service.sap.com/notes
5. “DB2 UDB EEE on UNIX & Windows NT: SAP BW Administration Tasks in Multi-Partition
Installations”; SAP Documentation
6. “DB2 Administration Guide: Performance Version 7” SC09-2945-00
7. “DB2 UDB V7.1 Performance Tuning Guide”, IBM Redbook SG24-6012-00,
http://www.redbooks.ibm.com/abstracts/sg246012.html
8. Split Mirror References: Forthcoming...
9. IBM Enterprise Storage Server Home Page:
http://www.storage.ibm.com/hardsoft/products/ess/ess.htm