DB2 Version 9.1 Overview
  • Notes: DB2 Version 9 Express Edition for Windows or Linux may be downloaded from http://www.software.ibm.com/data/db2. DB2 Version 9 Express Edition supports 2 processors and can address 4 GB of memory.
  • Notes: Table partitioning allows you to create a table where each range of the data is stored separately. For example, you can partition a table by month, so all data for a given month is kept together. Internally, the database represents each range as a separate table. This has a couple of advantages. One is that a query that only needs certain ranges will scan only the ranges that are relevant; we call this "partition elimination" – think of it as skipping partitions that don't need to be scanned. Another advantage is that you can now store far more data in a single table: since each range can hold as much data as a non-partitioned table, a single partitioned table can hold thousands of times as much data as the old table-size limits allowed. A table may contain 32767 data partitions (see Table Partitioning Feature Chapter 2). The most exciting feature that table partitioning enables is that it is very quick and easy to attach or detach ranges of data from a partitioned table. This is very useful for rolling batches of data in or out of a data warehouse application. I want to emphasize that all of this is managed internally and is completely transparent to you as the user. Once the table has been created, all queries and DML against that table work exactly as if it were a non-partitioned table, except faster in those cases where the database is able to take advantage of partitioning. The advantages of table partitioning are very similar to those of UNION ALL views; the difference is that all the complicated parts are taken care of for you internally. The Table Partitioning Feature is available with the DB2 UDB Enterprise Server Edition Table Partitioning License.
  • Notes: This diagram shows how table partitioning works together with database partitioning and multi-dimensional clustering. I like to think of these three features as a sort of hierarchy for placing data. Imagine a row being inserted into a table that is database partitioned, table partitioned AND multi-dimensionally clustered. First, we hash on the distribution key to find which node the row should reside on. Then we look at the partitioning column to determine which range the row is in. Finally, we look at the columns the table is clustered by to determine which MDC cell it belongs in. The syntax for these features has intentionally been made uniform to remind you of the relationship: DISTRIBUTE/PARTITION/ORGANIZE BY HASH/RANGE/DIMENSIONS. Note that table partitioning does NOT distribute rows across nodes; you must use DPF for that.
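The three placement clauses can be combined in a single CREATE TABLE. A minimal sketch, assuming a hypothetical sales table (table, column, and range names are illustrative, not from the slides):

```sql
-- Hypothetical table using all three data-placement features together
CREATE TABLE sales_fact (
  sale_date DATE,
  region    VARCHAR(10),
  store_id  INT,
  amount    DECIMAL(10,2)
)
DISTRIBUTE BY HASH (store_id)          -- DPF: hash rows across database partitions
PARTITION BY RANGE (sale_date)         -- table partitioning: one range per quarter
  (STARTING '1/1/2006' ENDING '12/31/2006' EVERY 3 MONTHS)
ORGANIZE BY DIMENSIONS (region);       -- MDC: cluster rows into cells by region
```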
  • Notes: Here is a simple example of creating a partitioned table. It is partitioned on the column sale_date, with 4 ranges. You need to specify at least enough starting and ending boundaries for the ranges to make them unambiguous. In this example, each starting bound is specified, plus the last ending bound; the other ending bounds can be determined from context. Holes are allowed: for example, you could create a table with ranges for Jan, Mar and May, leaving gaps for Feb and Apr. There is more syntax, not shown here, for specifying things like which tablespace each range should be placed in and how you want NULLs treated. There is also syntax for making the first or last range in the table open ended; for example, the first range could be defined to hold any date prior to 2000. It is imperative that you give a partition a name: when a partition is removed (detached) from a table, a table with the name of the partition is created.
  • Notes: If I have 3 table spaces and use CREATE TABLE t1 (c1 INT) IN tbsp1, tbsp2, tbsp3 PARTITION BY RANGE (c1) (STARTING FROM (1) ENDING (100) EVERY (25)), 4 partitions will be created, assigned round-robin across the table spaces.
  • Notes: Indexes are global. If you do not specify which table space an index resides in, the index will be created in the first table space (the table space of the first data partition).
  • Notes: You can add a new range to an existing table using ALTER TABLE ... ADD PARTITION. This creates a new empty range. In this example, the new range has been explicitly placed in the tablespace TBSPACE1. An important point about table partitioning is that ranges cannot overlap. You will get an error if you try to add a range that overlaps any existing range.
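A sketch of the statement described here, reusing the sales table from the long-syntax slide (the partition name is illustrative):

```sql
-- Add a new, empty range for Q1 2005 and place it in TBSPACE1
ALTER TABLE sales
  ADD PARTITION sales1q2005
  STARTING '1/1/2005' ENDING '3/31/2005' IN TBSPACE1;
-- A range that overlapped an existing one would be rejected with an error.
```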
  • Notes: The most exciting feature within table partitioning is enhanced roll-in and roll-out. We have created two new operations to accomplish this. ALTER TABLE ... ATTACH takes an existing table and incorporates it into a partitioned table as a new range. ALTER TABLE ... DETACH is the inverse operation: it takes one range of a partitioned table and splits it off as a stand-alone table. The key point about these operations is that there is no data movement; internally, they mostly just manipulate entries in the system catalogs. This means the actual ATTACH or DETACH operation is very fast - on the order of a few seconds. When you do an ATTACH, you must follow up with SET INTEGRITY. This will incorporate the data into the indexes on the table and, at the same time, validate that the data is all within the boundaries defined for that range. SET INTEGRITY has been made online so that this longer-running portion of roll-in can take place while applications continue to read and write the existing data in the table.
  • Notes: For rolling data out of a partitioned table, the recommended method is to use ALTER TABLE ... DETACH to split an existing range off as a separate table. You can then do whatever you need to with this table. One common approach is to load it with new, current data and ATTACH it back; or you could archive the table, or just drop it. Recall that with ATTACH, the new data is not visible until it has been incorporated into the table via SET INTEGRITY. In contrast, DETACH makes the data invisible instantly: as soon as the DETACH completes, queries against the partitioned table no longer show those rows. There is no need to run SET INTEGRITY on the partitioned table after DETACH. If there are MQTs involved, you *will* need to run SET INTEGRITY on them; MQTs go offline when you do a DETACH. (This applies only to MQTs that are maintained by the database - REFRESH IMMEDIATE MQTs.) The other thing to be aware of with DETACH is that index entries for the detached partition need to be cleaned up. A new feature does this for you automatically: it is triggered by the DETACH and runs as a low-priority background process to remove all the index entries corresponding to data that has been rolled out. Meanwhile, the entries are present but invisible to normal queries.
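The roll-out/roll-in cycle described above can be sketched as follows (table and partition names are illustrative):

```sql
-- Roll out: instantly split off the oldest range as a stand-alone table
ALTER TABLE sales DETACH PARTITION sales1q2000 INTO sales_archive;

-- Roll in: incorporate a staged table as a new range
ALTER TABLE sales
  ATTACH PARTITION sales1q2005
  STARTING '1/1/2005' ENDING '3/31/2005'
  FROM sales_staging;

-- The attached data only becomes visible after SET INTEGRITY completes
SET INTEGRITY FOR sales IMMEDIATE CHECKED;
```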
  • Notes: An XML column cannot participate in a partitioned table. The table partitioning feature is available with the DB2 UDB ESE (Enterprise Server Edition) Table Partitioning license. In DB2 UDB V8.2, for a table created in an SMS (System Managed Space) tablespace, the data, index and long (large object) portions of the table had to reside in the same tablespace. In DB2 UDB V9.1, the index of a table created in an SMS tablespace may be placed into a separate tablespace, if the table is a partitioned table.
  • Notes: Using the Automatic Database Setup Wizard, a DBA is able to schedule a maintenance window.
  • Notes: There is one default buffer pool, named IBMDEFAULTBP. In DB2 V9.1 the IBMDEFAULTBP buffer pool will be set for automatic tuning. If automatic tuning is not enabled, the default buffer pool size on Windows is 250 4K pages; on UNIX it is 1000 4K pages. There are 4 hidden buffer pools, one for each of the page sizes 4K, 8K, 16K and 32K; each is 16 pages in size.
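The automatic tuning mentioned here is enabled per buffer pool; a minimal sketch:

```sql
-- Let STMM size the default buffer pool automatically
ALTER BUFFERPOOL IBMDEFAULTBP SIZE AUTOMATIC;
```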
  • Notes: A large percentage of tuning time can be spent altering buffer pools. This is not necessarily a concern, just something to keep in mind.
  • Notes: In DB2 Version 9.1 a new task, db2stmm, is associated with STMM. When a database is started, the LIST APPLICATIONS command will show this task.
  • Notes: Storage paths are specified once for the database; additional storage is added using the SQL API.
  • Notes: The DMS tablespace TOR#BTABD will increase in size by 50M each time the tablespace becomes full.
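This growth behaviour corresponds to the auto-resize options on a DMS tablespace; a hedged sketch (the exact options used for TOR#BTABD are not shown in the slides):

```sql
-- Grow the tablespace by 50 MB each time it becomes full
ALTER TABLESPACE TOR#BTABD
  AUTORESIZE YES
  INCREASESIZE 50 M;
```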
  • Notes: Benefits of XML include platform independence, no fixed format and full Unicode compliance. XML is a markup language similar to HTML. HTML is used for displaying data, while XML was designed for describing data. When using HTML, the tags are predefined; XML allows the creator to define his/her own tags. The first line of an XML document is the XML declaration, which describes the XML version and the character encoding used in the document. The second line describes the root element, and the next lines describe the child elements. XML tags are case sensitive. The website http://www.w3schools.com is an excellent site for learning XML.
  • Notes: XML = Extensible Markup Language. An XML document is made of self-describing data structures, and XML is platform independent. Goal: integration of XML and relational capabilities so that applications can combine XML and relational data. We did this by creating a new datatype in DB2, the XML type, which is visible on the server and the client. There is a new language, XQuery, with a new parser that sits side by side with the SQL parser. Native storage is hierarchical, optimized for parent/child traversal. You can use SQL or XQuery to query a mix of relational and XML data in DB2, leveraging the DB2 engine capabilities. We have added new storage for XML. There are different parsers for SQL and XQuery, but the same compiler handles the queries, hence we can mix SQL and XQuery. Because a new data type has been added, it requires support throughout the system. Moreover, XML is not just another numeric type but a significant change, so special efforts are necessary in almost all system components, tools and utilities. In DB2 Native XML, XML data is stored in a parsed, annotated tree form, similar to the DOM. The XML data is formatted to data pages which are buffered. The benefits of this format include faster navigation, which results in faster query execution, as well as simpler indexing of data.
  • Notes: A column of type XML can be created in a relational table. Insert parses and inserts the XML document. Retrieve returns the serialized representation of the XML document (its text form). Query: SQL iterates over all the rows, performs searches within an XML document and returns parts of it, returning some relational data and some XML data. Doing XML natively implies that everything we can do with SQL today can be done with the XML type. An XML column has no maximum length, unlike a BLOB or a CLOB, and no mandatory constraining XML schemas. Any well-formed XML document can be inserted into the column. SQL can have XQuery statements inside it for more powerful queries into XML data. The approach IBM has taken is to support both shredded and true native storage. Support for shredding is important because XML can be used to feed existing relational schemas. Since documents can grow large and in many cases will be updated, the advantages of storing them at node-level granularity instead of document-level are significant.
  • Notes: “Native” means that XML documents are stored on disk pages in tree structures matching the XML data model. This avoids the mapping between XML and relational structures, and the impedance mismatch. XML is now a first-class data type in DB2, just like any other SQL type. The XML data type can be used in a CREATE TABLE statement to define one or more columns of type XML. Since XML has no different status than any other type, tables can contain any combination of XML and relational columns. A column of type XML can hold one well-formed XML document for every row of the table; the NULL value is used to indicate the absence of an XML document. Although every XML document is logically associated with a row of a table, XML and relational columns are stored differently, in formats that match their respective data models. An XML schema is not required in order to define an XML column or to insert or query XML data. An XML column can hold schema-less documents as well as documents for many different or evolving XML schemas. Schema validation is optional on a per-document basis; the association between schemas and documents is per document, not per column, which provides maximum flexibility. Unlike a VARCHAR or a CLOB type, the XML type has no length associated with it. Currently, only the client-server communication protocol limits XML bind-in and bind-out to 2 GB per document; with very few exceptions, this is acceptable for all XML applications. The XML type can be used not only as a column type but also as a datatype for host variables in languages such as C, Java and COBOL. The XML type is also allowed for parameters and variables in SQL stored procedures, UDFs and external stored procedures written in C and Java.
  • Notes: The SQL SELECT statement may retrieve both relational and XML data in the same query. XQuery will return the XML column when using the db2-fn:xmlcolumn function. In this example, all rows of the table that have valid XML data will be returned; there is no search argument specified. The XMLQUERY function with text( ) will return the data element value without the XML tags.
  • Notes: XMLTABLE constructs a table from an XML document. It allows users to convert portions of XML documents (or any XML data value) into table values within SQL query expressions, returning results from XML documents as rows and columns. This is like shredding, but into one table only. The XPath version of XMLTABLE generates relational rowsets from XML (to leverage query processing in SQL), using XPath expressions to find rows and columns. In the future we will also look into providing XQuery expressions to locate rows and columns. Explanation of the query: po is an XML column in the PORDERS table. We want to construct a table with columns CID, Name, ZipType, Zip. The values of these columns come from the corresponding paths in the XML document and are bound to the 4 columns. CID is an integer whose value comes from the attribute id on element customer, which is in the document $po, and that document is in column po in table PORDERS.
  • Notes: FOR is an iterator, iterating over a collection of XML documents. LET allows variable assignments; all variable names start with $. The for and let clauses specify a sequence of tuples. WHERE filters the sequence of tuples, ORDER orders the remaining tuples, and RETURN declares what should be returned; it executes once for every remaining tuple, in order. In the example above: FOR binds the variable $movie to the XML column movies in table1 and iterates over this set of documents. LET assigns a sequence of actor nodes to the variable $actors. WHERE filters the documents to include only those with duration > 90. ORDER BY sorts the documents by the attribute year. RETURN constructs a new <movie> element with a <title> child and the sequence of <actor> elements from the $actors variable. In other words: table1 has a movies column, the iterator variable is $movie, we find all the actor elements within each movie document, retrieve only the movies that run more than 90 minutes, and pull out the title element of each movie.
  • Notes: A dictionary-based symbol table is created for compressing and decompressing data. Data is compressed on disk, in the buffer pools and in the logs. Data must be decompressed before it is evaluated. If the table data is compressed, then the log data is also compressed.
  • Notes: Compression is done across column boundaries.
  • Notes: The compression dictionary is loaded into memory the first time the table is accessed, and it resides in memory until the database is shut down. You cannot query the dictionary table. Compression is done for the whole table. (Oracle, by comparison, performs compression only within the same page.)
  • Notes: There are two parts to implementing compression. Part one is to make a table eligible for compression using the CREATE TABLE or ALTER TABLE statement; the ALTER TABLE statement makes the table eligible for compression. When a partitioned table is compressed, a newly attached partition is compressed. The second part is to build the dictionary using the (offline) REORG utility.
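The two parts might look like this (table and tablespace names are taken from the REORG example in these notes):

```sql
-- Part one: make the table eligible for compression
ALTER TABLE artists COMPRESS YES;

-- Part two: build the compression dictionary with an offline (classic) REORG
REORG TABLE artists USE tempspace1 RESETDICTIONARY;
```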
  • Notes: The offline REORG will create the compression dictionary table. All rows in the table will participate in data compression if compression is active. Once the dictionary is created, rows added by INSERT, IMPORT, LOAD or UPDATE will participate in compression. Example: db2 REORG TABLE ARTISTS USE TEMPSPACE1 RESETDICTIONARY
  • Notes: The Compression Estimation utility will summarize how much space may be saved by using compression. The utility does not look at individual data pages; it scans the system catalog pages. The output of the Compression Estimation utility will be placed in the SQLLIBDB2 directory and must be formatted using the db2inspf utility.
  • Notes: The Compression Estimator tool will estimate the compression results, determining how much space will be saved by using DB2 compression. Minimum compression shows the tables whose compression result is greater than the selected percentage. Minimum Table Size shows only the tables whose size is greater than the megabytes specified. The Statistics “Show all tables” option displays the results for all tables; otherwise only the rows where statistics were run are displayed in the output.
  • Notes: The DB2 Compression Estimator shows tables with a compression estimate of less than 15 percent in red. If the compression estimate is greater than 50 percent, the panel shows it in green.
  • Notes: Compression is supported with the Data Partitioning Feature (Parallel Edition, Extended Enterprise Edition). A classic REORG is required to compress a table. The INSPECT command is used to determine if a compression dictionary table exists.
  • Notes: With no LBAC (Label-Based Access Control), the query would show all rows that satisfy the WHERE clause. If a user has Top Secret authority, the user would see all rows that have the label ‘Top Secret’, plus all rows with a lower classification.
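As a rough illustration of the setup behind such a query, a minimal LBAC sketch (all names here are hypothetical; consult the DB2 9 documentation for the full syntax):

```sql
-- Define an ordered component of security levels
CREATE SECURITY LABEL COMPONENT level
  ARRAY ['TOP SECRET', 'SECRET', 'UNCLASSIFIED'];

-- A policy built from that component, using the standard DB2 rules
CREATE SECURITY POLICY doc_policy
  COMPONENTS level WITH DB2LBACRULES;

-- A label that a user or a row can hold
CREATE SECURITY LABEL doc_policy.top_secret
  COMPONENT level 'TOP SECRET';

-- Protect a table row-by-row with a security label column
CREATE TABLE docs (
  id    INT,
  body  VARCHAR(1000),
  tag   DB2SECURITYLABEL
) SECURITY POLICY doc_policy;
```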
  • Notes: A new administrative stored procedure called ADMIN_COPY_SCHEMA provides the ability to choose a single (source) schema, and create a new copy of most objects qualified by this schema (objects not recreated by copy schema will be discussed later). The new (target) schema objects will be created using the same object names as the objects in the source schema, but with the target schema qualifier. It will be possible to create copies of tables with or without the data of the original table.
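A hedged sketch of invoking the procedure (schema names and the error-table parameters are illustrative; the exact parameter list should be checked against the DB2 9 documentation):

```sql
-- Copy schema SRC to TGT, including table data ('COPY' mode)
CALL SYSPROC.ADMIN_COPY_SCHEMA(
  'SRC',        -- source schema
  'TGT',        -- target schema
  'COPY',       -- 'DDL' for objects only, 'COPY' to include data
  NULL,         -- object owner (NULL = keep original)
  '', '',       -- source/target tablespace mapping
  'ERRSCHEMA',  -- schema of the error table
  'ERRTAB');    -- error table name
```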
  • Notes: These are FREE sessions. They will be held at IBM 8401 Greensboro Drive, McLean, VA 22102. If you would like to attend any of the seminars, please contact Keith Gardenhire (keithgar@us.ibm.com).


  • 1. DB2 Version 9.1 Overview Keith E. Gardenhire [email_address]
  • 2. Objectives
    • DB2 Version 9.1 Packaging
    • Partitioning
    • Autonomic Computing
    • Throttling Utilities
    • XML
    • LBAC
  • 3. Packaged Features Compared
    • Oracle
    • Oracle Lite
      • Windows CE, Palm, EPOC, Windows 95/98/NT
    • Oracle Personal Edition
      • Windows
    • Oracle Standard Edition
      • Windows CE, LINUX (2 dist.), AIX, HP-UX, HP Compaq, Solaris, Tru64
    • Oracle Enterprise Edition
      • Windows CE, LINUX (2 dist.), AIX, HP-UX, HP Compaq, Solaris, Tru64
    • Oracle Real Application Clusters
      • Windows CE, LINUX (2 dist.), AIX, HP-UX, HP Compaq, Solaris, Tru64
    • DB2
    • DB2 Everyplace
      • Windows CE, Palm, EPOC, Windows XP/Tablet PC, QNX Neutrino, Symbian, Embedded LINUX
    • DB2 Personal Edition
      • Windows, LINUX (200 dist.)
    • DB2 Express-C
      • Windows, Linux
    • DB2 Workgroup Server Edition
      • Windows, LINUX (200 dist.), AIX, HP-UX, Solaris
    • DB2 Enterprise Server Edition
      • Windows, LINUX (200 dist.), AIX, HP-UX, Solaris, LINUX/390
      • Database Partitioning Feature (DPF) – optional purchase
  • 4. Partitioning
  • 5. Table Partitioning in DB2 LUW
    • Partition a table by range
    • Each range can be in a different tablespace
    • Ranges can be scanned independently
    • Use new ALTER ATTACH/DETACH statements for roll-in/roll-out
  • 6. Data Placement - Grand Unification
    • Three ways to spread data
      • DISTRIBUTE BY HASH - currently in EEE/DPF
      • PARTITION BY RANGE – aka table partitioning
      • ORGANIZE BY DIMENSIONS – aka multi-dimensional clustering (MDC)
    [Diagram: T1 distributed across 3 database partitions (Node 1, Node 2, Node 3, each with table spaces TS1 and TS2), partitioned by range (Jan, Feb) and organized by dimensions (East/West, North/South).]
  • 7. Defining Ranges (Long Syntax)
    • Use STARTING … ENDING … to specify ranges
    • CREATE TABLE sales(sale_date DATE, customer INT, …)
    • PARTITION BY RANGE(sale_date)
    • (Partition Sales1Q2000 STARTING ‘1/1/2000’ in DMS01,
    • Partition Sales2Q2000 STARTING ‘4/1/2000’ in DMS02,
    • Partition Sales3Q2000 STARTING ‘7/1/2000’ in DMS03,
    • Partition SalesEND STARTING ‘10/1/2000’ ENDING ’12/31/2004’ in DMS04);
    • Creates 4 ranges
  • 8. Creating a Range Partitioned Table – Short Form: CREATE TABLE t1(c1 INT) IN tbsp1, tbsp2, tbsp3 PARTITION BY RANGE(c1) (STARTING FROM (1) ENDING (100) EVERY (33)) – creates partitions t1.p1, t1.p2, t1.p3, placed in tbsp1, tbsp2, tbsp3 respectively
    • Short and Long Forms
    • Partitioning column(s)
      • Must be base types (e.g. no LOBs, LONG VARCHARs)
      • Can specify multiple columns
      • Can specify generated columns
    • Notes
      • Special values MINVALUE and MAXVALUE can be used to specify open-ended ranges, e.g.:
    Ranges: 1 <= c1 < 34, 34 <= c1 < 67, 67 <= c1 <= 100. Long Form: CREATE TABLE t1(c1 INT) PARTITION BY RANGE(c1) (STARTING FROM (1) ENDING (34) IN tbsp1, ENDING (67) IN tbsp2, ENDING (100) IN tbsp3)
  • 9. Storage Mapping: Indexes are Global in DB2 9
    • Indexes are global (in DB2 9)
    • Each index is in a separate storage object
      • By default, in the same tablespace as the first data partition
      • Can be created in different tablespaces, via
        • INDEX IN clause on CREATE TABLE (default is tablespace of first partition)
        • New IN clause on CREATE INDEX
    • Recommendation
      • Place indexes in LARGE tablespaces
    CREATE TABLE t1(c1 INT, c2 INT, …) IN tbsp1, tbsp2, tbsp3 INDEX IN tbsp4 PARTITION BY RANGE(c1) (STARTING FROM (1) ENDING (100) EVERY (33)); CREATE INDEX i1 ON t1(c1); CREATE INDEX i2 ON t1(c2) IN tbsp5 – i1 goes to tbsp4 (the default), i2 to tbsp5
  • 10. Adding New Ranges
    • Use ALTER TABLE … ADD PARTITION to add new ranges to an existing partitioned table
    • ALTER TABLE sales ADD PARTITION STARTING ‘1/1/2001’ ENDING ‘3/31/2001’ IN TBSPACE1;
    • Creates a new empty range in TBSPACE1
  • 11. Roll-In
    • Load data into a separate table
    • Perform any data transformation, cleansing
    • ATTACH it to partitioned table
    • Use SET INTEGRITY to accomplish
      • Index maintenance
      • Checking of range and other constraints
      • MQT maintenance
      • Generated column maintenance
    • Table is online throughout the process except for ATTACH
    • New data becomes visible at end of SET INTEGRITY
  • 12. Roll-Out
    • Use DETACH to roll-out a range of data
      • Rolled-out data is available in a new, separate table
      • Data disappears from view immediately upon DETACH
    • Rolled-out data can be dropped, archived, moved to HSM
    • Queries are drained and table locked by DETACH
    • Dependent MQTs go offline and need to be refreshed via SET INTEGRITY
  • 13. A Partitioned Table
    • Table may have 32767 partitions
    • Backup and Restore individual partitions
    • Indexes may be placed in separate tablespaces
  • 14. Autonomic Computing
  • 15. Autonomic Management Automatic Database Setup Wizard
    • Creates a New database on Disk
    • Configures new database for performance
    • Turns on automatic maintenance and health monitoring
    • Configures notification by e-mail or pager
  • 16. Control Center Support – Throttling Utilities
    • Allows DBA to reduce impact of running resource-intensive utilities on operational workload
        • Backup, Rebalance, Runstats
    • Support setting and display of utility execution priority for the supported utilities:
      • backup, rebalance (and reorg, load and runstats)
      • set priority in utility dialogs
      • set priority in SHOW COMMAND
    • Display priority of executing utilities in Task Center and the current priority set for the session
  • 17. Autonomic Technology Thoughts
    • Automatic configuration of Databases
      • Invoke configuration advisors on CREATE DATABASE
      • Make the defaults intelligent
        • Based on environment
        • “Good” DBAs can override values
    • Adaptive Self Tuning
      • Memory
      • Sort heaps, bufferpools, package cache, lock list
      • Maximize usage of resources to
      • achieve optimal performance
  • 18. DB2’s database memory model – shared memory
    • All memory heaps are contained within the database shared memory set
      • On non-Windows platforms, the set's memory is fully allocated at database startup and cannot grow beyond its allocated size
      • On Windows the memory is allocated at startup but can grow or shrink as needed
    • On 32 bit platforms set size is limited
    • On 64 bit platforms set size virtually unlimited
  • 19. Autonomic Technology – Additional Thoughts
    • Automatic Storage Provisioning
    • Greatly simplify the task of apportioning storage for DB2 logs and data
      • Administrator control, if desired
      • Dynamic allocation
      • Policy specification allows refinement of behaviour
    • Progressive Re-optimization
    • Evolve from statistical profiles (V8.2) to adaptive runtime learning and improvement
      • “The learning optimizer” (LEO)
  • 20. STMM and the buffer pools
    • Trades memory between buffer pools based on relative need
      • New metrics determine where memory is most needed such that total system time is reduced
    • Zero, one or more buffer pools can be set to AUTOMATIC
      • In newly created Viper databases, all buffer pools default to AUTOMATIC
    • Works with buffer pools of any page size
      • Transfers from a buffer pool with 8 k pages to one with 4 k are 1:2
    • Decreasing the buffer pools can take a lot of time
      • Must write out all dirty pages in memory being freed
      • If pages are in use the resize may wait on locks
    • STMM tunes DATABASE_MEMORY if it is set to AUTOMATIC or a numeric value
      • If set to AUTOMATIC, memory is taken from, and returned to, the OS if required by the database
        • DBA need not know how much memory to allocate to DB2
        • This is the default for newly created Viper databases
      • If set to a numeric value, memory is given to AUTOMATIC heaps up to the numeric value
        • Allows DBA to set total memory consumption for the database
        • DB2 will then distribute the memory to optimize performance
      • If set to COMPUTED, no DATABASE_MEMORY tuning will occur
        • When database starts, memory requirements are computed based on the heap configuration
        • Once the database starts, the database shared memory set is allocated based on the computation
        • Version 8 AUTOMATIC behavior
  • 22. Tailoring STMM with DPF
    • If SELF_TUNING_MEMORY is off at a particular node, no tuning will occur
      • Tuning should be turned off for atypical nodes
        • Catalog nodes with no data
        • Coordinator nodes that don’t directly process queries
    • Tuning can be turned off for one or more parameters on any given node
      • If STMM configuration update arrives at a node and that parameter isn’t set to AUTOMATIC at that node, nothing changes
      • Only parameters set to AUTOMATIC on the tuning node will generate configuration updates
  • 23. Single Point of Storage Management (SPSM)
    • A concept that is made up of three distinct features available in DB2 V8.2.2
      • Storage paths that are pre-defined for a database
      • Automatic storage table spaces whereby DB2 will create the table space containers on the storage paths defined for the database
      • The ability for DB2 to automatically extend existing containers (or create new ones) as the table space fills up
    • What will DBAs like?
      • The ability to auto-extend a DMS file table space
        • The simplicity of SMS with the power and flexibility of DMS
        • SAP uses DMS table spaces but wants to offer users more “ease of use” characteristics
      • The ability to provide a single point of storage management for the database
  • 24. Automatic Storage Examples
    • Single point of storage
      • For existing database
      • For new databases
        • Specify storage areas for DB2 to automatically create tablespace containers - paths
        • Ability to specify initial and growth sizes: CREATE DATABASE TOR AUTOMATIC STORAGE YES ON /db2/TOR/storagepath001, /db2/TOR/storagepath002, /db2/TOR/storagepath003 AUTORESIZE YES INITIALSIZE 5 G INCREASESIZE 100 M MAXSIZE NONE
  • 25. XML
  • 26. What is XML? XML Technology XML = eXtensible Markup Language Self-describing data structures XML tags describe each element and its attributes <?xml version=“1.0”?> <purchaseOrder id=“12345” secretKey=“4x%$^”> <customer id=“A6789”> <name>John Smith Co</name> <address> <street>1234 W. Main St</street> <city>Toledo</city> <state>OH</state> <zip>95141</zip> </address> </customer> <itemList> <item> <partNo>A54</partNo> <quantity>12</quantity> </item> <item> <partNo>985</partNo> <quantity>1</quantity> </item> </itemList> </purchaseOrder>
  • 27. Integration of XML & Relational Capabilities DB2 SERVER CLIENT SQL/XML XQuery DB2 Engine XML Interface Relational Interface XML DB2 Storage: DB2 Client / Customer Client Application
      • Applications combine XML & relational data
      • Native XML data type (server & client side)
      • XML Capabilities in all DB2 components
  • 28. XML – A First Class Citizen
    • Data Definition
    • create table dept (deptID int , deptdoc xml );
    • Insert
      • insert into dept ( deptID, deptdoc) values ( ?,?)
    • Retrieve
    • select deptdoc from dept where deptID = ?
    • Query
      • select deptID, xmlquery(' $d/dept/name ' passing deptdoc as "d") from dept where deptID <> 'PR27';
  • 29. Native XML Storage
    • DB2 will store XML in parsed hierarchical format (similar to the DOM representation)
    • “Native” = the best-suited on-disk representation of XML
    • create table dept (deptID char(8),…, doc xml );
    • Relational columns are stored in relational format
    • XML columns are stored natively
    • All XML data is stored in XML-typed columns
  • 30. How to use XML in DB2?
    • create table T (i int, doc xml )
    • insert into T values (22, ? )
    • select i, doc from T
    • xquery for $d in db2-fn:xmlcolumn("T.DOC")//address return $d
      (XQuery as the primary language)
    • select i, xmlquery('$d/CustomerInfo/name/text()' passing doc as "d") from T where i = ?
  • 31. XMLTable: Make a Table from XML
    SELECT X.*
    FROM XMLTABLE ('db2-fn:xmlcolumn("PORDERS.PO")//customer'
           COLUMNS "CID"     INTEGER      PATH '@id',
                   "Name"    VARCHAR(30)  PATH 'name',
                   "ZipType" CHAR(2)      PATH 'zip/@type',
                   "Zip"     XML          PATH 'zip') AS "X"

    Result:
      CID   Name    ZipType  Zip
      4711  Henrik  US       <zip>95023</zip>
      1325  Bobby   US       <zip>33129</zip>
  • 32. The FLWOR Expression
    • FOR: iterates through a sequence, binding a variable to each item
    • LET: binds a variable to a sequence
    • WHERE: eliminates items of the iteration
    • ORDER BY: reorders items of the iteration
    • RETURN: constructs the query result
    FOR $movie in db2-fn:xmlcolumn('TABLE1.MOVIES')
    LET $actors := $movie//actor
    WHERE $movie/duration > 90
    ORDER BY $movie/@year
    RETURN <movie> {$movie/title, $actors} </movie>

    Sample result:
    <movie>
      <title>Chicago</title>
      <actor>Renee Zellweger</actor>
      <actor>Richard Gere</actor>
      <actor>Catherine Zeta-Jones</actor>
    </movie>
  • 33. Objective
    • Data Row Compression Concepts
    • DDL
    • Creating a compression dictionary for a table
  • 34. Data Row Compression Concepts
    • Dictionary based
    • Data is compressed on disk, in the buffer pools, and in the logs
  • 35. Row Compression Using a Compression Dictionary
    • Repeating patterns within the data (not just within each row) are the key to good compression. Text data tends to compress well because of recurring strings, as does data with many repeating characters or leading and trailing blanks
    Uncompressed rows:
      Name  Dept  Salary  City   State  ZipCode
      Fred  500   10000   Plano  TX     24355
      John  500   20000   Plano  TX     24355
    Dictionary:
      01 = Dept 500
      02 = Plano, TX, 24355
    Compressed rows:
      Fred (01) 10000 (02)
      John (01) 20000 (02)
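The dictionary substitution shown above can be sketched in a few lines of Python. This is only a conceptual toy, not DB2's actual algorithm (DB2 uses a Lempel-Ziv style scheme over byte sequences); the row encoding and the (01)-style codes are invented for the illustration:

```python
# Toy sketch of dictionary-based row compression: a table-wide
# dictionary maps frequently repeated substrings to short codes,
# and each stored row is rewritten with the codes substituted in.

def build_dictionary(patterns):
    """Assign a short code such as (01) to each repeating pattern."""
    return {p: f"({i:02d})" for i, p in enumerate(patterns, start=1)}

def compress_row(row, dictionary):
    """Replace every dictionary pattern occurring in the row by its code."""
    for pattern, code in dictionary.items():
        row = row.replace(pattern, code)
    return row

rows = [
    "Fred|Dept 500|10000|Plano,TX,24355",
    "John|Dept 500|20000|Plano,TX,24355",
]
dictionary = build_dictionary(["Dept 500", "Plano,TX,24355"])
compressed = [compress_row(r, dictionary) for r in rows]
# Each repeated pattern now occupies 4 bytes instead of 8 or 14.
```

Because the dictionary is shared by the whole table, the same patterns compress every row that contains them, which is why table-wide repetition matters more than repetition inside a single row.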
  • 36. Row Compression Dictionary
    • Compression dictionary
      • Stores common sequences of consecutive bytes in a row
        • Such sequences can span consecutive columns
      • A table must have a compression dictionary before rows can be compressed
    • Compression dictionary storage
      • Directly in table partition
      • In special, internal, non-selectable rows which are linked together
      • Typically on the order of 100KB
    • Compression dictionary creation
      • In initial release, requires non-inplace REORG
    (Diagram: the dictionary occupies a handful of internal pages stored inline among the table's data pages.)
  • 37. Table DDL for Compression
    CREATE TABLE tablename ( col1 datatype, … ) COMPRESS { YES | NO }
    ALTER TABLE tablename COMPRESS { YES | NO }
  • 38. Dictionary Building Using Offline Reorg
    REORG TABLE <table name>
      [ INDEX <index name> ]
      [ ALLOW READ ACCESS | ALLOW NO ACCESS ]
      [ USE <tablespace name> ]
      [ LONGLOBDATA ]
      [ KEEPDICTIONARY | RESETDICTIONARY ]
  • 39. Compression Estimation Utility INSPECT ROWCOMPESTIMATE TABLE NAME table-name RESULTS KEEP file-name
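Putting the compression slides together, a typical sequence for enabling row compression on an existing table might look like the following; the table name and result file are illustrative, and the REORG is the offline (non-inplace) variety described above:

```sql
-- 1. Estimate the savings first (writes results to the given file)
INSPECT ROWCOMPESTIMATE TABLE NAME sales RESULTS KEEP sales.insp;

-- 2. Mark the table as compression-eligible
ALTER TABLE sales COMPRESS YES;

-- 3. Build the dictionary and compress existing rows (offline REORG)
REORG TABLE sales RESETDICTIONARY;
```

Note that INSPECT and REORG are issued from the DB2 command line processor rather than as SQL statements, and the INSPECT output file is formatted with the db2inspf utility.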
  • 40. DB2 Compression Estimator
    • A stand-alone tool used to estimate compression savings.
  • 41. DB2 Compression Estimator: the Results
  • 42. Compression Facts
    • DPF (Database Partitioning Feature) supports compression
    • Replication does not support compression
    • XML column data is not compressed
  • 43. Security
  • 44. Security - Label Based Access Control
    • Label Based Access Control (LBAC)
      • A “label” is associated with both user sessions and data rows
      • Rules for comparing user and data labels allow access controls to be applied at the row level
    • Labels may consist of multiple components
      • Array (hierarchical), set (group), or tree types
      • Row labels appear as a single additional column in a protected table, regardless of the number of label components
      • User labels are granted by a security administrator
    • Similar to the label security support in DB2 for z/OS v8
  • 45. LBAC Query
    SELECT * FROM EMP WHERE SALARY >= 50000
    (Slide diagram: an EMP table of ID and SALARY values in which each row carries a security label such as Employee, Secret, or Top Secret. With no LBAC the query returns every row with SALARY >= 50000; with LBAC it returns only the qualifying rows whose labels the user's session label is allowed to read.)
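As a sketch of how the pieces fit together, the LBAC objects behind a slide like this are created with DDL along these lines. The names (level, emp_policy, joe) are illustrative, and the statements are issued by a user holding SECADM authority:

```sql
-- An ordered (array) component: earlier entries dominate later ones
CREATE SECURITY LABEL COMPONENT level
    ARRAY ['TOP SECRET', 'SECRET', 'EMPLOYEE'];

-- A policy ties components to the standard DB2LBACRULES comparison rules
CREATE SECURITY POLICY emp_policy
    COMPONENTS level WITH DB2LBACRULES;

-- A concrete label under the policy
CREATE SECURITY LABEL emp_policy.secret
    COMPONENT level 'SECRET';

-- Protect the table: all row labels appear as one DB2SECURITYLABEL column
CREATE TABLE emp (
    id     INT,
    salary INT,
    tag    DB2SECURITYLABEL
) SECURITY POLICY emp_policy;

-- The security administrator grants a session label to a user
GRANT SECURITY LABEL emp_policy.secret TO USER joe FOR READ ACCESS;
```

With the grant in place, user joe's queries against EMP see SECRET and EMPLOYEE rows but not TOP SECRET rows, with no change to the query text.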
  • 46. Database Design
  • 47. Large Row Identifiers
    • Increase in table size limits and rows per page
      • Defined at the tablespace level
      • DMS tablespaces only
      • The tablespace is locked, its definition is modified, and the catalogs are updated
      • Every index on every table in the tablespace is marked invalid
      • Indexes are rebuilt on first access to the table
  • 48. Current Tablespace Design
    Page size:             4K     8K     16K    32K
    Tablespace size:       64G    128G   256G   512G
    Row ID (RID):          4 bytes
    Maximum rows:          4x10^9
    Rows per page:         255
    Pages per tablespace:  16M
  • 49. New Tablespace Design
    Page size:             4K     8K     16K    32K
    Tablespace size:       2T     4T     8T     16T
    Row ID (RID):          6 bytes
    Maximum rows:          1.5x10^12
    Rows per page:         ~3K
    Pages per tablespace:  512M
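The larger limits require a LARGE tablespace. A minimal sketch, with names and paths illustrative; the ALTER form performs the in-place conversion whose side effects (locking, invalidated indexes) are described under Large Row Identifiers, and its availability should be confirmed for the installed release:

```sql
-- DMS tablespaces created in DB2 9 default to LARGE
CREATE LARGE TABLESPACE ts_big
    PAGESIZE 4K
    MANAGED BY DATABASE
    USING (FILE '/db2/containers/ts_big_01' 10 G);

-- Convert an existing regular DMS tablespace to LARGE (6-byte RIDs)
ALTER TABLESPACE ts_old CONVERT TO LARGE;
```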
  • 50. Rows on a Page
  • 51. Miscellaneous
  • 52. Copy Schema
    • Stored procedure ADMIN_COPY_SCHEMA
      • Copies the objects of a schema into a new schema, creating the target schema as needed
    • Stored procedure ADMIN_DROP_SCHEMA
      • Drops a schema and all of the objects in it
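A hedged sketch of calling the two procedures; the schema names and error-table arguments are illustrative, and the exact parameter lists should be checked against the SQL administrative routine reference for the installed release:

```sql
-- Copy schema PROD to TEST including data ('COPY');
-- 'DDL' copies only the object definitions
CALL SYSPROC.ADMIN_COPY_SCHEMA(
    'PROD', 'TEST', 'COPY',
    NULL,            -- new object owner (NULL keeps original owners)
    NULL, NULL,      -- source and target tablespace mapping
    'SYSTOOLS', 'COPY_ERRORS');   -- schema and name of the error table

-- Drop schema TEST and every object in it
CALL SYSPROC.ADMIN_DROP_SCHEMA('TEST', NULL, 'SYSTOOLS', 'DROP_ERRORS');
```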
  • 53. New SQL Functions
    • TRIM
    • STRIP
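Both functions remove leading and/or trailing pad characters (blanks by default); a small illustration, assuming the DB2 9 syntax:

```sql
SELECT TRIM('  hello  '),                    -- 'hello'
       TRIM(TRAILING 'x' FROM 'pricexx'),    -- 'price'
       STRIP('000042', LEADING, '0')         -- '42'
FROM SYSIBM.SYSDUMMY1;
```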
  • 54. Summary
    • Package Features
    • Partitioning
    • XML
    • Compression
    • Autonomic Computing
    • LBAC
  • 55. Education Sessions
    • DB2 Version 9.1 Database Administration for the Oracle DBA seminar: January 23-24, 2007
    • DB2 Version 9.1 LBAC (Label Based Access Control) Multi-Level Security for the Distributed Platforms: January 25, 2007
    • DB2 Version 9.1 Stored Procedures, Development Center and XML seminar: January 26, 2007