
Teradata 13.10


Presentation by Todd Walter at the Teradata 3rd Party Influencers Meeting, April 2010, San Diego, CA


1. Teradata Database 13.10 Overview
   Todd Walter, CTO, Teradata Labs
2. Fine Print
  - Nothing in this presentation constitutes a commitment to deliver any specific functionality at any specific time.
  - The current planned date for the 13.10 release is Q3 2010.
3. Key Features
4. What is a Temporal Database? Definitions
  - Temporal: the ability to store all historic states of a given set of data (a database row) and, as part of the query, select a point in time at which to reference the data. Examples:
    - What was this account balance (share price, inventory level, asset value, etc.) on this date?
    - What data went into the calculation on 12/31/05, and what adjustments were made in 1Q06?
    - On this historic date, what was the service level (contract status, customer value, insurance policy coverage) for said customer?
  - Three types of temporal tables:
    - Valid Time tables
      - When a fact is true in the modeled reality
      - User-specified times
    - Transaction Time tables
      - When a fact is stored in the database
      - System-maintained time; no user control
    - Bitemporal tables
      - Both Transaction Time and Valid Time
  - User-Defined Time
    - Users can add time period columns and take advantage of the added temporal operators
    - The database does not enforce any rules on user-defined time columns
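As a sketch of how these table types are declared, a bitemporal table carries both a user-controlled valid-time period and a system-maintained transaction-time period. The table and column names below are illustrative, not from the deck; verify the exact DDL against the 13.10 documentation.

```sql
-- Illustrative bitemporal table: policy_vt is the user-controlled
-- valid-time period, policy_tt the system-maintained transaction-time period.
CREATE MULTISET TABLE policy (
    policy_id    INTEGER NOT NULL,
    coverage_amt DECIMAL(10,2),
    policy_vt    PERIOD(DATE) NOT NULL AS VALIDTIME,
    policy_tt    PERIOD(TIMESTAMP(6) WITH TIME ZONE) NOT NULL AS TRANSACTIONTIME
) PRIMARY INDEX (policy_id);
```

Omitting the `AS TRANSACTIONTIME` column would make this a valid-time-only table; omitting `AS VALIDTIME` instead would make it transaction-time-only.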
5. Temporal Query
  Provide a list of members who were reported as covered on Jan. 15, 2000 in the Feb. 1, 2000 NCQA report, with names as accurate as our best data shows today.

  With temporal support:

    SELECT member.member_id, member.member_nm
    FROM edw.member_x_coverage
         VALIDTIME AS OF DATE '2000-01-15'
         AND TRANSACTIONTIME AS OF DATE '2000-02-01'
       , edw.member
    WHERE member_x_coverage.member_id = member.member_id;

  Without temporal support:

    SELECT member.member_id, member.member_nm
    FROM edw.member_x_coverage coverage, edw.member
    WHERE coverage.member_id = member.member_id
      AND coverage.observation_start_dt <= '2000-02-01'
      AND (coverage.observation_end_dt > '2000-02-01' OR coverage.observation_end_dt IS NULL)
      AND coverage.effective_dt <= '2000-01-15'
      AND (coverage.termination_dt > '2000-01-15' OR coverage.termination_dt IS NULL);
6. Temporal Update - BiTemporal Table
  Scenario: jeans (item 125, serial 102) are sold today (2005-08-30); record the new location for current valid time and current transaction time.

  With temporal support:

    UPDATE objectlocation
    SET location = 'External'
    WHERE item_id = 125
      AND item_serial_num = 102;

  Without temporal support:

    INSERT INTO objectlocation
    SELECT item_id, item_serial_num, 'External', CURRENT_TIME, END(vt), CURRENT_TIME, 'Until_Closed'
    FROM objectlocation
    WHERE item_id = 125 AND item_serial_num = 102
      AND BEGIN(vt) <= CURRENT_TIME
      AND END(vt) > CURRENT_TIME
      AND END(tt) = 'Until_Closed';

    INSERT INTO objectlocation
    SELECT item_id, item_serial_num, location, BEGIN(vt), CURRENT_TIME, CURRENT_TIME, 'Until_Closed'
    FROM objectlocation
    WHERE item_id = 125 AND item_serial_num = 102
      AND BEGIN(vt) <= CURRENT_TIME
      AND END(vt) > CURRENT_TIME
      AND END(tt) = 'Until_Closed';

    UPDATE objectlocation
    SET END(tt) = CURRENT_TIME
    WHERE item_id = 125 AND item_serial_num = 102
      AND BEGIN(vt) <= CURRENT_TIME
      AND END(vt) > CURRENT_TIME
      AND END(tt) = 'Until_Closed';

    INSERT INTO objectlocation
    SELECT item_id, item_serial_num, 'External', BEGIN(vt), END(vt), CURRENT_TIME, 'Until_Closed'
    FROM objectlocation
    WHERE item_id = 125 AND item_serial_num = 102
      AND BEGIN(vt) > CURRENT_TIME
      AND END(tt) = 'Until_Closed';

    UPDATE objectlocation
    SET END(tt) = CURRENT_TIME
    WHERE item_id = 125 AND item_serial_num = 102
      AND BEGIN(vt) > CURRENT_TIME
      AND END(tt) = 'Until_Closed';
7. Moving Current Date in PPI
  - Description
    - Supports the CURRENT_DATE and CURRENT_TIMESTAMP built-in functions in a partitioning expression.
    - Provides the ability to reconcile the values of these built-in functions to a newer date or timestamp using ALTER TABLE.
      - Optimally reconciles the rows against the newly resolved date or timestamp value.
      - Reconciles the PPI expression.
  - Benefit
    - Users can define partitioning with a 'moving' date or timestamp instead of manually redefining the PPI expression using constants.
      - Date-based partitioning is the typical use for PPI. If a PPI is defined with a 'moving' current date or current timestamp, the partition containing the most recent data can be kept as small as possible for efficient access.
    - Required for the Temporal semantics feature: provides the ability to define 'current' and 'history' partitions.
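The 'current' and 'history' partition pattern described above might be sketched as follows. The table and column names are illustrative, and the exact partitioning expression should be checked against the 13.10 documentation.

```sql
-- Illustrative 'moving current date' partitioning: rows with
-- order_date >= CURRENT_DATE land in a small 'current' partition,
-- everything older falls into the 'history' partition (NO CASE).
CREATE TABLE orders (
    order_id   INTEGER NOT NULL,
    order_date DATE NOT NULL
) PRIMARY INDEX (order_id)
PARTITION BY CASE_N(
    order_date >= CURRENT_DATE,
    NO CASE);

-- Later, re-resolve CURRENT_DATE to today and let the system
-- reconcile rows between the current and history partitions:
ALTER TABLE orders TO CURRENT;
```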
8. Time Series Expansion Support
  - Description
    - A new EXPAND ON clause added to SELECT expands a row with a period column into multiple rows.
      - The EXPAND ON clause is allowed in views and derived tables.
    - The EXPAND ON syntax supports multiple ways to expand rows.
  - Benefit
    - Permits time-based analysis on period values.
      - Allows business questions such as "Get the month-end average inventory cost during the last quarter of the year 2006".
      - Allows OLAP analysis on period data.
    - Allows charting of period data in Excel.
    - Provides the infrastructure for sequenced query semantics on temporal tables.
9. Time Series Expansion Support
  - What will it do?
    - Expands a time period column and produces value-equivalent rows, one for each time granule in the period.
      - The time granule is user specified.
      - Permits a period representation of a row to be changed into an event representation.
    - The following forms of expansion are provided:
      - Interval expansion: by user-specified intervals such as INTERVAL '1' MONTH.
      - Anchor point expansion: by user-specified anchored points in a time line.
      - Anchor period expansion: by user-specified anchored time durations in a time line.
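An interval expansion of the kind listed above might look like the sketch below: each inventory row whose validity period spans several months becomes one row per month. The table and column names are illustrative; the EXPAND ON grammar should be verified against the 13.10 SQL reference.

```sql
-- Illustrative: turn each period row into one row per month granule.
-- 'validity' is a PERIOD(DATE) column; BEGIN(xp) exposes each granule start.
SELECT item_id,
       inventory_cost,
       BEGIN(xp) AS month_start
FROM inventory
EXPAND ON validity AS xp BY INTERVAL '1' MONTH
    FOR PERIOD(DATE '2006-10-01', DATE '2007-01-01');
```

The optional FOR clause limits expansion to the quarter of interest, which matches the "last quarter of 2006" business question on the previous slide.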
10. Geospatial Enhancements
  - Description
    - Enhancements to the Teradata 13 geospatial offering, drastically increasing performance, adding functionality, and providing integration points for partner tools.
  - Benefits
    - Increased performance by changing UDFs to fast-path system functions.
    - Replaces the Shape File Generator client tool (org2org) with a stored procedure for tighter integration with the database and tools such as ESRI ArcGIS.
    - Provides geodetic distance methods, e.g. SphericalBufferMBR().
    - WFS Server provides better tool integration support for MapInfo and ESRI products.
11. ESRI ArcGIS Connecting to Teradata via Safe Software FME
  1. FME connection in ArcView
  2. Connect to Teradata via TPT API
  3. Select Teradata tables for ArcView analysis
12. Projection of Impact Zone & Storm Path to Google Earth
  Where do I deploy my catastrophe management team?
13. Algorithmic Compression
  - Description
    - Provides the capability for users to define compression/decompression algorithms, implemented as UDFs, that are specified and applied to data at the column level in a row. Initially, Teradata will provide two compression/decompression algorithm sets: one for UNICODE columns and another for LATIN columns.
  - Benefit
    - Data compression is the process by which data is encoded so that it consumes less physical storage space. This capability reduces both overall storage capacity needs and the number of physical disk I/Os required for a given operation. Additionally, because less physical data is being operated on, there is the potential to improve query response time as well.
  - Considerations
    - Compressed data must be decompressed when required. This consumes some extra CPU cycles, but in general the advantages of compression outweigh the extra cost of decompression.
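Column-level algorithmic compression is declared in the DDL by naming the compress and decompress UDFs. The sketch below uses the Teradata-supplied Unicode transform pair (TransUnicodeToUTF8 / TransUTF8ToUnicode); the table and column names are illustrative, and the function names should be verified against your release's documentation.

```sql
-- Illustrative algorithmic compression on a UNICODE VARCHAR column
-- using the Teradata-supplied transform UDF pair.
CREATE TABLE comments (
    comment_id  INTEGER NOT NULL,
    comment_txt VARCHAR(1000) CHARACTER SET UNICODE
        COMPRESS USING TD_SYSFNLIB.TransUnicodeToUTF8
        DECOMPRESS USING TD_SYSFNLIB.TransUTF8ToUnicode
) PRIMARY INDEX (comment_id);
```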
14. Multi-Value Compression for VARCHAR Columns
  Example: multi-value compression for a VARCHAR column:

    CREATE TABLE Customer
      (Customer_Account_Number INTEGER
      ,Customer_Name VARCHAR(150)
         COMPRESS ('Rich','Todd')
      ,Customer_Address CHAR(200));
15. Block Level Compression
  - Description
    - Provides the capability to compress whole data blocks at the file system level before the data blocks are actually written to storage.
  - Benefit
    - Block level compression reduces the actual storage required for the data, especially cool/cold data, and significantly reduces the I/O required to read it.
  - Considerations
    - There is a CPU cost to compress or decompress whole data blocks. This is generally considered a good trade, since CPU cost is decreasing while I/O cost remains high.
16. User-Defined SQL Operators
  - Description
    - Provides the capability for users to define and encapsulate complex SQL expressions in a User Defined Function (UDF) database object.
  - Benefits
    - SQL UDFs allow users to define their own functions written using SQL expressions. Previously, the desired SQL expression had to be written into the query for each use, or an external UDF had to be written in another programming language to provide the same capability.
    - Additionally, SQL UDFs allow one to define functions available in other databases and with alternative syntax (e.g. ANSI).
  - Considerations
    - The Teradata SQL UDF feature is a subset of the SQL function feature described in the ANSI SQL:2003 standard.
    - This feature does not change the definition of the Dictionary tables per se, but adds rows to the DBC.TVM and DBC.UDFInfo tables to indicate the presence of a SQL UDF.
17. SQL UDF - Example
  The "Months_Between" function:

    CREATE FUNCTION Months_Between
      (Date1 DATE, Date2 DATE)
    RETURNS INTERVAL MONTH(4)
    LANGUAGE SQL
    DETERMINISTIC
    CONTAINS SQL
    PARAMETER STYLE SQL
    RETURN (CAST(Date1 AS DATE) - CAST(Date2 AS DATE)) MONTH(4);

    SELECT MONTHS_BETWEEN ('2008-01-01', '2007-01-01');

    MONTHS_BETWEEN ('2008-01-01', '2007-01-01')
    -------------------------------------------
                                             12
18. Performance
19. Character-Based PPI (CPPI)
  - Description
    - Leverages current Teradata Partitioned Primary Index (PPI) technology and extends it to allow the use of character data (CHAR, VARCHAR, GRAPHIC, VARGRAPHIC) in table partitioning expressions.
  - Benefit
    - Currently, only integer data types may be used in a PPI partitioning scheme, which yields superior query performance via partition elimination. Extending this capability to character-based data types allows more partitioning options and, in turn, yields query performance advantages similar to those PPI provides today.
  - Considerations
    - As with all Teradata index and partitioning design choices, the Optimizer determines the appropriate index/PPI that provides the best-cost plan for executing the query. No end-user query modification is required.
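A character-based partitioning expression might look like the sketch below, with alphabetic ranges on a name column. The table, column names, and range boundaries are illustrative; check the RANGE_N character syntax against the 13.10 SQL reference.

```sql
-- Illustrative character-based PPI: rows are partitioned by
-- last-name ranges, enabling partition elimination on name predicates.
CREATE TABLE customer (
    cust_id   INTEGER NOT NULL,
    last_name VARCHAR(30) NOT NULL
) PRIMARY INDEX (cust_id)
PARTITION BY RANGE_N(
    last_name BETWEEN 'A', 'G', 'N', 'T' AND 'ZZZ',
    NO RANGE);
```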
20. Timestamp Partitioning
  - Description
    - Provides the capability for users to explicitly specify a time zone for PPI tables whose partitioning expressions involve DateTime values, making the expressions deterministic (i.e., not dependent on the session time zone).
    - Extends PPI partition elimination to include timestamp data types in partitioning expressions.
  - Benefit
    - Ensuring that DateTime partitioning expressions are deterministic eliminates errors that may result from incorrect dependence on session time zones.
    - Extending this capability to timestamp data types allows more partitioning options and, in turn, yields query performance advantages similar to those PPI provides today.
  - Considerations
    - The enhancements related to deterministic time zone handling also apply to sparse join index search conditions.
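A deterministic timestamp partitioning expression might be sketched as below, pinning the expression to an explicit time zone so it no longer depends on the session setting. All names are illustrative, and the exact AT TIME ZONE placement in a partitioning expression is an assumption here; confirm it against the 13.10 documentation.

```sql
-- Illustrative deterministic timestamp PPI: the explicit time zone
-- makes the DATE conversion independent of the session time zone.
CREATE TABLE clicks (
    click_id INTEGER NOT NULL,
    click_ts TIMESTAMP(0) NOT NULL
) PRIMARY INDEX (click_id)
PARTITION BY RANGE_N(
    CAST(click_ts AT TIME ZONE 'GMT' AS DATE)
        BETWEEN DATE '2010-01-01' AND DATE '2010-12-31'
        EACH INTERVAL '1' DAY,
    NO RANGE);
```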
21. Fastpath Functions
  - Description
    - Combines the extensibility, short development cycles, and ease of use of UDFs with the high performance of Teradata system functions, yielding an alternate development path by which Teradata Engineering developers may add new system functions to the Teradata server.
  - Benefit
    - Allows Teradata to use a shorter development cycle to fulfill many customer-specific requests for new system functions that perform in the same manner as native Teradata system functions.
  - Considerations
    - Source code and/or libraries used in the development of Teradata system functions must be solely managed and maintained by Teradata Engineering. End users will not be able to develop Fastpath system functions.
22. FastExport - Without Spooling
  - Description
    - Enhances the FastExport utility with an option to execute in a mode that eliminates the requirement that query data be spooled prior to the actual export.
  - Benefit
    - The "direct without spooling" method extracts data from a Teradata table quickly and efficiently, with the main benefit being a performance gain and minimal resource utilization.
  - Considerations
    - The "direct without spooling" method is not transparent to the user and must be specified as a discrete option when executing the FastExport utility. It is a user decision to choose between the "spool" and "no spool" methods.
23. Teradata Workload Management
24. TASM: Additional Workload Definitions
  - Description
    - Increases the number of available TASM Workload Definitions (WDs) to 250 (from 40).
  - Benefits
    - Complex mixed workloads require a finer degree of granular control over the parts of the workload. Increasing the number of WDs allows customers to better manage and report on resource usage, to meet either subject-area (e.g. by country, application, or division) or category-of-work (e.g. high vs. low priority) resource distribution requirements.
  - Considerations
    - Administrators should be aware that when a large number of workloads run concurrently, it becomes difficult to create significant differentiation among them, because the resource division granularity itself gets very small.
25. TASM: Common Classifications
  - Description
    - Makes Workload Definition classification criteria available across Teradata Workload Management categories 1, 2, and 3 (Filters, System Throttles, and Workload Definitions), and extends wildcard support to Filters and Throttles.
  - Benefit
    - Common classifications address the differences and deliver consistency between the TDWM categories (Filters, System Throttles, and Workload Definitions), improving the Teradata Workload Management user interface and its usability.
  - Considerations
    - Consider re-evaluating the current settings for the different categories, insofar as common classification makes managing a workload easier and simpler.
26. TASM: Common Classifications
  - "Who" criteria
    - Account String / Account Name
    - Teradata Username / Teradata Profile
    - Application Name
    - Client Address or Client Name
    - QueryBand
  - "Where" criteria (data objects)
    - Databases
    - Tables / Views / Macros
    - Stored Procedures
  - "What" criteria
    - Statement type (SELECT, DDL, DML)
    - Utility type
    - AMP limits, row count, final row count
    - Estimated processing (CPU time)
    - Join types
      - ALL or no joins
      - ALL or no product joins
      - ALL or no unconstrained product joins
27. TASM Utility Management
  - Description
    - Augments the existing Teradata utility management capability with controls similar to the workload management of regular SQL requests, and provides automatic selection of the number of sessions used by Teradata utilities.
  - Benefits
    - Provides more granular and centralized control of utility execution and allows deployment to a much wider audience of users and applications. Additionally, management of Teradata utility sessions moves inside the database and is automated, eliminating the detailed management of sessions in each job.
  - Considerations
    - Consider re-evaluating current rule sets and settings to maximize control of the workload and relative utility execution.
    - Throttling in TASM eliminates the need for Tenacity and Sleep. Execution of queued jobs becomes FIFO, and queued jobs execute immediately when resources become available rather than at the end of the Sleep time.
28. TASM Utility Session Configuration Rules
  - For the FastLoad, MultiLoad, and FastExport utilities, the DBS default for the number of AMP sessions is one per AMP.
  - On a large system with hundreds or thousands of AMPs, this default becomes inappropriate.
  - Currently, a user can override this default by changing each load/export script, changing the MAXSESS parameter in the configuration file, or specifying runtime parameters (i.e., MAXSESS or -M).
  - These overriding methods are inconvenient.
  - This feature allows a DBA to define TDWM rules in one central place that specify the number of AMP sessions to be used, based on a combination of the following criteria:
    - Utility name
    - "Who" criteria (user, account, client address, query band, etc.)
    - Data size
29. TASM Utility Session Configuration Rules
  - Session configuration rules are optional.
  - These rules are active when any category of TDWM is enabled.
  - In each session configuration rule, the DBA specifies the criteria and the number of sessions to be used when those criteria are met.
    - For example: for stand-alone MultiLoad jobs submitted by user Charucki, use 10 sessions.
  - Session configuration rules also support the Archive/Restore utility.
    - The DBA can define similar rules to specify the number of HUTPARSE sessions to be used for a specific set of criteria.
  - A new internal DBSControl field, DisableTDWMSessionRules, is provided to disable user-defined and default session configuration rules while TDWM is enabled.
    - When this field is set, the client and DBS operate as in Teradata 13.
30. Availability, Serviceability, and DBA Task Improvements
31. Fault Isolation
  - Description
    - Removes cases where faults can cause restarts. Specific cases:
      - EVL fault isolation
      - Unprotected UDFs
      - Dictionary cache re-initialization
  - Benefits
    - Identifies and isolates a fault to only the affected query or session.
    - Issues in query calculation and qualification are isolated.
    - Badly behaving UDFs have less opportunity to affect the system.
    - Faults in the dictionary cache result in the cache being flushed and reloaded rather than affecting the entire system.
32. AMP Fault Isolation
  - Description
    - Catches those AMP errors that currently cause DBS restarts, where the error can instead be handled by taking a snapshot dump and aborting the transaction that caused it.
  - Benefit
    - Reduces the number of DBS restarts for customers, improving overall system availability.
  - What will it do?
    - Current AMP fault isolation only avoids a full database restart for errors when accessing spool tables.
    - The scope of fault isolation is increased to cover ERRAMP* and ERRFIL* errors on permanent tables as well as spool.
    - Retrofitted to currently supported releases.
33. Read From Fallback
  - Description
    - On encountering a data block read error (an unreadable or corrupt data block), this feature leverages the pre-existing Fallback table facility to transparently retrieve the required data block from the fallback copy.
  - Benefit
    - When fallback is available, this feature significantly improves fault tolerance and system availability. It increases the value of having fallback and protects non-redundant (RAID 0 or JBOD) storage media, such as SSD, from data loss without a restart/failover.
  - Considerations
    - Fallback does not need to be instantiated as a system-wide property; because fallback is a table-level attribute, it can be applied selectively to the largest/most critical tables.
    - This facility does not in and of itself repair bad data blocks, but allows them to be read from fallback until they can be repaired.
34. Read From Fallback - Particulars
  - Reading data blocks from the fallback copy is transparent to both users and applications. No manual intervention is required.
  - The feature does not require any special locking mechanism.
  - A manual process is still required to rebuild the table to repair unreadable or corrupt data blocks.
  - The facility cannot recover from data block errors in the Cylinder Index, NUSI secondary indexes, or Permanent Journals.
  - Read errors are fallback-recoverable on Teradata Data Dictionary tables, with the exception of the unhashed system tables such as the WAL log, Transient Journal, and Space Accounting tables.
  - The facility applies to SQL queries with data block read errors, to SQL INSERT...SELECT statements, and to the Archive utility where the block read error is on the source table only.
35. Transparent Cylinder Packing
  - Description
    - A new file system background task that proactively and transparently monitors the utilization (high or low) of user data cylinders and packs/unpacks them accordingly, with the goal of returning them to a more efficiently utilized state.
  - Benefit
    - Cylinder packing results in cylinders having a higher data-block-to-cylinder-index ratio, making Cylinder Read operations more effective by reading fewer unoccupied sectors.
    - Higher cylinder utilization translates into data tables occupying fewer cylinders, leaving more cylinders available for other purposes.
    - Diminishes the chances that a "mini-cylpack" operation will be executed, and lessens the need for administrators to perform regularly scheduled PackDisk operations.
  - Considerations
    - Several customer-tunable parameters in DBSControl allow customers to manage and adjust the level of impact of Transparent Cylinder Packing operations.
36. Merge Data Blocks During Full Table Modify Operations
  - Description
    - During full-table modification operations such as MultiLoad, INSERT...SELECT, and UPDATE or DELETE with a WHERE clause, combines adjacent data blocks when small blocks are present.
  - Benefit
    - Small data blocks increase the I/Os necessary to read a table and interfere with features such as compression and large cylinders.
    - Reduces the instances of small data blocks by combining them when doing work on those blocks or adjacent ones.
37. Archive DBQL Rule Table
  - Description
    - Enhances the Teradata Archive utility to include two additional tables in the DBC database (Dictionary) backup/restore:
      - DBC.DBQLRuleTbl
      - DBC.DBQLRuleCountTbl
  - Benefit
    - Including these tables in the DBC archive/restore process provides a mechanism by which they can be archived and restored, eliminating the cumbersome task of redefining the appropriate DBQL rules after every Dictionary initialization.
    - Avoids the possibility of table synchronicity issues and offers simplicity, convenience, and integrity when conducting a DBC archive/restore.
  - Considerations
    - DBC Archive includes these tables automatically in the Dictionary archive; no user intervention is required.
38. Be Aware, Especially if Considering Tech Refresh
39. Large Cylinder Support
  - Description
    - Increases the data storage cylinder size, the basic allocation unit for disk space in the Teradata file system. This also includes an increase in the Cylinder Index size, allowing a commensurate increase in the number of data blocks stored per cylinder.
  - Benefit
    - Eliminates the inefficiency associated with managing a large number of small cylinders on very large disk drives, allows larger AMP sizes (~10 TB per AMP), permits more efficient storage of Large Objects, and provides the foundation for block level compression by allowing more small blocks on a cylinder.
  - Consideration
    - This capability is only available starting in Teradata 13.10, and requires a System Initialization (SysInit) to be performed so that large cylinder support can be engaged. It is anticipated that this activity would typically be performed during technology refresh opportunities.
40. Packed Row Format for 64-bit Platforms
  - Description
    - With Teradata 13.10, data is stored in byte-packed format, whereas previously it was stored in byte-aligned format.
  - Benefits
    - Translates directly into a 4-7% disk space savings, since less disk space is required to store byte-packed data than byte-aligned data. Additionally, enables data rows to be accessed using fewer I/Os, potentially enhancing the performance of some workloads.
  - Considerations
    - This capability is only available starting in Teradata 13.10, and requires a System Initialization (SysInit) to be performed so that packed row format support can be engaged. It is anticipated that this activity would typically be performed during technology refresh opportunities.
41. Enhanced Teradata Hashing Algorithm
  - Description
    - Enhances the Teradata hashing algorithm to reduce the effects of irregularities in character data on hash results.
  - Benefit
    - Targeted to reduce the number of hash collisions for character data stored as either Latin or Unicode, notably strings that contain primarily numeric data. Fewer hash collisions reduce access time per AMP and produce a more balanced row distribution, which in turn improves parallelism. Reduced access time and increased parallelism translate directly into better performance.
  - Considerations
    - This capability is only available starting in Teradata 13.10, and requires a System Initialization (SysInit) to be performed so that the enhanced hashing algorithm can be engaged. It is anticipated that this activity would typically be performed during technology refresh opportunities.
42. Teradata Database 13.10 (3/18/10)

  Enterprise Fit
    - Archive DBQL rule table
    - Enhanced trusted session security
    - External Directory support enhancements
    - Geospatial enhancements
    - Statement Info Parcel enhancements (JDBC)
    - Support for IPv6
    - Support unaligned row format for 64-bit platforms
    - Enhanced hashing algorithm
    - Large cylinder support
    - Algorithmic Compression for character data
    - VLC for VARCHAR columns
    - Block level compression
    - Variable fetch size (JDBC)
    - User Defined SQL Operators
    - Temporal Processing
      - Temporal table support
      - Period data type enhancements
      - Replication support
      - Time series Expansion support

  Ease of Use
    - Moving current date in PPI
    - Automatic cylinder packing
    - Teradata 13.10 Teradata Express Edition
    - Domain Specific System Functions

  Active Enable
    - TASM: Utilities Management
    - TASM: Additional Workload Definitions
    - Restart time reduction
    - Read from Fallback
    - TASM: Workload Designer

  Performance
    - Merge data blocks during full table modify operations
    - Statement independence
    - TVS initial suggested temperature tables
    - FastExport without spooling
    - Character-based PPI
    - Timestamp partition elimination
    - User Defined Ordered Analytics

  Quality/Supportability
    - Dictionary cache re-initialization
    - EVL fault isolation and unprotected UDFs
    - AMP fault isolation
    - Parser diagnostic information capture
43. Teradata Developer Exchange
  - What is it?
    - Portal for technical insights
      - Articles, blogs, podcasts
      - Forums, FAQs, "how to" guides, etc.
    - Community of Teradata experts
      - Customers, Teradata R&D, and PS
    - Shared software
      - Portlets, UDFs, stored procedures, scripts, etc.
      - Sample applications
  - Who can use it?
    - Anyone (read only)
    - Registered contributors
      - Blogs, code, ratings, articles, etc.