This document discusses partitioning in Oracle Database 11g. It introduces partitioning concepts and strategies including range, list, hash, interval and reference partitioning. It describes how partitioning can improve performance through pruning and partition-wise joins. It also explains how partitioning enhances manageability through maintenance operations on individual partitions and improves availability through partition independence. The document outlines Oracle Database 11g's extensions to partitioning including interval partitioning, reference partitioning, and virtual column-based partitioning.
The document summarizes Oracle's partitioning capabilities in Oracle Database 11g Release 2. It discusses the benefits of partitioning such as improved performance, manageability and availability. It describes the basic concepts of partitioning including different partitioning strategies and index types. New features in 11g like interval partitioning, reference partitioning and virtual column-based partitioning are introduced to provide more flexibility and manageability.
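The pruning benefit described above can be sketched as a toy model in Python (this is an illustration of the idea only, not Oracle's implementation): a query that filters on the partition key scans only the partitions whose bounds overlap the predicate.

```python
# Toy range-partitioned "table": each partition holds rows whose sale date
# falls in [lower, upper). ISO date strings compare correctly as text.
partitions = {
    ("2011-01-01", "2011-04-01"): [("2011-02-10", 100), ("2011-03-05", 250)],
    ("2011-04-01", "2011-07-01"): [("2011-05-20", 300)],
    ("2011-07-01", "2011-10-01"): [("2011-08-15", 125)],
}

def pruned_scan(lo, hi):
    """Scan only partitions whose bound range overlaps [lo, hi)."""
    rows, scanned = [], 0
    for (p_lo, p_hi), data in partitions.items():
        if p_hi <= lo or p_lo >= hi:
            continue          # pruned: this partition cannot hold matches
        scanned += 1
        rows += [r for r in data if lo <= r[0] < hi]
    return rows, scanned

rows, scanned = pruned_scan("2011-04-01", "2011-07-01")
```

With three partitions only one is scanned, which is the whole point of pruning: work scales with matching partitions, not table size.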
The document provides instructions for installing Ascential DataStage version 6.0 for the first time on Windows systems. It describes pre-install checks, hardware and software requirements, and outlines the installation process for the DataStage server and clients. It also briefly mentions installing DataStage components on mainframe platforms and the DataStage Parallel Extender.
This document provides an introduction to Netezza fundamentals for application developers. It describes Netezza's Asymmetric Massively Parallel Processing architecture, which uses an array of servers called S-Blades connected to disks and database accelerator cards to process large volumes of data in parallel. The document aims to help readers quickly understand and use the Netezza appliance through explanations of its components and query processing. It also defines key Netezza terminology and objects.
This white paper discusses Oracle to Netezza migration for a Fortune 100 retailer. It describes the key steps in the migration process including impact analysis, design and development, history load, and testing. Impact analysis identifies all database objects, ETL processes, and applications/reports impacted. Design considerations include data type mapping, SQL conversion, and report changes. History data can be loaded via flat files or ETL. Rigorous testing of database objects, SQL, ETL processes, and data is recommended to identify any issues.
DBArtisan XE6 is a database administration tool that helps DBAs manage databases across platforms more efficiently. It streamlines common tasks, reduces errors, and provides comprehensive capabilities for data management. As data volumes grow, the role of the DBA is evolving to handle multiple concurrent responsibilities. DBArtisan facilitates this role by providing performance monitoring, space and data management tools, and security management in a single interface.
Optimized DSO data activation using massive parallel processing in SAP Net We... (Nuthan Kishore)
SAP NetWeaver BW 7.3 introduces optimized data activation for standard DataStore objects that uses massive parallel processing (MPP) on supported database platforms like IBM DB2. This allows the data activation to be performed directly in the database via parallel SQL statements, rather than processing records one by one in the application server. It can significantly improve performance over the previous method. The document describes how MPP-optimized activation works, its implementation for DB2, and recommendations for its use.
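The set-based idea behind MPP-optimized activation can be sketched as follows (a conceptual analogue only; BW's actual activation runs as parallel SQL in the database and also maintains change logs, which this toy omits): the activation queue is merged into the active table in one bulk operation instead of a per-record loop.

```python
# Set-based upsert: the whole activation queue is merged into the active
# data in one bulk operation, mimicking a single parallel SQL MERGE rather
# than record-by-record processing in the application server.
def activate(active, queue):
    """Merge queue records (key -> value) into active data; last write wins."""
    merged = dict(active)
    merged.update(queue)   # one bulk merge, not a per-row lookup loop
    return merged

active = {"cust1": 10, "cust2": 20}
queue = {"cust2": 25, "cust3": 5}
result = activate(active, queue)
```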
Parameter substitution in Aginity Workbench (Mary Uguet)
This document discusses parameter substitution in Aginity Workbench, which allows developers to write SQL queries and scripts that include parameters. This enables running queries with different filters, date ranges, or table and column names by prompting for parameter values when executing. The feature streamlines testing queries with multiple variable values by avoiding multiple find-and-replace operations. Parameters are defined using a $ prefix, and the user is prompted to supply a value and data type when running a query containing parameters.
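The `$`-prefix mechanism described above can be sketched in a few lines of Python (a stand-in for the concept only; Aginity's real feature also prompts interactively for each value and its data type):

```python
import re

PARAM = re.compile(r"\$(\w+)")   # $name placeholders, as in Workbench

def substitute(sql, values):
    """Replace $name placeholders with supplied values; string values are
    quoted as SQL literals, numbers are inserted as-is."""
    def repl(match):
        v = values[match.group(1)]
        return str(v) if isinstance(v, (int, float)) else "'" + str(v) + "'"
    return PARAM.sub(repl, sql)

sql = "SELECT * FROM sales WHERE region = $region AND qty > $min_qty"
rendered = substitute(sql, {"region": "EAST", "min_qty": 10})
```

Running the same query with a different parameter dictionary is what replaces repeated find-and-replace passes.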
This document summarizes new features in Teradata Database 13.10 including temporal database capabilities, geospatial enhancements, workload management improvements, and availability/serviceability enhancements. Key features include support for valid time, transaction time, and bitemporal tables, character-based primary partitioned indexes, timestamp partitioning, and increasing the number of available workload definitions in Teradata Active System Management.
The document summarizes several new features in SQL Server 2008 including policy-based management, data collection, resource governor, transparent data encryption, data auditing, backup compression, grouping sets, merge operator, change data capture, table valued parameters, spatial data types, sparse columns, and FILESTREAM data. These features provide capabilities such as centralized management, performance monitoring, resource allocation, data security, auditing, compression, and handling of large binary objects.
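Of the features above, grouping sets lend themselves to a small sketch (a Python analogue of the semantics only, not SQL Server's implementation): several grouping combinations are aggregated in a single pass over the data.

```python
from collections import defaultdict

def grouping_sets(rows, sets, value_idx):
    """Aggregate sums for several grouping-set column tuples in one pass,
    a rough analogue of the GROUPING SETS clause added in SQL Server 2008."""
    out = {gs: defaultdict(int) for gs in sets}
    for row in rows:
        for gs in sets:
            key = tuple(row[i] for i in gs)
            out[gs][key] += row[value_idx]
    return {gs: dict(agg) for gs, agg in out.items()}

# columns: (region, product, amount); () is the grand-total grouping set
rows = [("E", "x", 5), ("E", "y", 3), ("W", "x", 2)]
result = grouping_sets(rows, [(0,), (1,), ()], 2)
```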
This document provides an overview of Oracle 11g data warehousing capabilities. It discusses key concepts like what a data warehouse is and its characteristics. It also outlines the common Oracle data warehousing tasks and steps for setting up a data warehouse system, including preparing the environment, configuring the database, and accessing Oracle Warehouse Builder.
NENUG Apr14 Talk - data modeling for Netezza (Biju Nair)
This document discusses considerations for data modeling on Netezza appliances to optimize performance. It recommends distributing data uniformly across snippet processors to maximize parallel processing. When joining tables, the distribution key should match join columns to keep processors independent. Zone maps and clustered tables can reduce data reads from disk. Materialized views on frequently accessed columns further improve performance for single table and join queries.
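The rule that the distribution key should match the join columns can be illustrated with a toy hash-distribution model in Python (illustrative only; Netezza's distribution and join machinery is far richer): when both tables are distributed on the join column, every pair of joinable rows lands on the same processor, so the join needs no data movement.

```python
NUM_SPUS = 4

def distribute(rows, key_idx):
    """Hash-distribute rows across snippet processors by one column."""
    spus = [[] for _ in range(NUM_SPUS)]
    for row in rows:
        spus[hash(row[key_idx]) % NUM_SPUS].append(row)
    return spus

orders = [(1, "a"), (2, "b"), (3, "c")]
customers = [(1, "Ann"), (2, "Bob"), (3, "Cid")]

# Distribute both tables on the join column (column 0): matching keys hash
# to the same SPU in both tables, so each processor can join independently.
o_spus = distribute(orders, 0)
c_spus = distribute(customers, 0)
colocated = all(
    {r[0] for r in o} == {r[0] for r in c}
    for o, c in zip(o_spus, c_spus)
)
```

Distributing the tables on different columns would break this property and force rows to be redistributed or broadcast at join time.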
Managing user Online Training in IBM Netezza DBA Development by www.etraining... (Ravikumar Nandigam)
Dear Student,
Greetings from www.etraining.guru
We provide the BEST online training in Hyderabad for IBM Netezza DBA and/or Development, delivered by a senior working professional. Our Netezza trainer has 10+ years of working experience, including 6+ years in Netezza, and is a Netezza 7.1 certified professional.
DBA Course Content: http://www.etraining.guru/course/dba/online-training-ibm-netezza-puredata-dba
Development Course Content: http://www.etraining.guru/course/ibm/online-training-ibm-puredata-netezza-development
Course Cost: USD 300 (or) INR 18000
Number of Hours: 24 hours
*Please note the course also includes Netezza certification assistance.
If there is any opportunity, we will be very happy to serve you. We would also appreciate it if you explored the other training opportunities on our website.
We can be reached at info@etraining.guru or +91-996-669-2446 for any further details.
Regards,
Karthik
www.etraining.guru
The document discusses using R for analytics on Netezza's TwinFin appliance. TwinFin is a massively parallel processing database management system designed specifically for performance. It utilizes field programmable gate arrays and an "on-stream analytics" approach. The document outlines how R interfaces with TwinFin through functions like nzapply and nztapply that allow running R functions on TwinFin's distributed data in parallel. It provides examples of building decision trees and linear models on TwinFin tables using these functions.
DB Optimizer Datasheet - Automated SQL Profiling & Tuning for Optimized Perfo... (Embarcadero Technologies)
Learn more about DB Optimizer and try it free at: http://embt.co/DBOptimizer
Embarcadero® DB Optimizer™ XE6 is an automated SQL optimization tool that maximizes database and application performance by quickly discovering, diagnosing, and optimizing poor-performing SQL code. DB Optimizer empowers DBAs and database developers to eliminate performance bottlenecks by graphically profiling key metrics inside the database, relating resource utilization to specific queries, and helping to visually tune problematic SQL.
This 24-hour training course covers administration and maintenance of the IBM Netezza data warehouse appliance. The course will teach students how to setup and configure the Netezza emulator, load and manage databases and data, perform backups and restores, tune performance, and monitor the system. Hands-on labs are included to practice administrative tasks like creating database objects, loading data, and running backups and restores. The detailed course outline covers all aspects of Netezza architecture, configuration, SQL usage, and maintenance.
This document outlines 6 golden rules for optimizing Teradata SQL queries: 1) Ensure statistic completeness and correctness, 2) Use primary indexes for joins whenever possible, 3) Leverage Teradata indexing techniques like secondary indexes and join indexes, 4) Rewrite queries when possible, 5) Monitor queries in real-time, and 6) Compare resource usage before and after optimization to measure improvement. Following these rules helps improve query performance by ensuring the optimizer selects efficient execution plans.
Getting to know Oracle database objects: IOTs, mviews, clusters and more… (Aaron Shilo)
This document provides an overview of various Oracle database objects and storage structures including:
- Index-organized tables store data within the index based on key values for faster access times and reduced storage.
- Materialized views store the results of a query for faster access instead of re-executing joins and aggregations.
- Virtual indexes allow testing whether a potential new index would be used by the optimizer before implementing.
The presenter discusses how different segment types like index-organized tables, materialized views, and clusters can reduce I/O and improve query performance by organizing data to reduce physical reads and consistent gets. Experienced Oracle DBAs use these features to minimize disk I/O, the greatest factor in query performance.
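The materialized-view idea above can be sketched with a toy class (a conceptual model only; Oracle additionally supports query rewrite and incremental "fast" refresh, which this sketch omits): the query's result is stored once, reads skip recomputation, and a refresh re-runs the query after base-table changes.

```python
class MaterializedView:
    """Toy materialized view: stores a query's result so repeated reads
    avoid re-executing joins and aggregations; refresh() recomputes."""
    def __init__(self, query):
        self.query = query
        self.refresh()
    def refresh(self):
        self.result = self.query()
    def read(self):
        return self.result        # no recomputation on read

sales = [100, 250, 300]
mv = MaterializedView(lambda: sum(sales))
first = mv.read()
sales.append(50)
stale = mv.read()    # still the old total: the view is stale
mv.refresh()
fresh = mv.read()
```

The staleness between base-table change and refresh is the classic trade-off a DBA accepts in exchange for cheap reads.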
The document discusses topics related to data warehousing. It covers:
1. The key components involved in getting data into a data warehouse, which include extraction, transformation, cleansing, loading, and summarization of data.
2. An overview of the main components of a data warehouse architecture, including source data, data staging, data storage, information delivery, metadata management, and control components.
3. Various topics to be covered related to data warehousing, such as data marts, ERP, knowledge management, and customer relationship management.
This document provides an overview of the physical and logical structures of an Oracle database, including datafiles, control files, redo logs, and tablespaces. It also describes Oracle instances, the system global area (SGA), program global area (PGA), and background processes. Administrative tasks like backups, monitoring, and patching are discussed. Specific details are given about the Computer Science database, including its server, tablespaces, and 4mm DAT tape backup method.
Netezza uses a proprietary architecture called Asymmetric Massively Parallel Processing (AMPP). The AMPP architecture distributes data and query processing across multiple processing blades called S-Blades. Each S-Blade contains processors, memory, and is connected to disk arrays through a database accelerator card. This architecture allows Netezza to process large volumes of data in parallel across the S-Blades for high performance. Netezza also uses some unique tools and concepts compared to traditional databases, such as not enforcing constraints for improved load performance and using hidden columns to track transaction details instead of redo logs.
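The parallel query processing described above follows a two-phase pattern that can be sketched in Python (a toy model of the data flow only; real S-Blades also filter and project rows in FPGA hardware before the CPUs see them): each blade computes a partial result over its slice of the data, and the host merges the partials.

```python
def parallel_sum(rows, n_blades=4):
    """Two-phase MPP-style aggregation sketch: each 'blade' sums its slice
    of the data, then the host combines the per-blade partial results."""
    slices = [rows[i::n_blades] for i in range(n_blades)]   # distribute data
    partials = [sum(s) for s in slices]                     # per-blade work
    return sum(partials)                                    # host-side merge

total = parallel_sum(list(range(1, 101)))
```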
This document provides an overview of Oracle database concepts including physical and logical structures, the system global area (SGA) and program global area (PGA), background processes, and the computer science database instance details. Specifically, it describes datafiles, control files, redo logs, tablespaces, segments, and schemas as logical structures and explains how the SGA contains the database buffer cache, redo log buffer, and shared pool. It also outlines several important background processes like SMON, PMON, DBWR, LGWR, and CKPT.
7+ years of experience in analysis, design, development, implementation, and administration/support of Data Warehousing, Reporting, and Client/Server applications using Oracle Business Intelligence Enterprise Edition (OBIEE), ODI, Informatica, and Tableau.
Extensive experience in OBIEE Administration Tool, OBIEE Answers, OBIEE Intelligent Dashboards, BI Publisher and OBIEE Delivers with Siebel Analytics/OBIEE stand-alone/integrated Applications.
Proficient in developing the OBIEE Repository at the Physical, Business and Presentation layers (data modeling); Time Series objects; interactive dashboards with drill-down and drill-across capabilities using global and local filters; OBIEE/Siebel security setup (users/groups, access/query privileges); configuring OBIEE/Analytics metadata objects (Subject Area, Table, Column) and Presentation Services/Web Catalog objects (Dashboards, Pages, Folders, Reports); scheduling iBots/Schedulers; and the OBI Cluster Controller.
This document provides a collection of 17 frequently asked questions (FAQs) about Oracle database concepts. It includes concise definitions and explanations of key terms such as Oracle, Oracle database, Oracle instance, parameter file, system global area, program global area, user account, schema, user role, and more. It also provides sample scripts and is intended as a learning and interview preparation guide for Oracle DBAs.
SQL Server 2016 includes several new features such as columnstore indexes, in-memory OLTP, live query statistics, temporal tables, and row-level security. It also features improved manage backup functionality, support for multiple tempdb files, and new ways to format and encrypt query results. Advanced capabilities like PolyBase and Stretch Database further enhance analytics and management of historical data.
Oracle8i introduces locally-managed tablespaces that allow the database to automatically manage space allocation through the use of internal bitmaps, eliminating the need for manual space management by DBAs. Locally-managed tablespaces simplify space management operations by setting or clearing bits rather than updating data dictionary tables, improving performance and scalability. They also automatically track adjacent free space and allow extents to be allocated and reused with no fragmentation or wasted space. The new feature frees DBAs from frequently managing tablespace sizes and reacting to fragmentation issues.
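The bitmap mechanism described above can be sketched with a toy allocator (an illustration of the principle only; Oracle's extent bitmaps live in the tablespace's datafiles and handle uniform and autoallocated extent sizes): one bit per extent, where allocating or freeing space is just a bit flip rather than a data-dictionary update.

```python
class BitmapTablespace:
    """Toy locally-managed tablespace: one bit per extent, set = in use.
    Allocation and deallocation flip bits, which is why bitmap-based space
    management avoids updating data dictionary tables."""
    def __init__(self, n_extents):
        self.bits = [0] * n_extents
    def allocate(self):
        i = self.bits.index(0)   # first free extent
        self.bits[i] = 1
        return i
    def free(self, i):
        self.bits[i] = 0
    def free_extents(self):
        return self.bits.count(0)

ts = BitmapTablespace(8)
a = ts.allocate()
b = ts.allocate()
ts.free(a)
c = ts.allocate()   # the freed extent is reused immediately: no fragmentation
```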
The document discusses string manipulation in the Pascal programming language. It explains string declaration, the concatenation operation for joining strings, string comparison using relational operators, accessing individual characters within a string, and standard string-handling functions such as Concat, Length, and Pos.
This document lists nine main sources of energy, including petroleum, coal, hydroelectric power, wave energy, geothermal energy, biomass, solar energy, nuclear energy, and wind energy. Each source is briefly described in terms of its origin and use.
This document discusses personalized learning, what it is, and what it isn't. It presents a logic model and core components for a personalized learning initiative. The fundamental understanding is that learning is a personal and autonomous process. The goal is to increase learning through more frequent and consistent connections between educators and learners. This is achieved by building shared knowledge of the learner, co-designing learning paths, and developing mutually agreed upon learning goals. The results should be learners with greater efficacy, ownership, and capacity for learning who are more engaged and prepared. Core components include innovation platforms, learning and teaching relationships and roles, and flexible structures and policies. Emerging results provide evidence of impacts on traditional measures, behaviors, engagement, and commitment
The resume summarizes Dana M. Jones' experience in warehouse work, security, data entry, and administrative duties. Jones has over 20 years of experience in these areas, including for the U.S. Army and various companies. Core skills include Microsoft Office, security, claims processing, and customer service.
Holy Week is one of the most significant moments of the Christian liturgical calendar. It begins with Palm Sunday, which celebrates Jesus's triumphal entry into Jerusalem. Holy Thursday commemorates the Last Supper and the washing of the feet. Good Friday recalls the Passion and death of Jesus Christ. Holy Saturday is a day of silence and waiting. Easter Sunday celebrates the resurrection of Jesus with joy and renewal…
What if you could take all the repetitive tasks that your technicians handles on a daily basis and put 90% of them on autopilot? Without automation, MSPs are limited to managing a small handful of customers per technician. On this webcast we want to show you how you can automate the majority of your business allowing each tech to manage up to 3x the number of customers.
Preservação da informação na biblioteca digitalCariniana Rede
1. O documento discute bibliotecas digitais e serviços de preservação digital, incluindo desafios como obsolescência tecnológica.
2. É destacada a importância da acessibilidade permanente, arquivamento confiável e preservação digital para garantir o acesso a longo prazo.
3. Diferentes estratégias de preservação são descritas, como cópias de bits, migração, emulação e armazenamento distribuído.
The document announces the Women in Sales Awards taking place in December 2013. It provides a website and phone number for those interested in nominating or learning more about the awards. The awards are supported by media partners Sales Initiative Magazine and i.
Este documento resume la teoría de la acción comunicativa de Jürgen Habermas. Habermas redefine los postulados epistemológicos y metodológicos de las ciencias sociales basándose en la fenomenología de Husserl y la filosofía del lenguaje de Wittgenstein. Define la acción social en términos comunicativos y racionales, conceptualizando la racionalidad para propósitos comunicativos y el proceso evolutivo de la racionalización sociocultural. Finalmente, presenta el paradigma comunicativo para la reconstrucción
Preservação Digital da Informação Técnico CientíficaCariniana Rede
1) O documento discute as definições, necessidades e sistemas de preservação digital de informação técnico-científica.
2) É analisada a produção documental sobre preservação digital no contexto da gestão da informação científica.
3) A preservação digital é entendida como um componente do ciclo de vida da informação digital que envolve procedimentos de armazenamento, cópia e migração para assegurar a integridade da informação a longo prazo.
Hiking tours 2013
Jezra Travel and Jordan offer many possibilities for hiking trails and tracks. We have selected the most beautiful hiking trails and planned dates for 2013 in which you can join.
11 Days program with hike from Dana to Petra:
Discover Jordan by Foot! During this 11 day program you'll make a spectacular hike from Dana to Petra, visit Wadi Rum and end your stay with relaxation at a Dead Sea Spa. An ideal combination for active guests visiting Jordan!
Click here for the full program brochure including prices, dates and pictures!
8 Days program with hikes in Wadi Rum
Step by step through Wadi Rum! During this program you'll hike along the most interesting locations in the breathtaking Wadi Rum desert. Together with an English speaking and experienced guide you'll hike your way through the desert. End your adventure with a luxurious stay at the Dead Sea beach.
La Semana Santa conmemora los misterios de la salvación realizados por Cristo antes de su muerte y resurrección. Comienza con el Domingo de Ramos, cuando Jesús entra triunfalmente en Jerusalén. El Jueves Santo se celebra la Última Cena y el lavado de los pies. El Viernes Santo se conmemora la pasión y muerte de Cristo. El Sábado Santo es un día de silencio y espera. El Domingo de Pascua se celebra la resurrección de Cristo con alegría y renovación de
O documento discute a importância da preservação digital confiável em repositórios, destacando modelos como o OAIS e a necessidade de certificação. Ele fornece detalhes sobre iniciativas para desenvolver critérios de certificação para repositórios digitais confiáveis baseados no Audit Checklist.
El documento explica cómo resolver una sustracción con números negativos en dos pasos: 1) Cambiar el signo del sustraendo si es negativo, y 2) Resolver la sustracción resultante. Por ejemplo, (5 - (-3)) se resuelve como (5 + 3) = 8.
SQL Azure Database is a cloud database service from Microsoft. SQL Azure provides web-facing database functionality as a utility service. Cloud-based database solutions such as SQL Azure can provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. This paper provides an overview on some scale out strategies, challenges with scaling out on-premise and how you can benefit with scaling out with SQL Azure.
- Database tables are becoming very large, sometimes terabytes in size, which makes them difficult to manage.
- SQL Server 2005 introduced native table partitioning capabilities that allow databases to split large tables into smaller partitions that are easier to manage.
- Partitioning a table by dates, such as weekly, is common because data is often loaded and queried based on time periods. This allows maintenance and queries to focus on individual partitions rather than the entire large table.
This document discusses index-organized tables in Oracle8i. Index-organized tables store the entire contents of a table in an index structure, allowing both indexed and non-indexed columns to be retrieved with a single index access. This provides faster access times for queries using primary keys compared to conventional tables. The document outlines several applications that can benefit from index-organized tables, such as OLTP, e-commerce, and data warehousing applications involving large amounts of data accessed via primary keys. It also summarizes the results of a performance study showing index-organized tables outperforming conventional tables for primary key access.
Data Warehouse Physical Design,Physical Data Model, Tablespaces, Integrity Constraints, ETL (Extract-Transform-Load) ,OLAP Server Architectures, MOLAP vs. ROLAP, Distributed Data Warehouse ,
Managing large chain of Hotels and ERP database comprises of core areas such as HRMS & PIP.HRMS (Human Resource Management System), which further includes areas such as Soft Joining, Promotion, Transfer, Confirmation, Leave Attendance and Exit, etc. PIP (Payroll Information Portal), wherein employees can view their individual Salary details, submit investment declaration, Reimbursement claim & CTC structuring, etc. Management of Large Chain of Hotels and ERP Database in AWS Cloud involves continuous monitoring with regards to the areas such as Performance of resource usages and optimization techniques relating to the use of PL/SQL. High Availability (HA) of data is accomplished through the Backup and Recovery mechanism and security of the data by Encryption & Decryption mechanism.
As customer areas require more and more details to remain competitive, it has dropped to data base designers and directors to help ensure that the details are handled effectively and can be recovered for research efficiently.
The document summarizes techniques for optimizing database performance across different platforms as a high performance DBA. It discusses strategies for storage management, performance management, and capacity management. Embarcadero products like Performance Center and DBArtisan with Space Analyst are presented as tools to help automate monitoring and diagnosis of storage issues and performance bottlenecks across databases.
This document discusses features of various Oracle database releases including 8i, 9i, 10g, and 11g. It provides overviews of new capabilities in areas like interMedia, spatial, partitioning, availability, data warehousing, and performance. Graphs show Oracle's market share dominance over IBM and Microsoft. The document also outlines Oracle's strategies for .NET integration on Windows and grid computing.
A Survey And Comparison Of Relational And Non-Relational DatabaseKarla Adamson
This document provides a summary and comparison of relational and non-relational databases. It begins with an introduction describing the purpose and organization. The main sections describe the key aspects of relational databases, including their structure, tools like MySQL and Oracle, and shortcomings. Non-relational databases are then described, including different types (document stores, key-value stores, etc.), advantages over relational databases, and their own shortcomings. Comparisons are drawn between relational and non-relational databases and their common tools.
SURVEY ON IMPLEMANTATION OF COLUMN ORIENTED NOSQL DATA STORES ( BIGTABLE & CA...IJCERT JOURNAL
NOSQL is a database provides a mechanism for storage and retrieval of data that is modeled for huge amount of data which is used in big data and Cloud Computing . NOSQL systems are also called "Not only SQL" to emphasize that they may support SQL-like query languages. A basic classification of NOSQL is based on data model; they are like column, Document, Key-Value etc. The objective of this paper is to study and compare the implantation of various column oriented data stores like Bigtable, Cassandra.
Oracle 11G introduces several new features including Flashback Data Archive for extended data recovery, Database Replay for testing system changes, SQL Performance Analyzer for comparing SQL statement performance before and after changes, and Automatic Diagnostic Repository for proactive health checking and problem resolution. Other new features include online patching, simplified memory management with a single MEMORY_TARGET parameter, enhanced SQL Access Advisor, virtual columns, invisible indexes, and transparent tablespace encryption.
Whitepaper - Information management with oracleinfoMENTUM
The operation of corporations, enterprises, and other organisations relies on the management, understanding and efficient use of vast amounts of information. Increasingly, business value and operations depend on management, analysis and understanding of information that is not readily accessible without human or machine based interpretation.
This document provides an introduction to NoSQL databases. It discusses that NoSQL databases are non-relational, do not require a fixed table schema, and do not require SQL for data manipulation. It also covers characteristics of NoSQL such as not using SQL for queries, partitioning data across machines so JOINs cannot be used, and following the CAP theorem. Common classifications of NoSQL databases are also summarized such as key-value stores, document stores, and graph databases. Popular NoSQL products including Dynamo, BigTable, MongoDB, and Cassandra are also briefly mentioned.
This document discusses SQL Server table partitioning and provides guidance on when it is helpful to use partitioning. It describes the key concepts of partitioning such as partition functions, ranges, schemes and switching partitions. It also outlines some of the fine print around limitations, parallelism, locking and maintenance. The document concludes that the client should use partitioning if their workload exhibits queries by region, they can optimize queries for it, have the disk and memory resources to support it and can test it adequately.
This document discusses partitioning in OBIEE 11g to improve query performance. It describes three types of partitioning: fact-based partitioning which stores data in separate tables by dimension like year; level-based partitioning which stores aggregated data at different levels in separate tables; and value-based partitioning which splits data into tables by column values. It provides steps for creating requests against partitioned data, including importing metadata, creating physical joins, mapping sources, and checking the results and query logs.
1. Briefly describe the major components of a data warehouse archi.docxmonicafrancis71118
1. Briefly describe the major components of a data warehouse architecture?
Components in data warehouse
Data warehouse contains the collection of data that are used for decision making and used business intelligence.
· It is a subject-oriented, integrated, time- variant, and non-updateable data.
· Three components in the architecture of the data warehouse are
· Operational data
· Reconciled data
· Derived data
Diagrammatic representation of architecture of data warehouse is shown below:
Components in the data warehouse architecture:
Operational data:
· It maintains the data from the operational system throughout the organization.
Reconciled data
· It is a data stored in the enterprise data warehouse and an operational data store.
· it contains a current and detailed data and authoritative sources for decision support application.
Derived data
· Derives data is a data obtained from the data mart that is used for the end user decision support application.
· It contains the selected, formatted, and aggregated data.
· It is the data stored in every mart.
Types of metadata in the data warehouse architecture:
There are three types of metadata. They are,
· Operational metadata.
· Enterprises data warehouse (EDW)metadata.
· Data mart metadata.
Operational metadata:
It describes the data in the operational system that provides for the enterprise data warehouse.
It is available in various formats, but the quality is poor.
Enterprises data warehouse (EDW)metadata:
It describes the data of reconciled layer.
It provides the rules for converting the operational data into reconciled data.
It extracts from the enterprise data model.
Data mart metadata:
It describes the data of derived data layer.
It provides the rules for converting the reconciled data into derived data.
2. Explain how the volatility of a data warehouse is different from the volatility of a database for an operational information system?
Data warehouse
· Data warehouse contains the collection of data that are used for decision making and used business intelligence.
· It is a unique kind of database, so it focuses on business intelligence, time variant data, and external data.
· The term data warehouse usually denotes to the grouping of many different database across an entire enterprise.
· It is a subject-oriented, integrated, time- variant, and non-updateable data.
Operational database:
An operational database is the database which is usually accessed and restructured on a regular basis and generally handles the daily transactions for a business.
It is used to manage the dynamic data and modification in the real-time data.
Volatility of a data warehouse and operational database:
A key dissimilarity between a data warehouse and an operational system is the data stored type.
Data warehouse is based on the use of periodic data operational system is based on the use of the transient data.
A change in the existing record present in the stores that overwrites the previous reco.
Data management in cloud study of existing systems and future opportunitiesEditor Jacotech
This document discusses data management in cloud computing and provides an overview of existing NoSQL database systems and their advantages over traditional SQL databases. It begins by defining cloud computing and the need for scalable data storage. It then discusses key goals for cloud data management systems including availability, scalability, elasticity and performance. Several popular NoSQL databases are described, including BigTable, MongoDB and Dynamo. The advantages of NoSQL systems like elastic scaling and easier administration are contrasted with some limitations like limited transaction support. The document concludes by discussing opportunities for future research to improve scalability and queries in cloud data management systems.
Collaborate 2009 - Migrating a Data Warehouse from Microsoft SQL Server to Or...djkucera
The document discusses strategies for migrating a data warehouse from Microsoft SQL Server to Oracle 11g, including:
1) Gaining management buy-in by presenting metrics showing the need for migration and tying benefits to business goals.
2) Using Oracle technologies like Transparent Gateway and Stored Procedure wrappers to provide interim access to legacy data and applications during the multi-stage migration process.
3) Employing Oracle Streams Heterogeneous Replication to keep data synchronized between the legacy and new Oracle data warehouses during migration, reducing disruption to users and ETL processes.
EOUG95 - Client Server Very Large Databases - PaperDavid Walker
The document discusses building large scaleable client/server solutions. It describes breaking the solution into four server components: database server, application server, batch server, and print server. It focuses on the database server, discussing how to make it resilient through clustering and scaleable by partitioning applications and using parallel query options. It also covers backup and recovery strategies.
This document provides an overview of Oracle Data Integrator and how it can be used to integrate data across heterogeneous databases and platforms in a service-oriented architecture. Specifically, it summarizes:
1) How Oracle Data Integrator uses a modular repository and graphical modules to design and execute integration processes that can integrate data from various sources like databases, files, and web services.
2) An example of using Oracle Data Integrator to integrate orders and customer data from an Oracle database with employee data from a file, and load it into a SQL Server database in near real-time using Oracle's change data capture functionality.
3) How the example creates journals to capture changed data from the Oracle source, builds interfaces to
Similar to Partitioning 11g-whitepaper-159443 (20)
Securing BGP: Operational Strategies and Best Practices for Network Defenders...APNIC
Md. Zobair Khan,
Network Analyst and Technical Trainer at APNIC, presented 'Securing BGP: Operational Strategies and Best Practices for Network Defenders' at the Phoenix Summit held in Dhaka, Bangladesh from 23 to 24 May 2024.
Integrating Physical and Cybersecurity to Lower Risks in Healthcare!Alec Kassir cozmozone
The contemporary hospital setting is witnessing a growing convergence between physical security and cybersecurity. Because of advancements in technology and the rise in cyberattacks, healthcare facilities face unique challenges.
Discover the benefits of outsourcing SEO to Indiadavidjhones387
"Discover the benefits of outsourcing SEO to India! From cost-effective services and expert professionals to round-the-clock work advantages, learn how your business can achieve digital success with Indian SEO solutions.
Honeypots Unveiled: Proactive Defense Tactics for Cyber Security, Phoenix Sum...APNIC
Adli Wahid, Senior Internet Security Specialist at APNIC, delivered a presentation titled 'Honeypots Unveiled: Proactive Defense Tactics for Cyber Security' at the Phoenix Summit held in Dhaka, Bangladesh from 23 to 24 May 2024.
HijackLoader Evolution: Interactive Process HollowingDonato Onofri
CrowdStrike researchers have identified a HijackLoader (aka IDAT Loader) sample that employs sophisticated evasion techniques to enhance the complexity of the threat. HijackLoader, an increasingly popular tool among adversaries for deploying additional payloads and tooling, continues to evolve as its developers experiment and enhance its capabilities.
In their analysis of a recent HijackLoader sample, CrowdStrike researchers discovered new techniques designed to increase the defense evasion capabilities of the loader. The malware developer used a standard process hollowing technique coupled with an additional trigger that was activated by the parent process writing to a pipe. This new approach, called "Interactive Process Hollowing", has the potential to make defense evasion stealthier.
NOTE:
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Partitioning in Oracle Database 11g Page 2
Partitioning in Oracle Database 11g

Note ........................................................ 2
Partitioning – Concepts ..................................... 5
Introduction ................................................ 5
Benefits of Partitioning .................................... 5
Basics of Partitioning ...................................... 5
Partitioning for Manageability .............................. 7
Partitioning for Performance ................................ 7
Partitioning for Availability ............................... 8
Partitioning – Modeling for your Business ................... 9
Basic Partitioning Strategies ............................... 9
Partitioning Extensions ..................................... 10
Partition Advisor ........................................... 11
Partitioning Strategies and Extensions at a Glance .......... 12
Information Lifecycle Management with Partitioning .......... 12
Conclusion .................................................. 13
Partitioning in Oracle Database 11g

PARTITIONING – CONCEPTS

INTRODUCTION

Oracle Partitioning, first introduced in Oracle 8.0 in 1997, is one of the most important and successful functionalities of the Oracle database, improving the performance, manageability, and availability of tens of thousands of applications. Oracle Database 11g introduces the 8th generation of partitioning, which continues to offer ground-breaking new and enhanced functionality: new partitioning techniques enable customers to model even more business scenarios, while a completely new framework of partition advice and automation makes Oracle Partitioning usable for everybody. Oracle Database 11g is considered the biggest new release for partitioning since its first introduction, continuing to protect our customers' investment in partitioning for a decade.
BENEFITS OF PARTITIONING

Partitioning can provide tremendous benefits to a wide variety of applications by improving manageability, performance, and availability. It is not unusual for partitioning to improve the performance of certain queries or maintenance operations by an order of magnitude. Moreover, partitioning can greatly reduce the total cost of data ownership, using a "tiered archiving" approach that keeps older but still relevant information online on low-cost storage devices. Oracle Partitioning enables an efficient and simple, yet very powerful approach when considering Information Lifecycle Management for large environments.

Partitioning also enables database designers and administrators to tackle some of the toughest problems posed by cutting-edge applications. Partitioning is a key tool for building multi-terabyte systems or systems with extremely high availability requirements.
Basics of Partitioning

Partitioning allows a table, index, or index-organized table to be subdivided into smaller pieces. Each piece of the database object is called a partition. Each partition has its own name, and may optionally have its own storage characteristics. From the perspective of a database administrator, a partitioned object has multiple pieces that can be managed either collectively or individually, giving the administrator considerable flexibility in managing a partitioned object. However, from the perspective of the application, a partitioned table is identical to a non-partitioned table; no modifications are necessary when accessing a partitioned table using SQL DML commands.

Figure 1: Application and DBA perspective of a partitioned table
Database objects - tables, indexes, and index-organized tables - are partitioned using a 'partitioning key', a set of columns which determines in which partition a given row will reside. For example, the sales table shown in Figure 1 is range-partitioned on sales date, using a monthly partitioning strategy; the table appears to any application as a single, 'normal' table. However, the DBA can manage and store each monthly partition individually, potentially using different storage tiers, applying table compression to the older data, or storing complete ranges of older data in read-only tablespaces.
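The monthly range partitioning described for the sales table in Figure 1 might be declared as follows. This is a minimal sketch: the column names, partition names, and date boundaries are assumptions for illustration, not taken from the paper.

```sql
-- Hypothetical sales table, range-partitioned by month on the sale date.
-- Older partitions could later be compressed or moved to cheaper storage.
CREATE TABLE sales
( prod_id    NUMBER
, cust_id    NUMBER
, sale_date  DATE
, amount     NUMBER(10,2)
)
PARTITION BY RANGE (sale_date)
( PARTITION sales_jan_2007 VALUES LESS THAN (TO_DATE('01-02-2007','DD-MM-YYYY'))
, PARTITION sales_feb_2007 VALUES LESS THAN (TO_DATE('01-03-2007','DD-MM-YYYY'))
, PARTITION sales_mar_2007 VALUES LESS THAN (TO_DATE('01-04-2007','DD-MM-YYYY'))
);
```

Note that each partition only declares its upper boundary; the lower boundary is implied by the preceding partition.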
Irrespective of the chosen index partitioning strategy, an index is either coupled or uncoupled with the partitioning strategy of the underlying table. The appropriate index partitioning strategy is chosen based on the business requirements, making partitioning well suited to support any kind of application.
Oracle Database 11g differentiates between three types of partitioned indexes.

Local Indexes: A local index is an index on a partitioned table that is coupled with the underlying partitioned table, 'inheriting' the partitioning strategy from the table. Consequently, each partition of a local index corresponds to one - and only one - partition of the underlying table. The coupling enables optimized partition maintenance; for example, when a table partition is dropped, Oracle simply has to drop the corresponding index partition as well. No costly index maintenance is required. Local indexes are most common in data warehousing environments.
Global Partitioned Indexes: A global partitioned index is an index on a partitioned or non-partitioned table that is partitioned using a different partitioning key or partitioning strategy than the table. Global partitioned indexes can be partitioned using range or hash partitioning and are uncoupled from the underlying table. For example, a table could be range-partitioned by month and have twelve partitions, while an index on that table could be range-partitioned using a different partitioning key and have a different number of partitions. Global partitioned indexes are more common for OLTP than for data warehousing environments.

Global Non-Partitioned Indexes: A global non-partitioned index is essentially identical to an index on a non-partitioned table. The index structure is not partitioned and is uncoupled from the underlying table. In data warehousing environments, the most common usage of global non-partitioned indexes is to enforce primary key constraints; OLTP environments, on the other hand, mostly rely on global non-partitioned indexes.
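For a sales table range-partitioned by month as in Figure 1, the three index types could be declared roughly as follows. The index, table, and column names are illustrative assumptions, not from the paper.

```sql
-- Local index: one index partition per table partition; an index
-- partition is dropped automatically with its table partition.
CREATE INDEX sales_time_ix ON sales (sale_date) LOCAL;

-- Global partitioned index: uses its own partitioning key and strategy,
-- uncoupled from the table's monthly ranges.
CREATE INDEX sales_cust_gix ON sales (cust_id)
  GLOBAL PARTITION BY HASH (cust_id) PARTITIONS 4;

-- Global non-partitioned index: a single index structure spanning all
-- table partitions, commonly used to enforce a primary key.
CREATE UNIQUE INDEX sales_pk_ix ON sales (sale_id);
```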
Oracle additionally provides a comprehensive set of SQL commands for managing partitioned tables. These include commands for adding new partitions, and for dropping, splitting, moving, merging, truncating, and optionally compressing partitions.
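A few of these partition maintenance commands, sketched against a hypothetical monthly range-partitioned sales table (all object names are assumptions):

```sql
-- Add a partition for the next month.
ALTER TABLE sales ADD PARTITION sales_apr_2007
  VALUES LESS THAN (TO_DATE('01-05-2007','DD-MM-YYYY'));

-- Split an existing partition at a boundary value.
ALTER TABLE sales SPLIT PARTITION sales_mar_2007
  AT (TO_DATE('15-03-2007','DD-MM-YYYY'))
  INTO (PARTITION sales_mar_a, PARTITION sales_mar_b);

-- Merge two adjacent range partitions back together.
ALTER TABLE sales MERGE PARTITIONS sales_mar_a, sales_mar_b
  INTO PARTITION sales_mar_2007;

-- Move an older partition to a low-cost tablespace and compress it.
ALTER TABLE sales MOVE PARTITION sales_jan_2007
  TABLESPACE archive_ts COMPRESS;

-- Truncate or drop a partition that is no longer needed.
ALTER TABLE sales TRUNCATE PARTITION sales_feb_2007;
ALTER TABLE sales DROP PARTITION sales_feb_2007;
```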
Partitioning for Manageability

Oracle Partitioning allows tables and indexes to be partitioned into smaller, more manageable units, providing database administrators with the ability to pursue a "divide and conquer" approach to data management.

With partitioning, maintenance operations can be focused on particular portions of tables. For example, a database administrator could compress a single partition of a table containing, say, the data for the year 2006, rather than compressing the entire table. For maintenance operations across an entire database object, it is possible to perform these operations on a per-partition basis, thus dividing the maintenance process into more manageable chunks.
A typical usage of partitioning for manageability is to support a 'rolling window' load process in a data warehouse. Suppose that a DBA loads new data into a table on a weekly basis. That table could be range-partitioned so that each partition contains one week of data; the load process then becomes simply the addition of a new partition. Adding a single partition is much more efficient than modifying the entire table, since the DBA does not need to modify any other partitions.

Another advantage of partitioning appears when it is time to remove data: an entire partition can be dropped, which is very efficient and fast compared to deleting each row individually.
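The weekly rolling window described above then reduces to two fast partition operations (table and partition names are hypothetical):

```sql
-- Load: add a partition for the new week's data.
ALTER TABLE orders ADD PARTITION orders_2007_w27
  VALUES LESS THAN (TO_DATE('09-07-2007','DD-MM-YYYY'));

-- Purge: drop the oldest week in one operation instead of
-- deleting its rows individually.
ALTER TABLE orders DROP PARTITION orders_2005_w27;
```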
Partitioning for Performance

By limiting the amount of data to be examined or operated on, partitioning provides a number of performance benefits. These features include:

Partition Pruning: Partition pruning (also known as partition elimination) is the simplest and also the most substantial means to improve performance using partitioning. Partition pruning can often improve query performance by several orders of magnitude. For example, suppose an application contains an ORDERS table holding a historical record of orders, and that this table has been partitioned by week. A query requesting orders for a single week would only access a single partition of the ORDERS table. If the table had 2 years of historical data, this query would access one partition instead of 104 partitions, and could potentially execute 100x faster simply because of partition pruning. Partition pruning works with all of Oracle's other performance features: Oracle will utilize partition pruning in conjunction with any indexing technique, join technique, or parallel access method.
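A query pruned to a single weekly partition of the ORDERS table might look like the following; the predicate on the partitioning key lets the optimizer restrict the scan to the one partition whose bounds cover the range (column names are assumptions):

```sql
-- Only the weekly partition covering this date range is scanned;
-- the remaining ~103 partitions are eliminated before execution.
SELECT SUM(order_total)
FROM   orders
WHERE  order_date >= TO_DATE('02-07-2007','DD-MM-YYYY')
AND    order_date <  TO_DATE('09-07-2007','DD-MM-YYYY');
```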
Partition-wise Joins: Partitioning can also improve the performance of multi-table joins, using a technique known as partition-wise joins. Partition-wise joins can be applied when two tables are being joined together and at least one of these tables is partitioned on the join key. Partition-wise joins break a large join into smaller joins of 'identical' data sets for the joined tables. 'Identical' here means covering exactly the same set of partitioning key values on both sides of the join, thus ensuring that only a join of these 'identical' data sets will produce a result and that other data set combinations do not have to be considered. Oracle either takes advantage of tables that are already (physically) equi-partitioned for the join, or transparently redistributes ("repartitions") one table at runtime to create equi-partitioned data sets matching the partitioning of the other table, completing the overall join in less time. This offers significant performance benefits both for serial and parallel execution.
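A full partition-wise join requires both tables to be equi-partitioned on the join key; a minimal sketch, assuming hypothetical tables hash-partitioned on cust_id:

```sql
-- Both tables hash-partitioned on the join key with the same number of
-- partitions, so the join decomposes into 8 smaller, pairwise joins.
CREATE TABLE customers
( cust_id NUMBER, cust_name VARCHAR2(100) )
PARTITION BY HASH (cust_id) PARTITIONS 8;

CREATE TABLE orders
( order_id NUMBER, cust_id NUMBER, order_total NUMBER(10,2) )
PARTITION BY HASH (cust_id) PARTITIONS 8;

-- This join can be executed partition-wise, serially or in parallel.
SELECT c.cust_name, SUM(o.order_total)
FROM   customers c
JOIN   orders o ON o.cust_id = c.cust_id
GROUP  BY c.cust_name;
```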
Partitioning for Availability

Partitioned database objects provide partition independence. This characteristic can be an important part of a high-availability strategy. For example, if one partition of a partitioned table is unavailable, all of the other partitions of the table remain online and available. The application can continue to execute queries and transactions against the partitioned table, and these database operations will run successfully as long as they do not need to access the unavailable partition.

The database administrator can specify that each partition be stored in a separate tablespace; this allows the administrator to do backup and recovery operations on each individual partition, independent of the other partitions in the table. Therefore, in the event of a disaster, the database could be recovered with just the partitions comprising the active data, and the inactive data in the other partitions could then be recovered at a convenient time, decreasing total system downtime.
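Storing each partition in its own tablespace, as described, might look like the following (table and tablespace names are illustrative); each tablespace can then be backed up and recovered on its own:

```sql
CREATE TABLE sales
( sale_date DATE, amount NUMBER(10,2) )
PARTITION BY RANGE (sale_date)
( PARTITION sales_2006 VALUES LESS THAN (TO_DATE('01-01-2007','DD-MM-YYYY'))
    TABLESPACE ts_sales_2006
, PARTITION sales_2007 VALUES LESS THAN (TO_DATE('01-01-2008','DD-MM-YYYY'))
    TABLESPACE ts_sales_2007
);

-- Once a partition holds only inactive data, its tablespace can be made
-- read only and needs to be backed up just once thereafter.
ALTER TABLESPACE ts_sales_2006 READ ONLY;
```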
Moreover, partitioning can reduce scheduled downtime. The performance gains provided by partitioning may enable database administrators to complete maintenance operations on large database objects in relatively small batch windows.
PARTITIONING – MODELING FOR YOUR BUSINESS

Oracle Database 11g provides the most comprehensive set of partitioning strategies, allowing a customer to optimally align the data subdivision with the actual business requirements. All available partitioning strategies rely on fundamental data distribution methods that can be used for either single (one-level) or composite partitioned tables. Furthermore, Oracle provides a variety of partitioning extensions, increasing the flexibility of partitioning key selection, providing automated partition creation as needed, and advising on partitioning strategies for non-partitioned objects.
Basic Partitioning Strategies
Oracle Partitioning offers three fundamental data distribution methods that control
how the data is actually going to be placed into the various individual partitions,
namely:
Range: The data is distributed based on a range of values of the
partitioning key (for a date column as the partitioning key, the 'January-
2007' partition contains rows with partitioning-key values between
'01-JAN-2007' and '31-JAN-2007'). The data distribution is a continuum
without any holes, and the lower boundary of a range is automatically
defined by the upper boundary of the preceding range.
List: The data distribution is defined by a list of values of the partitioning
key (for a region column as the partitioning key, the 'North America'
partition may contain values 'Canada', 'USA', and 'Mexico'). A special
'DEFAULT' partition can be defined to catch all values for a partition key
that are not explicitly defined by any of the lists.
Hash: A hash algorithm is applied to the partitioning key to determine
the partition for a given row. Unlike the other two data distribution
methods, hash does not provide any logical mapping between the data
and any partition.
Using the above-mentioned data distribution methods, a table can be partitioned
as either a single-level or a composite partitioned table:
Single (one-level) Partitioning: A table is defined by specifying one of
the data distribution methodologies, using one or more columns as the
partitioning key. For example, consider a table with a number column as
the partitioning key and two partitions 'less_than_five_hundred' and
'less_than_thousand'; the 'less_than_thousand' partition contains rows
where the following condition is true: 500 <= partitioning key < 1000.
You can specify Range, List, and Hash partitioned tables.
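The single-level range example above can be sketched in DDL as follows; the table and column names are hypothetical:

```sql
-- Single-level range partitioning on a number column, mirroring
-- the 'less_than_five_hundred' / 'less_than_thousand' example.
CREATE TABLE measurements (
  id    NUMBER,
  value NUMBER
)
PARTITION BY RANGE (value) (
  PARTITION less_than_five_hundred VALUES LESS THAN (500),
  PARTITION less_than_thousand     VALUES LESS THAN (1000)
);
```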
Composite Partitioning: A combination of two data distribution
methods is used to define a composite partitioned table. First, the table
is partitioned by one data distribution method, and then each partition is
further subdivided into subpartitions using a second data distribution
method. All sub-partitions for a given partition together represent a
logical subset of the data. For example, a range-hash composite
partitioned table is first range-partitioned, and then each individual range-
partition is further sub-partitioned using the hash partitioning technique.
Available composite partitioning techniques are range-hash, range-list,
range-range, list-range, list-list, and list-hash.
Index-organized tables (IOTs) can be partitioned using range, hash, and
list partitioning. Composite partitioning is not supported for IOTs.
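A range-hash composite table of the kind described above can be sketched as follows; the table, column, and partition names are hypothetical:

```sql
-- Range-hash composite: first range partitioned by order_date,
-- then each range partition subdivided into 4 hash subpartitions
-- on customer_id.
CREATE TABLE orders (
  order_id    NUMBER,
  customer_id NUMBER,
  order_date  DATE
)
PARTITION BY RANGE (order_date)
SUBPARTITION BY HASH (customer_id) SUBPARTITIONS 4 (
  PARTITION p_2007_q1 VALUES LESS THAN (DATE '2007-04-01'),
  PARTITION p_2007_q2 VALUES LESS THAN (DATE '2007-07-01')
);
```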
Partitioning Extensions
In addition to the basic partitioning strategies, Oracle provides partitioning
extensions. The extensions in Oracle Database 11g mainly focus on two
objectives:
(a) Enhance the manageability of a partitioned table significantly.
(b) Extend the flexibility in defining a partitioning key.
The extensions are namely:
Interval Partitioning: A new partitioning strategy in Oracle Database 11g,
interval partitioning extends the capabilities of the range method to define equi-
partitioned ranges using an interval definition. Rather than specifying individual
ranges explicitly, Oracle will create any partition automatically as needed
whenever data for a partition is inserted for the very first time. Interval
partitioning greatly improves the manageability of a partitioned table. For
example, an interval partitioned table could be defined so that Oracle creates a
new partition for every month in a calendar year; a partition is then automatically
created for 'September 2007' as soon as the first record for this month is inserted
into the database.
The available techniques for an interval partitioned table are Interval, Interval-
List, Interval-Hash, and Interval-Range.
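A monthly interval partitioned table like the one described above can be sketched as follows; the table and column names are hypothetical:

```sql
-- Interval partitioning: only the first partition is specified;
-- Oracle automatically creates a new monthly partition when data
-- for that month is inserted for the first time.
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE
)
PARTITION BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH')) (
  PARTITION p_before_2007 VALUES LESS THAN (DATE '2007-01-01')
);
```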
REF Partitioning: Oracle Database 11g allows you to partition a table by leveraging
an existing parent-child relationship. The partitioning strategy of the parent table
is inherited by its child table without the need to store the parent's partitioning
key columns in the child table. Without REF Partitioning, you have to duplicate all
partitioning key columns from the parent table in the child table if you want to
take advantage of the same partitioning strategy; REF Partitioning, on the other
hand, allows you to naturally partition tables according to the logical data model
without storing the partitioning key columns, thus reducing the manual
overhead of denormalization and saving space. REF Partitioning also
transparently inherits all partition maintenance operations that change the logical
shape of a table from the parent table to the child table. Furthermore, REF
Partitioning automatically enables partition-wise joins for the equi-partitions of
the parent and child table, improving the performance of this operation. For
example, a parent table ORDERS is range partitioned on the ORDER_DATE
column; its child table ORDER ITEMS does not contain the ORDER_DATE
column but can be partitioned by reference to the ORDERS table. If the ORDERS
table is partitioned by month, all order items for orders in 'Jan-2007' will then be
stored in a single partition in the ORDER ITEMS table, equi-partitioned with the
parent table ORDERS. If a partition 'Feb-2007' is added to the ORDERS table,
Oracle will transparently add the equivalent partition to the ORDER ITEMS table.
All basic partitioning strategies are available for REF Partitioning.
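The ORDERS / ORDER ITEMS example can be sketched as follows; the exact column and constraint names are hypothetical:

```sql
-- Parent table, range partitioned by order_date.
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE
)
PARTITION BY RANGE (order_date) (
  PARTITION p_jan_2007 VALUES LESS THAN (DATE '2007-02-01')
);

-- Child table inherits the partitioning through the foreign key;
-- order_date is not stored in the child table.
CREATE TABLE order_items (
  order_id NUMBER NOT NULL,
  item_id  NUMBER,
  CONSTRAINT fk_order
    FOREIGN KEY (order_id) REFERENCES orders (order_id)
)
PARTITION BY REFERENCE (fk_order);
```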
Virtual column-based Partitioning: In previous versions of Oracle, a table could
only be partitioned if the partitioning key physically existed in the table. Virtual
columns, a new functionality in Oracle Database 11g, remove that restriction
and allow the partitioning key to be defined by an expression, using one or more
existing columns of a table, and storing the expression as metadata only.
Partitioning has been enhanced to allow a partitioning strategy to be defined on
virtual columns, thus enabling a more comprehensive match with the business
requirements. It is not uncommon to see columns being overloaded with
information; for example, a 10-digit account ID can include the account branch
as the leading three digits. With the extension of virtual column-based
partitioning, an ACCOUNTS table containing a column ACCOUNT_ID
can be extended with a virtual (derived) column ACCOUNT_BRANCH,
derived from the first three digits of the ACCOUNT_ID column, which becomes
the partitioning key for this table.
Virtual column-based partitioning is supported with all basic partitioning
strategies.
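The ACCOUNTS example can be sketched as follows; the derivation expression and partition names are hypothetical:

```sql
-- ACCOUNT_BRANCH is a virtual column holding the leading three
-- digits of the 10-digit ACCOUNT_ID; it exists as metadata only
-- and serves as the partitioning key.
CREATE TABLE accounts (
  account_id     NUMBER(10) NOT NULL,
  account_branch AS (TRUNC(account_id / 10000000))
)
PARTITION BY LIST (account_branch) (
  PARTITION p_branch_101 VALUES (101),
  PARTITION p_branch_102 VALUES (102),
  PARTITION p_other      VALUES (DEFAULT)
);
```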
Partition Advisor
The SQL Access Advisor in Oracle Database 11g has been enhanced to generate
partitioning recommendations, in addition to the ones it already provides for
indexes, materialized views, and materialized view logs. Recommendations
generated by the SQL Access Advisor, whether for partitioning only or
holistically, will show the anticipated performance gains that will result if they are
implemented. The generated script can either be implemented manually or
submitted to a queue within Oracle Enterprise Manager.
With the extension of partitioning advice, customers can get not only
recommendations specifically for partitioning but also a more comprehensive,
holistic recommendation from the SQL Access Advisor, improving the collective
performance of SQL statements overall.
The Partition Advisor, integrated into the SQL Access Advisor, is part of Oracle's
Tuning Pack, an extra licensable option. It can be used from within Enterprise
Manager or via a command line interface.
Partitioning Strategies and Extensions at a Glance
The following table gives a conceptual overview of all available basic partitioning
strategies in Oracle Database 11g:
Range Partitioning
Data distribution: based on consecutive ranges of values.
Sample business case: Orders table range partitioned by order_date.

List Partitioning
Data distribution: based on unordered lists of values.
Sample business case: Orders table list partitioned by country.

Hash Partitioning
Data distribution: based on a hash algorithm.
Sample business case: Orders table hash partitioned by customer_id.

Composite Partitioning (Range-Range, Range-List, Range-Hash, List-List, List-Range, List-Hash)
Data distribution: based on a combination of two of the above-mentioned basic techniques of Range, List, Hash, and Interval Partitioning.
Sample business cases: Orders table range partitioned by order_date and subpartitioned by hash on customer_id; Orders table range partitioned by order_date and subpartitioned by range on shipment_date.
In addition to the available partitioning strategies, Oracle Database 11g provides
the following partitioning extensions:
Interval Partitioning (Interval, Interval-Range, Interval-List, Interval-Hash)
Partitioning key: an extension to Range Partitioning, defined by an interval providing equi-width ranges. With the exception of the first partition, all partitions are automatically created on demand when matching data arrives.
Sample business case: Orders table partitioned by order_date with a predefined daily interval, starting with '01-Jan-2007'.

REF Partitioning
Partitioning key: partitioning for a child table is inherited from the parent table through a primary key-foreign key relationship; the partitioning keys are not stored in actual columns in the child table.
Sample business case: (Parent) Orders table is range partitioned by order_date and inherits its partitioning technique to the (child) order lines table; the order_date column is present only in the parent orders table.

Virtual column-based Partitioning
Partitioning key: defined by one of the above-mentioned partitioning techniques, with the partitioning key based on a virtual column; virtual columns are not stored on disk and exist only as metadata.
Sample business case: Orders table has a virtual column that derives the sales region from the first three digits of the customer account number; the orders table is then list partitioned by sales region.
INFORMATION LIFECYCLE MANAGEMENT WITH PARTITIONING
Today's challenge of storing vast quantities of data at the lowest possible cost
can be optimally addressed using Oracle Partitioning. The independence of
individual partitions is the key enabler for addressing the online portion of a
“tiered archiving” strategy. Specifically in tables containing historical data, the
importance and access pattern of the data heavily depend on the age of the data;
Partitioning enables individual partitions (or groups of partitions) to be stored on
different storage tiers, providing different physical attributes and price points. For
example, an Orders table containing 2 years' worth of data could have only the
most recent quarter stored on an expensive high-end storage tier and keep
the rest of the table (almost 90% of the data) on an inexpensive low-cost storage
tier. Through Oracle Partitioning, storage costs can be reduced by large factors
(cost savings of 50% or more are not uncommon) without impacting end user
access, thus optimizing the cost of ownership for the stored information.
The Oracle ILM Assistant, a free tool downloadable from OTN, can illustrate
those cost savings, show you how to partition the table, and advise when it is
time to move partitions to other storage tiers.
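Moving an aged partition to a lower-cost storage tier can be sketched as follows; the table, partition, and tablespace names are illustrative:

```sql
-- Move an aged partition to a tablespace located on low-cost
-- storage; the rest of the table is unaffected.
ALTER TABLE orders MOVE PARTITION p_2006_q1
  TABLESPACE low_cost_tier;

-- Local index partitions on the moved partition become unusable
-- and must be rebuilt afterwards:
ALTER TABLE orders MODIFY PARTITION p_2006_q1
  REBUILD UNUSABLE LOCAL INDEXES;
```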
CONCLUSION
Considering the new and improved functionality for Oracle Partitioning, Oracle
Database 11g is the most significant release since the introduction of Oracle
Partitioning in 1997. In every major release, Oracle has enhanced the functionality
of Partitioning by adding new partitioning techniques, enhancing
scalability, or extending the manageability and maintenance capabilities. Oracle
plans to continue to add new partitioning techniques to ensure that an optimal
partitioning technique is available for every business requirement.
Partitioning is for everybody. Oracle Partitioning can greatly enhance the
manageability, performance, and availability of almost any database application.
Partitioning can be applied to cutting-edge applications, and indeed partitioning
can be a crucial technology ingredient in these applications' success.
Partitioning can also be applied to more commonplace database applications to
simplify the administration and reduce the costs of managing such applications.
Since partitioning is transparent to the application, it can be implemented easily
because no costly and time-consuming application changes are required.