Learn about developing hierarchical queries using Informix features such as OLAP functions, set operators (setops), and query rewrite. This presentation covers building a hierarchical data model on top of an existing relational schema in IDS. You will learn about customer scenarios for designing a hierarchical data model, gain in-depth knowledge of complex hierarchical queries, and pick up performance tips and references. The talk details how to identify hierarchical relationships and take advantage of the existing relational model.
This document discusses new query optimization features in MariaDB 10.3. It describes how MariaDB 10.3 improves on the condition pushdown introduced in 10.2 by allowing conditions to be pushed through window functions. It also explains a new "split grouping" optimization, where grouping is performed separately for each group the outer query actually needs, rather than computing all groups at once; this lets indexes be leveraged more efficiently. These optimizations can improve performance by filtering out unnecessary rows earlier in query execution.
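The idea behind pushing a condition through a window function can be illustrated with any engine that has window functions. In this SQLite sketch (the table and names are invented for the example), a filter on a PARTITION BY column gives the same result whether applied outside or inside the derived table, which is exactly what makes the pushdown safe:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (name TEXT, dept TEXT, sal INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [("Ann", "IT", 100), ("Bob", "IT", 200), ("Eve", "HR", 300)])

# Filtering outside the derived table ...
outside = con.execute("""
    SELECT name, rnk FROM (
        SELECT name, dept, RANK() OVER (PARTITION BY dept ORDER BY sal) AS rnk
        FROM emp
    ) WHERE dept = 'IT' ORDER BY name
""").fetchall()

# ... is equivalent to pushing the condition inside, because dept is a
# PARTITION BY column: dropping the other partitions cannot change ranks.
inside = con.execute("""
    SELECT name, RANK() OVER (PARTITION BY dept ORDER BY sal) AS rnk
    FROM emp WHERE dept = 'IT' ORDER BY name
""").fetchall()
print(outside, inside)
```

The optimizer performs this rewrite automatically; the point of the sketch is only that the two forms are semantically interchangeable.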
MariaDB Optimizer - further down the rabbit hole (Sergey Petrunya)
The document summarizes new features in the MariaDB 10.4 query optimizer including:
1) New default optimizer settings that take more factors into account for condition selectivity and use histograms by default.
2) Faster histogram collection using Bernoulli sampling rather than analyzing the whole data set.
3) Two new types of condition pushdown - from HAVING clauses into WHERE clauses, and into materialized IN subqueries.
Optimizer features in recent releases of other databases (Sergey Petrunya)
The document summarizes several recent optimizer features introduced in MySQL 8.0 and PostgreSQL versions:
- MySQL 8.0 introduced an iterator-based executor, hash joins, EXPLAIN ANALYZE, and optimizations for anti-joins, semi-joins, and subqueries.
- PostgreSQL improved query parallelism, added multi-column statistics, parallel index creation, and optimized non-recursive common table expressions.
- Both databases have focused on join algorithms, statistics gathering, and parallel query processing to improve performance. MySQL continues to adopt features from other databases in recent releases.
This document introduces windowing functions in Firebird 3. It explains that windowing functions allow calculations across sets of rows and provide access to values from related rows in the same table. It describes the syntax of window functions in Firebird and how they use windows to define partitions of rows and sort orders. Examples show how aggregate functions can be used as window functions to calculate moving and cumulative aggregates over window partitions.
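Aggregate-as-window-function is portable across engines; here is a minimal SQLite sketch (with an invented sales table) of the cumulative and moving aggregates the Firebird material describes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (day INTEGER, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, 10), (2, 20), (3, 30), (4, 40)])

# SUM() used as a window function: a running (cumulative) total, and a
# moving sum over the current row plus the one before it.
rows = con.execute("""
    SELECT day,
           SUM(amount) OVER (ORDER BY day) AS running_total,
           SUM(amount) OVER (ORDER BY day
               ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS moving_sum
    FROM sales
    ORDER BY day
""").fetchall()
print(rows)
```

The PARTITION BY clause (omitted here) would restart both aggregates for each partition of rows.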
Using histograms to provide better query performance in MariaDB. Histograms capture the distribution of values in columns to help the query optimizer select better execution plans. The optimizer needs statistics on data distributions to estimate query costs accurately. Histograms are not enabled by default but can be collected using ANALYZE TABLE with the PERSISTENT option. Making histograms available improves the performance of queries that have selective filters or ordering on non-indexed columns.
The document discusses various Transact-SQL functions and features including:
1) Sequence objects that generate unique numbers and can be used to automatically populate columns. String functions like CONCAT and FORMAT are demonstrated for concatenating and formatting strings.
2) Logical functions such as IIF and CHOOSE are shown. Date/time functions like DATEFROMPARTS and PARSE are used to extract dates from strings.
3) Paging results with OFFSET and FETCH is covered along with using sequences to generate unique IDs and row numbers when paging query results.
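SQLite spells the T-SQL `OFFSET ... FETCH NEXT ... ROWS ONLY` paging idiom as `LIMIT ... OFFSET ...`, but the mechanics are the same; a hedged sketch with an invented items table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO items (name) VALUES (?)",
                [(f"item{i}",) for i in range(1, 11)])

page_size, page = 3, 2  # second page of three rows

# T-SQL: ORDER BY id OFFSET 3 ROWS FETCH NEXT 3 ROWS ONLY
# SQLite spells the same idea with LIMIT/OFFSET.
page_rows = con.execute(
    "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
    (page_size, (page - 1) * page_size),
).fetchall()
print(page_rows)  # rows 4..6
```

As in T-SQL, a deterministic ORDER BY is essential: without it, pages can overlap or skip rows between requests.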
The optimizer trace provides a detailed log of the actions taken by the query optimizer. It traces the major stages of query optimization including join preparation, join optimization, and join execution. During join optimization, it records steps like condition processing, determining table dependencies, estimating rows for plans, considering different execution plans, and choosing the best join order. The trace helps understand why certain query plans are chosen and catch differences in plans that may occur due to factors like database version changes.
The document discusses single-row functions in SQL. It describes that single-row functions manipulate or return one value per row based on arguments. Examples of different types of single-row functions are provided, including character, number, date, and general functions. Specific functions like UPPER, ROUND, SYSDATE, and NVL are described along with examples of how to use them in SQL queries. Nesting of functions is also covered.
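Most of the named single-row functions have close analogues outside Oracle; in SQLite, NVL is spelled IFNULL, and functions nest the same way:

```python
import sqlite3

con = sqlite3.connect(":memory:")
row = con.execute("""
    SELECT UPPER('hello'),            -- character function
           ROUND(3.14159, 2),         -- number function
           IFNULL(NULL, 'fallback'),  -- SQLite's analogue of Oracle's NVL
           UPPER(IFNULL(NULL, 'ab'))  -- functions can be nested
""").fetchone()
print(row)
```

SYSDATE has no direct SQLite equivalent; `DATE('now')` plays a similar role but returns a text date rather than an Oracle DATE value.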
MariaDB Server 10.3 provides enhancements for temporal data support, database compatibility, and performance. Key features include:
- System versioned tables to store and query historical data at different points in time.
- Improved Oracle compatibility with features like PL/SQL parsing, packages for stored functions, sequences, and additional data types.
- Performance improvements such as adding instant columns for InnoDB and statement-based lock wait timeouts.
- Other new features include user-defined aggregate functions, compressed columns, and proxy protocol support.
This document discusses how to resolve the ORA-14098 "index mismatch" error that can occur when performing an ALTER TABLE EXCHANGE PARTITION operation. The error is caused when the indexes on the partitioned table do not match the indexes on the non-partitioned table. It provides steps to use tracing to identify the mismatched indexes, compare the indexes on both tables, and ways to work around the issue such as disabling or dropping indexes.
MariaDB Server 10.3 - Temporal Data and What's New in DB Compatibility (MariaDB plc)
MariaDB Server 10.3 (RC) introduces enhancements for temporal data support, database compatibility, performance, flexibility, and scalability. Key features include system versioned tables for querying historical data, PL/SQL compatibility for stored functions, sequences, intersect and except operators, and user-defined aggregate functions. The Spider storage engine is also updated.
Firebird 3 includes many new SQL features such as the full MERGE statement syntax from SQL 2008, window functions, regular expression support in SUBSTRING, a native BOOLEAN datatype, and improvements to cursor stability for data modification queries. Procedural SQL is also enhanced with the ability to create SQL functions, subroutines, external functions/procedures/triggers, and exception handling improvements. Additional new features include DDL changes, enhanced security options, and monitoring capabilities.
This document discusses techniques for refactoring Ruby code to follow object-oriented principles and design patterns. It provides examples of refactoring a Bhaskara equation solver class to have better encapsulation, organization and separation of concerns. It also discusses ways to make objects more collection-like and use delegation, modules and other techniques to improve code design. The overall goal is to help Ruby developers write more maintainable, understandable and "enterprise-ready" code.
The document discusses data manipulation language (DML) statements in SQL. It describes how to insert rows into a table using INSERT, update rows using UPDATE, and delete rows from a table using DELETE. It also covers transaction control using COMMIT to save changes permanently and ROLLBACK to undo pending changes back to a savepoint.
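The INSERT/UPDATE/DELETE plus COMMIT/ROLLBACK flow can be exercised directly from Python's sqlite3 module (autocommit mode is enabled so the explicit transaction statements are honored):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so the
# explicit BEGIN/COMMIT/ROLLBACK below are passed straight to SQLite.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, salary INTEGER)")

con.execute("BEGIN")
con.execute("INSERT INTO emp VALUES (1, 'Ann', 100), (2, 'Bob', 200)")
con.execute("UPDATE emp SET salary = salary + 50 WHERE name = 'Ann'")
con.execute("DELETE FROM emp WHERE name = 'Bob'")
con.execute("COMMIT")           # changes are now permanent

con.execute("BEGIN")
con.execute("DELETE FROM emp")  # pending change ...
con.execute("ROLLBACK")         # ... undone; the committed row survives

rows = con.execute("SELECT id, name, salary FROM emp").fetchall()
print(rows)
```

SQLite also supports SAVEPOINT/ROLLBACK TO for partial rollbacks within a transaction, mirroring the savepoint behavior the summary mentions.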
Laracon EU 2018: OMG MySQL 8.0 is out! are we there yet? (Gabriela Ferrara)
Sick and tired of "X technology is only good for starting out; after you do, move to Y"? Good news: you don't need to move away, you just need to get in further! In this talk, you'll learn about improvements in the newest version of the most used database in the world. What are Window Functions? How do you use CTEs? How can the new default encoding help me (emoji!)? We'll also talk about new JSON features and extended UUID support! Be prepared to drink from the firehose of what's new and awesome about MySQL 8.0.
The document describes several SQL experiments conducted to create and populate tables, apply constraints, modify schemas, and perform queries. Key points:
1) Tables were created for departments and employees, with data inserted. Describe commands showed the schemas.
2) More tables were created, drop and delete commands were used, and select queries with and without where clauses were run.
3) Schemas were altered by adding columns and modifying data types. Update commands modified existing data.
4) Primary keys, foreign keys, unique constraints and other constraints were applied to newly created tables in various experiments.
5) Select queries used aggregate functions, arithmetic operators, sorting, and nested queries, and joins were performed.
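The join, aggregate, and nested-query experiments above can be reproduced end to end; this is a small, self-contained SQLite sketch with invented department/employee data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT)")
con.execute("""CREATE TABLE emp (empno INTEGER PRIMARY KEY, ename TEXT,
                                 sal INTEGER, deptno INTEGER REFERENCES dept)""")
con.executemany("INSERT INTO dept VALUES (?, ?)", [(10, "SALES"), (20, "IT")])
con.executemany("INSERT INTO emp VALUES (?, ?, ?, ?)",
                [(1, "Ann", 100, 10), (2, "Bob", 200, 10), (3, "Eve", 300, 20)])

# Join + aggregate: total salary per department, sorted.
totals = con.execute("""
    SELECT d.dname, SUM(e.sal)
    FROM dept d JOIN emp e ON e.deptno = d.deptno
    GROUP BY d.dname
    ORDER BY d.dname
""").fetchall()

# Nested query: employees earning more than the company-wide average.
above_avg = con.execute(
    "SELECT ename FROM emp WHERE sal > (SELECT AVG(sal) FROM emp)"
).fetchall()
print(totals, above_avg)
```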
This presentation covers what SQL functionality was added in the past year or so: default expressions, functional key parts, lateral derived tables, CHECK constraints, JSON and spatial improvements, and a number of other small SQL and miscellaneous improvements.
- The document discusses histograms used for data statistics in MariaDB, MySQL, and PostgreSQL. Histograms provide compact summaries of column value distributions to help query optimizers estimate condition selectivities.
- MariaDB stores histograms in the mysql.column_stats table and collects them via full table scans. PostgreSQL collects histograms using random sampling and stores statistics in pg_stats including histograms and most common values lists.
- While both use height-balanced histograms, PostgreSQL additionally tracks most common values to improve selectivity estimates for frequent values.
- A table is a logical representation of data stored in a database. It holds data in rows and columns.
- Data Definition Language (DDL) commands like CREATE, ALTER, TRUNCATE, DROP are used to create, modify and delete database objects like tables.
- Data Manipulation Language (DML) commands like INSERT, SELECT, UPDATE, DELETE are used to query and manipulate data in existing tables.
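The DDL/DML split above can be seen in a few lines of SQLite (note that SQLite has no TRUNCATE; DELETE without a WHERE clause plays that role):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# DDL: create and modify the table definition itself.
con.execute("CREATE TABLE t (id INTEGER)")
con.execute("ALTER TABLE t ADD COLUMN name TEXT")

# DML: put data in and query it back.
con.execute("INSERT INTO t (id, name) VALUES (1, 'first')")
con.execute("UPDATE t SET name = 'renamed' WHERE id = 1")
rows = con.execute("SELECT id, name FROM t").fetchall()

# DDL again: DROP removes the object itself, not just its rows.
con.execute("DROP TABLE t")
print(rows)
```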
The MySQL Query Optimizer Explained Through Optimizer Trace (oysteing)
The document discusses the MySQL query optimizer. It begins by explaining how the optimizer works, including analyzing statistics, determining optimal join orders and access methods. It then describes how the optimizer trace can provide insight into why a particular execution plan was selected. The remainder of the document provides details on the various phases the optimizer goes through, including logical transformations, cost-based optimizations like range analysis and join order selection.
The document discusses modularization in ABAP including subroutine calls, passing parameters, function modules, function groups, and the CATCH statement. It provides examples of calling subroutines, passing different types of parameters, defining and calling function modules and groups, and using the CATCH statement to handle exceptions.
The report finds all custom objects in the R3TR system that have a development class and object type specified by the user. It displays the objects in an ALV grid or downloads the data to a CSV file. It retrieves object data from the TADIR table and transaction code data from the TSTC table, then either displays it using REUSE_ALV_GRID_DISPLAY or fills a table for downloading using GUI_DOWNLOAD.
This document discusses SQL ranking functions and provides an example of using ranking functions to assign sequential numbers, ranks, and quartile groups to rows in an Orders table ordered by quantity. It first creates the Orders table with sample data. It then uses ROW_NUMBER(), RANK(), DENSE_RANK(), and NTILE() analytical functions without a PARTITION BY clause to assign row numbers, ranks, dense ranks, and quartile groups based on ordering by quantity. The output shows the results.
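The same four ranking functions exist in most modern engines. This SQLite sketch (invented Orders data, and NTILE(2) rather than quartiles to keep it short) shows how ties affect RANK and DENSE_RANK but not ROW_NUMBER:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (orderid INTEGER PRIMARY KEY, qty INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 10), (2, 20), (3, 20), (4, 40)])

# No PARTITION BY: each function ranks over the whole table.
# orderid is added as a tiebreaker where the result must be deterministic.
rows = con.execute("""
    SELECT orderid,
           ROW_NUMBER() OVER (ORDER BY qty, orderid) AS rownum,
           RANK()       OVER (ORDER BY qty)          AS rnk,
           DENSE_RANK() OVER (ORDER BY qty)          AS dense_rnk,
           NTILE(2)     OVER (ORDER BY qty, orderid) AS half
    FROM orders
    ORDER BY qty, orderid
""").fetchall()
print(rows)
```

On the tied qty of 20, RANK repeats 2 and then skips to 4, DENSE_RANK repeats 2 and continues with 3, while ROW_NUMBER keeps counting.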
Advanced PL/SQL Optimizing for Better Performance 2016 (Zohar Elkayam)
This is the presentation I used for Oracle Week 2016 session. This includes new features from both 12cR1 and 12cR2.
Agenda:
Developing PL/SQL:
- Composite datatypes, advanced cursors, dynamic SQL, tracing, and more…
Compiling PL/SQL:
- dependencies, optimization levels, and DBMS_WARNING
Tuning PL/SQL:
- GTT, Result cache and Memory handling
- Oracle 11g, 12cR1 and 12cR2 new useful features
- SQLcl – New replacement tool for SQL*Plus (if we have time)
This document discusses features of Oracle Database 12c related to auditing and tracking changes over time. It summarizes that Oracle 12c includes flashback data archive, which allows viewing or restoring data to a previous state. This feature can be used for auditing and tracking changes made to database tables. The document also discusses how Oracle 12c captures additional context metadata with each change, including user, host, and program used, allowing more detailed tracking of changes than prior releases.
This document summarizes several performance improvements in MySQL 5.6 including index condition pushdown, multi-range read, batched key access, and persistent optimizer statistics. It provides examples of query execution with and without these optimizations, showing significant performance gains in MySQL 5.6 for queries that push index conditions down to the storage engine, read multiple index ranges sequentially, and access keys in batches for joins.
This document discusses data mining techniques in the context of the MVC model and provides examples of using SQL, including recursive queries. It describes benefits and limitations of different data mining approaches like native SQL, ORM, and SQL standards like SQL92, SQL99, and SQL2003. Examples are provided to demonstrate simple to complex SQL queries, including joining tables, unions, and excluding results. Recursive queries are explained as a way to represent hierarchical data using common table expressions.
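The recursive-CTE approach to hierarchical data mentioned above can be sketched in a few lines; SQLite is used here purely for illustration, with an invented employee/manager table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [(1, "CEO", None), (2, "VP", 1), (3, "Dev", 2), (4, "Dev2", 2)])

# Recursive common table expression: start at the root (no manager),
# then repeatedly join each level's rows to their direct reports,
# carrying a depth counter along.
rows = con.execute("""
    WITH RECURSIVE org(id, name, depth) AS (
        SELECT id, name, 0 FROM emp WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name, org.depth + 1
        FROM emp e JOIN org ON e.manager_id = org.id
    )
    SELECT name, depth FROM org ORDER BY depth, name
""").fetchall()
print(rows)
```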
In this talk, Tzach Livyatan, VP Product at ScyllaDB discusses NoSQL Data Modeling 101. He covers:
- NoSQL vs SQL data modeling
- What are partition keys, and clustering keys and how to choose them
- What are Materialized Views in NoSQL and ScyllaDB
The document provides guidance on migrating a Perl application from MySQL to PostgreSQL. It discusses prerequisites like having a good test suite. Key steps include: automated schema migration where possible by adapting MySQL schemas to PostgreSQL; making code compatible by adding a "with_db" function to abstract differences; and migrating data through tools or custom scripts. Challenges addressed include data types, indexes, dates/times, application features like locking, and ensuring the application works as intended on PostgreSQL. Proper testing at each stage is emphasized for a successful migration.
Tutorial - Learn SQL with Live Online Database (DBrow Adm)
The document provides an overview of SQL queries that can be practiced on a sample eCommerce database using an online tool. It covers basic queries including selecting columns, filtering rows, sorting results, joining tables, aggregate functions and more advanced topics such as subqueries, outer joins and regular expressions. Each example is accompanied by a link to test the query directly and view the output. The goal is to help users test and solidify their understanding of SQL.
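A couple of the covered query shapes, an outer join and an IN subquery, look like this in a minimal SQLite session (the customer/order tables are invented, not the tutorial's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total INTEGER)")
con.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Ann"), (2, "Bob")])
con.execute("INSERT INTO orders VALUES (1, 1, 99)")

# Outer join: customers with no orders still appear, with NULL order data.
rows = con.execute("""
    SELECT c.name, o.total
    FROM customers c LEFT OUTER JOIN orders o ON o.customer_id = c.id
    ORDER BY c.name
""").fetchall()

# Subquery: only customers that have at least one order.
buyers = con.execute(
    "SELECT name FROM customers WHERE id IN (SELECT customer_id FROM orders)"
).fetchall()
print(rows, buyers)
```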
The document demonstrates how to recover data from PostgreSQL database files using the pg_filedump tool. It shows extracting table data and metadata like the table schema from the heap and system catalog files. Key points extracted include:
- pg_filedump can display formatted contents of PostgreSQL files including tables, indexes, and system catalogs
- Running pg_filedump on the table file extracted the table data including column types
- Further analysis of system catalog files using pg_filedump provided the table and column names and types to fully recover the table schema
The Hidden Face of Cost-Based Optimizer: PL/SQL Specific Statistics (Michael Rosenblum)
Database statistics are not limited to tables, columns, and indexes. PL/SQL functions also have a number of associated statistics, namely costs (CPU, I/O, network), selectivity, and cardinality (for functions that return collections). These statistics have default values that only somewhat represent reality. However, these values are always used by Oracle's cost-based optimizer to build execution plans. This session uses real-life examples to illustrate how properly managed PL/SQL statistics can significantly improve execution plans. It also demonstrates that Oracle's extensible optimizer is flexible enough to support packaged functions.
Informix Warehouse Accelerator (IWA) features in version 12.1 (Keshav Murthy)
The document discusses enhancements made to Informix Warehouse Accelerator (IWA) in version 12.10. Key points include:
- IWA now supports operations like creating, deploying, loading, enabling, and disabling data marts on secondary nodes in MACH11 and high availability environments, in addition to the primary/standard server node.
- New procedures like dropPartMart and loadPartMart allow refreshing partitions in a partitioned fact table within a data mart.
- Performance of SQL queries involving UNIONs, derived tables, and DISTINCT aggregates was improved.
- Additional OLAP functions and options like NULLS FIRST/LAST in ORDER BY were added for enhanced analytical querying.
Advanced PLSQL Optimizing for Better Performance (Zohar Elkayam)
A Presentation from Oracle Week 2015 in Israel
Agenda:
• Developing PL/SQL:
o Composite Data Types: Records, Collections and Table type
o Advanced Cursors: Ref cursor, Cursor function, Cursor subquery in PL/SQL
o Bulk Binding
o Dynamic SQL – SQL Injection
o Tracing PL/SQL Execution
o Design patterns for PL/SQL: Autonomous Transactions, Invoker and Definer rights, serially_reusable code
o Triggers Improvements
• Compiling PL/SQL:
o PL/SQL Fine-Grain Dependency Management
o PLSQL_OPTIMIZE_LEVEL parameter
o PL/SQL Compile-Time Warnings and Using DBMS_WARNING package
• Tuning PL/SQL:
o Handling Packages in Memory
o Global Temporary Tables
o PL/SQL Function Result Cache and pitfalls
• Oracle Database 12c PL/SQL new features: What is new in Oracle 12c
o Language Usability Enhancements
o New Limitations
• Additional useful features, Tips and Tricks for better performance
The document discusses several topics related to SQL:
1) SQLNet compression - How ordering data in a query can significantly reduce the amount of data sent over the network by compressing repeated values. Ordering by additional columns further improves compression.
2) NULLs and indexes - There is a misconception that indexes cannot be used with queries involving NULL values, but indexes can support queries searching for NULL values.
3) Subquery caching - Repeated scalar subqueries are cached and evaluated only once to improve performance of queries containing subqueries.
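The subquery-caching point is easiest to see with an uncorrelated scalar subquery: its value is row-independent, which is what allows an engine to evaluate it once and reuse the result. A hedged SQLite illustration (the caching itself happens inside the engine and is not observable from SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (name TEXT, sal INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("Ann", 100), ("Bob", 200), ("Eve", 300)])

# The scalar subquery is uncorrelated: it yields the same value for every
# row, so an optimizer can compute it once and cache it rather than
# re-running it per row.
rows = con.execute("""
    SELECT name, sal - (SELECT AVG(sal) FROM emp) AS diff_from_avg
    FROM emp ORDER BY name
""").fetchall()
print(rows)
```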
The document describes several databases related to banking, insurance, orders, students, and books. It includes the structure of each database with table definitions and sample data. Various SQL queries are demonstrated to retrieve, update, insert and delete records in the tables to solve business problems for each database application.
Cassandra Community Webinar | Become a Super Modeler (DataStax)
Sure you can do some time series modeling. Maybe some user profiles. What's going to make you a super modeler? Let's take a look at some great techniques taken from real world applications where we exploit the Cassandra big table model to its fullest advantage. We'll cover some of the new features in CQL 3 as well as some tried and true methods. In particular, we will look at fast indexing techniques to get data faster at scale. You'll be jet setting through your data like a true super modeler in no time.
Speaker: Patrick McFadin, Principal Solutions Architect at DataStax
The document discusses different ways to avoid mutating table errors when using triggers in Oracle:
1. Use a compound trigger instead of a row-level trigger to avoid the error. A compound trigger allows initializing variables before and after statements to prevent mutating errors.
2. An example is given of a compound trigger that enforces a business rule limiting salary increases to 10% of the department average without causing a mutating error.
3. The mutating error occurs when a trigger references the table that owns the trigger, preventing the trigger from seeing changes made by its own statement.
The document describes an algorithm used to purge data from a large IBM DB2 database to reduce its size. Key steps included:
1) Exporting data from large tables to external files and reloading the tables with only valid records to remove invalid data
2) Dropping constraints and indexes from large tables to improve performance during the purge process
3) Setting integrity constraints back on tables after the purge to ensure data validity
How to Implement Distributed Data Store Philip Zhong
This document discusses the design of an XQuery engine data storage system. It covers major features like data query and storage. It describes the data flow and data structures. It also outlines rules for selectivity calculation, SQL generation, database high availability and monitoring. Performance test results are provided for different queries on large tables with and without indexes.
Building a Hierarchical Data Model Using the Latest IBM Informix Features
1. Building a Hierarchical Data Model Using the Latest IBM Informix Features
Ajaykumar Gupte
gupte@us.ibm.com
2. Agenda
● Problem of querying hierarchical data
● Hierarchical data design
● “Connect By” - keywords & pseudo columns
● Execution model
● Query transformation
3. Problem of querying hierarchical data
• Common technique of storing hierarchical data in relational tables is self-reference
– Employee-Manager
• Employee table (key – empid)
• Every employee has a manager (indicated by mgrid)
• Manager is also an employee (with a valid empid)
– Shipment
• Inbound shipment table (key – item_id)
• Each item can belong to a package (key – package_id)
• Every package is itself an item (with a valid item_id)
CREATE TABLE employee (
empid INTEGER NOT NULL PRIMARY KEY,
name VARCHAR(10),
salary DECIMAL(9, 2),
mgrid INTEGER);
CREATE TABLE inbound_shipment (
shipment_id VARCHAR(50),
item_id VARCHAR(20),
package_id VARCHAR(20),
.......
5. Characteristics/Limitations
■ Multi-step approach – requires complex application/SPL logic
■ Recursive self-join
■ Filtering/ordering/grouping requires more additions
■ Joining results with other tables becomes complex
■ Reuse amongst other applications requires
– understanding of the complex logic (data placement, etc.)
– more customization
6. Using CONNECT BY to discover data hierarchy
SELECT level as package_level, item_id, package_id
FROM inbound_shipment
START WITH item_id = 'pallet_BX505'    -- seed of recursion
CONNECT BY PRIOR item_id = package_id  -- condition of recursion
8. Hierarchical view of data
[Tree diagram of empids: root 17; its children 15 and 16; below them 10, 13, 11, 12, 14; leaves 1–9]
SELECT name, empid, mgrid
FROM emp
START WITH name = 'Goyal'
CONNECT BY PRIOR empid = mgrid

name      empid  mgrid
Goyal     16     17
Zander    11     16
McKeough  5      11
Barnes    6      11
Henry     12     16
O'Neil    7      12
Smith     8      12
Shoeman   9      12
Scott     14     16
9. Flow of Execution
SELECT name, empid, mgrid
FROM emp
START WITH name = 'Goyal'
CONNECT BY PRIOR empid = mgrid
[Diagram: the same empid tree (root 17; children 15 and 16; then 10, 13, 11, 12, 14; leaves 1–9) traversed with a stack – the seed row 16 is PUSHed, then each POPped row is JOINed back into the table and its children (11, 12, 14, then 5, 6, 7, 8, 9) are pushed in turn]
10. Where is hierarchical data?
Bill of materials
Reporting structure
Package tracking
Inventory management
Social media
Date/time
Geography/region
11. PRIOR
■ Unary operator PRIOR is used in the join filter to distinguish column references of the last prior recursive step from column references to the base table.
■ A query without PRIOR can run forever or return only a single row
package_level item_id package_id
1 pallet_BX505 ship_CX2555
2 box_C1255 pallet_BX505
3 band_aid_H10 box_C1255
3 band_aid_H12 box_C1255
3 A1_pharma_F23 box_C1255
3 A1_pharma_F33 box_C1255
2 box_C3524 pallet_BX505
3 vicks_CK215 box_C3524
3 vicks_CK315 box_C3524
3 vicks_CK324 box_C3524
SELECT level, item_id, package_id
FROM inbound_shipment
START WITH item_id = 'pallet_BX505'
CONNECT BY PRIOR item_id = package_id
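The CONNECT BY syntax above is Informix-specific, but the same parent-to-child walk can be sketched with a standard recursive CTE, here run through SQLite from Python. The sample shipment rows follow the slides; the exact table contents are an assumption for illustration.

```python
# Sketch only: SQLite has no CONNECT BY, but a recursive CTE expresses
# the same traversal. Sample rows are assumed for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inbound_shipment (item_id TEXT, package_id TEXT)")
conn.executemany(
    "INSERT INTO inbound_shipment VALUES (?, ?)",
    [("pallet_BX505", "ship_CX2555"),
     ("box_C1255", "pallet_BX505"),
     ("band_aid_H10", "box_C1255"),
     ("box_C3524", "pallet_BX505"),
     ("vicks_CK215", "box_C3524")])

# The seed SELECT plays the role of START WITH; the recursive branch
# plays CONNECT BY PRIOR item_id = package_id, with an explicit LEVEL.
rows = conn.execute("""
    WITH RECURSIVE h(level, item_id) AS (
        SELECT 1, item_id
        FROM inbound_shipment WHERE item_id = 'pallet_BX505'
        UNION ALL
        SELECT h.level + 1, s.item_id
        FROM inbound_shipment s JOIN h ON s.package_id = h.item_id
    )
    SELECT level, item_id FROM h
""").fetchall()

for level, item in rows:
    print(level, item)
```

This mirrors the rewrite Informix itself performs (shown later in the deck): a seed branch UNION ALL'd with a branch that joins the table back to the rows produced so far.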
12. LEVEL
■ Pseudo column that tracks the level of a node in the hierarchy, starting with level 1 for the root node.
■ Can be used in the CONNECT BY clause as a filter to limit the depth of the hierarchy
package_level item_id package_id
1 pallet_BX505 ship_CX2555
2 box_C1255 pallet_BX505
2 box_C3524 pallet_BX505
2 box_C4520 pallet_BX505
2 box_C4000 pallet_BX505
5 row(s) retrieved.
SELECT level as package_level, item_id, package_id
FROM inbound_shipment
WHERE level < 3
START WITH item_id = 'pallet_BX505'
CONNECT BY PRIOR item_id = package_id
13. NOCYCLE
■ By default, hierarchical queries return an error when they detect a cycle in the data
■ NOCYCLE allows the query to return all rows by ignoring the cycle-causing row
insert into inbound_shipment(item_id,package_id) values ("ship_CX2555",
"pallet_BX505");
package_level item_id package_id
1 pallet_BX505 ship_CX2555
26079: CONNECT BY query resulted in a loop/cycle.
Error in line 9
Near character position 37
SELECT level , item_id, package_id
FROM inbound_shipment
START WITH item_id = 'pallet_BX505'
CONNECT BY PRIOR
item_id = package_id
14. NOCYCLE Example
package_level item_id package_id
1 pallet_BX505 ship_CX2555
2 ship_CX2555 pallet_BX505
2 box_C1255 pallet_BX505
2 box_C3524 pallet_BX505
2 box_C4520 pallet_BX505
2 box_C4000 pallet_BX505
6 row(s) retrieved.
SELECT level as package_level, item_id, package_id
FROM inbound_shipment
where level < 3
START WITH item_id = 'pallet_BX505'
CONNECT BY NOCYCLE PRIOR item_id = package_id
15. CONNECT_BY_ISCYCLE
■ Identify the nodes that would result in a cycle
package_level item_id package_id connect_by_iscycle
1 pallet_BX505 ship_CX2555 0
2 ship_CX2555 pallet_BX505 1
2 box_C1255 pallet_BX505 0
2 box_C3524 pallet_BX505 0
2 box_C4520 pallet_BX505 0
2 box_C4000 pallet_BX505 0
6 row(s) retrieved.
SELECT level as package_level, item_id, package_id, connect_by_iscycle
FROM inbound_shipment
WHERE level < 3
START WITH item_id = 'pallet_BX505'
CONNECT BY NOCYCLE PRIOR item_id = package_id
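NOCYCLE and CONNECT_BY_ISCYCLE are Informix features, but the same protection can be emulated in a portable recursive CTE by carrying the path of visited item_ids and refusing to expand a child that is already on it. A sketch via Python's sqlite3, with sample rows (including the cycle-causing insert from the slide) assumed for illustration:

```python
# Sketch: emulate NOCYCLE by tracking the visited path in the recursion.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inbound_shipment (item_id TEXT, package_id TEXT)")
conn.executemany(
    "INSERT INTO inbound_shipment VALUES (?, ?)",
    [("pallet_BX505", "ship_CX2555"),
     ("box_C1255", "pallet_BX505"),
     ("ship_CX2555", "pallet_BX505")])  # the cycle-causing row

rows = conn.execute("""
    WITH RECURSIVE h(level, item_id, path) AS (
        SELECT 1, item_id, '/' || item_id || '/'
        FROM inbound_shipment WHERE item_id = 'pallet_BX505'
        UNION ALL
        SELECT h.level + 1, s.item_id, h.path || s.item_id || '/'
        FROM inbound_shipment s JOIN h ON s.package_id = h.item_id
        -- NOCYCLE emulation: skip a child that would revisit an ancestor
        WHERE instr(h.path, '/' || s.item_id || '/') = 0
    )
    SELECT level, item_id FROM h
""").fetchall()
```

With the filter removed, the query would loop between pallet_BX505 and ship_CX2555 forever, which is exactly the situation the 26079 error guards against.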
17. SYS_CONNECT_BY_PATH
■ Expression used to build a string representing the path from the root row to the current row.
■ >>--SYS_CONNECT_BY_PATH--(--string-expression1--,--string-expression2--)--><
path pallet_BX505
item_id pallet_BX505
package_id ship_CX2555
path pallet_BX505box_C1255
item_id box_C1255
package_id pallet_BX505
path pallet_BX505box_C3524
item_id box_C3524
package_id pallet_BX505
path pallet_BX505box_C4520
item_id box_C4520
package_id pallet_BX505
path pallet_BX505box_C4000
item_id box_C4000
package_id pallet_BX505
5 row(s) retrieved.
SELECT sys_connect_by_path(item_id, "") as path, item_id, package_id
FROM inbound_shipment
WHERE level < 3
START WITH item_id = 'pallet_BX505'
CONNECT BY PRIOR item_id = package_id
18. CONNECT_BY_ROOT
■ Unary operator which, for every row in the hierarchy, returns the expression evaluated for the row's root ancestor
■ >>--CONNECT_BY_ROOT--expression----------------------------------><
root item_id package_id
pallet_BX505 pallet_BX505 ship_CX2555
pallet_BX505 box_C1255 pallet_BX505
pallet_BX505 box_C3524 pallet_BX505
pallet_BX505 box_C4520 pallet_BX505
pallet_BX505 box_C4000 pallet_BX505
5 row(s) retrieved.
SELECT connect_by_root item_id as root, item_id, package_id
FROM inbound_shipment
WHERE level < 3
START WITH item_id = 'pallet_BX505'
CONNECT BY PRIOR item_id = package_id
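Both SYS_CONNECT_BY_PATH and CONNECT_BY_ROOT can be emulated in a portable recursive CTE by carrying the root value and the accumulated path down the recursion. A sketch via Python's sqlite3, with sample rows assumed for illustration:

```python
# Sketch: carry root and path as extra recursive columns, mimicking
# CONNECT_BY_ROOT and SYS_CONNECT_BY_PATH(item_id, '/').
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inbound_shipment (item_id TEXT, package_id TEXT)")
conn.executemany(
    "INSERT INTO inbound_shipment VALUES (?, ?)",
    [("pallet_BX505", "ship_CX2555"),
     ("box_C1255", "pallet_BX505"),
     ("band_aid_H10", "box_C1255")])

rows = conn.execute("""
    WITH RECURSIVE h(root, path, item_id) AS (
        SELECT item_id, '/' || item_id, item_id      -- root row
        FROM inbound_shipment WHERE item_id = 'pallet_BX505'
        UNION ALL
        SELECT h.root,                               -- CONNECT_BY_ROOT
               h.path || '/' || s.item_id,           -- SYS_CONNECT_BY_PATH
               s.item_id
        FROM inbound_shipment s JOIN h ON s.package_id = h.item_id
    )
    SELECT root, path FROM h
""").fetchall()
```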
19. SIBLINGS
■ Attribute of the ORDER BY clause that orders the siblings at every level of the hierarchy
■ Same semantics as ORDER BY, but applied to sibling rows
level item_id package_id
1 pallet_BX505 ship_CX2555
2 box_C1255 pallet_BX505
2 box_C3524 pallet_BX505
2 box_C4000 pallet_BX505
2 box_C4520 pallet_BX505
5 row(s) retrieved.
SELECT level, item_id, package_id
FROM inbound_shipment
WHERE level < 3
START WITH item_id = 'pallet_BX505'
CONNECT BY PRIOR item_id = package_id
ORDER SIBLINGS BY item_id
20. Query rewrite & Execution model
• Query rewrite
SELECT level , item_id, package_id
FROM inbound_shipment
START WITH item_id = 'pallet_BX505'
CONNECT BY PRIOR
item_id = package_id
SELECT level , item_id, package_id FROM
( SELECT level, item_id, package_id
FROM inbound_shipment
WHERE item_id = 'pallet_BX505'
UNION ALL
SELECT level, ship.item_id , ship.package_id
FROM inbound_shipment ship, dtab
WHERE ship.package_id = dtab.item_id
)
AS dtab;
21. Execution model of recursive queries in IDS
[Plan diagram: a top-level scan on the derived table reads the result of a UNION ALL; the first branch is a scan of the shipment table (the seed), the second branch joins a scan of the shipment table with a TEMP TABLE that holds the traversal stack and performs cycle detection; SORT nodes apply the CONNECT BY filters and ORDER SIBLINGS BY]
22. sqexplain
QUERY:
SELECT level as package_level, item_id, package_id FROM inbound_shipment
START WITH item_id = 'pallet_BX505' CONNECT BY PRIOR item_id = package_id
Connect by Query Rewrite:
select x0.level ,x0.item_id ,x0.package_id from
(select x1.item_id ,x1.package_id ,x1.item_id ,1 ,1 ,0 from
"informix".inbound_shipment x1 where (x1.item_id = 'pallet_BX505' )   -- START WITH branch
union all
select x2.item_id ,x2.package_id ,x2.item_id ,(level + 1 ) ::integer
,connect_by_isleaf ,dtab_30093_173_stkcol from "informix".inbound_shipment
x2 ,"informix".dtab_30093_173 x0 where (dtab_30093_173_p_item_id =
x2.package_id ) )
x0 (item_id,package_id,dtab_30093_173_p_item_id,level,connect_by_isleaf,dtab_30093_173_stkcol)
24. CONNECT BY Restriction
Multiple tables are not allowed:
SELECT ship.item_id, ord.name
FROM inbound_shipment ship, orders ord
START WITH item_id = 'pallet_BX505'
CONNECT BY PRIOR item_id = package_id
WHERE ship.item_id = ord.item_id
Rewrite to:
SELECT item_id, name
FROM (SELECT ship.item_id, ord.name
FROM inbound_shipment ship, orders ord
WHERE ship.item_id = ord.item_id )
START WITH item_id = 'pallet_BX505'
CONNECT BY PRIOR item_id = package_id
26. Child to Parent Traversal
package_level item_id package_id
1 tylenol_BA500 box_C4000
2 box_C4000 pallet_BX505
3 pallet_BX505 ship_CX2555
3 row(s) retrieved.
SELECT level as package_level, item_id, package_id
FROM inbound_shipment
START WITH item_id = 'tylenol_BA500'
CONNECT BY PRIOR package_id = item_id
27. SEQUENCE NUMBER GENERATOR
SELECT level FROM sysmaster:sysdual CONNECT BY level <= 10
(sysdual is a single-row table)
Connect by Query Rewrite:
---------------------------
select x0.level from (select 1 ,1 ,0 from sysmaster:"informix".sysdual x1 union all select (level + 1 ) ::integer ,connect_by_isleaf
,dtab_27465_191_stkcol from sysmaster:"informix".sysdual x2 ,"informix".dtab_27465_191 x0 where ((level + 1 ) <= 10. ) )
x0(level,connect_by_isleaf,dtab_27465_191_stkcol)
1) informix.dtab_27465_191: COLLECTION SCAN
Subquery:
---------
Estimated Cost: 5
Estimated # of Rows Returned: 2
1) sysmaster:informix.sysdual: SEQUENTIAL SCAN
Union Query:
------------
1) informix.dtab_27465_191: SEQUENTIAL SCAN
Filters: informix.dtab_27465_191.level + 1 <= 10
2) sysmaster:informix.sysdual: SEQUENTIAL SCAN
NESTED LOOP JOIN
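The sysdual trick above generates 1..10 without any data table. The portable recursive-CTE equivalent, sketched via Python's sqlite3, seeds the recursion with 1 and keeps adding 1 while the CONNECT BY-style filter holds:

```python
# Sketch: a sequence generator as a recursive CTE, mirroring
# SELECT level FROM sysdual CONNECT BY level <= 10.
import sqlite3

conn = sqlite3.connect(":memory:")
nums = [n for (n,) in conn.execute("""
    WITH RECURSIVE seq(level) AS (
        SELECT 1
        UNION ALL
        SELECT level + 1 FROM seq WHERE level + 1 <= 10
    )
    SELECT level FROM seq
""")]
```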
28. Performance Considerations
• Queries are recursive and involve repeated self-joins
• Use the PRIOR keyword – without it the query can run forever
• TEMP dbspace is used for the hierarchy traversal (stack) and cycle detection
• Configure DBSPACETEMP
29. Conclusion
• Simple queries for complex reporting
• Useful for single or multiple data tree structures
• Easy to map a path between two nodes/rows
Employee-Manager
All employees reporting to “Goyal”
Entire organization chart for “Goyal”
All managers under Goyal with salary < $X
All non-manager employees under Goyal with salary < $Y
Shipment
List all items from a pallet #10
Which product units are inside pallet #10 ?
Find out a pallet number of unit (upc 456….) ?
Display all products from a pallet by scanning a single unit with upc (678….)
Count number of boxes from a pallet by scanning a single unit with upc (567….)
Count number of product units & boxes from a pallet by scanning a single unit with upc (567….)
List all items/boxes from pallet “pallet_BX505”:
1. Fetch the row from inbound_shipment where item_id = “pallet_BX505”
2. Materialize the result of step 1 into a TEMP table
3. Join the result of step 2 back into inbound_shipment such that item_id from step 2 == package_id (similar to a self join)
4. Materialize the results of step 3 into a TEMP table
5. Repeat steps 3 and 4 until step 3 returns no data, i.e. the join produces no rows
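The steps above can be sketched as a plain fixpoint loop in Python. The sample rows and the helper name are illustrative only (and the sketch assumes the data has no cycles):

```python
# Sketch of the manual multi-step approach: seed with the starting row,
# then repeatedly join the newest level back into the table until empty.
shipment = [  # (item_id, package_id) sample rows
    ("pallet_BX505", "ship_CX2555"),
    ("box_C1255", "pallet_BX505"),
    ("band_aid_H10", "box_C1255"),
]

def connect_by(rows, start_item):
    # Step 1: fetch the seed row(s)
    frontier = [r for r in rows if r[0] == start_item]
    result = list(frontier)          # Step 2: materialize ("TEMP table")
    while frontier:                  # Step 5: repeat until the join is empty
        parents = {item for item, _ in frontier}
        # Steps 3-4: join the last level back into the table (child rows
        # whose package_id matches a frontier item_id) and materialize
        frontier = [r for r in rows if r[1] in parents]
        result.extend(frontier)
    return result

tree = connect_by(shipment, "pallet_BX505")
```

This is exactly the "multi-step approach requiring complex application/SPL logic" that CONNECT BY replaces with a single declarative query.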
A hierarchical query operates on rows, which correspond to nodes within a logical structure of parent-child relationships. If parent rows have multiple children, sibling relationships exist among child rows of the same parent. These relationships might reflect, for example, the reporting structure among employees and managers within the divisions and management levels of an organization.
Important: Hierarchical queries are most efficient for data sets in which parent-child dependencies in the table have the logical topology of a simple graph. If the self-referencing table includes more than one independent hierarchy for the same set of columns, or if any child row is also an ancestor of its parent, see also the section Dependency patterns that are not a simple graph.
CONNECT_BY_ISCYCLE is a pseudo column which returns 1 or 0 to indicate whether the row resulted in a cycle (i.e., whether the row, when joined back into the base table, would result in a cycle)
Used to identify the nodes that would result in a cycle
Can be used only when the NOCYCLE attribute is used
Cannot be used in the START WITH and CONNECT BY clauses
CONNECT_BY_ISLEAF is a pseudo column which returns either 1 or 0 based on whether the node is a leaf node
A node is a leaf node if it has no children in the query result hierarchy (not in the actual data hierarchy)
Cannot appear in the START WITH and CONNECT BY clauses.
CONNECT BY queries are:
– Supported inside views / derived tables
– Supported inside subqueries
– Supported in SPLs (static and dynamic statements in SPL)
CONNECT BY queries do not support joins in the FROM clause
– Workaround is to rewrite the query so the join is pushed down into the FROM clause of the CONNECT BY query
Queries are optimized exactly like normal SQL queries
– Access paths/join types are chosen based on available statistics
Subqueries with CONNECT BY are not flattened (merged into the parent query block)
Views with CONNECT BY, or views referenced in the FROM clause of CONNECT BY queries, are always materialized