The document provides an introduction to the Oracle optimizer. It discusses key concepts such as cost, selectivity, cardinality, and clustering factor, along with access methods (table scans and index scans) and join methods. It also covers partitioning, subqueries, and hints. The examples show how statistics, histograms, and indexes affect the optimizer's choice of access paths and joins.
The document describes the internal structures of Oracle tables and indexes. It discusses the heap table structure including the first and second level bitmaps, extent information, and data blocks. It also explains how Oracle stores different data types like numbers, dates, and varchar. Regarding indexes, it outlines the structures of unique and non-unique B-tree indexes, including the index block headers, leaf blocks, and branch blocks. Composite and function-based indexes are also briefly mentioned.
This document compares the NOT EXISTS and NOT IN operators in SQL queries. It notes that NOT EXISTS can be more efficient because it can stop evaluating the subquery after finding a single match, while NOT IN must scan the entire subquery. The document also shows that NOT IN performs a full table scan of the subquery for each row in the main query, while NOT EXISTS scans the tables only twice in total. It demonstrates these differences with examples run on two tables, comparing session statistics and trace files, and also discusses Oracle 11g features that help optimize NOT IN queries.
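Beyond the cost difference described above, NOT IN and NOT EXISTS also differ in NULL semantics, which is worth knowing before rewriting one as the other. A minimal SQLite sketch (table and column names are made up for illustration):

```python
import sqlite3

# Illustrative sketch: a single NULL in the subquery silently empties a
# NOT IN result, while NOT EXISTS is unaffected.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp(id INTEGER, dept_id INTEGER);
    CREATE TABLE dept(id INTEGER);
    INSERT INTO emp VALUES (1, 10), (2, 20);
    INSERT INTO dept VALUES (10), (NULL);   -- a NULL sneaks into the subquery
""")

# NOT IN: the comparison against NULL is UNKNOWN, so no row can ever
# satisfy the predicate -- the query returns nothing.
not_in = conn.execute(
    "SELECT id FROM emp WHERE dept_id NOT IN (SELECT id FROM dept)"
).fetchall()

# NOT EXISTS: NULLs in the subquery simply never match, so row 2 is returned.
not_exists = conn.execute(
    "SELECT id FROM emp WHERE NOT EXISTS "
    "(SELECT 1 FROM dept WHERE dept.id = emp.dept_id)"
).fetchall()

print(not_in)      # []
print(not_exists)  # [(2,)]
```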
1. Mr. A entered employee categories into a database table, but 5 rows were missing categories. He incorrectly entered "0" as the category for those rows.
2. The exercise tested different scenarios querying the category column with and without statistics, histograms, and NULL values instead of the "0" placeholder.
3. With statistics but no histogram, the cardinality estimate was the worst of the scenarios tested, at over 9,000 matching rows instead of the actual 5. A histogram improved the estimate by breaking the column's values into buckets, and replacing the "0" placeholder with NULL also helped, since it removed the skewed value from the distribution.
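The over-estimate follows directly from the optimizer's uniform-distribution assumption. A sketch of the arithmetic, with made-up numbers chosen to reproduce the "over 9,000 instead of 5" shape (not the document's actual data):

```python
from collections import Counter

# Hypothetical skewed column: 18,005 rows in category 1 and the 5
# mistakenly entered rows in category 0.
values = [1] * 18005 + [0] * 5

# Without a histogram the optimizer assumes values are spread uniformly:
# estimated cardinality = num_rows / num_distinct_values.
uniform_estimate = len(values) / len(set(values))
print(uniform_estimate)   # 9005.0 -- "over 9,000" instead of the actual 5

# A frequency histogram stores the real count per value (per bucket), so
# the estimate for category 0 collapses to the true 5 rows.
histogram = Counter(values)
print(histogram[0])       # 5
```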
The document discusses different methods for updating a target table based on values from a source table: correlated update, merge, and update from select. It finds that correlated update performs poorly without an index on the source table, as it must do a full table scan of the source for each target row. Merge and update from select leave unmatched rows unchanged, while correlated update overwrites them with NULLs (which Oracle's VARCHAR2 treats the same as empty strings). Update from select requires a unique index on the source table. Overall, update from select performs best when updating a small percentage of rows and the target table has an index.
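The unmatched-row pitfall can be sketched in SQLite (hypothetical tables; in Oracle the overwritten values are NULLs, which VARCHAR2 does not distinguish from empty strings):

```python
import sqlite3

# Sketch of the correlated-update pitfall: rows with no source match get
# overwritten unless the update is guarded.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE target(id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE source(id INTEGER PRIMARY KEY, val TEXT);
    INSERT INTO target VALUES (1, 'old1'), (2, 'old2');
    INSERT INTO source VALUES (1, 'new1');              -- no source row for id 2
""")

# Unguarded correlated update: the subquery finds nothing for id 2,
# so its val becomes NULL.
conn.execute("UPDATE target SET val = "
             "(SELECT val FROM source WHERE source.id = target.id)")
unguarded = conn.execute("SELECT val FROM target ORDER BY id").fetchall()

# Guarded with WHERE EXISTS (the matching check MERGE performs implicitly):
# unmatched rows are left unchanged.
conn.execute("UPDATE target SET val = 'old2' WHERE id = 2")  # undo the damage
conn.execute(
    "UPDATE target SET val = "
    "(SELECT val FROM source WHERE source.id = target.id) "
    "WHERE EXISTS (SELECT 1 FROM source WHERE source.id = target.id)")
guarded = conn.execute("SELECT val FROM target ORDER BY id").fetchall()

print(unguarded)   # [('new1',), (None,)]
print(guarded)     # [('new1',), ('old2',)]
```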
The document discusses dynamic memory allocation and linked lists in C programming. It covers the key concepts of:
- Dynamic memory allocation functions like malloc(), calloc(), free(), and realloc() which are used to allocate and free memory during runtime.
- The differences between arrays and linked lists, with linked lists being a dynamic data structure that can grow and shrink in size more efficiently than arrays.
- How to create, traverse, insert, delete and perform other operations on singly and doubly linked lists using C pointers and memory allocation functions. Example code is provided to demonstrate creating and manipulating linked lists.
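The document's examples are in C with malloc()/free(); the same create/insert/delete/traverse operations can be sketched equivalently in Python, where garbage collection stands in for free():

```python
# Singly linked list sketch: each node holds data and a pointer to the next.
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def push_front(head, data):
    """Insert a new node at the front; returns the new head."""
    node = Node(data)          # in C: malloc(sizeof(struct node))
    node.next = head
    return node

def delete_value(head, data):
    """Delete the first node holding `data`; returns the (possibly new) head."""
    if head is None:
        return None
    if head.data == data:
        return head.next
    prev = head
    while prev.next and prev.next.data != data:
        prev = prev.next
    if prev.next:
        prev.next = prev.next.next   # unlink; GC reclaims it (in C: free())
    return head

def to_list(head):
    """Traverse the list, collecting the values in order."""
    out = []
    while head:
        out.append(head.data)
        head = head.next
    return out

head = None
for x in (3, 2, 1):           # build 1 -> 2 -> 3
    head = push_front(head, x)
head = delete_value(head, 2)
print(to_list(head))          # [1, 3]
```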
This document introduces ABAP programming concepts for reading database tables using SELECT statements in Open SQL. It discusses using SELECT to retrieve single records or multiple rows of data, filtering data with WHERE clauses, sorting results with ORDER BY, and working with system variables. Formal documentation methods are also introduced.
This document contains a presentation on linked lists. It includes:
1. An introduction to linked lists describing their representation using linked allocation and algorithms for inserting and deleting nodes.
2. Algorithms for inserting a node at the first, last, and ordered positions in a single linked list, as well as deleting a node and copying a linked list.
3. A section on linear linked list multiple choice questions.
In computer science, a linked list is a linear collection of data elements, whose order is not given by their physical placement in memory. Instead, each element points to the next. It is a data structure consisting of a collection of nodes which together represent a sequence.
This PowerPoint presentation covers singly linked lists and doubly linked lists. It defines linked lists as linear data structures composed of nodes that contain data and a pointer to the next node. Singly linked lists allow traversing the list in one direction as each node only points to the next node, while doubly linked lists allow traversing in both directions as each node points to both the next and previous nodes. The presentation explains basic operations like insertion, deletion, and searching on both types of linked lists and compares their complexities. It provides examples of inserting and deleting nodes from a doubly linked list.
In computer science, a linked list is a linear collection of data elements, in which linear order is not given by their physical placement in memory. Instead, each element points to the next.
a. Concept and Definition ✓
b. Inserting and Deleting nodes ✓
c. Linked implementation of a stack (PUSH/POP) ✓
d. Linked implementation of a queue (Insert/Remove) ✓
e. Circular List
• Stack as a circular list (PUSH/POP) ✓
• Queue as a circular list (Insert/Remove) ✓
f. Doubly Linked List (Insert/Remove) ✓
For more course-related material:
https://github.com/ashim888/dataStructureAndAlgorithm/
Personal blog
www.ashimlamichhane.com.np
The document discusses different types of linked lists including:
- Singly linked lists that can only be traversed in one direction.
- Doubly linked lists that allow traversal in both directions using forward and backward pointers.
- Circular linked lists where the last node points back to the first node allowing continuous traversal.
- Header linked lists that include a header node at the beginning for simplified insertion and deletion. Header lists can be grounded, where the last node contains a null pointer, or circular, where the last node points back to the header.
- Two-way or doubly linked lists where each node contains a forward and backward pointer allowing bidirectional traversal through the list.
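The two-way (doubly linked) variant described above can be sketched as follows; the structure and operation names are illustrative, not the document's own code:

```python
# Doubly linked list sketch: each node keeps both next and prev pointers,
# so the list can be walked in either direction.
class DNode:
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, data):
        node = DNode(data)
        if self.tail is None:            # empty list: node is head and tail
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node

    def delete(self, node):
        # Repair both neighbours' pointers; head/tail are the edge cases.
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev

    def forward(self):
        n, out = self.head, []
        while n:
            out.append(n.data)
            n = n.next
        return out

    def backward(self):
        n, out = self.tail, []
        while n:
            out.append(n.data)
            n = n.prev
        return out

dll = DoublyLinkedList()
for x in (1, 2, 3):
    dll.append(x)
dll.delete(dll.head.next)        # remove the middle node (2)
print(dll.forward())             # [1, 3]
print(dll.backward())            # [3, 1]
```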
Linked list
Singly linked list
Insertion in a singly linked list
Deletion in a singly linked list
Searching a singly linked list
Doubly linked list
Insertion in a doubly linked list
Deletion in a doubly linked list
Searching a doubly linked list
Circular linked list
This document discusses different types of linked lists including singly linked lists, circular linked lists, and doubly linked lists. It provides details on representing stacks and queues using linked lists. Key advantages of linked lists over arrays are that linked lists can dynamically grow in size as needed, elements can be inserted and deleted without shifting other elements, and there is no memory wastage. Operations like insertion, deletion, traversal, and searching are described for singly linked lists along with sample C code to implement a linked list, stack, and queue.
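The document's stack-over-linked-list code is in C; the same idea can be sketched in Python: a stack is just a singly linked list where push and pop both happen at the head, so neither operation shifts any other element.

```python
# Stack implemented as a singly linked list: O(1) push/pop at the head.
class _Node:
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

class LinkedStack:
    def __init__(self):
        self.top = None

    def push(self, data):
        # The new node becomes the head; no existing element moves.
        self.top = _Node(data, self.top)

    def pop(self):
        # The head is unlinked and its data returned.
        if self.top is None:
            raise IndexError("pop from empty stack")
        data, self.top = self.top.data, self.top.next
        return data

s = LinkedStack()
s.push(10); s.push(20); s.push(30)
popped = [s.pop(), s.pop()]
print(popped)        # [30, 20]
```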
The document provides examples of SQL commands for:
1. Selecting data from tables including top rows, random rows, joins, outer joins, and grouping.
2. Creating views and stored procedures.
3. Differences between functions and stored procedures.
4. Using triggers, cursors, and retrieving the second highest/lowest value from a column.
5. Common DDL commands like creating, copying, deleting, and altering tables.
So in summary, it covers the basics of SQL including queries, views, stored procedures, functions, triggers and DDL commands.
IBM Informix Database SQL Set operators and ANSI Hash Join (Ajay Gupte)
This document discusses SQL set operators like UNION, INTERSECT, and MINUS. It explains that INTERSECT returns rows common to two result sets, while MINUS returns rows in the first set not in the second. The operators support NULLs and have rules like UNION. Examples demonstrate their usage in views, derived tables, and procedures. Optimization techniques like nested loops and hash joins are covered. Scenarios illustrate uses like finding overlapping or non-overlapping supplier and order IDs. ANSI join improvements like hash joins are also summarized.
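The supplier/order scenario can be sketched in SQLite, which spells the Informix/Oracle MINUS operator as EXCEPT (table names are made up for illustration):

```python
import sqlite3

# Set-operator sketch: INTERSECT for overlapping IDs, EXCEPT (= MINUS)
# for non-overlapping ones, UNION for the combined distinct set.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE suppliers(id INTEGER);
    CREATE TABLE orders(supplier_id INTEGER);
    INSERT INTO suppliers VALUES (1), (2), (3);
    INSERT INTO orders VALUES (2), (3), (4);
""")

common = conn.execute(
    "SELECT id FROM suppliers INTERSECT "
    "SELECT supplier_id FROM orders ORDER BY 1"
).fetchall()
only_suppliers = conn.execute(
    "SELECT id FROM suppliers EXCEPT "
    "SELECT supplier_id FROM orders ORDER BY 1"
).fetchall()
all_ids = conn.execute(
    "SELECT id FROM suppliers UNION "
    "SELECT supplier_id FROM orders ORDER BY 1"
).fetchall()

print(common)          # [(2,), (3,)]
print(only_suppliers)  # [(1,)]
print(all_ids)         # [(1,), (2,), (3,), (4,)]
```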
Row migration occurs when an updated row no longer fits in its original block due to low PCTFREE. This degrades performance by requiring two block reads to access migrated rows. To detect migration, run ANALYZE TABLE ... COMPUTE STATISTICS and check the CHAIN_CNT column. There are several ways to fix migration, including altering PCTFREE, moving the table to another tablespace, exporting/truncating/importing, or selectively deleting and re-inserting migrated rows while holding their data in a temporary table.
Deletion from single way linked list and search (Estiak Khan)
The document discusses linked lists and operations on single linked lists such as deletion and searching. It defines a linked list as a linear data structure containing nodes with a data and link part, where the link part contains the address of the next node. It describes how to delete nodes from different positions in a single linked list, including the first, last, and intermediate nodes. It also explains how to perform a linear search to find a required element by traversing the list node by node.
Discover the power of Recursive SQL and query transformation with Informix da... (Ajay Gupte)
This presentation provides an overview of recursive SQL with the CONNECT BY clause. It gives examples of typical practical database problems and describes in detail how they can be solved with recursive SQL, including bill-of-materials explosion, obtaining the number of employees under each manager in a sub-organization, converting linked dimension hierarchies in a star schema to fixed dimension hierarchies, tracking packages, and generating test data. It compares these solutions with traditional ones and discusses the advantages and disadvantages of the various methods. The presentation also covers query transformation techniques in Informix 12.10, focusing on how query blocks are moved between levels and optimized, illustrated with examples and diagrams: query block movement, table re-ordering, complex ANSI joins, subqueries, derived tables, views, CONNECT BY, OLAP functions, and set-operator cases. Users will learn how to analyze complex examples based on various Informix 12.10 features.
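CONNECT BY is Informix/Oracle syntax; the same hierarchical walk can be sketched portably with a recursive CTE. The manager-report example below runs on SQLite with made-up data (counting all direct and indirect reports under manager 1):

```python
import sqlite3

# Recursive CTE sketch of the "employees under each manager" problem.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp(id INTEGER, manager_id INTEGER);
    INSERT INTO emp VALUES (1, NULL), (2, 1), (3, 1), (4, 2), (5, 4);
""")

reports = conn.execute("""
    WITH RECURSIVE subtree(id) AS (
        SELECT id FROM emp WHERE manager_id = 1      -- direct reports
        UNION ALL
        SELECT e.id FROM emp e                       -- their reports, and so on
        JOIN subtree s ON e.manager_id = s.id
    )
    SELECT COUNT(*) FROM subtree
""").fetchone()[0]
print(reports)   # 4  (employees 2, 3, 4 and 5)
```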
Linked List Static and Dynamic Memory Allocation (Prof Ansari)
Static variables are declared and named while writing the program. (Space for them exists as long as the program, in which they are declared, is running.) Static variables cannot be created or destroyed during execution of the program in which they are declared.
Dynamic variables are created (and may be destroyed) during program execution. Because dynamic variables do not exist while the program is compiled, only while it is run, they cannot be given names when the program is written; the only way to access them is through pointers. Once created, however, a dynamic variable does contain data and must have a type like any other variable. If a dynamic variable is created in a function, it can continue to exist even after the function terminates.
Linked Linear List
We saw in previous chapters how the static representation of a linear ordered list through an array leads to wasted memory and, in some cases, overflow. Now we don't want to assign memory to the list in advance; instead we want to allocate memory to elements as they are inserted into the list. This requires dynamic allocation of memory, which can be achieved with the malloc() or calloc() functions.
But the memory assigned to the elements will not be contiguous, and contiguity is what the array representation relied on to keep the list in linear order. How can we achieve a linear order without it?
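The answer the chapter builds toward: once each element carries a link to the next, linear order comes from the links, not from adjacent addresses. Sketched here in Python rather than C:

```python
# Linear order without contiguity: the node's link replaces physical
# adjacency as the thing that defines "next".
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None   # the link field

# Allocate the nodes separately (in C: one malloc() per node) ...
a, b, c = Node(10), Node(20), Node(30)
# ... then impose the linear order purely through the links.
a.next, b.next = b, c

order, n = [], a
while n:
    order.append(n.data)
    n = n.next
print(order)   # [10, 20, 30]
```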
The document discusses list data structures and their implementation using arrays and linked memory. It describes common list operations like insertion, removal, searching, and provides examples of how to implement them with arrays and linked lists. Key list operations include adding and removing elements from different positions, accessing elements by index or pointer, and traversing the list forward and backward. Linked lists offer more flexibility than arrays by not requiring predefined memory allocation.
This document discusses various techniques for optimizing MySQL indexes, including:
- Ensuring indexes have good selectivity on fields and composite indexes are ordered optimally
- Using prefix indexes that take up less space and are faster than whole column indexes
- Explaining query execution plans using EXPLAIN to identify optimal indexes
- Using hints like USE INDEX, IGNORE INDEX, and STRAIGHT_JOIN to influence the optimizer
- Analyzing the slow query log and general query log to identify queries that need optimization
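MySQL's EXPLAIN has a SQLite analogue, EXPLAIN QUERY PLAN, which is enough to sketch the basic workflow the bullets describe: inspect the plan, add an index, inspect again (table and index names are made up):

```python
import sqlite3

# Before/after sketch: a full scan turns into an index search once a
# suitable index exists.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users(id INTEGER, email TEXT);
    INSERT INTO users VALUES (1, 'a@x.com'), (2, 'b@x.com');
""")

def plan(sql):
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)   # last column holds the plan detail

query = "SELECT id FROM users WHERE email = 'a@x.com'"
before = plan(query)        # e.g. "SCAN users" -- a full table scan
conn.execute("CREATE INDEX idx_users_email ON users(email)")
after = plan(query)         # e.g. "SEARCH users USING ... INDEX idx_users_email"

print(before)
print(after)
```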
This document discusses various SQL concepts including joins, aggregation functions, and grouping. It begins with an overview of installing MySQL Workbench and loading sample data. It then covers SELECT statements and functions like COUNT, SUM, AVG. It describes different types of joins like inner, left, right, and self joins. It provides examples of joining tables to retrieve related data and performing self joins to combine rows from the same table. It also explains how to use the GROUP BY clause to divide data into groups and apply aggregation functions.
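The join-plus-aggregation pattern summarized above can be sketched in SQLite (hypothetical tables; the document itself uses MySQL Workbench):

```python
import sqlite3

# Inner join + GROUP BY sketch: one aggregated row per customer with orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers(id INTEGER, name TEXT);
    CREATE TABLE orders(customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cid');
    INSERT INTO orders VALUES (1, 10.0), (1, 15.0), (2, 7.5);
""")

totals = conn.execute("""
    SELECT c.name, COUNT(*) AS n_orders, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id    -- inner join drops order-less Cid
    GROUP BY c.name                          -- one group per customer
    ORDER BY c.name
""").fetchall()
print(totals)   # [('Ann', 2, 25.0), ('Bob', 1, 7.5)]
```

A LEFT JOIN instead of JOIN would keep Cid with a NULL total, which is the usual way to include the non-matching side.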
The document discusses double and circular linked lists. It covers inserting and deleting nodes from doubly linked lists and circular linked lists. Specifically, it describes how to insert nodes at different positions in a doubly linked list, such as at the front, after a given node, at the end, and before a given node. It also explains how to delete nodes from a doubly linked list. For circular linked lists, it outlines how to insert nodes in an empty list, at the beginning, at the end, and between nodes. It also provides the steps to delete nodes from a circular linked list.
- Circular linked lists connect the last node to the first node to allow continuous traversal of the list.
- A circular linked list can use a header node or external pointer to mark the beginning of the list. The header node's data can indicate it is the header.
- Primitive functions like insertion and deletion work similarly in circular and linear lists, except a circular list must check for the empty list case of a single node pointing to itself.
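The single-node edge case mentioned above, a node whose next pointer is itself, can be sketched as follows (structure and helper names are illustrative):

```python
# Circular linked list sketch: the "empty" boundary case is one node
# pointing to itself, and traversal stops when it returns to the start.
class CNode:
    def __init__(self, data):
        self.data = data
        self.next = self          # a lone node points to itself

def insert_after(node, data):
    """Insert a new node after `node`; returns the new node."""
    new = CNode(data)
    new.next = node.next
    node.next = new
    return new

def traverse(start, limit=10):
    """Walk the circle once, stopping on return to the start node."""
    out, n = [start.data], start.next
    while n is not start and len(out) < limit:
        out.append(n.data)
        n = n.next
    return out

head = CNode(1)
assert head.next is head          # single-node case: points to itself
insert_after(head, 2)             # 1 -> 2 -> (back to 1)
insert_after(head.next, 3)        # 1 -> 2 -> 3 -> (back to 1)
print(traverse(head))             # [1, 2, 3]
```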
This document discusses the implementation of a single linked list data structure. It describes the nodes that make up a linked list, which have an info field to store data and a next field pointing to the next node. The document outlines different ways to represent linked lists, including static arrays and dynamic pointers. It also provides algorithms for common linked list operations like traversing, inserting, and deleting nodes from the beginning, end, or a specified position within the list.
A circular linked list is a variation of a linked list where the last element points to the first element, forming a circle. This can be done for both singly and doubly linked lists. Basic operations like insertion, deletion, and traversal can be performed on a circular linked list in a similar way as a regular linked list by taking into account the circular nature when reaching the end of the list. Code examples are provided to demonstrate how to insert a node at the beginning, delete the first node, and print the list for a singly linked circular list.
This document discusses hashing techniques for implementing abstract data types like tables. It begins by describing tables as data structures with fields that can be searched using a key. Different implementations of tables are then examined, including unsorted and sorted arrays, linked lists, and binary trees. The document focuses on hashing as a way to enable fast search (O(1) time) by using a hash function to map keys to array indices. It covers hash table implementation using arrays with collision resolution via separate chaining or open addressing. Factors like hash functions, collision handling, and table size that influence hashing performance are also summarized.
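Separate chaining as described above can be sketched briefly; here each slot's chain is a Python list standing in for a linked list, and the class name is made up:

```python
# Separate-chaining hash table sketch: hash(key) picks a slot, and keys
# that collide share that slot's chain.
class ChainedHashTable:
    def __init__(self, size=8):
        self.slots = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.slots)   # hash function: key -> slot

    def put(self, key, value):
        chain = self.slots[self._index(key)]
        for pair in chain:
            if pair[0] == key:               # key already present: update
                pair[1] = value
                return
        chain.append([key, value])           # new key (or collision): append

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

t = ChainedHashTable(size=2)                 # tiny table to force collisions
t.put("a", 1); t.put("b", 2); t.put("c", 3)
t.put("a", 99)                               # update in place, no duplicate
print(t.get("a"), t.get("c"))                # 99 3
```

Lookup stays O(1) on average as long as the load factor is kept low; with only two slots here, the chains are doing most of the work.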
This document provides an overview of SQL tuning and optimization techniques. It discusses various indexing options in Oracle like bitmap indexes and reverse key indexes. It also covers execution plan analysis using tools like EXPLAIN PLAN and tuning techniques like hints. The goal of SQL tuning is to identify resource-intensive queries and optimize them using better indexing, rewriting queries, and other optimization strategies.
Oracle is an RDBMS package that allows communication between the backend and frontend via SQL*NET. The key differences between truncate and delete are that truncate commits after deleting the entire table and cannot be rolled back, while delete allows filtered deletion and deleted records can be rolled back or committed. An alias is temporary while a synonym is permanent. Rowid holds the physical location of a record and is the fastest way to access a row. PL/SQL allows programming with blocks, variables, and conditions, and DBMS_OUTPUT is used to display output.
Oracle is an RDBMS package that allows communication between the backend and frontend via SQL*NET. The key differences between truncate and delete are that truncate commits after deleting the entire table and cannot be rolled back, while delete allows filtered deletion and deleted records can be rolled back or committed. An alias is temporary while a synonym is permanent. Rowid provides uniqueness of records given by the system and is permanent, while rownum represents a record number and is temporary. PL/SQL allows programming with blocks, variables, and control structures like loops and conditions, enabling complex logic and programming within Oracle.
The document discusses various techniques for optimizing database performance in Oracle, including:
- Using the cost-based optimizer (CBO) to choose the most efficient execution plan based on statistics and hints.
- Creating appropriate indexes on columns used in predicates and queries to reduce I/O and sorting.
- Applying constraints and coding practices like limiting returned rows to improve query performance.
- Tuning SQL statements through techniques like predicate selectivity, removing unnecessary objects, and leveraging indexes.
This document discusses database performance factors for developers. It covers topics like query execution plans, table indexes, table partitioning, and performance troubleshooting. The goal is to help developers understand how to optimize database performance. It provides examples and recommends analyzing execution plans, properly indexing tables, partitioning large tables, and using a structured approach to troubleshooting performance issues.
This presentation is an INTRODUCTION to intermediate MySQL query optimization for the Audience of PHP World 2017. It covers some of the more intricate features in a cursory overview.
This document discusses database indexing. It provides information on the benefits of indexes, how to create indexes, common misconceptions about indexing, and rules for determining when and how to create indexes. Key points include that indexes improve performance of queries by enabling faster data retrieval and synchronization; indexes should be created on columns frequently filtered in WHERE and JOIN clauses; and the order of columns in an index matters for its effectiveness.
Oracle Join Methods and 12c Adaptive PlansFranck Pachot
Join Methods and 12c Adaptive Plans
In its quest to improve cardinality estimation, 12c has introduced Adaptive Execution Plans which deals with the cardinalities that are difficult to estimate before execution. Ever seen a hanging query because a nested loop join is running on millions of rows?
This is the point addressed by Adaptive Joins. But that new feature is also a good occasion to look at the four possible join methods available for years.
This document discusses methods for preparing datasets for data mining analysis using horizontal aggregations in SQL. It introduces horizontal aggregations, which aggregate numeric expressions and transpose results to produce datasets with a horizontal layout. This is unlike standard SQL aggregations which produce vertical layouts. The document proposes three methods for evaluating horizontal aggregations: 1) SPJ method using standard relational operators, 2) CASE method using SQL CASE constructs, and 3) PIVOT method using available PIVOT operators. The CASE method is presented as generally the most efficient evaluation method. The document concludes horizontal aggregations are useful for creating horizontally-laid out datasets required by most data mining algorithms.
This document discusses different searching methods like sequential, binary, and hashing. It defines searching as finding an element within a list. Sequential search searches lists sequentially until the element is found or the end is reached, with efficiency of O(n) in worst case. Binary search works on sorted arrays by eliminating half of remaining elements at each step, with efficiency of O(log n). Hashing maps keys to table positions using a hash function, allowing searches, inserts and deletes in O(1) time on average. Good hash functions uniformly distribute keys and generate different hashes for similar keys.
For regular Updates on SAP ABAP please like our Facebook page:-
Facebook:- https://www.facebook.com/bigclasses/
Twitter:- https://twitter.com/bigclasses
LinkedIn:-https://www.linkedin.com/company/bigclasses/
Google+:https://plus.google.com/+Bigclassesonlinetraining
SAP ABAP Course Page:-https://bigclasses.com/sap-abap-online-training.html
Contact us: - India +91 800 811 4040
USA +1 732 325 1626
Email us at: - info@bigclasses.com
sap abap online training, online sap abap training, sap abap training online, sap abap training, abap online training, sap abap, sap online training, sap abap online training from india, sap abap online training demo, sap, abap, sap abap online classes, sap abap online, sap abap training course, online abap training, abap training online, sap abap online courses, www.bigclasses.com,sap abap training
usa
This document provides an overview of optimizing MySQL queries. It discusses optimization at the database and hardware levels, understanding query execution plans, using EXPLAIN to analyze queries, optimizing specific query types like counts and groups, indexing strategies like covering indexes, and partitioning tables for performance. The goal is to help readers write efficient queries and properly structure databases and indexes for high performance.
Database questions and answers document containing:
1. SQL queries for fetching data from tables using SELECT and WHERE clauses.
2. Joins to retrieve data from multiple tables using SELECT, FROM, and WHERE clauses.
3. Differences between unique key and primary key including allowing null values and number allowed per table.
4. Uses and types of indexes to improve query performance including on single or multiple columns.
5. Purpose and examples of foreign key constraints to maintain referential integrity.
6. Aggregate functions like AVG, COUNT, MAX used in queries.
Hash join is a type of join operation that uses a hash table to perform the join. There are three types of hash joins - optimal, onepass, and multipass. Optimal hash join performs the join entirely in memory, while onepass and multipass hash joins spill data to temporary storage due to insufficient memory. The size of the build table can impact the performance and memory requirements of the hash join, with smaller build tables generally requiring less memory but potentially more disk reads. The best build table depends on the relative sizes of the tables and available memory.
MySQL optimization involves understanding the entire system to be optimized. The query optimizer attempts to determine the most efficient way to execute a query by considering possible query plans. Key aspects of optimization include data types and schema design, indexing, and query optimization. Smaller data types, simpler schemas, and indexes on commonly used columns can improve performance.
Amazon Redshift is a fully managed data warehouse service that allows for petabyte-scale analytics on data stored in columns. It uses a massively parallel processing architecture and columnar data storage to improve query performance. Defining sort keys and distribution keys appropriately is crucial to influence how data is stored and queries are processed in parallel across nodes. Automatic features like concurrency scaling, resize operations, and backups help ensure the warehouse scales and remains available as data and usage grow over time.
MySQL uses indexes to optimize queries and improve performance. Indexes are stored in b-trees to keep data sorted and allow fast searches, inserts and deletions. The selectivity of an index, or the ratio of unique values within a column, determines how effectively the index can reduce the result set size. Highly selective columns on frequently queried subsets of rows make the best candidates for indexes. MySQL can use indexes to optimize data lookups, sorting, avoiding full table scans, and certain aggregation functions.
Oracle 9i is changing the ETL (Extract, Transform, Load) paradigm by providing powerful new ETL capabilities within the database. Key features discussed include external tables for reading flat files directly without loading to temporary tables, the MERGE statement for updating or inserting rows with one statement, multi-table inserts for conditionally inserting rows into multiple tables, pipelined table functions for efficiently passing row sets between functions, and native compilation for improving PL/SQL performance. These new Oracle 9i capabilities allow for simpler, more efficient, and lower cost ETL processes compared to traditional third-party ETL tools.
1) The document discusses various database indexing concepts including tables, heaps, clustered and non-clustered indexes.
2) It covers the pros and cons of indexes, and how to balance performance with index overhead.
3) The author provides tips on choosing appropriate indexes including covering indexes and using system views to identify high impact missing indexes.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
3. Our Environment
Heap Table
This is the default table type when we issue the CREATE TABLE statement
Data is stored in random fashion, with no specific sort order (Oracle places rows using a best-fit algorithm on free space)
7. Our Environment
Histogram
A collection of information about the data distribution in a specific column
Oracle maintains 2 types of histogram: frequency and height-balanced
Oracle uses the histogram as additional information when deciding whether to use an index scan or a table scan
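As an illustration of the concept (not Oracle's internal storage format), a frequency histogram can be sketched in Python as one bucket per distinct value; the sample column values below are made up:

```python
from collections import Counter

def frequency_histogram(values):
    """One bucket per distinct value, each holding that value's row count."""
    return dict(Counter(values))

# A skewed column: value 1 dominates, value 99 is rare (invented data).
col = [1] * 95 + [99] * 5
hist = frequency_histogram(col)
# Without a histogram the optimizer assumes rows/num_distinct = 100/2 = 50
# matches for any value; the histogram shows value 99 matches only 5 rows.
```

This is exactly the skew case where the optimizer would otherwise misjudge index scan versus table scan.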
12. Cost
Jonathan Lewis: "The cost represents the optimizer's best estimate of the time it will take to execute the statement"
A result of the calculation performed by the optimizer
A few conditions that make the CBO produce a wrong result:
No statistics on the underlying objects, or obsolete statistics
Performance characteristics of the hardware or current workload are not known
Bugs
From the Oracle Performance Tuning Guide and Reference:
Cost = (#SRds * sreadtim + #MRds * mreadtim + #CPUCycles / cpuspeed) / sreadtim
#SRds: number of single data block reads
#MRds: number of multi data block reads
#CPUCycles: number of CPU cycles
sreadtim: single block read time
mreadtim: multi block read time
cpuspeed: CPU cycles per second
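The formula above can be checked numerically; the timing values below are illustrative, not measurements:

```python
def cbo_cost(srds, mrds, cpu_cycles, sreadtim, mreadtim, cpuspeed):
    """Cost formula from the Performance Tuning Guide, expressed in
    units of single-block read time."""
    return (srds * sreadtim + mrds * mreadtim + cpu_cycles / cpuspeed) / sreadtim

# 100 single-block reads and nothing else: cost is simply 100, because
# the single-block read is the unit of cost.
io_only = cbo_cost(srds=100, mrds=0, cpu_cycles=0,
                   sreadtim=10, mreadtim=30, cpuspeed=1000)
```

Note that dividing everything by sreadtim is what normalizes the cost into "equivalent single-block reads".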
14. Selectivity & Cardinality
Selectivity is the fraction of rows the predicate is expected to fetch/return
Selectivity = 1 / num_distinct (no histogram)
Selectivity = density (with histogram)
Cardinality is the total number of rows the predicate is expected to return
Cardinality = num_rows * selectivity
Going back to slide #4:
Cardinality = num_rows = 20
Slide #5:
Selectivity = (number of buckets with data / total number of buckets) = 17 / 17 = 1
Cardinality = selectivity * num_rows = 1 * 20 = 20
Slide #10:
Selectivity = 4 / 17 = 0.235
Cardinality = 0.235 * 20 = 4.7
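The slide's arithmetic can be reproduced directly:

```python
def selectivity(num_distinct=None, density=None):
    """1/num_distinct without a histogram; the stored density with one."""
    return density if density is not None else 1 / num_distinct

def cardinality(num_rows, sel):
    return num_rows * sel

# Slide #5: 17 of 17 buckets populated -> selectivity 1, cardinality 20.
full = cardinality(20, selectivity(density=17 / 17))
# Slide #10: 4 of 17 buckets populated -> selectivity ~0.235, cardinality ~4.7.
partial = cardinality(20, selectivity(density=4 / 17))
```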
15. Clustering Factor
Represents the degree to which data is randomly distributed through a table
Number of blocks <= clustering factor <= number of rows
An index is more efficient if its clustering factor is close to the number of data blocks: consecutive index keys then point into the same table blocks, so Oracle can resolve several index keys from the same table block read
16. Clustering Factor (block_id from rowid)
Going back to the NORMAL_HASH table example: due to the anomalous configuration of pctfree and pctused, every block contains a single row only
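A simplified model of how the clustering factor is derived: walk the index in key order and count how often the table block changes between consecutive entries (the block lists below are invented):

```python
def clustering_factor(blocks_in_key_order):
    """blocks_in_key_order: for each index entry, in key order, the table
    block its rowid points to. The counter increases every time two
    consecutive entries land in different blocks."""
    cf, prev = 0, object()   # sentinel so the first entry always counts
    for block in blocks_in_key_order:
        if block != prev:
            cf += 1
            prev = block
    return cf

well_clustered = clustering_factor([1, 1, 2, 2, 3, 3, 4, 4])  # -> 4, = blocks
scattered      = clustering_factor([1, 2, 1, 2, 1, 2, 1, 2])  # -> 8, = rows
```

The two extremes reproduce the bounds on the slide: number of blocks at best, number of rows at worst.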
17. Access Method
There are 2 access methods: table access and index access
Index access can be: Fast Full Scan, Full Scan, Unique Scan, Range Scan, Range Scan (MIN/MAX) and Skip Scan
18. Access Method
Full Table Scan
Oracle reads all rows from the table
Not suitable for an OLTP system with a high volume of transactions that usually access only a small fraction of the data
Suitable for a DSS system with batch reporting queries
Not good as the outer (driving) table of a Nested Loop (NL) when that table is huge
Usually we see it in a Hash Join (HJ)
19. Access Method
Index Fast Full Scan
Oracle reads all rows from the index to get the result (no table access is required, since the index contains all columns needed to resolve the query)
To be able to use an Index FFS, the column must be defined as NOT NULL (or at least one column in a composite index must be NOT NULL); the reason is that NULL values are not stored in the index, so when the column is NOT NULL Oracle knows that all rows are represented in the index
An Index FFS is also available as an option if we put "IS NOT NULL" in the WHERE clause explicitly
The cost covers only the index access
21. Access Method
Index Fast Full Scan (cost calculation)
If we remove the HINT, we get an Index Full Scan instead
Cost for Index Fast Full Scan:
Cost = (8 / 16) * (10 + 16 * 8192 / 4092) / (10 + 8192 / 4092) = ceil(1.75) = 2
Cost for Index Full Scan:
Cost = 0 + ceil(1 * 1 / 14) = ceil(0.0714) = 1
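Plugging the figures into Python confirms the final costs (the constants are taken verbatim from the slide's worked example, including the 4092 divisor as printed):

```python
import math

# Index Fast Full Scan: (blocks / mbrc) scaled by multiblock vs
# single-block read times, as printed on the slide.
ffs_cost = (8 / 16) * (10 + 16 * 8192 / 4092) / (10 + 8192 / 4092)

# Index Full Scan: leaf-block walk only.
full_scan_cost = 0 + math.ceil(1 * 1 / 14)
```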
22. Access Method
Index Fast Full Scan (building the example)
Another figure from an index with more leaf blocks (another anomalous configuration in the pctfree of the index)
24. Access Method
Index Full Scan
Oracle reads all rows from the index, and may be accessing these rows in the underlying table
Without table access
26. Access Method
Index Unique Scan
Oracle reads 0 or 1 rows from the index; available only on a unique index
Equality operator in the predicate (=); may be seen with AND, OR and IS NULL operators
28. Access Method
Index Range Scan
Oracle reads 0 or more contiguous rows from the index
Non-unique index with a range operator in the predicate (>, <, >=, <=)
29. Access Method
Index Range Scan (MIN/MAX)
Oracle identifies 0 or more contiguous rows in the index, but reads only one (the first or the last) in order to satisfy a MIN or MAX aggregate function
30. Access Method
Index Skip Scan
Oracle reads 0 or more rows from different parts of a composite index, and may be accessing these rows in the underlying table
A Skip Scan happens when we access a composite index through its second column in the index's column order; it does not happen for the third column, fourth, etc.
It works well only if the first column of the index has low cardinality (few distinct values); otherwise a full table scan is better in most cases
34. Partitioning
There are some considerations when working with a partitioned table; one of them is the access path. We usually introduce partitioning on a table to reduce the number of rows affected by a query, since we know that not all of the rows are used by the query.
There are 4 kinds of access method for a partitioned table: Partition Range Single, Partition Range Iterator, Partition Range All and Partition Range Sub-query
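A toy model of partition pruning (the partition layout and rows are invented) shows how the predicate range determines which of these access methods applies:

```python
# Range partitions keyed by their exclusive upper bound (invented data).
partitions = {
    100: [(5, 'a'), (42, 'b')],     # key < 100
    200: [(150, 'c')],              # 100 <= key < 200
    300: [(250, 'd'), (260, 'e')],  # 200 <= key < 300
}

def scan(lo, hi):
    """Visit only partitions whose range overlaps [lo, hi); return the
    matching rows and the upper bounds of partitions actually scanned."""
    rows, scanned, lower = [], [], 0
    for upper in sorted(partitions):
        if lo < upper and hi > lower:   # partition pruning test
            scanned.append(upper)
            rows += [r for r in partitions[upper] if lo <= r[0] < hi]
        lower = upper
    return rows, scanned

single = scan(120, 180)   # Partition Range Single: one partition visited
every  = scan(0, 300)     # Partition Range All: every partition visited
```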
36. Partitioning
Partition Range Single
Exactly one partition of the table is involved in the query. The access path within this partition depends on the query; it can be a Table Scan or any Index Scan
38. Partitioning
Partition Range All
In this access method, all partitions of the table are scanned. This is a bad example of table design (a partitioned table that takes no benefit from partitioning)
39. Partitioning
Partition Range Sub-query (building an example)
This method is new in 10g. If the partitioned table is big compared to the other join table and the expected number of result records is significantly smaller, Oracle performs dynamic partition pruning using a sub-query
In the example, the partitioned table has 200,000 blocks and the other join table only 200 blocks
42. Join Method
There are 3 join methods: Nested Loop (NL), Hash Join (HJ) and Sort Merge Join (SM)
Most of the time we see only the 'standard' join between 2 tables, but in rare cases we see Anti-Join and Semi-Join variations of all 3 methods. An Anti-Join appears when we work with a NOT IN clause, while a Semi-Join appears when we work with an EXISTS clause
43. Join Method
Nested Loop
Nested means an iteration. Pseudo-code for this kind of join looks like:
for x in (select [col] from outer_table) loop
  for y in (select [col] from inner_table
            where inner_table.join_col = outer_table.join_col) loop
    return the rows from the outer and inner table
  end loop;
end loop;
Suitable when the outer (driving) table is small. The inner table should be accessed using an index scan
Starting from 9i, Oracle introduced a new 'table prefetching' method which reduces logical I/O
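The pseudo-code can be sketched in Python, with a dict standing in for the index on the inner table's join column (table contents are invented):

```python
def nested_loop_join(outer_table, inner_index):
    """For each outer row, probe the inner table through its index."""
    result = []
    for x in outer_table:                             # driving table scan
        for y in inner_index.get(x['join_col'], []):  # indexed inner probe
            result.append((x, y))
    return result

emp  = [{'join_col': 10, 'name': 'KING'}, {'join_col': 99, 'name': 'GHOST'}]
dept = {10: [{'dname': 'ACCOUNTING'}], 20: [{'dname': 'RESEARCH'}]}
joined = nested_loop_join(emp, dept)   # only KING finds a match
```

The dict makes the cost model visible: each outer row costs one index probe, which is why a small driving table and an indexed inner table suit this method.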
48. Join Method
Hash Join
In this method, Oracle first chooses one dataset (the build table; this corresponds to the outer table of a Nested Loop) and creates a hash table in memory using a hash key generated from the join column. Once that completes, the second table (the probe table; the inner table of a Nested Loop) is scanned using the same hash function, probing the hash table
Applicable only to joins with an equality operator (=)
There are 3 levels of effectiveness: optimal, one-pass and multi-pass. Optimal when the build table fits within hash_area_size; one-pass or multi-pass when the table cannot be fully hashed in memory (which requires disk operations)
Event 10104 traces the Hash Join operation
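The build and probe phases can be sketched as follows (an optimal, fully in-memory join; the rows are invented):

```python
from collections import defaultdict

def hash_join(build_rows, probe_rows, key):
    """Build a hash table on the build input's join column, then scan
    the probe input and look each row up in the hash table."""
    table = defaultdict(list)
    for b in build_rows:                    # build phase
        table[b[key]].append(b)
    result = []
    for p in probe_rows:                    # probe phase
        for b in table.get(p[key], []):
            result.append((b, p))
    return result

depts = [{'deptno': 10}, {'deptno': 20}]
emps  = [{'deptno': 10, 'ename': 'KING'}, {'deptno': 10, 'ename': 'CLARK'}]
matches = hash_join(depts, emps, 'deptno')  # both employees match dept 10
```

Because the lookup is a hash probe, only equality predicates can be joined this way, matching the restriction on the slide.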
49. Join Method
Hash Join (cont.)
Check the hash_area_size and workarea_size_policy database parameters
Check v$sysstat for relevant system statistics:
SELECT name, value,
       CASE WHEN SUM(value) OVER () = 0 THEN 0
            ELSE ROUND(value * 100 / SUM(value) OVER (), 2)
       END AS pct
FROM   v$sysstat
WHERE  name LIKE 'workarea executions%'
56. Join Method
Sort Merge
There are 2 operations in this method: sort and merge. So it is applicable to any query which requires sorting (on the join column): ORDER BY clause, GROUP BY clause, set operations, DISTINCT operator, analytic functions, index creation, CONNECT BY queries, etc.
Similar to a Hash Join, there are 3 levels of effectiveness for the sort operation: optimal, one-pass and multi-pass. Optimal when sort_area_size is big enough to handle the sort operation; one-pass or multi-pass when Oracle requires disk operations for the sorting
Event 10032 traces the sort operation and event 10033 traces sort I/O
Check the sort_area_size and workarea_size_policy database parameters
Check v$sysstat for relevant system statistics and v$tempstat for sorting statistics
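The two phases can be sketched for scalar join keys (a minimal sketch assuming no duplicate keys within either input):

```python
def sort_merge_join(left, right):
    """Sort both inputs on the join key, then merge with two cursors."""
    left, right = sorted(left), sorted(right)   # sort phase
    i = j = 0
    matched = []
    while i < len(left) and j < len(right):     # merge phase
        if left[i] < right[j]:
            i += 1                  # advance whichever side is smaller
        elif left[i] > right[j]:
            j += 1
        else:
            matched.append(left[i]) # keys are equal: join the rows
            i += 1
            j += 1
    return matched

result = sort_merge_join([3, 1, 2], [2, 4, 3])
```

The merge itself is a single interleaved pass over both sorted inputs, which is why the sort is the expensive part that sort_area_size must accommodate.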
57. Join Method
Sort Merge (cont.)
The merging part can be one of the following possibilities:
64. Sub-query
There are 2 main types of sub-query: nested sub-query and correlated sub-query
A nested sub-query is one where the sub-query (inner query) must be completed first, and its result is then passed to the main query
A correlated sub-query is one where the main query must be executed first in order to execute the inner query (the inner query runs once per candidate row)
In some cases we can rewrite a sub-query into join form for a performance improvement
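The difference in evaluation order can be illustrated in Python (table contents invented): a nested sub-query materializes the inner result once, while a correlated sub-query probes the inner table once per outer row.

```python
orders = [{'id': 1, 'cust': 'A'}, {'id': 2, 'cust': 'B'}, {'id': 3, 'cust': 'A'}]
vip    = [{'cust': 'A'}]

# Nested sub-query: inner query completes first, result feeds the main query.
vip_set = {v['cust'] for v in vip}
nested = [o for o in orders if o['cust'] in vip_set]

# Correlated sub-query: inner query re-evaluated for each main-query row.
correlated = [o for o in orders
              if any(v['cust'] == o['cust'] for v in vip)]

# Same rows either way; they differ in how often the inner query runs,
# which is why rewriting to a join can improve performance.
```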