The document discusses algorithms for heap data structures and their applications. It begins with an introduction to heaps and their representation as complete binary trees or arrays. It then covers the heap operations of MaxHeapify, building a max heap, and heapsort. Heapsort runs in O(n log n) time by using a max heap to iteratively find and remove the maximum element. The document concludes by discussing how heaps can be used to implement priority queues, with common operations like insert, extract maximum, and increase key running in O(log n) time.
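The slides that follow are truncated before the MaxHeapify and heapsort material the summary mentions, so here is a compact sketch in C of the two routines being referred to (0-based indexing here, unlike the 1-based slides; treat it as an illustration under those assumptions, not the deck's own code):

  /* Sift A[i] down within A[0..n-1] until the max-heap property holds */
  static void max_heapify(int A[], int n, int i) {
      for (;;) {
          int l = 2*i + 1, r = 2*i + 2, largest = i;
          if (l < n && A[l] > A[largest]) largest = l;
          if (r < n && A[r] > A[largest]) largest = r;
          if (largest == i) return;
          int t = A[i]; A[i] = A[largest]; A[largest] = t;
          i = largest;                 /* continue sifting down */
      }
  }

  /* Build a max-heap in O(n), then repeatedly move the max to the end */
  static void heapsort(int A[], int n) {
      for (int i = n/2 - 1; i >= 0; i--)
          max_heapify(A, n, i);        /* build phase */
      for (int i = n - 1; i > 0; i--) {
          int t = A[0]; A[0] = A[i]; A[i] = t;
          max_heapify(A, i, 0);        /* restore heap on A[0..i-1] */
      }
  }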
Conceptual Dependency (CD) is a theory that uses a set of primitive symbols to represent complicated knowledge and solve problems by graphically presenting high-level concepts. Various primitives used in CD include actions like transferring objects or information, moving body parts, grasping objects, ingesting objects, and speaking. An example CD representation shows "John ate the egg" using symbols like INGEST to represent the action of eating.
The document discusses code optimization techniques in compilers. It covers the following key points:
1. Code optimization aims to improve code performance by replacing high-level constructs with more efficient low-level code while preserving program semantics. It occurs at various compiler phases like source code, intermediate code, and target code.
2. Common optimization techniques include constant folding, propagation, algebraic simplification, strength reduction, copy propagation, and dead code elimination. Control and data flow analysis are required to perform many optimizations.
3. Optimizations can be local within basic blocks, global across blocks, or inter-procedural across procedures. Representations like flow graphs, basic blocks, and DAGs are used to apply optimizations at
This presentation discusses peephole optimization. Peephole optimization is performed on small segments of generated code to replace sets of instructions with shorter or faster equivalents. It aims to improve performance, reduce code size, and reduce memory footprint. The working flow of peephole optimization involves scanning code for patterns that match predefined replacement rules. These rules include constant folding, strength reduction, removing null sequences, and combining operations. Peephole optimization functions by replacing slow instructions with faster ones, removing redundant code and stack instructions, and optimizing jumps.
a. Concept and Definition✓
b. Inserting and Deleting nodes ✓
c. Linked implementation of a stack (PUSH/POP) ✓
d. Linked implementation of a queue (Insert/Remove) ✓
e. Circular List
• Stack as a circular list (PUSH/POP) ✓
• Queue as a circular list (Insert/Remove) ✓
f. Doubly Linked List (Insert/Remove) ✓
For more course related material:
https://github.com/ashim888/dataStructureAndAlgorithm/
Personal blog
www.ashimlamichhane.com.np
The purpose of types:
To define what the program should do.
e.g. read an array of integers and return a double
To guarantee that the program is meaningful.
that it does not add a string to an integer
that variables are declared before they are used
To document the programmer's intentions.
better than comments, which are not checked by the compiler
To optimize the use of hardware.
reserve the minimal amount of memory, but not more
use the most appropriate machine instructions.
Recursive descent parsing is a top-down parsing method that uses a set of recursive procedures associated with each nonterminal of a grammar to process input and construct a parse tree. It attempts to find a leftmost derivation for an input string by creating nodes of the parse tree in preorder starting from the root. While simple to implement, recursive descent parsing involves backtracking and is not as fast as other methods, with limitations on error reporting and lookahead. However, it can be constructed easily from recognizers by building a parse tree.
The document discusses divide and conquer algorithms. It describes divide and conquer as a design strategy that involves dividing a problem into smaller subproblems, solving the subproblems recursively, and combining the solutions. It provides examples of divide and conquer algorithms like merge sort, quicksort, and binary search. Merge sort works by recursively sorting halves of an array until it is fully sorted. Quicksort selects a pivot element and partitions the array into subarrays of smaller and larger elements, recursively sorting the subarrays. Binary search recursively searches half-intervals of a sorted array to find a target value.
Conceptual Dependency (CD) is a theory that uses a set of primitive symbols to represent complicated knowledge and solve problems by graphically presenting high-level concepts. Various primitives used in CD include actions like transferring objects or information, moving body parts, grasping objects, ingesting objects, and speaking. An example CD representation shows "John ate the egg" using symbols like INGEST to represent the action of eating.
The document discusses code optimization techniques in compilers. It covers the following key points:
1. Code optimization aims to improve code performance by replacing high-level constructs with more efficient low-level code while preserving program semantics. It occurs at various compiler phases like source code, intermediate code, and target code.
2. Common optimization techniques include constant folding, propagation, algebraic simplification, strength reduction, copy propagation, and dead code elimination. Control and data flow analysis are required to perform many optimizations.
3. Optimizations can be local within basic blocks, global across blocks, or inter-procedural across procedures. Representations like flow graphs, basic blocks, and DAGs are used to apply optimizations at
This presentation discusses peephole optimization. Peephole optimization is performed on small segments of generated code to replace sets of instructions with shorter or faster equivalents. It aims to improve performance, reduce code size, and reduce memory footprint. The working flow of peephole optimization involves scanning code for patterns that match predefined replacement rules. These rules include constant folding, strength reduction, removing null sequences, and combining operations. Peephole optimization functions by replacing slow instructions with faster ones, removing redundant code and stack instructions, and optimizing jumps.
a. Concept and Definition✓
b. Inserting and Deleting nodes ✓
c. Linked implementation of a stack (PUSH/POP) ✓
d. Linked implementation of a queue (Insert/Remove) ✓
e. Circular List
• Stack as a circular list (PUSH/POP) ✓
• Queue as a circular list (Insert/Remove) ✓
f. Doubly Linked List (Insert/Remove) ✓
For more course related material:
https://github.com/ashim888/dataStructureAndAlgorithm/
Personal blog
www.ashimlamichhane.com.np
The purpose of types:
To define what the program should do.
e.g. read an array of integers and return a double
To guarantee that the program is meaningful.
that it does not add a string to an integer
that variables are declared before they are used
To document the programmer's intentions.
better than comments, which are not checked by the compiler
To optimize the use of hardware.
reserve the minimal amount of memory, but not more
use the most appropriate machine instructions.
Recursive descent parsing is a top-down parsing method that uses a set of recursive procedures associated with each nonterminal of a grammar to process input and construct a parse tree. It attempts to find a leftmost derivation for an input string by creating nodes of the parse tree in preorder starting from the root. While simple to implement, recursive descent parsing involves backtracking and is not as fast as other methods, with limitations on error reporting and lookahead. However, it can be constructed easily from recognizers by building a parse tree.
The document discusses divide and conquer algorithms. It describes divide and conquer as a design strategy that involves dividing a problem into smaller subproblems, solving the subproblems recursively, and combining the solutions. It provides examples of divide and conquer algorithms like merge sort, quicksort, and binary search. Merge sort works by recursively sorting halves of an array until it is fully sorted. Quicksort selects a pivot element and partitions the array into subarrays of smaller and larger elements, recursively sorting the subarrays. Binary search recursively searches half-intervals of a sorted array to find a target value.
This document discusses string matching algorithms. It begins with an introduction to the naive string matching algorithm and its quadratic runtime. Then it proposes three improved algorithms: FC-RJ, FLC-RJ, and FMLC-RJ, which attempt to match patterns by restricting comparisons based on the first, first and last, or first, middle, and last characters, respectively. Experimental results show that these three proposed algorithms outperform the naive algorithm by reducing execution time, with FMLC-RJ working best for three-character patterns.
The document discusses lexical analysis and how it relates to parsing in compilers. It introduces basic terminology like tokens, patterns, lexemes, and attributes. It describes how a lexical analyzer works by scanning input, identifying tokens, and sending tokens to a parser. Regular expressions are used to specify patterns for token recognition. Finite automata like nondeterministic and deterministic finite automata are constructed from regular expressions to recognize tokens.
The document discusses the divide and conquer algorithm design technique. It begins by explaining the basic approach of divide and conquer which is to (1) divide the problem into subproblems, (2) conquer the subproblems by solving them recursively, and (3) combine the solutions to the subproblems into a solution for the original problem. It then provides merge sort as a specific example of a divide and conquer algorithm for sorting a sequence. It explains that merge sort divides the sequence in half recursively until individual elements remain, then combines the sorted halves back together to produce the fully sorted sequence.
Curve clipping involves using polygon clipping to test if a curved object's bounding rectangle overlaps a clipping window. If there is no overlap, the object is discarded. If there is overlap, the simultaneous curve and boundary equations are solved to find intersection points. Special cases like circles are considered, such as discarding a circle if its center is outside the clipping window plus/minus the radius. Bezier and spline curves can also be clipped by approximating them as polylines or using their convex hull properties.
The document discusses solving the 8 queens problem using backtracking. It begins by explaining backtracking as an algorithm that builds partial candidates for solutions incrementally and abandons any partial candidate that cannot be completed to a valid solution. It then provides more details on the 8 queens problem itself - the goal is to place 8 queens on a chessboard so that no two queens attack each other. Backtracking is well-suited for solving this problem by attempting to place queens one by one and backtracking when an invalid placement is found.
Slice Based testing and Object Oriented Testingvarsha sharma
This document discusses slice based testing and object oriented testing. It defines a program slice as a subset of a program. It describes the origins of static and dynamic slicing techniques. Static slicing uses static program information while dynamic slicing uses execution traces. The document also discusses different levels of object oriented testing like class/unit testing, interclass/integration testing, and system testing. It provides details on the class testing process and developing test cases for classes. Finally, it outlines some issues in object oriented testing like additional techniques needed to test dependencies and difficulties testing states and polymorphism.
The document discusses sorting algorithms. It begins by defining sorting as arranging data in logical order based on a key. It then discusses internal and external sorting methods. For internal sorting, all data fits in memory, while external sorting handles data too large for memory. The document covers stability, efficiency, and time complexity of various sorting algorithms like bubble sort, selection sort, insertion sort, and merge sort. Merge sort uses a divide-and-conquer approach to sort arrays with a time complexity of O(n log n).
BCA DATA STRUCTURES LINEAR ARRAYS MRS.SOWMYA JYOTHISowmya Jyothi
1) The document discusses linear arrays and their representation in memory. A linear array stores elements in consecutive memory locations.
2) The number of elements in an array is given by the length formula: Length = Upper Bound - Lower Bound + 1. The address of each element can be calculated using the base address of the array.
3) Common array operations like traversing, inserting, and deleting elements are discussed along with algorithms to perform each operation. Representing polynomials and sparse matrices using arrays is also covered.
The document discusses operator precedence parsing, which is a bottom-up parsing technique for operator grammars. It describes operator precedence grammars as grammars where no RHS has a non-terminal and no two non-terminals are adjacent. An operator precedence parser uses a parsing table to shift or reduce based on the precedence relations between terminals. It provides an example of constructing a precedence parsing table and parsing a string using the operator precedence parsing algorithm.
The document discusses the FIRST and FOLLOW sets used in compiler construction for predictive parsing. FIRST(X) is the set of terminals that can begin strings derived from X. FOLLOW(A) is the set of terminals that can immediately follow A. Rules are provided to compute the FIRST and FOLLOW sets for a grammar. Examples demonstrate applying the rules to sample grammars and presenting the resulting FIRST and FOLLOW sets.
PPT on Analysis Of Algorithms.
The ppt includes Algorithms,notations,analysis,analysis of algorithms,theta notation, big oh notation, omega notation, notation graphs
Flow-oriented modeling represents how data objects are transformed as they move through a system. A data flow diagram (DFD) is the diagrammatic form used to depict this approach. DFDs show the flow of data through processes and external entities of a system using symbols like circles and arrows. They provide a unique view of how a system works by modeling the input, output, storage and processing of data from level to level.
Token, Pattern and Lexeme defines some key concepts in lexical analysis:
Tokens are valid sequences of characters that can be identified as keywords, constants, identifiers, numbers, operators or punctuation. A lexeme is the sequence of characters that matches a token pattern. Patterns are defined by regular expressions or grammar rules to identify lexemes as specific tokens. The lexical analyzer collects attributes like values for number tokens and symbol table entries for identifiers and passes the tokens and attributes to the parser. Lexical errors occur if a character sequence cannot be scanned as a valid token. Error recovery strategies include deleting or inserting characters to allow tokenization to continue.
Greedy algorithms work by making locally optimal choices at each step to arrive at a global optimal solution. They require that the problem exhibits the greedy choice property and optimal substructure. Examples that can be solved with greedy algorithms include fractional knapsack problem, minimum spanning tree, and activity selection. The fractional knapsack problem is solved greedily by sorting items by value/weight ratio and filling the knapsack completely. The 0/1 knapsack problem differs in that items are indivisible.
This document discusses various techniques for optimizing computer code, including:
1. Local optimizations that improve performance within basic blocks, such as constant folding, propagation, and elimination of redundant computations.
2. Global optimizations that analyze control flow across basic blocks, such as common subexpression elimination.
3. Loop optimizations that improve performance of loops by removing invariant data and induction variables.
4. Machine-dependent optimizations like peephole optimizations that replace instructions with more efficient alternatives.
The goal of optimizations is to improve speed and efficiency while preserving program meaning and correctness. Optimizations can occur at multiple stages of development and compilation.
Search techniques in ai, Uninformed : namely Breadth First Search and Depth First Search, Informed Search strategies : A*, Best first Search and Constraint Satisfaction Problem: criptarithmatic
Algorithm Design and Complexity - Course 4 - Heaps and Dynamic ProgammingTraian Rebedea
Course 4 for the Algorithm Design and Complexity course at the Faculty of Engineering in Foreign Languages - Politehnica University of Bucharest, Romania
A heap is a binary tree data structure where the highest (or lowest) priority element is always stored at the root. Heaps are often implemented using arrays for efficiency. This document discusses max heaps, which are heaps where the highest priority elements have the greatest values. Max heaps support two main operations - extracting the maximum value from the root and inserting new elements. These operations maintain the max heap property where each parent node is greater than or equal to its children. The document provides examples of building a max heap from an unsorted array in O(n) time using a heapify procedure, and analyzes the worst-case time complexity of building a heap as O(n).
This document discusses string matching algorithms. It begins with an introduction to the naive string matching algorithm and its quadratic runtime. Then it proposes three improved algorithms: FC-RJ, FLC-RJ, and FMLC-RJ, which attempt to match patterns by restricting comparisons based on the first, first and last, or first, middle, and last characters, respectively. Experimental results show that these three proposed algorithms outperform the naive algorithm by reducing execution time, with FMLC-RJ working best for three-character patterns.
The document discusses lexical analysis and how it relates to parsing in compilers. It introduces basic terminology like tokens, patterns, lexemes, and attributes. It describes how a lexical analyzer works by scanning input, identifying tokens, and sending tokens to a parser. Regular expressions are used to specify patterns for token recognition. Finite automata like nondeterministic and deterministic finite automata are constructed from regular expressions to recognize tokens.
The document discusses the divide and conquer algorithm design technique. It begins by explaining the basic approach of divide and conquer which is to (1) divide the problem into subproblems, (2) conquer the subproblems by solving them recursively, and (3) combine the solutions to the subproblems into a solution for the original problem. It then provides merge sort as a specific example of a divide and conquer algorithm for sorting a sequence. It explains that merge sort divides the sequence in half recursively until individual elements remain, then combines the sorted halves back together to produce the fully sorted sequence.
Curve clipping involves using polygon clipping to test if a curved object's bounding rectangle overlaps a clipping window. If there is no overlap, the object is discarded. If there is overlap, the simultaneous curve and boundary equations are solved to find intersection points. Special cases like circles are considered, such as discarding a circle if its center is outside the clipping window plus/minus the radius. Bezier and spline curves can also be clipped by approximating them as polylines or using their convex hull properties.
The document discusses solving the 8 queens problem using backtracking. It begins by explaining backtracking as an algorithm that builds partial candidates for solutions incrementally and abandons any partial candidate that cannot be completed to a valid solution. It then provides more details on the 8 queens problem itself - the goal is to place 8 queens on a chessboard so that no two queens attack each other. Backtracking is well-suited for solving this problem by attempting to place queens one by one and backtracking when an invalid placement is found.
Slice Based testing and Object Oriented Testingvarsha sharma
This document discusses slice based testing and object oriented testing. It defines a program slice as a subset of a program. It describes the origins of static and dynamic slicing techniques. Static slicing uses static program information while dynamic slicing uses execution traces. The document also discusses different levels of object oriented testing like class/unit testing, interclass/integration testing, and system testing. It provides details on the class testing process and developing test cases for classes. Finally, it outlines some issues in object oriented testing like additional techniques needed to test dependencies and difficulties testing states and polymorphism.
The document discusses sorting algorithms. It begins by defining sorting as arranging data in logical order based on a key. It then discusses internal and external sorting methods. For internal sorting, all data fits in memory, while external sorting handles data too large for memory. The document covers stability, efficiency, and time complexity of various sorting algorithms like bubble sort, selection sort, insertion sort, and merge sort. Merge sort uses a divide-and-conquer approach to sort arrays with a time complexity of O(n log n).
BCA DATA STRUCTURES LINEAR ARRAYS MRS.SOWMYA JYOTHISowmya Jyothi
1) The document discusses linear arrays and their representation in memory. A linear array stores elements in consecutive memory locations.
2) The number of elements in an array is given by the length formula: Length = Upper Bound - Lower Bound + 1. The address of each element can be calculated using the base address of the array.
3) Common array operations like traversing, inserting, and deleting elements are discussed along with algorithms to perform each operation. Representing polynomials and sparse matrices using arrays is also covered.
The document discusses operator precedence parsing, which is a bottom-up parsing technique for operator grammars. It describes operator precedence grammars as grammars where no RHS has a non-terminal and no two non-terminals are adjacent. An operator precedence parser uses a parsing table to shift or reduce based on the precedence relations between terminals. It provides an example of constructing a precedence parsing table and parsing a string using the operator precedence parsing algorithm.
The document discusses the FIRST and FOLLOW sets used in compiler construction for predictive parsing. FIRST(X) is the set of terminals that can begin strings derived from X. FOLLOW(A) is the set of terminals that can immediately follow A. Rules are provided to compute the FIRST and FOLLOW sets for a grammar. Examples demonstrate applying the rules to sample grammars and presenting the resulting FIRST and FOLLOW sets.
PPT on Analysis Of Algorithms.
The ppt includes Algorithms,notations,analysis,analysis of algorithms,theta notation, big oh notation, omega notation, notation graphs
Flow-oriented modeling represents how data objects are transformed as they move through a system. A data flow diagram (DFD) is the diagrammatic form used to depict this approach. DFDs show the flow of data through processes and external entities of a system using symbols like circles and arrows. They provide a unique view of how a system works by modeling the input, output, storage and processing of data from level to level.
Token, Pattern and Lexeme defines some key concepts in lexical analysis:
Tokens are valid sequences of characters that can be identified as keywords, constants, identifiers, numbers, operators or punctuation. A lexeme is the sequence of characters that matches a token pattern. Patterns are defined by regular expressions or grammar rules to identify lexemes as specific tokens. The lexical analyzer collects attributes like values for number tokens and symbol table entries for identifiers and passes the tokens and attributes to the parser. Lexical errors occur if a character sequence cannot be scanned as a valid token. Error recovery strategies include deleting or inserting characters to allow tokenization to continue.
Greedy algorithms work by making locally optimal choices at each step to arrive at a global optimal solution. They require that the problem exhibits the greedy choice property and optimal substructure. Examples that can be solved with greedy algorithms include fractional knapsack problem, minimum spanning tree, and activity selection. The fractional knapsack problem is solved greedily by sorting items by value/weight ratio and filling the knapsack completely. The 0/1 knapsack problem differs in that items are indivisible.
This document discusses various techniques for optimizing computer code, including:
1. Local optimizations that improve performance within basic blocks, such as constant folding, propagation, and elimination of redundant computations.
2. Global optimizations that analyze control flow across basic blocks, such as common subexpression elimination.
3. Loop optimizations that improve performance of loops by removing invariant data and induction variables.
4. Machine-dependent optimizations like peephole optimizations that replace instructions with more efficient alternatives.
The goal of optimizations is to improve speed and efficiency while preserving program meaning and correctness. Optimizations can occur at multiple stages of development and compilation.
Search techniques in ai, Uninformed : namely Breadth First Search and Depth First Search, Informed Search strategies : A*, Best first Search and Constraint Satisfaction Problem: criptarithmatic
Algorithm Design and Complexity - Course 4 - Heaps and Dynamic ProgammingTraian Rebedea
Course 4 for the Algorithm Design and Complexity course at the Faculty of Engineering in Foreign Languages - Politehnica University of Bucharest, Romania
A heap is a binary tree data structure where the highest (or lowest) priority element is always stored at the root. Heaps are often implemented using arrays for efficiency. This document discusses max heaps, which are heaps where the highest priority elements have the greatest values. Max heaps support two main operations - extracting the maximum value from the root and inserting new elements. These operations maintain the max heap property where each parent node is greater than or equal to its children. The document provides examples of building a max heap from an unsorted array in O(n) time using a heapify procedure, and analyzes the worst-case time complexity of building a heap as O(n).
The document discusses heap data structures and algorithms. A heap is a binary tree that satisfies the heap property of a parent being greater than or equal to its children. Common operations on heaps like building
This document provides information about priority queues and binary heaps. It defines a binary heap as a nearly complete binary tree where the root node has the maximum/minimum value. It describes heap operations like insertion, deletion of max/min, and increasing/decreasing keys. The time complexity of these operations is O(log n). Heapsort, which uses a heap data structure, is also covered and has overall time complexity of O(n log n). Binary heaps are often used to implement priority queues and for algorithms like Dijkstra's and Prim's.
Heaps are nearly complete binary trees stored as arrays that satisfy the heap property: for max-heaps a parent node is greater than or equal to its children, and for min-heaps a parent is less than or equal to its children. Heap operations like insert, extract max/min, and increase/decrease key take O(log n) time due to the heap's height. The heapsort algorithm uses a max-heap to sort an array in O(n log n) time by repeatedly extracting and placing the max element in its final position. Priority queues implemented with heaps support insertion, maximum/minimum extraction, and key increases/decreases in O(log n) time.
The document discusses algorithms including heapsort and quicksort. It provides explanations of heap data structures and how they can be used to implement priority queues. It then describes how priority queues can be used to simulate event-driven systems like collisions in a combat billiards simulation. Quicksort is also summarized, including the key partition step that divides the array into two subarrays.
This document describes the heap sort algorithm. It begins by explaining heaps and their properties like the heap property. It then describes key heap operations like Max-Heapify, Build-Max-Heap, finding the maximum element, extract max, increase key, and insert. These operations allow heaps to function as priority queues. Heap sort works by building a max heap from the input array using Build-Max-Heap, then repeatedly extracting the maximum element and putting it in its sorted position. This runs in O(n log n) time, combining advantages of insertion and merge sort by sorting in-place like insertion sort but with a faster running time of merge sort.
: A heap is a nearly complete binary tree with the following two properties:
Structural property: all levels are full, except possibly the last one, which is filled from left to right
Order (heap) property: for any node x
Parent(x) ≥ x
The document discusses heapsort, an efficient sorting algorithm that uses a heap data structure. Heapsort first builds a max heap from the input array in O(n) time using a heapify procedure. It then extracts elements from the heap into the sorted output array, maintaining the heap property, resulting in an overall time complexity of O(n log n).
This document discusses the heap data structure and heapsort algorithm. It defines a heap as a nearly complete binary tree that satisfies the heap property, where every parent node is greater than or equal to its children. It describes how to represent a heap using an array and the key operations of building a max-heap from an array, inserting and deleting nodes, and maintaining the heap property. Heapsort works by building a max-heap from the input array and then extracting elements in sorted order.
Heap sort is based on the priority queue data structure implemented using a heap. Specifically, heap sort uses a max-heap or min-heap to sort elements in ascending or descending order respectively.
The key steps are:
1) Build a max-heap from the input array in O(n) time
2) Repeatedly remove the max/min element from the heap and put it in its sorted position in the array. This takes O(nlogn) time.
So the overall time complexity of heap sort is O(nlogn). It is an in-place sorting algorithm that does not require additional storage.
This document introduces the heapsort algorithm. Heapsort uses a heap data structure to sort an array in O(n log n) time. It builds a max-heap from the input array in O(n) time using the BUILD-MAX-HEAP procedure. It then repeatedly extracts the maximum element from the heap and inserts it into the sorted portion of the array, maintaining the heap property using MAX-HEAPIFY. This summarizes the key steps and runtime of heapsort.
This document provides an overview of heaps and heap sort. It defines a heap as a complete binary tree where each node is greater than or equal to its children (for a max heap) or less than or equal (for a min heap). The key operations covered are insertion into a heap, which takes O(log n) time, deletion of the root element, which takes O(log n) time, and heap sort, which uses deletion from a heap to sort elements in O(n log n) time. Worked examples are provided for building max and min heaps from lists of numbers.
This document provides an overview of several common data structures: stacks, queues, heaps, priority queues, union-find structures, binary search trees, Fenwick trees, and lowest common ancestor trees. For each data structure, it describes their key properties and operations and provides examples of how to implement them to achieve efficient runtimes, often in O(log n) time. It also notes the related standard library implementations available in C++ and Java.
This document describes heap data structures and algorithms like heap sort. It defines a max heap and min heap. It explains the build heap, heapify, insertion and deletion algorithms. Build heap transforms an array into a max heap by applying heapify to each node from bottom to top. Heapify maintains the heap property when a node is added or removed. Heap sort works by building a max heap from the input array and then extracting elements from the root to sort the array in descending order.
This document discusses various data structures used for priority queues, including binary heaps and binomial heaps. It provides details on implementing priority queue operations like insert, delete, and change key on these structures. Key points covered include:
- Binary heaps allow efficient priority queue operations like insert, delete, and change key in O(log n) time and are commonly used in programming competitions.
- Binomial heaps also support priority queue operations in O(log n) time but have more complex union operations that take O(log n) time.
- Dijkstra's algorithm can be implemented using a binary heap to efficiently find the shortest path between nodes.
The document discusses various data structures used to implement priority queues, including binary heaps and binomial heaps. It describes how each structure can be implemented using an array and the time complexities of common operations like insertion, deletion, finding the minimum element, etc. It also provides an example of how binary heaps can be used to implement Dijkstra's algorithm for finding the shortest paths from a single source vertex in a graph.
Heapsort is a sorting algorithm that uses a heap data structure. It first transforms the input array into a max-heap, where the largest element is at the root, in O(n) time. It then repeatedly swaps the root element with the last element, reducing the size of the heap by 1, and sifts the new root element down to maintain the heap property. This process takes O(n log n) time overall, making heapsort an efficient sorting algorithm with O(n log n) time complexity in all cases.
The document discusses heap data structures and their implementation using arrays. It defines a heap as a complete binary tree that satisfies the heap property - a node's value is greater than or equal to its children's values. Heaps can be implemented using arrays by numbering nodes from top to bottom and storing each node at its number index. Common heap operations like building a heap from an array, inserting/extracting elements, and deleting elements are described along with their time complexities of O(n) or O(log n).
This document outlines the syllabus for an MTCSCS302 course on Soft Computing taught by Dr. Sandeep Kumar Poonia. The course covers topics including neural networks, fuzzy logic, probabilistic reasoning, and genetic algorithms. It is divided into five units: (1) neural networks, (2) fuzzy logic, (3) fuzzy arithmetic and logic, (4) neuro-fuzzy systems and applications of fuzzy logic, and (5) genetic algorithms and their applications. The goal of the course is to provide students with knowledge of soft computing fundamentals and approaches for solving complex real-world problems.
Artificial Bee Colony (ABC) is a swarm
optimization technique. This algorithm generally used to solve
nonlinear and complex problems. ABC is one of the simplest
and up to date population based probabilistic strategy for
global optimization. Analogous to other population based
algorithms, ABC also has some drawbacks computationally
pricey due to its sluggish temperament of search procedure.
The solution search equation of ABC is notably motivated by a
haphazard quantity which facilitates in exploration at the cost
of exploitation of the search space. Due to the large step size in
the solution search equation of ABC there are chances of
skipping the factual solution are higher. For that reason, this
paper introduces a new search strategy in order to balance the
diversity and convergence capability of the ABC. Both
employed bee phase and onlooker bee phase are improved
with help of a local search strategy stimulated by memetic
algorithm. This paper also proposes a new strategy for fitness
calculation and probability calculation. The proposed
algorithm is named as Improved Memetic Search in ABC
(IMeABC). It is tested over 13 impartial benchmark functions
of different complexities and two real word problems are also
considered to prove proposed algorithms superiority over
original ABC algorithm and its recent variants
Spider Monkey optimization (SMO) algorithm is newest addition in class of swarm intelligence. SMO is a population based stochastic meta-heuristic. It is motivated by intelligent foraging behaviour of fission fusion structured social creatures. SMO is a very good option for complex optimization problems. This paper proposed a modified strategy in order to enhance performance of original SMO. This paper introduces a position update strategy in SMO and modifies both local leader and global leader phase. The proposed strategy is named as Modified Position Update in Spider Monkey Optimization (MPU-SMO) algorithm. The proposed algorithm tested over benchmark problems and results show that it gives better results for considered unbiased problems.
Artificial Bee Colony (ABC) algorithm is a Nature Inspired Algorithm (NIA) which based in intelligent food foraging behaviour of honey bee swarm. ABC outperformed over other NIAs and other local search heuristics when tested for benchmark functions as well as factual world problems but occasionally it shows premature convergence and stagnation due to lack of balance between exploration and exploitation. This paper establishes a local search mechanism that enhances exploration capability of ABC and avoids the dilemma of stagnation. With help of recently introduces local search strategy it tries to balance intensification and diversification of search space. The anticipated algorithm named as Enhanced local search in ABC (EnABC) and tested over eleven benchmark functions. Results are evidence for its dominance over other competitive algorithms.
The document discusses a proposed Randomized Memetic Artificial Bee Colony (RMABC) algorithm for optimization problems. RMABC incorporates local search techniques into the Artificial Bee Colony algorithm to improve exploitation of promising solutions. It randomizes the step size in the local search to balance diversification and intensification. Experimental results on benchmark problems show RMABC outperforms other ABC algorithm variants in finding optimal solutions. The document provides background on optimization problems, nature-inspired algorithms, Artificial Bee Colony algorithm, and Memetic algorithms.
Differential Evolution (DE) is a renowned optimization stratagem that can easily solve nonlinear and comprehensive problems. DE is a well known and uncomplicated population based probabilistic approach for comprehensive optimization. It has apparently outperformed a number of Evolutionary Algorithms and further search heuristics in the vein of Particle Swarm Optimization at what time of testing over both yardstick and actual world problems. Nevertheless, DE, like other probabilistic optimization algorithms, from time to time exhibits precipitate convergence and stagnates at suboptimal position. In order to stay away from stagnation behavior while maintaining an excellent convergence speed, an innovative search strategy is introduced, named memetic search in DE. In the planned strategy, positions update equation customized as per a memetic search stratagem. In this strategy a better solution participates more times in the position modernize procedure. The position update equation is inspired from the memetic search in artificial bee colony algorithm. The proposed strategy is named as Memetic Search in Differential Evolution (MSDE). To prove efficiency and efficacy of MSDE, it is tested over 8 benchmark optimization problems and three real world optimization problems. A comparative analysis has also been carried out among proposed MSDE and original DE. Results show that the anticipated algorithm go one better than the basic DE and its recent deviations in a good number of the experiments.
Artificial Bee Colony (ABC) is a distinguished optimization strategy that can resolve nonlinear and multifaceted problems. It is comparatively a straightforward and modern population based probabilistic approach for comprehensive optimization. In the vein of the other population based algorithms, ABC is moreover computationally classy due to its slow nature of search procedure. The solution exploration equation of ABC is extensively influenced by a arbitrary quantity which helps in exploration at the cost of exploitation of the better search space. In the solution exploration equation of ABC due to the outsized step size the chance of skipping the factual solution is high. Therefore, here this paper improve onlooker bee phase with help of a local search strategy inspired by memetic algorithm to balance the diversity and convergence capability of the ABC. The proposed algorithm is named as Improved Onlooker Bee Phase in ABC (IoABC). It is tested over 12 well known un-biased test problems of diverse complexities and two engineering optimization problems; results show that the anticipated algorithm go one better than the basic ABC and its recent deviations in a good number of the experiments.
Artificial bee colony (ABC) algorithm is a well known and one of the latest swarm intelligence based techniques. This method is a population based meta-heuristic algorithm used for numerical optimization. It is based on the intelligent behavior of honey bees. Artificial Bee Colony algorithm is one of the most popular techniques that are used in optimization problems. Artificial Bee Colony algorithm has some major advantages over other heuristic methods. To utilize its good feature a number of researchers combined ABC algorithm with other methods, and generate some new hybrid methods. This paper provides comparative analysis of hybrid differential Artificial Bee Colony algorithm with hybrid ABC – SPSO, Genetic algorithm and Independent rough set approach based on some parameters like technique, dimension, methodology etc. KEYWORDS
Artificial bee colony (ABC) algorithm has proved its importance in solving a number of problems including engineering optimization problems. ABC algorithm is one of the most popular and youngest member of the family of population based nature inspired meta-heuristic swarm intelligence method. ABC has been proved its superiority over some other Nature Inspired Algorithms (NIA) when applied for both benchmark functions and real world problems. The performance of search process of ABC depends on a random value which tries to balance exploration and exploitation phase. In order to increase the performance it is required to balance the exploration of search space and exploitation of optimal solution of the ABC. This paper outlines a new hybrid of ABC algorithm with Genetic Algorithm. The proposed method integrates crossover operation from Genetic Algorithm (GA) with original ABC algorithm. The proposed method is named as Crossover based ABC (CbABC). The CbABC strengthens the exploitation phase of ABC as crossover enhances exploration of search space. The CbABC tested over four standard benchmark functions and a popular continuous optimization problem.
Multiplication of two 3 d sparse matrices using 1d arrays and linked listsDr Sandeep Kumar Poonia
A basic algorithm of 3D sparse matrix multiplication (BASMM) is presented using one dimensional (1D) arrays which is used further for multiplying two 3D sparse matrices using Linked Lists. In this algorithm, a general concept is derived in which we enter non- zeros elements in 1st and 2nd sparse matrices (3D) but store that values in 1D arrays and linked lists so that zeros could be removed or ignored to store in memory. The positions of that non-zero value are also stored in memory like row and column position. In this way space complexity is decreased. There are two ways to store the sparse matrix in memory. First is row major order and another is column major order. But, in this algorithm, row major order is used. Now multiplying those two matrices with the help of BASMM algorithm, time complexity also decreased. For the implementation of this, simple c programming and concepts of data structures are used which are very easy to understand for everyone.
This document summarizes a tool called Sunzip that uses the Huffman algorithm for data compression. It discusses how Huffman encoding works by assigning shorter bit codes to more common symbols to reduce file size. The tool analyzes files to determine symbol frequencies and builds a Huffman tree to assign variable-length codes. It allows compressing different data types like text, images, audio and video. Adaptive Huffman coding is also described, which dynamically updates the code tree as more data is processed. Benefits of Huffman compression include being fast, simple to implement and achieving close to optimal compression. Sample screenshots of the Sunzip tool are also provided showing file details before and after compression.
Artificial Bee Colony (ABC) algorithm is a Nature
Inspired Algorithm (NIA) which based on intelligent food
foraging behaviour of honey bee swarm. This paper introduces
a local search strategy that enhances exploration competence
of ABC and avoids the problem of stagnation. The proposed
strategy introduces two new local search phases in original
ABC. One just after onlooker bee phase and one after scout
bee phase. The newly introduced phases are inspired by
modified Golden Section Search (GSS) strategy. The proposed
strategy named as new local search strategy in ABC
(NLSSABC). The proposed NLSSABC algorithm applied over
thirteen standard benchmark functions in order to prove its
efficiency.
This document presents a new approach called mixed S-D slicing that combines static and dynamic program slicing using object-oriented concepts in C++. Static slicing analyzes the entire program code but produces larger slices, while dynamic slicing produces smaller slices based on a specific execution but is more difficult to compute. The mixed S-D slicing aims to generate dynamic slices faster by leveraging object-oriented features like classes. An example C++ program is provided to demonstrate the S-D slicing approach using concepts like classes, inheritance, and polymorphism. The approach is intended to reduce complexity and aid in debugging object-oriented programs by combining static and dynamic slicing techniques.
Performance evaluation of different routing protocols in wsn using different ...Dr Sandeep Kumar Poonia
This document evaluates the performance of different routing protocols in wireless sensor networks using various network parameters. It simulates the Dynamic Source Routing (DSR) and Adhoc On-Demand Distance Vector (AODV) routing protocols in a 1000m x 1000m terrain area with 100 sensor nodes. The packet delivery fraction, average throughput, and normalized routing load are analyzed at different node speeds ranging from 20-100m/s. The results show that AODV performs better than DSR in terms of packet delivery fraction and normalized routing load, while DSR has better average throughput performance. In conclusion, AODV is more optimal for small terrain areas when considering packet delivery and routing overhead, while DSR provides higher data rates.
Articial bee Colony algorithm (ABC) is a population based
heuristic search technique used for optimization problems. ABC
is a very eective optimization technique for continuous opti-
mization problem. Crossover operators have a better exploration
property so crossover operators are added to the ABC. This pa-
per presents ABC with dierent types of real coded crossover op-
erator and its application to Travelling Salesman Problem (TSP).
Each crossover operator is applied to two randomly selected par-
ents from current swarm. Two o-springs generated from crossover
and worst parent is replaced by best ospring, other parent remains
same. ABC with real coded crossover operator applied to travelling
salesman problem. The experimental result shows that our proposed
algorithm performs better than the ABC without crossover in terms
of eciency and accuracy.
This document describes a simulator for database aggregation using metadata. The simulator sits between an end-user application and a database management system (DBMS) to intercept SQL queries and transform them to take advantage of available aggregates using metadata describing the data warehouse schema. The simulator provides performance gains by optimizing queries to use appropriate aggregate tables. It was found to improve performance over previous aggregate navigators by making fewer calls to system tables through the use of metadata mappings. Experimental results showed the simulator solved queries faster than alternative approaches by transforming queries to leverage aggregate tables.
Performance evaluation of diff routing protocols in wsn using difft network p...Dr Sandeep Kumar Poonia
In the recent past, wireless sensor networks have been introduced to use in many applications. To design the networks, the factors needed to be considered are the coverage area, mobility, power consumption, communication capabilities etc. The challenging goal of our project is to create a simulator to support the wireless sensor network simulation. The network simulator (NS-2) which supports both wire and wireless networks is implemented to be used with the wireless sensor network.
The Traveling Salesman Problem (TSP) involves finding the minimum cost tour that visits each customer exactly once and returns to the starting depot. Key heuristics to solve the TSP include nearest neighbor, insertion methods, and 2-opt exchanges. The Vehicle Routing Problem (VRP) extends the TSP by routing multiple vehicles of limited capacity from a central depot to serve customer demands. Common heuristics for the VRP include savings algorithms and sweep methods.
This document provides an overview of linear programming, including:
- It describes the linear programming model which involves maximizing a linear objective function subject to linear constraints.
- It provides examples of linear programming problems like product mix, blending, transportation, and network flow problems.
- It explains how to develop a linear programming model by defining decision variables, the objective function, and constraints.
- It discusses solutions methods like the graphical and simplex methods. The simplex method involves iteratively moving to adjacent extreme points to maximize the objective function.
This document discusses approximation algorithms and introduces several combinatorial optimization problems. It begins by explaining that approximation algorithms are needed to find near-optimal solutions for problems that cannot be solved in polynomial time, such as set cover and bin packing. It then provides examples of problems that are in P, NP, and NP-complete. Several techniques for designing approximation algorithms are outlined, including greedy algorithms, linear programming, and semidefinite programming. Specific NP-complete problems like vertex cover, set cover, and independent set are introduced and approximations algorithms with performance guarantees are provided for set cover and vertex cover.
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
An accurate understanding of land use and cover is imperative for the development planning of any area. Consequently, a wide range of professionals, including earth system scientists, land and water managers, and urban planners, are interested in obtaining data on land use and cover changes, conversion trends, and other related patterns. The spatial dimensions of land use and cover support policymakers and scientists in making well-informed decisions, as alterations in these patterns indicate shifts in economic and social conditions. Monitoring such changes with the help of advanced technologies like Remote Sensing and Geographic Information Systems is crucial for coordinated efforts across different administrative levels.
Changes in vegetation cover refer to variations in the distribution, composition, and overall structure of plant communities across different temporal and spatial scales. These changes can occur naturally.
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
It describes the bony anatomy, including the femoral head, acetabulum, and labrum. It also discusses the capsule and ligaments. The muscles that act on the hip joint and its range of motion are outlined, and factors affecting hip joint stability and weight transmission through the joint are summarized.
This slide deck is intended for master's students (MIBS & MIFB) at UUM. It is also useful for readers interested in the topic of contemporary Islamic banking.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] is a complete Bangla e-book (PDF) with computer, tablet, and smartphone versions, including a table of contents with a bookmark menu and a hyperlink menu.
A very important book for all of us: it is a key subject for the BCS, bank, and university admission exams, and for any competitive examination. You will also find any recent data or statistics about Bangladesh in this book.
So, as a citizen, you need to know this information.
It is useful for the BCS and bank written examinations, and it will also be of great use to secondary and higher secondary students.
4. Sorting Revisited
So far we've talked about two algorithms to sort an array of numbers.
What is the advantage of merge sort?
What is the advantage of insertion sort?
Next on the agenda: Heapsort
Combines the advantages of both previous algorithms:
Sorts in place – like insertion sort
O(n lg n) worst case – like merge sort
Sandeep Kumar Poonia
5. Heaps
A heap can be seen as a complete binary tree:
[Figure: binary tree with root 16; children 14 and 10; next level 8, 7, 9, 3; leaves 2, 4, 1]
What makes a binary tree complete?
Is the example above complete?
6. Heaps
A heap can be seen as a complete binary tree:
[Figure: the same tree, drawn with its unfilled lowest-level slots marked]
We call them "nearly complete" binary trees; we can think of the unfilled slots as null pointers.
7. Heaps
In practice, heaps are usually implemented as arrays:
A = [16 14 10 8 7 9 3 2 4 1]
[Figure: the tree from above, with each node mapped to an array slot]
8. Heaps
To represent a complete binary tree as an array:
The root node is A[1]
Node i is A[i]
The parent of node i is A[i/2] (note: integer divide)
The left child of node i is A[2i]
The right child of node i is A[2i + 1]
A = [16 14 10 8 7 9 3 2 4 1]
9. Referencing Heap Elements
So…
Parent(i) { return i/2; }
Left(i) { return 2*i; }
Right(i) { return 2*i + 1; }
An aside: how would you implement this most efficiently? (See the sketch below.)
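One plausible answer to the aside, sketched in C (the function names mirror the pseudocode; this is my illustration, not part of the slides): because the index arithmetic is all powers of two, each helper reduces to a single shift or or instruction.

/* Illustrative C sketch: 1-based heap index arithmetic via bit operations. */
static inline int parent(int i)      { return i >> 1; }        /* i / 2    */
static inline int left_child(int i)  { return i << 1; }        /* 2 * i    */
static inline int right_child(int i) { return (i << 1) | 1; }  /* 2*i + 1  */

In practice, modern compilers already perform this strength reduction on the plain i/2 and 2*i forms, so the two versions typically compile to the same instructions.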
10. The Heap Property
Heaps also satisfy the heap property:
A[Parent(i)] ≥ A[i] for all nodes i > 1
In other words, the value of a node is at most the value of its parent.
Where is the largest element in a heap stored?
Definitions:
The height of a node in the tree = the number of edges on the longest downward path to a leaf
The height of a tree = the height of its root
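As a quick self-check of the property (my addition, not from the slides), a small C function can test it directly on the array representation, with heap[1..n] holding the elements under the 1-based convention above:

#include <stdbool.h>

/* Returns true iff heap[1..n] satisfies A[Parent(i)] >= A[i] for all i > 1. */
bool is_max_heap(const int heap[], int n) {
    for (int i = 2; i <= n; i++)
        if (heap[i / 2] < heap[i])   /* the parent of node i is node i/2 */
            return false;
    return true;
}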
11. Heap Height
Q: What are the minimum and maximum numbers of elements in a heap of height h?
Ans: Since a heap is an almost-complete binary tree (complete at all levels except possibly the lowest), it has at most 2^(h+1) − 1 elements (if it is complete) and at least (2^h − 1) + 1 = 2^h elements (if the lowest level has just one element and the other levels are complete).
12. Heap Height
Q: What is the height of an n-element heap? Why?
Given an n-element heap of height h, we know that 2^h ≤ n ≤ 2^(h+1) − 1 < 2^(h+1).
Thus h ≤ lg n < h + 1.
Since h is an integer, h = ⌊lg n⌋ = Θ(lg n). For example, a 10-element heap has height ⌊lg 10⌋ = 3.
Ans: This is nice: basic heap operations take at most time proportional to the height of the heap, i.e., O(lg n).
13. Heap Operations: Heapify()
Heapify(): maintain the heap property
Given: a node i in the heap with children l and r
Given: two subtrees rooted at l and r, assumed to be heaps
Problem: the subtree rooted at i may violate the heap property
Action: let the value of the parent node "float down" so the subtree rooted at i satisfies the heap property
What do you suppose will be the basic operation between i, l, and r?
14. Procedure MaxHeapify
MaxHeapify(A, i)
1. l ← Left(i)
2. r ← Right(i)
3. if l ≤ heap-size[A] and A[l] > A[i]
4.   then largest ← l
5.   else largest ← i
6. if r ≤ heap-size[A] and A[r] > A[largest]
7.   then largest ← r
8. if largest ≠ i
9.   then exchange A[i] ↔ A[largest]
10.       MaxHeapify(A, largest)
Assumption: the subtrees rooted at Left(i) and Right(i) are max-heaps.
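A direct C rendering of this pseudocode might look as follows; it is a sketch under the 1-based indexing convention above, and the names (max_heapify, heap, heap_size) are mine, not the slides':

/* Sketch of MaxHeapify: assumes the subtrees rooted at 2*i and 2*i + 1
   are already max-heaps, and floats the value at i down to its place. */
void max_heapify(int heap[], int heap_size, int i) {
    int l = 2 * i;
    int r = 2 * i + 1;
    int largest = i;
    if (l <= heap_size && heap[l] > heap[i])
        largest = l;
    if (r <= heap_size && heap[r] > heap[largest])
        largest = r;
    if (largest != i) {
        int tmp = heap[i];                      /* exchange A[i] <-> A[largest] */
        heap[i] = heap[largest];
        heap[largest] = tmp;
        max_heapify(heap, heap_size, largest);  /* recurse on the affected child */
    }
}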
15. Running Time for MaxHeapify
(Same MaxHeapify pseudocode as above.)
Time to fix node i and its children = Θ(1)
PLUS
Time to fix the subtree rooted at one of i's children = T(size of subtree at largest)
16. Running Time for MaxHeapify(A, n)
T(n) = T(largest) + Θ(1)
largest ≤ 2n/3 (the worst case occurs when the last row of the tree is exactly half full)
T(n) ≤ T(2n/3) + Θ(1), so by case 2 of the master theorem T(n) = O(lg n)
Alternately, MaxHeapify takes O(h) time, where h is the height of the node on which it is applied
17. Building a Heap
Use MaxHeapify to convert an array A into a max-heap.
Call MaxHeapify on each element in a bottom-up manner.
BuildMaxHeap(A)
1. heap-size[A] ← length[A]
2. for i ← ⌊length[A]/2⌋ downto 1
3.    do MaxHeapify(A, i)
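In C, continuing the sketch above (and reusing the hypothetical max_heapify from it), the bottom-up construction is a short loop:

/* Sketch of BuildMaxHeap: nodes n/2 + 1 .. n are leaves, so start
   at the last internal node and heapify back up to the root. */
void build_max_heap(int heap[], int n) {
    for (int i = n / 2; i >= 1; i--)
        max_heapify(heap, n, i);
}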
20. Correctness of BuildMaxHeap
Loop invariant: at the start of each iteration of the for loop, each node i+1, i+2, …, n is the root of a max-heap.
Initialization:
Before the first iteration, i = ⌊n/2⌋.
Nodes ⌊n/2⌋+1, ⌊n/2⌋+2, …, n are leaves and hence roots of max-heaps.
Maintenance:
By the loop invariant, the subtrees rooted at the children of node i are max-heaps.
Hence, MaxHeapify(A, i) makes node i a max-heap root (while preserving the max-heap root property of higher-numbered nodes).
Decrementing i reestablishes the loop invariant for the next iteration.
Termination: at i = 0, the invariant says every node 1, 2, …, n is a max-heap root; in particular node 1 is, so A is a max-heap.
21. Running Time of BuildMaxHeap
Loose upper bound:
Cost of one MaxHeapify call × number of calls to MaxHeapify = O(lg n) · O(n) = O(n lg n)
Tighter bound:
The cost of a call to MaxHeapify at a node depends on the height h of that node: O(h).
Most nodes have small height.
The height h of a node ranges from 0 to ⌊lg n⌋.
The number of nodes of height h is at most ⌈n/2^(h+1)⌉.
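Summing these per-height costs gives the linear total; the slide stops short of this step, so here it is in LaTeX notation for completeness:

T(n) = \sum_{h=0}^{\lfloor \lg n \rfloor} \left\lceil \frac{n}{2^{h+1}} \right\rceil O(h)
     = O\!\left( n \sum_{h=0}^{\infty} \frac{h}{2^{h}} \right) = O(n),
\qquad \text{using } \sum_{h \ge 0} \frac{h}{2^{h}} = 2.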
22. Heapsort
Sort by maintaining the as-yet-unsorted elements as a max-heap.
Start by building a max-heap on all elements in A.
Move the maximum element to its correct final position: the maximum element is in the root, A[1], so exchange A[1] with A[n] and discard A[n] – it is now sorted.
Decrement heap-size[A].
Restore the max-heap property on A[1..n−1]: call MaxHeapify(A, 1).
Repeat until heap-size[A] is reduced to 2.
33. Algorithm Analysis
HeapSort(A)
1. Build-Max-Heap(A)
2. for i ← length[A] downto 2
3.    do exchange A[1] ↔ A[i]
4.       heap-size[A] ← heap-size[A] − 1
5.       MaxHeapify(A, 1)
In-place.
Not stable.
Build-Max-Heap takes O(n) time, and each of the n−1 calls to MaxHeapify takes O(lg n) time.
Therefore, T(n) = O(n lg n)
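Assembling the pieces, the sort itself is a few lines of C on top of the max_heapify and build_max_heap sketches above; the demo array and main are mine, added so the three sketches form one runnable program:

#include <stdio.h>

/* Sketch of HeapSort on top of the helpers above (1-based indexing). */
void heap_sort(int a[], int n) {
    build_max_heap(a, n);
    for (int i = n; i >= 2; i--) {
        int tmp = a[1]; a[1] = a[i]; a[i] = tmp;  /* exchange A[1] <-> A[i] */
        max_heapify(a, i - 1, 1);                 /* heap shrinks by one    */
    }
}

int main(void) {
    int a[] = {0, 4, 1, 3, 2, 16, 9, 10, 14, 8, 7};  /* slot 0 unused */
    heap_sort(a, 10);
    for (int i = 1; i <= 10; i++)
        printf("%d ", a[i]);   /* prints: 1 2 3 4 7 8 9 10 14 16 */
    printf("\n");
    return 0;
}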
34. Heap Procedures for Sorting
MaxHeapify:    O(lg n)
BuildMaxHeap:  O(n)
HeapSort:      O(n lg n)
35. Priority Queue
A popular and important application of heaps.
There are max- and min-priority queues.
Maintains a dynamic set S of elements.
Each set element has a key – an associated value.
The goal is to support insertion and extraction efficiently.
Applications:
The ready list of processes in operating systems, ordered by priority – the list is highly dynamic.
In event-driven simulators, to maintain the list of events to be simulated in order of their time of occurrence.
36. Basic Operations
Operations on a max-priority queue:
Insert(S, x) – inserts the element x into the set S: S ← S ∪ {x}.
Maximum(S) – returns the element of S with the largest key.
Extract-Max(S) – removes and returns the element of S with the largest key.
Increase-Key(S, x, k) – increases the value of element x's key to the new value k.
A min-priority queue supports Insert, Minimum, Extract-Min, and Decrease-Key.
A heap gives a good compromise between fast insertion with slow extraction and vice versa.
37. Heap Property (Max and Min)
Max-Heap:
For every node excluding the root, the value is at most that of its parent: A[Parent(i)] ≥ A[i].
The largest element is stored at the root.
In any subtree, no values are larger than the value stored at the subtree root.
Min-Heap:
For every node excluding the root, the value is at least that of its parent: A[Parent(i)] ≤ A[i].
The smallest element is stored at the root.
In any subtree, no values are smaller than the value stored at the subtree root.
38. Heap-Extract-Max(A)
Implements the Extract-Max operation.
Heap-Extract-Max(A, n)
1. if n < 1
2.   then error "heap underflow"
3. max ← A[1]
4. A[1] ← A[n]
5. n ← n − 1
6. MaxHeapify(A, 1)
7. return max
Running time: dominated by the running time of MaxHeapify = O(lg n)
39. Heap-Insert(A, key)
Heap-Insert(A, key)
1. heap-size[A] ← heap-size[A] + 1
2. i ← heap-size[A]
3. while i > 1 and A[Parent(i)] < key
4.    do A[i] ← A[Parent(i)]
5.       i ← Parent(i)
6. A[i] ← key
Running time is O(lg n): the path traced from the new leaf to the root has length O(lg n).
40. Heap-Increase-Key(A, i, key)
Heap-Increase-Key(A, i, key)
1. if key < A[i]
2.   then error "new key is smaller than the current key"
3. A[i] ← key
4. while i > 1 and A[Parent(i)] < A[i]
5.    do exchange A[i] ↔ A[Parent(i)]
6.       i ← Parent(i)
Heap-Insert can then be implemented in terms of Heap-Increase-Key:
Heap-Insert(A, key)
1. heap-size[A] ← heap-size[A] + 1
2. A[heap-size[A]] ← −∞
3. Heap-Increase-Key(A, heap-size[A], key)
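In C, these operations translate along the following lines; this is a hedged sketch with my own names and a fixed-capacity struct, using 1-based indexing and INT_MIN as a stand-in for the pseudocode's −∞:

#include <limits.h>
#include <stdio.h>

#define PQ_CAP 1024

typedef struct { int a[PQ_CAP + 1]; int size; } MaxPQ;  /* a[0] unused */

int pq_extract_max(MaxPQ *pq) {
    if (pq->size < 1) { fprintf(stderr, "heap underflow\n"); return INT_MIN; }
    int max = pq->a[1];
    pq->a[1] = pq->a[pq->size--];
    for (int i = 1;;) {                 /* iterative MaxHeapify(A, 1) */
        int l = 2 * i, r = 2 * i + 1, largest = i;
        if (l <= pq->size && pq->a[l] > pq->a[largest]) largest = l;
        if (r <= pq->size && pq->a[r] > pq->a[largest]) largest = r;
        if (largest == i) break;
        int tmp = pq->a[i]; pq->a[i] = pq->a[largest]; pq->a[largest] = tmp;
        i = largest;
    }
    return max;
}

void pq_increase_key(MaxPQ *pq, int i, int key) {
    if (key < pq->a[i]) return;         /* slide's "new key is smaller" error */
    pq->a[i] = key;
    while (i > 1 && pq->a[i / 2] < pq->a[i]) {   /* bubble up toward the root */
        int tmp = pq->a[i]; pq->a[i] = pq->a[i / 2]; pq->a[i / 2] = tmp;
        i /= 2;
    }
}

void pq_insert(MaxPQ *pq, int key) {
    pq->a[++pq->size] = INT_MIN;        /* -infinity sentinel, then raise it */
    pq_increase_key(pq, pq->size, key);
}

Both extraction and insertion traverse at most one root-to-leaf path, matching the O(lg n) bounds above.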
41. Quicksort
Sorts in place
Runs in O(n lg n) time in the average case
Runs in O(n²) time in the worst case
So why would people use it instead of merge sort?
42. Quicksort
Another divide-and-conquer algorithm:
The array A[p..r] is partitioned into two non-empty subarrays A[p..q] and A[q+1..r]
Invariant: all elements in A[p..q] are less than all elements in A[q+1..r]
The subarrays are recursively sorted by calls to quicksort
Unlike merge sort, there is no combining step: the two sorted subarrays already form a sorted array
44. Partition
Clearly, all the action takes place in the partition() function:
It rearranges the subarray in place
End result: two subarrays, with all values in the first subarray ≤ all values in the second
It returns the index of the "pivot" element separating the two subarrays
How do you suppose we implement this function?
45. Partition In Words
Partition(A, p, r):
Select an element to act as the "pivot" (which one?)
Grow two regions, A[p..i] and A[j..r], such that:
  all elements in A[p..i] <= pivot
  all elements in A[j..r] >= pivot
Increment i until A[i] >= pivot
Decrement j until A[j] <= pivot
Swap A[i] and A[j]
Repeat until i >= j
Return j
(See the C sketch below.)
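A runnable C sketch of this Hoare-style scheme (my translation of the steps above, taking A[p] as the pivot) together with the quicksort driver that uses it:

/* Hoare-style partition: afterwards A[p..j] <= pivot <= A[j+1..r]. */
int partition(int a[], int p, int r) {
    int pivot = a[p];
    int i = p - 1, j = r + 1;
    for (;;) {
        do { j--; } while (a[j] > pivot);   /* decrement j until A[j] <= pivot */
        do { i++; } while (a[i] < pivot);   /* increment i until A[i] >= pivot */
        if (i >= j)
            return j;                       /* regions have met: return j */
        int tmp = a[i]; a[i] = a[j]; a[j] = tmp;  /* swap A[i] and A[j] */
    }
}

void quicksort(int a[], int p, int r) {
    if (p < r) {
        int q = partition(a, p, r);
        quicksort(a, p, q);        /* note: q, not q - 1, with Hoare partition */
        quicksort(a, q + 1, r);
    }
}

Calling quicksort(a, 0, n - 1) sorts an n-element array in place.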
47. Review: Analyzing Quicksort
What will be the worst case for the algorithm? The partition is always unbalanced.
What will be the best case for the algorithm? The partition is balanced.
Which is more likely? The latter, by far, except...
Will any particular input elicit the worst case? Yes: already-sorted input.
48. Review: Analyzing Quicksort
In the worst case, one subarray has 0 elements and the other has n−1 elements:
T(1) = Θ(1)
T(n) = T(n−1) + Θ(n)
Unrolling gives T(n) = Θ(1) + Θ(2) + … + Θ(n), which works out to
T(n) = Θ(n²)
49. Review: Analyzing Quicksort
In the best case, each subarray has ≤ n/2 elements:
T(n) = 2T(n/2) + Θ(n)
What does this work out to?
T(n) = Θ(n lg n)
51. Review: Analyzing Quicksort (Average Case)
Intuitively, a real-life run of quicksort will produce a mix of "bad" and "good" splits, randomly distributed over the recursion tree.
Pretend, for intuition, that they alternate between best-case (n/2 : n/2) and worst-case (n−1 : 1).
What happens if we bad-split the root node, then good-split the resulting size (n−1) node?
We end up with three subarrays, of sizes 1, (n−1)/2, and (n−1)/2.
The combined cost of the two splits = n + (n−1) = 2n − 1 = O(n).
No worse than if we had good-split the root node!
52. Review: Analyzing Quicksort (Average Case)
Intuitively, the O(n) cost of a bad split (or 2 or 3 bad splits) can be absorbed into the O(n) cost of each good split.
Thus the running time of alternating bad and good splits is still O(n lg n), just with slightly higher constants.
53. Analyzing Quicksort: Average Case
For simplicity, assume:
All inputs are distinct (no repeats).
A slightly different partition() procedure: partition around a random element, which is not included in either subarray.
Then all splits (0:n−1, 1:n−2, 2:n−3, …, n−1:0) are equally likely.
What is the probability of a particular split happening?
Answer: 1/n
54. Analyzing Quicksort: Average Case
So partition generates the splits (0:n−1, 1:n−2, 2:n−3, …, n−2:1, n−1:0), each with probability 1/n.
If T(n) is the expected running time,
T(n) = (1/n) Σ_{k=0}^{n−1} [T(k) + T(n−1−k)] + Θ(n)
What is each term under the summation for?
What is the Θ(n) term for?
55. Analyzing Quicksort: Average Case
So…
T(n) = (1/n) Σ_{k=0}^{n−1} [T(k) + T(n−1−k)] + Θ(n)
     = (2/n) Σ_{k=0}^{n−1} T(k) + Θ(n)
(each T(k) appears twice in the first sum: once as T(k) and once as T(n−1−k))
56. Analyzing Quicksort: Average Case
We can solve this recurrence using the dreaded substitution method:
Guess the answer: T(n) = O(n lg n).
Assume that the inductive hypothesis holds: T(n) ≤ an lg n + b for some constants a and b.
Substitute it in for some value < n: the value k in the recurrence.
Prove that it follows for n: grind through it…
64. Analyzing Quicksort: Average Case
T(n) = (2/n) Σ_{k=0}^{n−1} T(k) + Θ(n)                    (the recurrence to be solved)
     ≤ (2/n) Σ_{k=0}^{n−1} (ak lg k + b) + Θ(n)           (plug in the inductive hypothesis)
     = (2/n) [b + Σ_{k=1}^{n−1} (ak lg k + b)] + Θ(n)     (expand out the k = 0 case)
     = (2/n) Σ_{k=1}^{n−1} (ak lg k + b) + 2b/n + Θ(n)
     = (2/n) Σ_{k=1}^{n−1} (ak lg k + b) + Θ(n)           (2b/n is just a constant, so fold it into Θ(n))
Note: this leaves the same recurrence as the book.
65. Analyzing Quicksort: Average Case
T(n) ≤ (2/n) Σ_{k=1}^{n−1} (ak lg k + b) + Θ(n)                    (the recurrence to be solved)
     = (2/n) Σ_{k=1}^{n−1} ak lg k + (2/n) Σ_{k=1}^{n−1} b + Θ(n)  (distribute the summation)
     = (2a/n) Σ_{k=1}^{n−1} k lg k + (2b/n)(n−1) + Θ(n)            (evaluate the second sum: b + b + … + b = b(n−1))
     ≤ (2a/n) Σ_{k=1}^{n−1} k lg k + 2b + Θ(n)                     (since n−1 < n, 2b(n−1)/n < 2b)
This summation gets its own set of slides later.
66. Analyzing Quicksort: Average Case
T(n) ≤ (2a/n) Σ_{k=1}^{n−1} k lg k + 2b + Θ(n)            (the recurrence to be solved)
     ≤ (2a/n) ((1/2)n² lg n − (1/8)n²) + 2b + Θ(n)        (we'll prove this bound on the summation later)
     = an lg n − (a/4)n + 2b + Θ(n)                        (distribute the (2a/n) term)
     = an lg n + b + (Θ(n) + b − (a/4)n)                   (remember, our goal is T(n) ≤ an lg n + b)
     ≤ an lg n + b                                         (pick a large enough that (a/4)n dominates Θ(n) + b)
67. Analyzing Quicksort: Average Case
So T(n) ≤ an lg n + b for certain constants a and b.
Thus the induction holds.
Thus T(n) = O(n lg n).
Thus quicksort runs in O(n lg n) time on average.