Analysis of searching and sorting. Insertion sort, Quick sort, Merge sort and Heap sort. Binomial Heaps and Fibonacci Heaps, Lower bounds for sorting by comparison of keys. Comparison of sorting algorithms. Amortized Time Analysis. Red-Black Trees – Insertion & Deletion.
This file covers dynamic programming, the greedy approach, graph algorithms, spanning-tree concepts, backtracking, and the branch and bound approach.
Design & Analysis of Algorithms Lecture Notes – FellowBuddy.com
FellowBuddy.com is an innovative platform that brings students together to share notes, exam papers, study guides, project reports and presentations for upcoming exams.
We connect Students who have an understanding of course material with Students who need help.
Benefits:-
# Students can catch up on notes they missed because of an absence.
# Underachievers can find peer developed notes that break down lecture and study material in a way that they can understand
# Students can earn better grades, save time and study effectively
Our Vision & Mission – Simplifying Students' Lives
Our Belief – “The great breakthrough in your life comes when you realize that you can learn anything you need to learn to accomplish any goal that you have set for yourself. This means there are no limits on what you can be, have or do.”
Like Us - https://www.facebook.com/FellowBuddycom
These slides cover asymptotic notations; recurrence-solving techniques such as the substitution method, iteration method, master method, and recursion-tree method; and sorting algorithms including merge sort, quick sort, heap sort, counting sort, radix sort, and bucket sort.
This document provides an overview of a lecture on designing and analyzing computer algorithms. It discusses key concepts like what an algorithm and program are, common algorithm design techniques like divide-and-conquer and greedy methods, and how to analyze algorithms' time and space complexity. The goals of analyzing algorithms are to understand their behavior, improve efficiency, and determine whether problems can be solved within a reasonable time frame.
Hi:
This is the first slide of my class on analysis of algorithms, based on Cormen's book.
In these slides, we define the following concepts:
1.- What is an algorithm?
2.- What problems are solved by algorithms?
3.- What subjects will be studied in this class?
4.- Cautionary tale about complexities
Lecture 3: Insertion sort and complexity analysis (jayavignesh86)
This document discusses algorithms and insertion sort. It begins by defining time complexity as the amount of computer time required by an algorithm to complete. Time complexity is measured by the number of basic operations like comparisons, not in physical time units. The document then discusses how to calculate time complexity by counting the number of times loops and statements are executed. It provides examples of calculating time complexities of O(n) for a simple for loop and O(n^2) for a nested for loop. Finally, it introduces insertion sort and divide-and-conquer algorithms.
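The operation-counting approach described above can be made concrete. The following is a minimal Python sketch (mine, not taken from the summarized slides) that sorts in place while tallying the key comparisons, so a sorted input yields n-1 comparisons and a reversed input yields n(n-1)/2:

```python
def insertion_sort(a):
    """Sort list a in place; return the number of key comparisons made."""
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one basic operation per comparison
            if a[j] > key:
                a[j + 1] = a[j]       # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

data = [5, 2, 4, 6, 1, 3]
insertion_sort(data)
print(data)  # [1, 2, 3, 4, 5, 6]
```

Counting comparisons rather than wall-clock time matches the document's point that complexity is measured in basic operations, not physical time units.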
This document provides an overview of algorithm analysis. It discusses how to analyze the time efficiency of algorithms by counting the number of operations and expressing efficiency using growth functions. Different common growth rates like constant, linear, quadratic, and exponential are introduced. Examples are provided to demonstrate how to determine the growth rate of different algorithms, including recursive algorithms, by deriving their time complexity functions. The key aspects covered are estimating algorithm runtime, comparing growth rates of algorithms, and using Big O notation to classify algorithms by their asymptotic behavior.
The document discusses different string matching algorithms:
1. The naive string matching algorithm compares characters in the text and pattern sequentially to find matches.
2. The Rabin-Karp algorithm uses hashing to quickly determine if the pattern is present in the text before doing full comparisons.
3. Finite automata models the pattern as states in an automaton to efficiently search the text for matches.
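The first two techniques above can be sketched briefly in Python (an illustration of mine; the function names and the small modulus 101 are arbitrary choices, not from the document):

```python
def naive_match(text, pattern):
    """Naive matcher: compare the pattern at every shift, O(n*m)."""
    n, m = len(text), len(pattern)
    return [s for s in range(n - m + 1) if text[s:s + m] == pattern]

def rabin_karp(text, pattern, base=256, mod=101):
    """Rabin-Karp: a rolling hash filters shifts before full comparison."""
    n, m = len(text), len(pattern)
    if m > n:
        return []
    h = pow(base, m - 1, mod)           # weight of the leading character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    matches = []
    for s in range(n - m + 1):
        # hash equality is only a filter; verify to rule out collisions
        if p_hash == t_hash and text[s:s + m] == pattern:
            matches.append(s)
        if s < n - m:                   # roll the window one character right
            t_hash = ((t_hash - ord(text[s]) * h) * base
                      + ord(text[s + m])) % mod
    return matches

print(naive_match("abracadabra", "abra"))  # [0, 7]
```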
The document discusses analyzing the running time of algorithms using Big-O notation. It begins by introducing Big-O notation and how it is used to generalize the running time of algorithms as input size grows. It then provides examples of calculating the Big-O running time of simple programs and algorithms with loops but no subprogram calls or recursion. Key concepts covered include analyzing worst-case and average-case running times, and rules for analyzing the running time of programs with basic operations and loops.
This document discusses algorithm analysis and complexity. It introduces algorithm analysis as a way to predict and compare algorithm performance. Different algorithms for computing factorials and finding the maximum subsequence sum are presented, along with their time complexities. The importance of efficient algorithms for problems involving large datasets is discussed.
The document discusses the analysis of algorithms. It begins by defining an algorithm and describing different types. It then covers analyzing algorithms in terms of correctness, time efficiency, space efficiency, and optimality through theoretical and empirical analysis. The document discusses analyzing time efficiency by determining the number of repetitions of basic operations as a function of input size. It provides examples of input size, basic operations, and formulas for counting operations. It also covers analyzing best, worst, and average cases and establishes asymptotic efficiency classes. The document then analyzes several examples of non-recursive and recursive algorithms.
Algorithm and Analysis, Lectures 03 & 04: Time complexity (Tariq Khan)
This document discusses algorithm efficiency and complexity analysis. It defines key terms like algorithms, asymptotic complexity, Big O notation, and different complexity classes. It provides examples of analyzing time complexity for different algorithms like loops, nested loops, and recursive functions. The document explains that Big O notation allows analyzing algorithms independent of machine or input by focusing on the highest order term as the problem size increases. Overall, the document introduces methods for measuring an algorithm's efficiency and analyzing its time and space complexity asymptotically.
Algorithms Lecture 2: Analysis of Algorithms I (Mohamed Loey)
This document discusses analysis of algorithms and time complexity. It explains that analysis of algorithms determines the resources needed to execute algorithms. The time complexity of an algorithm quantifies how long it takes. There are three cases to analyze - worst case, average case, and best case. Common notations for time complexity include O(1), O(n), O(n^2), O(log n), and O(n!). The document provides examples of algorithms and determines their time complexity in different cases. It also discusses how to combine complexities of nested loops and loops in algorithms.
The document discusses the framework for analyzing the efficiency of algorithms by measuring how the running time and space requirements grow as the input size increases, focusing on determining the order of growth of the number of basic operations using asymptotic notation such as O(), Ω(), and Θ() to classify algorithms based on their worst-case, best-case, and average-case time complexities.
Lecture 2: Role of algorithms in computing (jayavignesh86)
This document discusses algorithms and their role in computing. It defines an algorithm as a set of steps to solve a problem on a machine in a finite amount of time. Algorithms must be unambiguous, have defined inputs and outputs, and terminate. The document discusses designing algorithms, proving their correctness, and analyzing their performance and complexity. It provides examples of algorithm problems like sorting, searching, and graphs. The goal of analyzing algorithms is to evaluate and compare their performance as the problem size increases.
This document discusses asymptotic notations and their use in analyzing the time complexity of algorithms. It introduces the Big-O, Big-Omega, and Big-Theta notations for describing the asymptotic upper bound, lower bound, and tight bound of an algorithm's running time. The document explains that asymptotic notations allow algorithms to be compared by ignoring lower-order terms and constants, and focusing on the highest-order term that dominates as the input size increases. Examples are provided to illustrate the different orders of growth and the notations used to describe them.
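The claim that lower-order terms and constants wash out can be checked numerically. This small sketch (my illustration, using a hypothetical cost function f(n) = 3n^2 + 10n + 5) shows the ratio f(n)/n^2 approaching the leading constant 3, which is why f is Θ(n^2):

```python
def f(n):
    """Hypothetical operation count with lower-order terms."""
    return 3 * n * n + 10 * n + 5

# As n grows, the quadratic term dominates and the ratio tends to 3.
for n in (10, 100, 1000, 10000):
    print(n, f(n) / n**2)
```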
The document provides an overview of algorithms, including definitions, types, characteristics, and analysis. It begins with step-by-step algorithms to add two numbers and describes the difference between algorithms and pseudocode. It then covers algorithm design approaches, characteristics, classification based on implementation and logic, and analysis methods like a priori and posteriori. The document emphasizes that algorithm analysis estimates resource needs like time and space complexity based on input size.
The document discusses algorithms and their analysis. It covers:
1) The definition of an algorithm and its key characteristics like being unambiguous, finite, and efficient.
2) The fundamental steps of algorithmic problem solving like understanding the problem, designing a solution, and analyzing efficiency.
3) Methods for specifying algorithms using pseudocode, flowcharts, or natural language.
4) Analyzing an algorithm's time and space efficiency using asymptotic analysis and orders of growth like best-case, worst-case, and average-case scenarios.
This document provides an introduction to algorithms and their design and analysis. It discusses what algorithms are, their key characteristics, and the steps to develop an algorithm to solve a problem. These steps include defining the problem, developing a model, specifying and designing the algorithm, checking correctness, analyzing efficiency, implementing, testing, and documenting. Common algorithm design techniques like top-down design and recursion are explained. Factors that impact algorithm efficiency like use of loops, initial conditions, invariants, and termination conditions are covered. Finally, common control structures for algorithms like if/else, loops, and branching are defined.
Fundamentals of the Analysis of Algorithm Efficiency (Saranya Natarajan)
This document discusses analyzing the efficiency of algorithms. It introduces the framework for analyzing algorithms in terms of time and space complexity. Time complexity indicates how fast an algorithm runs, while space complexity measures the memory required. The document outlines steps for analyzing algorithms, including measuring input size, determining the basic operations, calculating frequency counts of operations, and expressing efficiency in Big O notation order of growth. Worst-case, best-case, and average-case time complexities are also discussed.
PPT on Analysis Of Algorithms.
The PPT includes algorithms, notations, analysis of algorithms, theta notation, big-oh notation, omega notation, and notation graphs.
The document summarizes a lecture on algorithms that covered insertion sort, analyzing its time complexity, and an introduction to the divide and conquer approach and merge sort. It included pseudocode for insertion sort algorithms and discussed how merge sort follows the divide and conquer paradigm by dividing the problem into subproblems, solving them recursively, and combining the solutions. Pseudocode was also provided for the merge sort algorithm.
This document discusses the divide and conquer algorithm design strategy and provides an analysis of the merge sort algorithm as an example. It begins by explaining the divide and conquer strategy of dividing a problem into smaller subproblems, solving those subproblems recursively, and combining the solutions. It then provides pseudocode and explanations for the merge sort algorithm, which divides an array in half, recursively sorts the halves, and then merges the sorted halves back together. It analyzes the time complexity of merge sort as Θ(n log n), proving it is more efficient than insertion sort.
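The divide, conquer, and combine steps described above can be sketched as follows (a minimal Python version of mine, returning a new list rather than sorting in place):

```python
def merge(left, right):
    """Combine two sorted lists into one sorted list in linear time."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])      # at most one of these is non-empty
    out.extend(right[j:])
    return out

def merge_sort(a):
    """Divide the list in half, sort each half recursively, then merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

The recurrence T(n) = 2T(n/2) + Θ(n) for this structure resolves to the Θ(n log n) bound the document cites.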
Quick sort Algorithm Discussion and Analysis (SNJ Chaudhary)
Quicksort is a divide-and-conquer algorithm that works by partitioning an array around a pivot element and recursively sorting the subarrays. In the average case, it has an efficiency of Θ(n log n) time as the partitioning typically divides the array into balanced subproblems. However, in the worst case of an already sorted array, it can be Θ(n^2) time due to highly unbalanced partitioning. Randomizing the choice of pivot helps avoid worst-case scenarios and achieve average-case efficiency in practice, making quicksort very efficient and commonly used.
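A randomized quicksort along the lines described above can be sketched as follows (my illustration; the Lomuto partition scheme is one common choice, not necessarily the one the document uses):

```python
import random

def quicksort(a):
    """In-place quicksort; a random pivot avoids the sorted-input worst case."""
    def partition(lo, hi):
        r = random.randint(lo, hi)        # randomized pivot choice
        a[r], a[hi] = a[hi], a[r]
        pivot = a[hi]
        i = lo - 1
        for j in range(lo, hi):           # Lomuto partition scheme
            if a[j] <= pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[hi] = a[hi], a[i + 1]
        return i + 1

    def sort(lo, hi):
        if lo >= hi:
            return
        p = partition(lo, hi)
        sort(lo, p - 1)                   # recursively sort both sides
        sort(p + 1, hi)

    sort(0, len(a) - 1)

data = [9, 4, 7, 1, 8, 2]
quicksort(data)
print(data)  # [1, 2, 4, 7, 8, 9]
```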
The document discusses several sorting algorithms including selection sort, insertion sort, bubble sort, merge sort, and quick sort. It provides details on how each algorithm works including pseudocode implementations and analyses of their time complexities. Selection sort, insertion sort and bubble sort have a worst-case time complexity of O(n^2) while merge sort divides the list into halves and merges in O(n log n) time, making it more efficient for large lists.
The document discusses algorithm analysis and asymptotic notation. It begins by explaining how to analyze algorithms to predict resource requirements like time and space. It defines asymptotic notation like Big-O, which describes an upper bound on the growth rate of an algorithm's running time. The document then provides examples of analyzing simple algorithms and classifying functions based on their asymptotic growth rates. It also introduces common time functions like constant, logarithmic, linear, quadratic, and exponential time and compares their growth.
The document discusses sorting algorithms. It begins by defining the sorting problem as taking an unsorted sequence of numbers and outputting a permutation of the numbers in ascending order. It then discusses different types of sorts like internal versus external sorts and stable versus unstable sorts. Specific algorithms covered include insertion sort, bubble sort, and selection sort. Analysis is provided on the best, average, and worst case time complexity of insertion sort.
The document discusses various sorting algorithms including exchange sorts like bubble sort and quicksort, selection sorts like straight selection sort, and tree sorts like heap sort. For each algorithm, it provides an overview of the approach, pseudocode, analysis of time complexity, and examples. Key algorithms covered are bubble sort (O(n^2)), quicksort (average O(n log n)), selection sort (O(n^2)), and heap sort (O(n log n)).
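The heap sort mentioned above can be sketched in two phases, build a max-heap and repeatedly extract the maximum (a minimal in-place Python version of mine, not the document's pseudocode):

```python
def heap_sort(a):
    """In-place heapsort: O(n) heap build, then n-1 O(log n) extractions."""
    n = len(a)

    def sift_down(i, end):
        """Restore the max-heap property for the subtree rooted at i."""
        while True:
            largest, l, r = i, 2 * i + 1, 2 * i + 2
            if l < end and a[l] > a[largest]:
                largest = l
            if r < end and a[r] > a[largest]:
                largest = r
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    for i in range(n // 2 - 1, -1, -1):   # build max-heap bottom-up
        sift_down(i, n)
    for end in range(n - 1, 0, -1):       # move max to the back, shrink heap
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)

data = [12, 11, 13, 5, 6, 7]
heap_sort(data)
print(data)  # [5, 6, 7, 11, 12, 13]
```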
Quicksort is an efficient sorting algorithm that uses a divide-and-conquer approach. It works by partitioning an array around a pivot value, and then recursively sorting the subarrays. In the best case when the array is partitioned evenly, quicksort runs in O(n log n) time. In the worst case of a sorted or reverse sorted array, it runs in O(n^2) time. Using a median-of-three approach for pivot selection helps avoid worst case behavior in typical cases.
The document introduces algorithms for sorting and searching tasks. It discusses sequential search, binary search, selection sort, bubble sort, merge sort, and quick sort algorithms. For each algorithm, it provides pseudocode to describe the steps, an example, and analysis of time complexity in the best, worst and average cases. The time complexities identified are Θ(n) for sequential search average case, Θ(log n) for binary search, Θ(n^2) for selection, bubble and quick sort worst cases, and Θ(n log n) for merge and quick sort average cases.
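The Θ(log n) binary search mentioned above halves the search interval on each comparison; a minimal iterative Python sketch (mine, not the document's pseudocode):

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:       # discard the left half
            lo = mid + 1
        else:                     # discard the right half
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```

Each iteration halves the remaining range, giving at most about log2(n) + 1 probes versus the Θ(n) average of sequential search.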
The document discusses sorting algorithms. It defines sorting as arranging a list of records in a certain order based on their keys. Some key points made:
- Sorting is important as it enables efficient searching and other tasks. Common sorting algorithms include selection sort, insertion sort, mergesort, quicksort, and heapsort.
- The complexity of sorting in general is Θ(n log n) but some special cases allow linear time sorting. Internal sorting happens in memory while external sorting handles data too large for memory.
- Applications of sorting include searching, finding closest pairs of numbers, checking for duplicates, and calculating frequency distributions. Sorting also enables efficient algorithms for computing medians, convex hulls, and
1. Several sorting algorithms are compared including quicksort, heapsort, mergesort, insertion sort, selection sort, and bubble sort.
2. Quicksort, mergesort, and heapsort have best and average case time complexities of O(n log n), while insertion sort, selection sort, and bubble sort are O(n^2) in the average and worst cases; quicksort also degrades to O(n^2) in its worst case.
3. For space complexity, most of these algorithms sort in place using O(1) auxiliary space, while mergesort requires O(n) additional space for merging.
The document discusses algorithms and their analysis. It begins by defining an algorithm and key aspects like correctness, input, and output. It then discusses two aspects of algorithm performance - time and space. Examples are provided to illustrate how to analyze the time complexity of different structures like if/else statements, simple loops, and nested loops. Big O notation is introduced to describe an algorithm's growth rate. Common time complexities like constant, linear, quadratic, and cubic functions are defined. Specific sorting algorithms like insertion sort, selection sort, bubble sort, merge sort, and quicksort are then covered in detail with examples of how they work and their time complexities.
This document discusses algorithm analysis and asymptotic notation. It defines common asymptotic notations like O(N), Ω(N), and Θ(N) and provides examples of analyzing simple algorithms and determining their time complexities. The document also outlines general rules for analyzing algorithms with loops, nested loops, consecutive statements, and recursion to determine their asymptotic running times.
The document provides an overview of several sorting algorithms, including insertion sort, bubble sort, selection sort, and radix sort. It describes the basic approach for each algorithm through examples and pseudocode. Analysis of the time complexity is also provided, with insertion sort, bubble sort, and selection sort having worst-case performance of O(n^2) and radix sort having performance of O(nk) where k is the number of passes.
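The O(nk) behavior of radix sort comes from making one stable pass per digit; a minimal least-significant-digit sketch for non-negative integers (my illustration, using bucket lists rather than the counting-sort passes a textbook version might use):

```python
def radix_sort(a, base=10):
    """LSD radix sort: one stable bucket pass per digit, O(n*k) overall."""
    if not a:
        return a
    exp = 1
    while max(a) // exp > 0:
        buckets = [[] for _ in range(base)]   # one stable bucket per digit
        for x in a:
            buckets[(x // exp) % base].append(x)
        a = [x for b in buckets for x in b]   # concatenate in digit order
        exp *= base
    return a

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

Stability of each pass is what lets earlier (lower-digit) orderings survive later passes.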
This document discusses algorithm design and analysis. It introduces sorting as an example problem and compares the insertion sort and merge sort algorithms. Insertion sort runs in O(n2) time in the worst case, while merge sort runs in O(nlogn) time. It provides pseudocode for insertion sort and merge sort and analyzes their time complexities. It also covers algorithm analysis techniques like recursion trees and asymptotic notation.
The document summarizes two sorting algorithms: Mergesort and Quicksort. Mergesort uses a divide and conquer approach, recursively splitting the list into halves and then merging the sorted halves. Quicksort uses a partitioning approach, choosing a pivot element and partitioning the list into elements less than and greater than the pivot. The average time complexity of Quicksort is O(n log n) while the worst case is O(n^2).
Quicksort is a sorting algorithm that works by partitioning an array around a pivot value, and then recursively sorting the sub-partitions. It chooses a pivot element and partitions the array based on whether elements are less than or greater than the pivot. Elements are swapped so that those less than the pivot are moved left and those greater are moved right. The process recursively partitions the sub-arrays until the entire array is sorted.
Dsa – data structure and algorithms sortingsajinis3
This document summarizes several sorting algorithms: bubble sort, insertion sort, selection sort, quicksort, merge sort, and radix sort. For each algorithm, it provides a high-level description of the approach, pseudocode for the algorithm, and time complexities. The key points are that sorting algorithms arrange data in a certain order, different techniques exist like comparing adjacent elements or dividing and merging, and the time complexities typically range from O(n log n) for efficient algorithms to O(n^2) for less efficient ones.
This document provides an introduction to algorithm design and analysis. It discusses sorting as an example problem, comparing the insertion sort and merge sort algorithms. Insertion sort runs in O(n^2) time while merge sort runs in O(nlogn) time, making merge sort faster for large inputs. The document explains the recursive definitions and analyses of these algorithms' runtimes. It also introduces asymptotic notation and techniques for algorithm analysis such as recurrence relations and decision trees. Finally, it briefly discusses NP-complete problems.
Analysis and design of algorithms part2
1. Analysis and Design of Algorithms
Deepak John
Department Of Computer Applications , SJCET-Pala
2. Analysis of searching and sorting. Insertion sort, Quick sort, Merge sort and Heap sort. Binomial Heaps and Fibonacci Heaps, Lower bounds for sorting by comparison of keys. Comparison of sorting algorithms. Amortized Time Analysis. Red-Black Trees – Insertion and Deletion.
3. Approach
Step I:
Choose the criteria (for natural numbers, the criteria can be ascending or descending order).
Step II:
Decide how to put the data in order using the selected criterion.
4. Analysis
• The final ordering of data can be obtained in a variety of ways.
• Some are meaningful and efficient.
• Meaningful and efficient ways depend on many aspects of an application: type of data, randomness of data, run-time constraints, size of the data, nature of the criteria, etc.
• To make comparisons, certain properties of sorting algorithms should be defined.
• Properties used to compare algorithms without depending on the type and speed of the machine are:
– number of comparisons
– number of data movements
– use of auxiliary storage
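The first two machine-independent measures above can be counted directly by instrumenting a sort. A minimal sketch (not from the slides; function and variable names are my own), using insertion sort as the example:

```python
def insertion_sort_counted(a):
    """Sort a copy of `a`; return (sorted list, key comparisons, data movements)."""
    a = list(a)
    comparisons = 0
    movements = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0:
            comparisons += 1          # one key comparison
            if a[i] > key:
                a[i + 1] = a[i]       # one data movement (shift right)
                movements += 1
                i -= 1
            else:
                break
        a[i + 1] = key                # placing the key is also a movement
        movements += 1
    return a, comparisons, movements
```

On an already-sorted input this reports exactly n − 1 comparisons, matching the best-case analysis later in these notes.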
5. Sorting
Sorting is the process of arranging a group of items into a defined order based on particular criteria.
There are many different types of sorting algorithms, but the primary ones are:
1. Insertion sort
2. Quick sort
3. Merge sort
4. Heap sort
6. Insertion sort
• An efficient algorithm for sorting a small number of elements. The proper position to insert the current element is found by comparing the current element with the elements in the sorted sub-array.
7.
8.
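A Python sketch of the insertion sort described above, in the usual in-place formulation (0-based indexing is my choice; the slides' analysis refers to the equivalent 1-based pseudocode):

```python
def insertion_sort(A):
    """In-place insertion sort: grow a sorted prefix, inserting A[j] into it."""
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        # Shift larger elements of the sorted prefix one slot to the right.
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key    # the proper position found by the comparisons above
    return A
```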
9. Analyzing Algorithm
The outer loop (lines 1–8) runs exactly n − 1 times (with n = length(A)).
10. Best case
• The best case for insertion sort is when the input array is already sorted, in which case the while loop never executes (but the condition must be checked once).
• tj = 1, and lines 6 and 7 will be executed 0 times.
• T(n) = c1·n + c2(n − 1) + c4(n − 1) + c5(n − 1) + c8(n − 1)
      = (c1 + c2 + c4 + c5 + c8)n − (c2 + c4 + c5 + c8)
      = an + b
      = Θ(n)
Deepak John,Department Of IT,CE Poonjar
11. The worst case for insertion sort is when the input array is in reverse sorted order, in which case the while loop executes the maximum number of times.
• The inner loop is executed exactly j − 1 times for every iteration of the outer loop.
• T(n) = an^2 + bn − c (consider only the leading term of the formula, since lower-order terms are insignificant for large n; also ignore the leading term's constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs)
= Θ(n^2)
12. Average case
• When random data is sorted, insertion sort is usually closer to the worst case.
• On average, tj = j/2. T(n) will still be in the order of n^2, same as the worst case.
Order of Growth
The order of a running-time function is the fastest-growing term, discarding constant factors.
Best case: an + b → Θ(n)
Worst case: an^2 + bn − c → Θ(n^2)
Average case: Θ(n^2)
13. • Advantage
The advantage of Insertion Sort is that it is relatively simple and easy to implement.
• Disadvantage
The disadvantage of Insertion Sort is that it is not efficient with a large list or input size.
14. Quick-Sort
• Quick-sort is a randomized sorting algorithm based on the divide-and-conquer paradigm:
– Divide: pick a random element x (called the pivot) and partition S into
• L: elements less than x
• E: elements equal to x
• G: elements greater than x
– Recur: sort L and G
– Conquer: join L, E and G
15. Choice of Pivot
Three ways to choose the pivot:
• Median-of-three – from the leftmost, middle, and rightmost elements of the list to be sorted, select the one with the median key as the pivot.
• Pivot is the rightmost element in the list to be sorted.
– When sorting A[6:20], use A[20] as the pivot.
• Randomly select one of the elements to be sorted as the pivot.
– When sorting A[6:20], generate a random number r in the range [6, 20] and use A[r] as the pivot.
16. Algorithm
Given an array of n elements (e.g., integers):
• If the array only contains one element, return.
• Else:
– Pick one element to use as the pivot.
– Partition the elements into two sub-arrays:
• elements less than or equal to the pivot
• elements greater than the pivot
– Quick sort the two sub-arrays.
– Return the results.
18. Partitioning
• The key to the algorithm is the PARTITION procedure, which rearranges the sub-array in place.
• Given a pivot, partition the elements of the array such that the resulting array consists of:
1. one sub-array that contains elements >= the pivot
2. another sub-array that contains elements < the pivot
• The sub-arrays are stored in the original data array.
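The slides do not fix a particular partition scheme; a common concrete choice is the Lomuto scheme with the rightmost element as pivot (the second pivot rule on slide 15). A minimal sketch, with names of my own choosing:

```python
def partition(A, lo, hi):
    """Lomuto partition around pivot A[hi]: elements <= pivot end up left
    of the returned index, larger elements end up to its right."""
    pivot = A[hi]
    i = lo - 1
    for j in range(lo, hi):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[hi] = A[hi], A[i + 1]   # place the pivot in its final slot
    return i + 1

def quicksort(A, lo=0, hi=None):
    """In-place quicksort over A[lo..hi] (inclusive)."""
    if hi is None:
        hi = len(A) - 1
    if lo < hi:
        p = partition(A, lo, hi)
        quicksort(A, lo, p - 1)   # sub-array of elements <= pivot
        quicksort(A, p + 1, hi)   # sub-array of elements > pivot
    return A
```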
19. Analysis
The running time of quick sort depends on whether the partitioning is balanced or not. Assume that keys are random and uniformly distributed.
Best case
Recursion:
1. Partition splits the array into two sub-arrays of size n/2.
2. Quicksort each sub-array.
The depth of the recursion is log2 n.
At each level of the recursion, the work done in all the partitions at that level is O(n).
O(log2 n) * O(n) = O(n log2 n)
Best case running time: O(n log2 n)
20.
21. Worst case
• Data is already sorted.
– Recursion:
1. Partition splits the array into two sub-arrays:
• one sub-array of size 0
• the other sub-array of size n − 1
2. Quick sort each sub-array.
Recurring on the length n − 1 part requires recurring to depth n − 1.
• The recursion is O(n) levels deep (for an array of size n).
• The partitioning work done at each level is O(n).
• O(n) * O(n) = O(n^2)
Worst case running time: O(n^2)
22. Average case
• If the pivot element is randomly chosen, we expect the split of the input array to be reasonably well balanced on average.
• Assuming random input, the average-case running time is much closer to Θ(n lg n) than Θ(n^2).
• T(n) = O(n lg n)
Improved Pivot Selection
Pick the median value of three elements from the data array: data[0], data[n/2], and data[n-1]. Use this median value as the pivot.
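The median-of-three rule above can be sketched in a few lines (a helper of my own naming, not from the slides); it returns the index of the median so the caller can swap it into pivot position before partitioning:

```python
def median_of_three(A, lo, hi):
    """Return the index of the median of A[lo], A[mid], A[hi]."""
    mid = (lo + hi) // 2
    # Sort the three (value, index) pairs and take the middle one.
    trio = sorted([(A[lo], lo), (A[mid], mid), (A[hi], hi)])
    return trio[1][1]
```

A typical use is `A[hi], A[p] = A[p], A[hi]` with `p = median_of_three(A, lo, hi)`, after which a rightmost-pivot partition proceeds as usual.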
23. Merge sort
Merge-sort on an input sequence S with n elements consists of three steps:
Divide: partition S into two sequences S1 and S2 of about n/2 elements each.
Recur: recursively sort S1 and S2.
Conquer: merge S1 and S2 into a unique sorted sequence.
Example:
         A L G O R I T H M S
divide:  A L G O R | I T H M S
sort:    A G L O R | H I M S T
merge:   A G H I L M O R S T
24. Algorithm
MERGE-SORT (A, p, r)
1. IF p < r                    // Check for base case
2. THEN q = (p + r)/2          // Divide step
3. MERGE-SORT (A, p, q)        // Conquer step
4. MERGE-SORT (A, q + 1, r)    // Conquer step
5. MERGE (A, p, q, r)          // Combine step
25. MERGE(A, p, q, r)
1. Compute n1 and n2
2. Copy the first n1 elements into L[1 . . n1 + 1] and the next n2 elements into R[1 . . n2 + 1]
3. L[n1 + 1] ← ∞ ; R[n2 + 1] ← ∞
4. i ← 1 ; j ← 1
5. for k ← p to r
6.     do if L[ i ] ≤ R[ j ]
7.         then A[k] ← L[ i ]
8.             i ← i + 1
9.         else A[k] ← R[ j ]
10.            j ← j + 1
(Worked example in the slides: merging the sorted sub-arrays 2 4 5 7 and 1 2 3 6 of A[1..8].)
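The pseudocode above translates almost line for line into Python, with `math.inf` playing the role of the ∞ sentinels (0-based, inclusive bounds; names are my own):

```python
import math

def merge(A, p, q, r):
    """Merge sorted sub-arrays A[p..q] and A[q+1..r] in place."""
    L = A[p:q + 1] + [math.inf]      # sentinel avoids an end-of-list test
    R = A[q + 1:r + 1] + [math.inf]
    i = j = 0
    for k in range(p, r + 1):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def merge_sort(A, p=0, r=None):
    """Recursive merge sort over A[p..r] (inclusive)."""
    if r is None:
        r = len(A) - 1
    if p < r:
        q = (p + r) // 2          # divide
        merge_sort(A, p, q)       # conquer
        merge_sort(A, q + 1, r)   # conquer
        merge(A, p, q, r)         # combine
    return A
```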
26. Analysis
• For simplicity, assume that n is a power of 2 so that each divide step yields two subproblems, both of size exactly n/2.
• The base case occurs when n = 1. When n > 1, the time for the merge sort steps is:
Divide: just compute q as the average of p and r, which takes constant time, i.e. Θ(1).
Conquer: recursively solve 2 subproblems, each of size n/2, which is 2T(n/2).
Combine: MERGE on an n-element sub-array takes Θ(n) time.
• Summed together, these give the recurrence for the merge sort running time:
T(n) = Θ(1)                     if n = 1
     = 2T(n/2) + Θ(n) + Θ(1)    if n > 1
which solves to T(n) = Θ(n lg n).
27. Analysis of MergeSort
• O(n log n) best-, average-, and worst-case complexity, because the merging is always linear.
― Extra O(n) temporary array for merging data.
― Extra copying to the temporary array and back.
• Useful mainly for external sorting.
28. Heaps
Definitions of heap:
1. A balanced, left-justified binary tree in which no node has a value greater than the value in its parent (a max heap).
Example (min heap): for a node with value X and children Y and Z, the heap property is Y >= X and Z >= X.
29. Heap
• The binary heap data structure is an array that can be viewed as a complete binary tree. Each node of the binary tree corresponds to an element of the array. The array is completely filled on all levels except possibly the lowest.
Example: root 19 with children 12 and 16; 12 has children 4 and 1; 16 has child 7.
Array A = [19, 12, 16, 4, 1, 7]
30. Max Heap Example / Min Heap Example
• Max heap: root 19; children 12 and 16; 12 has children 4 and 1; 16 has child 7. Array A = [19, 12, 16, 4, 1, 7].
• Min heap: root 1; children 4 and 16; 4 has children 7 and 12; 16 has child 19. Array A = [1, 4, 16, 7, 12, 19].
32. • Algorithm (insert)
1. Add the new element to the next available position at the lowest level.
2. Restore the max-heap property if violated.
• The general strategy is percolate up (or bubble up): if the parent of the element is smaller than the element, then interchange the parent and child.
OR
Restore the min-heap property if violated.
• The general strategy is percolate up (or bubble up): if the parent of the element is larger than the element, then interchange the parent and child.
33. Insert 17
Starting heap: 19 / 12, 16 / 4, 1, 7. Add 17 in the next available position (as a child of 16); since 17 > 16, swap them. Percolate up to maintain the heap property, giving: 19 / 12, 17 / 4, 1, 7, 16.
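The percolate-up insertion can be sketched as follows (a list viewed as a 0-based complete binary tree, so the parent of index i is (i − 1) // 2; names are my own):

```python
def max_heap_insert(heap, key):
    """Append key at the next available position, then percolate up
    while it is larger than its parent."""
    heap.append(key)
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] < heap[i]:
            heap[parent], heap[i] = heap[i], heap[parent]   # swap with parent
            i = parent
        else:
            break    # heap property restored
    return heap
```

Running it on the slide's example reproduces the swap of 17 with 16.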
34. • Delete max
– Copy the last number to the root (overwriting the maximum element stored there).
– Restore the max-heap property by percolating down.
• Delete min
– Copy the last number to the root (overwriting the minimum element stored there).
– Restore the min-heap property by percolating down.
35. Maintaining the Heap Property
• Suppose a node is smaller than a child
– Left and Right subtrees of i are max-heaps
• To eliminate the violation:
– Exchange with the larger child
– Move down the tree
– Continue until the node is not smaller than its children
36. Maintaining the Heap Property
• Assumptions:
– Left and Right subtrees of i are max-heaps
– A[i] may be smaller than its children
Alg: MAX-HEAPIFY(A, i, n)
1. l ← LEFT(i)
2. r ← RIGHT(i)
3. if l ≤ n and A[l] > A[i]
4. then largest ← l
5. else largest ← i
6. if r ≤ n and A[r] > A[largest]
7. then largest ← r
8. if largest ≠ i
9. then exchange A[i] ↔ A[largest]
10. MAX-HEAPIFY(A, largest, n)
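MAX-HEAPIFY transcribes almost directly to Python (a sketch; with 0-based indexing, LEFT(i) = 2i+1 and RIGHT(i) = 2i+2, and n is the heap size):

```python
def max_heapify(a, i, n):
    """Sink a[i] into a heap of size n whose subtrees are already max-heaps."""
    l, r = 2 * i + 1, 2 * i + 2           # LEFT(i), RIGHT(i) for 0-based arrays
    largest = l if l < n and a[l] > a[i] else i
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]   # exchange A[i] <-> A[largest]
        max_heapify(a, largest, n)

# A standard example array; index 1 (1-based index 2) violates the heap property.
a = [16, 4, 10, 14, 7, 9, 3, 2, 8, 1]
max_heapify(a, 1, len(a))
print(a)   # → [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
```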
37. Example: MAX-HEAPIFY(A, 2, 10)
(figure) A[2] violates the heap property; after exchanging A[2] ↔ A[4], A[4] violates the heap property, and the procedure recurses.
39. T(n) = O(lg n)
• Best case occurs when no swap is performed: T(n) = O(1)
• Worst case occurs when the element is swapped at every level, all the way down to a leaf
40. BUILD-MAX-HEAP
Produces a max-heap from an unordered input array.
• O(n) calls to MAX-HEAPIFY
• Each call takes O(lg n) time, so O(n lg n) is the total time. (A tighter analysis shows BUILD-MAX-HEAP in fact runs in O(n).)
41. Heap sort
The heapsort algorithm consists of two phases:
- build a heap from an arbitrary array
- use the heap to sort the data
• To sort the elements in decreasing order, use a min heap
• To sort the elements in increasing order, use a max heap
42. Example Heap Sort
Let us look at this example: we must convert the unordered array with n = 10 elements into a max-heap.
We start with position 10/2 = 5.
43. We compare 3 with its child and swap them.
We compare 17 with its two children and swap it with the maximum child (70).
44. We compare 28 with its two children, 63 and 34, and swap it with the largest child.
We compare 52 with its children, swap it with the largest.
– Recursing, no further swaps are needed.
45. Finally, we swap the root with its largest child, and recurse, swapping 46 again with 81, and then again with 70.
46. We have now converted the unsorted array into a max-heap:
47. Suppose we pop the maximum element of this heap
This leaves a gap at the back of the array:
48. This is the last entry in the array, so why not fill it with the largest element?
Repeat this process: pop the maximum element, and then insert it at the end of the array:
52. Finally, we can pop 17, insert it into the 2nd location, and the resulting array is sorted.
53. Analysis
• The call to BuildHeap() takes O(n) time
• Each of the n - 1 calls to Heapify() takes O(lg n) time
• Thus the total time taken by HeapSort()
= O(n) + (n - 1) O(lg n)
= O(n) + O(n lg n)
= O(n lg n)
There are no distinct best-case and worst-case scenarios for heap sort: it runs in O(n lg n) regardless of the input.
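The two phases together give a complete heapsort; a minimal Python sketch (self-contained, so the sink step is restated here):

```python
def max_heapify(a, i, n):
    """Sink a[i] within a[0:n], assuming both subtrees of i are max-heaps."""
    largest = i
    l, r = 2 * i + 1, 2 * i + 2
    if l < n and a[l] > a[largest]:
        largest = l
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # phase 1: BuildHeap, O(n)
        max_heapify(a, i, n)
    for end in range(n - 1, 0, -1):       # phase 2: n-1 pops, O(lg n) each
        a[0], a[end] = a[end], a[0]       # move the max to its final slot
        max_heapify(a, 0, end)

data = [46, 3, 17, 28, 63, 34, 52, 70, 81, 10]
heapsort(data)
print(data)   # → [3, 10, 17, 28, 34, 46, 52, 63, 70, 81]
```

A max heap sorts into increasing order because repeatedly swapping the root to the end leaves the largest remaining element at the back of the array.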
57. Binomial heaps
A binomial heap is a linked list of binomial trees with the following properties:
1. The binomial trees are linked in increasing order of size.
2. There is at most one binomial tree of each size.
3. Each binomial tree has the heap structure: the value in each node is ≤ the values in its children.
(figure) head[H] points to the roots of the binomial trees, linked in increasing order of size.
59. Binomial Heap Implementation
• Each node has the following fields:
p: parent
child: leftmost child
sibling
degree
key
• Roots of the trees are connected using a linked list.
• Each node x also contains the field degree[x], which is the number of children of x.
60. Binomial Heap Implementation
(figure) each node stores key, degree, and p/child/sibling pointers (NIL where absent); head[H] links the roots of the trees.
61. Binomial Heap Operations
1. Make-Heap().
2. Insert(H, x), where x is a node.
3. Minimum(H).
4. Extract-Min(H).
5. Union(H1, H2): merge H1 and H2, creating a new heap.
6. Decrease-Key(H, x, k): decrease x.key (x is a node in H) to k. (It's assumed that k ≤ x.key.)
62. Make-Heap():
• Make an empty binomial heap. Creating all of the pointers can be done in O(1) time.
The operation simply creates a new pointer and sets it to NIL.
Binomial-Heap-Create()
1 head[H] <- NIL
2 return head[H]
63. Minimum(H):
• To do this we find the smallest key among those stored at the roots connected to the head of H.
• The minimum must be in some root in the top list.
• If there are n nodes in the heap there are at most ⌊lg n⌋ + 1 roots at the top, at most one each of degree 0, 1, 2, . . . , ⌊lg n⌋, so this can be found in O(lg n) time.
Binomial-Heap-Minimum(H)
1 y <- NIL
2 x <- head[H]
3 min <- ∞
4 while x is not NIL
5 do if key[x] < min then
6 min <- key[x]
7 y <- x
8 x <- sibling[x]
9 return y
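Binomial-Heap-Minimum is just a scan of the root list; a Python sketch with a hypothetical minimal `Node` record (only the fields the scan touches):

```python
class Node:
    """Minimal binomial-heap node: key plus a sibling link for the root list."""
    def __init__(self, key, sibling=None):
        self.key = key
        self.sibling = sibling

def binomial_heap_minimum(head):
    """Walk the root list via sibling pointers; return the node with the smallest key."""
    y, x, best = None, head, float("inf")
    while x is not None:
        if x.key < best:
            best, y = x.key, x
        x = x.sibling
    return y

# Root list 5 -> 2 -> 1 (roots in increasing order of tree size)
head = Node(5, Node(2, Node(1)))
print(binomial_heap_minimum(head).key)   # → 1
```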
64. Find Minimum Key Example
(figure) panels a)–d): x walks the root list via sibling pointers while y remembers the smallest root seen so far.
65. Binomial-Link(y,z)
Link binomial trees with the same degree. Note that z, the second argument to Binomial-Link, becomes the parent, and y becomes the child.
Link(y,z)
p[y] := z;
sibling[y] := child[z];
child[z] := y;
degree[z] := degree[z] + 1
(figure) two Bk-1 trees rooted at y and z are linked into a single Bk tree with y as z's leftmost child.
66. Union(H1,H2)
• is the most sophisticated of the binomial heap operations.
• It's used in many other operations.
• The running time will be O(log n).
(figure) Union traverses the merged root list with three pointers: prev-x, x, next-x.
67. Starting with the following two binomial heaps:
(figure) Merge root lists, but now we have two trees of the same degree.
(figure) Combine trees of the same degree using binomial link, making the smaller key the root of the combined tree.
68. Cases
(figure) pointers prev-x, x, next-x, sibling[next-x] walk along consecutive roots a, b, c, d.
Case 1: occurs when degree[x] ≠ degree[next-x], that is, when x is the root of a Bk-tree and next-x is the root of a Bl-tree for some l > k.
Case 2: occurs when x is the first of three roots of equal degree, that is, when degree[x] = degree[next-x] = degree[sibling[next-x]].
In both cases the pointers simply advance: prev-x := x; x := next-x.
69. (figure) in Case 3 next-x is linked under x; in Case 4 x is linked under next-x, producing a Bk+1-tree.
Case 3 (key[x] ≤ key[next-x]) and Case 4 (key[x] > key[next-x]): occur when x is the first of exactly two roots of equal degree, that is, when degree[x] = degree[next-x] ≠ degree[sibling[next-x]].
70. Union(H1, H2)
H := new heap;
head[H] := merge(H1, H2); /* simple merge of root lists */
if head[H] = NIL then return H fi;
prev-x := NIL;
x := head[H];
next-x := sibling[x];
while next-x ≠ NIL do
   if (degree[x] ≠ degree[next-x]) or
      (sibling[next-x] ≠ NIL and degree[sibling[next-x]] = degree[x])
   then /* Cases 1, 2 */
      prev-x := x;
      x := next-x
71.   else
      if key[x] ≤ key[next-x]
      then /* Case 3 */
         sibling[x] := sibling[next-x];
         Link(next-x, x)
      else /* Case 4 */
         if prev-x = NIL then head[H] := next-x
         else sibling[prev-x] := next-x fi;
         Link(x, next-x);
         x := next-x
      fi
   fi;
   next-x := sibling[x]
od;
return H
79. Extract Node With Minimum Key
This operation is started by finding and removing the node x with minimum key from the binomial heap H. Create a new binomial heap H' and set it to the list of x's children in reverse order. Unite H and H' to get the resulting binomial heap.
Pseudocode
Binomial-Heap-Extract-Min(H)
1 find the root x with the minimum key in the root list of H, and remove x from the root list of H.
2 H' <- Make-Binomial-Heap()
3 reverse the order of the linked list of x's children, and set head[H'] to point to the head of the resulting list.
4 H <- Binomial-Heap-Union(H,H')
5 return x
Run time: O(log n)
82. Decreasing a key
The current key is replaced with a new key. To maintain the min-heap property, it is then compared to the key of the parent. If its parent's key is greater, then the key and data will be exchanged. This process continues until the new key is greater than the parent's key or the new key is in the root.
Pseudocode:
Binomial-Heap-Decrease-Key(H,x,k)
1 if k > key[x]
2 then error "new key is greater than current key"
3 key[x] <- k
4 y <- x
5 z <- p[y]
6 while z ≠ NIL and key[y] < key[z]
7 do exchange key[y] <-> key[z]
8 if y and z have satellite fields, exchange them, too.
9 y <- z
10 z <- p[y]
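The bubble-up loop of Binomial-Heap-Decrease-Key can be sketched with a hypothetical parent-pointer `Node` (satellite data omitted; exchanging keys stands in for exchanging node contents):

```python
class Node:
    """Minimal node for the decrease-key sketch: key plus a parent pointer."""
    def __init__(self, key, parent=None):
        self.key = key
        self.p = parent

def decrease_key(x, k):
    """Lower x's key to k and bubble it toward the root (min-heap order)."""
    if k > x.key:
        raise ValueError("new key is greater than current key")
    x.key = k
    y, z = x, x.p
    while z is not None and y.key < z.key:
        y.key, z.key = z.key, y.key   # exchange keys (satellite data would move too)
        y, z = z, z.p

# Chain root(2) -> mid(10) -> leaf(15); decrease the leaf's key to 1.
root = Node(2)
mid = Node(10, root)
leaf = Node(15, mid)
decrease_key(leaf, 1)
print(root.key, mid.key, leaf.key)   # → 1 2 10
```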
83. Decreasing a key
Execution time: This procedure takes O(log n) since the maximum depth of x is log n.
Example: (figure) decreasing a key to 1 bubbles it up the tree, exchanging with its parent at each level, until the min-heap property is restored.
84. Delete a Node
Assume that no node in H has a key of -∞.
The key of the node to delete is first decreased to -∞.
This node is then deleted using the extract-min procedure.
Pseudocode:
Binomial-Heap-Delete(H,x)
1 Binomial-Heap-Decrease-Key(H,x,-∞)
2 Binomial-Heap-Extract-Min(H)
Run time: O(log n), since the run times of both Binomial-Heap-Decrease-Key and Binomial-Heap-Extract-Min are in the order of O(log n).
85. Delete a Node Example
(figure) panels a)–d): the key of the node to delete is decreased to -∞, it bubbles to the root, and Extract-Min removes it; its children form a heap H' that is united back with H.
87. Fibonacci heap
• A Fibonacci heap is a set of min-heap ordered trees.
• Each node x has a pointer p[x] to its parent and child[x] to one of its children.
• Trees are represented using left-child, right-sibling pointers and circular, doubly linked lists.
• Children are linked together in a doubly-linked circular list.
• The entire heap is accessed by a pointer min[H] which points to the minimum-key root.
91. Fibonacci heap
• Potential function:
t(H) = number of trees in the root list of H
m(H) = number of marked nodes in H
Φ(H) = t(H) + 2 m(H)
(figure) example heap H with trees(H) = 5 and marks(H) = 3, so Φ(H) = 5 + 2·3 = 11.
93. Fibonacci Heaps: Insert
Insert.
• Create a new singleton tree.
• Add to left of min pointer.
• Update min pointer.
(figure) insert 21 into heap H: 21 becomes a new singleton tree in the root list.
94. (figure) heap H after the insert: 21 sits in the root list; min is unchanged.
Insert Analysis
• Actual cost: O(1)
• Change in potential: +1
• Amortized cost: O(1)
95. Fib-Heap-Insert(H, x)
{ degree[x] ← 0
  p[x] ← NIL
  child[x] ← NIL
  left[x] ← x ; right[x] ← x
  mark[x] ← FALSE
  concatenate the root list containing x with the root list of H
  if min[H] = NIL or key[x] < key[min[H]]
    then min[H] ← x
  n[H] ← n[H] + 1
}
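A Python sketch of the same insert (hypothetical `FibNode`; the heap is modeled as a `(min_node, n)` pair, and the root list is a circular doubly linked list):

```python
class FibNode:
    def __init__(self, key):
        self.key = key
        self.degree = 0
        self.p = self.child = None
        self.mark = False
        self.left = self.right = self   # singleton circular list

def fib_insert(H, x):
    """Splice x into the root list next to min and update the min pointer."""
    min_node, n = H
    if min_node is None:
        return (x, n + 1)
    # splice x between min_node and min_node.right
    x.right = min_node.right
    x.left = min_node
    min_node.right.left = x
    min_node.right = x
    if x.key < min_node.key:
        min_node = x
    return (min_node, n + 1)

H = (None, 0)
for k in [17, 24, 23, 7, 21]:
    H = fib_insert(H, FibNode(k))
print(H[0].key, H[1])   # → 7 5
```

Each insert is O(1): it touches a constant number of pointers and never restructures the trees.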
96. Fibonacci Heaps: Union
Union.
• Concatenate two Fibonacci heaps.
• Root lists are circular, doubly linked lists.
(figure) heaps H' and H'' before the union, each with its own min pointer.
97. (figure) heap H after the union: the two root lists are spliced together and min points to the smaller of the two minimums.
• Actual cost: O(1)
• Change in potential: 0
• Amortized cost: O(1)
98. Fib-Heap-Union(H1, H2)
{ H ← Make-Fib-Heap()
  min[H] ← min[H1]
  concatenate the root list of H2 with the root list of H
  if (min[H1] = NIL) or (min[H2] ≠ NIL and min[H2] < min[H1])
    then min[H] ← min[H2]
  n[H] ← n[H1] + n[H2]
  free the objects H1 and H2
  return H
}
99. Extract min()
Fib-Heap-Extract-Min(H)
{ z ← min[H]
  if z ≠ NIL
    then { for each child x of z
             do { add x to the root list of H
                  p[x] ← NIL }
           remove z from the root list of H
           if z = right[z]
             then min[H] ← NIL
             else min[H] ← right[z]
                  Consolidate(H)
           n[H] ← n[H] - 1
         }
  return z
}
100. Fib-Heap-Link(H, y, x)
{ remove y from the root list of H;
  make y a child of x;
  degree[x] ← degree[x] + 1;
  mark[y] ← FALSE;
}
Consolidate(H)
{ for i ← 0 to D(n[H]) do A[i] ← NIL
  for each node w in the root list of H
    do { x ← w ; d ← degree[x]
         while A[d] ≠ NIL
           do { y ← A[d]
                if key[x] > key[y] then exchange x ↔ y
                Fib-Heap-Link(H, y, x)
                A[d] ← NIL ; d ← d + 1 }
         A[d] ← x }
  min[H] ← NIL
  for i ← 0 to D(n[H]) do
    if A[i] ≠ NIL then { add A[i] to the root list of H ;
                         if min[H] = NIL or key[A[i]] < key[min[H]]
                           then min[H] ← A[i] }
}
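Consolidate's degree table can be sketched with simplified nodes (a hypothetical `T` class with a plain child list instead of circular lists; a dict plays the role of A[0..D(n)]):

```python
class T:
    """Simplified tree node for the consolidate sketch: key, degree, child list."""
    def __init__(self, key):
        self.key = key
        self.degree = 0
        self.children = []

def link(y, x):
    """Make y (the larger key) a child of x, as in Fib-Heap-Link."""
    x.children.append(y)
    x.degree += 1

def consolidate(roots):
    """Repeatedly combine roots of equal degree until all degrees are distinct."""
    A = {}                      # degree -> surviving root of that degree
    for w in roots:
        x = w
        while x.degree in A:
            y = A.pop(x.degree)
            if x.key > y.key:
                x, y = y, x     # keep the smaller key as the surviving root
            link(y, x)          # linking bumps x.degree, so the loop rechecks
        A[x.degree] = x
    return sorted(A.values(), key=lambda r: r.key)

roots = [T(k) for k in [7, 24, 23, 17, 3]]
new_roots = consolidate(roots)
print([(r.key, r.degree) for r in new_roots])   # → [(3, 0), (7, 2)]
```

After consolidation no two roots share a degree, which is what bounds the root-list length by D(n) = O(lg n).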
101. Fibonacci Heaps: Delete Min
• Delete min.
– Delete min; meld its children into root list; update min.
– Consolidate trees so that no two roots have same rank.
(figure) the heap before deleting the minimum.
102. (figure) after melding the children of the deleted minimum into the root list, consolidation walks the roots with a current pointer, linking trees of equal rank.
114. Fibonacci Heaps: Decrease Key
Decrease key of element x to k.
Case 0: min-heap property not violated.
• decrease key of x to k
• change heap min pointer if necessary
(figure) Decrease 46 to 45: 45 is still no smaller than its parent 24, so nothing else changes.
115. Case 1: parent of x is unmarked.
• decrease key of x to k
• cut off link between x and its parent
• mark parent
• add tree rooted at x to root list, updating heap min pointer
(figure) Decrease 45 to 15: 15 < 24 violates the heap order, so the node is cut from its parent 24.
116. (figure) before and after: the tree rooted at 15 moves to the root list, min now points to 15, and its former parent 24 is marked.
117. Case 2: parent of x is marked.
• decrease key of x to k
• cut off link between x and its parent p[x], and add x to root list
• cut off link between p[x] and p[p[x]], add p[x] to root list
– If p[p[x]] is unmarked, then mark it.
– If p[p[x]] is marked, cut off p[p[x]], unmark, and repeat (a cascading cut).
(figure) Decrease 35 to 5: 5 violates the heap order under its marked parent.
118. (figure) 5 joins the root list; its parent 26 is marked, so 26 is also cut (a cascading cut), unmarked, and added to the root list.
119. (figure) the cut cascades once more: 24, also marked, is cut, unmarked, and added to the root list; min now points to 5.
122. Amortized Analysis techniques
• In amortized analysis we average the time required for a sequence of operations over all the operations performed.
• Amortized analysis guarantees an average worst case for each operation.
– No involvement of probability
• The amortized cost per operation is therefore T(n)/n.
 The aggregate method
 The accounting method
 The potential method
123. Aggregate analysis
– The total amount of time needed for the n operations is computed and divided by n.
– Treat all operations equally.
– Compute the worst-case running time of a sequence of n operations.
– Divide by n to get an amortized running time.
– We aggregate the cost of a series of n operations to T(n); then each operation has the same amortized cost of T(n)/n.
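A standard illustration of aggregate analysis (not from the slides) is the binary counter: n INCREMENT operations flip fewer than 2n bits in total, so the amortized cost per operation is T(n)/n < 2 = O(1), even though a single increment can flip lg n bits:

```python
def increment(counter):
    """One INCREMENT on a little-endian bit list; returns the bit flips it cost."""
    flips, i = 0, 0
    while i < len(counter) and counter[i] == 1:
        counter[i] = 0                # carry: each trailing 1 flips to 0
        flips += 1
        i += 1
    if i < len(counter):
        counter[i] = 1                # one 0 flips to 1
        flips += 1
    return flips

counter = [0] * 16
n = 1000
total = sum(increment(counter) for _ in range(n))
print(total, total / n)               # total < 2n, so amortized cost < 2 flips
```

Bit i flips ⌊n/2^i⌋ times over the whole sequence, so the total is n + n/2 + n/4 + ... < 2n.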
124. The Accounting method
• Principles of the accounting method
– 1. Associate credit accounts with different parts of the structure
– 2. Associate amortized costs with operations and show how they credit or debit accounts
• Different costs may be assigned to different operations: operations are assigned an amortized cost, and objects of the data structure are assigned a credit.
125. Accounting Method vs. Aggregate
Method
• Aggregate method:
– first analyze entire sequence
– then calculate amortized cost per operation
• Accounting method:
– first assign amortized cost per operation
– check that they are valid (never go into the red)
– then compute cost of entire sequence of operations
126. The Potential method
• Similar to the accounting method
• Amortized costs are assigned in a more complicated way
– based on a potential function
– and the current state of the data structure
• Must ensure that the sum of the amortized costs of all operations in the sequence is at least the sum of the actual costs of all operations in the sequence.
• Define a potential function which maps any state of the data structure to a real number
• Notation:
– D0 - initial state of data structure
– Di - state of data structure after i-th operation
– ci - actual cost of i-th operation
– mi - amortized cost of i-th operation: mi = ci + Φ(Di) − Φ(Di−1)
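The notation can be exercised on the standard binary-counter example (not from the slides): with Φ(Di) = number of 1 bits after the i-th increment, every increment that flips t ones to zero has actual cost ci = t + 1 and potential change 1 − t, so its amortized cost mi = ci + Φ(Di) − Φ(Di−1) is exactly 2:

```python
def increment(counter):
    """One INCREMENT on a little-endian bit list; returns actual cost c_i (bit flips)."""
    i = 0
    while i < len(counter) and counter[i] == 1:
        counter[i] = 0
        i += 1
    flips = i
    if i < len(counter):
        counter[i] = 1
        flips += 1
    return flips

def phi(counter):
    return sum(counter)               # potential = number of 1 bits

counter = [0] * 16                    # large enough that no overflow occurs below
amortized = []
for _ in range(100):
    before = phi(counter)
    c = increment(counter)
    amortized.append(c + phi(counter) - before)   # m_i = c_i + Φ(D_i) − Φ(D_{i−1})
print(max(amortized))                 # → 2
```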
127. Red-Black Trees
A red-black tree can also be defined as a binary search tree that satisfies the following properties:
1. A node is either red or black.
2. The root is ALWAYS black.
3. All leaves are black.
4. Both children of a red node are black (no red node can have a red child).
5. Every path from a given node down to any descendant leaf contains the same number of black nodes. The number of black nodes on such a path (not including the initial node but including leaves) is called the black-height (bh) of the node.
The red-black tree has O(lg n) height.
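The five properties can be checked mechanically; a sketch with a hypothetical `RBNode` class, where `None` plays the black nil leaves:

```python
RED, BLACK = "red", "black"

class RBNode:
    def __init__(self, key, color, left=None, right=None):
        self.key, self.color, self.left, self.right = key, color, left, right

def black_height(n):
    """Return the black-height of n, or -1 if any property fails below n.
    None stands for the black leaves (nil nodes)."""
    if n is None:
        return 1
    if n.color == RED:
        for c in (n.left, n.right):
            if c is not None and c.color == RED:
                return -1             # property 4: a red node must have black children
    lh, rh = black_height(n.left), black_height(n.right)
    if lh == -1 or rh == -1 or lh != rh:
        return -1                     # property 5: equal black counts on all paths
    return lh + (1 if n.color == BLACK else 0)

def is_red_black(root):
    """Properties 1-5: colored nodes, black root, black leaves, no red-red, equal bh."""
    return root is not None and root.color == BLACK and black_height(root) != -1

ok = RBNode(11, BLACK,
            RBNode(2, RED, RBNode(1, BLACK), RBNode(7, BLACK)),
            RBNode(14, BLACK))
bad = RBNode(11, BLACK, RBNode(2, RED, RBNode(1, RED), None), None)
print(is_red_black(ok), is_red_black(bad))   # → True False
```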
128. Red-Black Tree
■ Root Property: the root is black
■ External Property: every leaf is black
■ Internal Property: the children of a red node are black
■ Depth Property: all the leaves have the same black depth
129. Rotations
• Rotations are the basic tree-restructuring operation for almost all balanced search trees.
• A rotation takes a red-black tree and a node,
• changes pointers to change the local structure, and won't violate the binary-search-tree property.
• Left rotation and right rotation are inverses.
(figure) Left-Rotate(T, x) makes x's right child y the new subtree root; Right-Rotate(T, y) undoes it.
130. An example of LEFT-ROTATE(T, x)
(figure)
131. Left and Right Rotation
Left-Rotate (T, x)
1. y ← right[x] // Set y.
2. right[x] ← left[y] // Turn y's left subtree into x's right subtree.
3. if left[y] ≠ nil[T]
4. then p[left[y]] ← x
5. p[y] ← p[x] // Link x's parent to y.
6. if p[x] = nil[T]
7. then root[T] ← y
8. else if x = left[p[x]]
9. then left[p[x]] ← y
10. else right[p[x]] ← y
11. left[y] ← x // Put x on y's left.
12. p[x] ← y
• The code for RIGHT-ROTATE is symmetric.
• Both LEFT-ROTATE and RIGHT-ROTATE run in O(1) time.
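Left-Rotate carries over to Python nearly line for line (a sketch; `None` stands for nil[T]). Note that the inorder sequence, and hence the BST property, is preserved:

```python
class N:
    """Plain BST node with a parent pointer."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right, self.p = key, left, right, None
        for c in (left, right):
            if c is not None:
                c.p = self

def left_rotate(root, x):
    """Rotate left around x; return the (possibly new) tree root."""
    y = x.right                      # set y
    x.right = y.left                 # turn y's left subtree into x's right subtree
    if y.left is not None:
        y.left.p = x
    y.p = x.p                        # link x's parent to y
    if x.p is None:
        root = y
    elif x is x.p.left:
        x.p.left = y
    else:
        x.p.right = y
    y.left = x                       # put x on y's left
    x.p = y
    return root

def inorder(n):
    return [] if n is None else inorder(n.left) + [n.key] + inorder(n.right)

x = N(10, N(5), N(20, N(15), N(25)))
root = left_rotate(x, x)
print(root.key, inorder(root))   # → 20 [5, 10, 15, 20, 25]
```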
132. Right rotation:
1. x = left[y];
2. left[y] = right[x];
3. if (right[x] != nil)
4. then p[right[x]] = y;
5. p[x] = p[y];
6. if (p[y] == nil)
7. then root = x;
8. else if (left[p[y]] == y)
9. then left[p[y]] = x;
10. else right[p[y]] = x;
11. right[x] = y;
12. p[y] = x;
133. Rotation
• The pseudo-code for Left-Rotate assumes that
– right[x] ≠ nil[T], and
– the root's parent is nil[T].
• Left Rotation on x makes x the left child of y, and the left subtree of y into the right subtree of x.
• Pseudocode for Right-Rotate is symmetric: exchange left and right everywhere.
• Time: O(1) for both Left-Rotate and Right-Rotate, since a constant number of pointers are modified.
Operations on RB Trees
• All operations can be performed in O(lg n) time.
• Insertion and Deletion are not straightforward.
134. When Inserting a Node
Remember:
1. Insert nodes one at a time, and after every insertion balance the tree.
2. Every node inserted starts as a red node.
3. Consult the cases for rebalancing the tree.
• Basic steps:
1. Use Tree-Insert from BST (slightly modified) to insert a node x into T.
- Procedure RB-Insert(x).
- Color the node x red.
2. Fix the modified tree by re-coloring nodes and performing rotations to preserve the RB tree property.
- Procedure RB-Insert-Fixup.
135. Red-Black fixup
• y = z’s “uncle”
• Three cases:
– y is red
– y is black and z is a right child
– y is black and z is a left child.
136. Case 1 – z’s uncle y is red
(figure) recoloring: z's parent and uncle become black, z's grandparent becomes red, and z moves up to the grandparent.
z is a right child here. Similar steps if z is a left child.
• y.Color = black
• z.Parent.Color = black
• z.Parent.Parent.Color = red
• z = z.Parent.Parent
• Repeat fixup
137.–141. (figures) Worked example on the tree 11, 2, 14, 1, 7, 15, 5, 8 after inserting the new node 4 (red): Case 1 recolors y and z's parent black and the grandparent red, then sets z = z.Parent.Parent and repeats the fixup; z climbs two levels per iteration until its uncle y is black.
142. Case 2 – y is black, z is a right child
(figure) a left rotation at z's parent turns Case 2 into Case 3.
• z = z.Parent
• Left-Rotate(T, z)
• Do Case 3
• Note that Case 2 is a subset of Case 3
143.–144. (figures) With uncle y black and z a right child: set z = z.Parent, Left-Rotate(T, z), then fall through to Case 3.
146. Case 3 – y is black, z is a left child
(figure) recolor, then rotate right at the grandparent.
• z.Parent.Color = black
• z.Parent.Parent.Color = red
• Right-Rotate(T, z.Parent.Parent)
147.–150. (figures) Worked example: z's parent 7 is colored black, the grandparent 11 is colored red, and Right-Rotate(T, z.Parent.Parent) makes 7 the new subtree root with children 2 and 11; the result is again a valid red-black tree.
151. RB-Insert(T, z)
1. y ← nil[T]
2. x ← root[T]
3. while x ≠ nil[T]
4. do y ← x
5. if key[z] < key[x]
6. then x ← left[x]
7. else x ← right[x]
8. p[z] ← y
9. if y = nil[T]
10. then root[T] ← z
11. else if key[z] < key[y]
12. then left[y] ← z
13. else right[y] ← z
14. left[z] ← nil[T]
15. right[z] ← nil[T]
16. color[z] ← RED
17. RB-Insert-Fixup(T, z)
152. RB-Insert-Fixup(T, z)
1. while color[p[z]] = RED
2. do if p[z] = left[p[p[z]]]
3. then y ← right[p[p[z]]]
4. if color[y] = RED
5. then color[p[z]] ← BLACK // Case 1
6. color[y] ← BLACK // Case 1
7. color[p[p[z]]] ← RED // Case 1
8. z ← p[p[z]] // Case 1
9. else if z = right[p[z]] // color[y] ≠ RED
10. then z ← p[z] // Case 2
11. LEFT-ROTATE(T, z) // Case 2
12. color[p[z]] ← BLACK // Case 3
13. color[p[p[z]]] ← RED // Case 3
14. RIGHT-ROTATE(T, p[p[z]]) // Case 3
15. else (if p[z] = right[p[p[z]]]) (same as 10-14
16. with “right” and “left” exchanged)
17. color[root[T]] ← BLACK
153. Correctness
Loop invariant:
• At the start of each iteration of the while loop,
– z is red.
– If p[z] is the root, then p[z] is black.
– There is at most one red-black violation:
• Property 2: z is a red root, or
• Property 4: z and p[z] are both red.
154. • Termination: The loop terminates only if p[z] is black. Hence, property 4 is OK. The last line ensures property 2 always holds.
• Maintenance: We drop out when z is the root (since then p[z] is the sentinel nil[T], which is black). When we start the loop body, the only violation is of property 4.
– There are 6 cases, 3 of which are symmetric to the other 3. We consider the cases in which p[z] is a left child.
– Let y be z’s uncle (p[z]’s sibling).
155. Algorithm Analysis
• O(lg n) time to get through RB-Insert up to the call of RB-Insert-Fixup.
• Within RB-Insert-Fixup:
– Each iteration takes O(1) time.
– Each iteration but the last moves z up 2 levels.
– O(lg n) levels ⇒ O(lg n) time.
– Thus, insertion in a red-black tree takes O(lg n) time.
– Note: there are at most 2 rotations overall.
156. Deletion
• Find
• Swap
– Moves entry to node with one external node (left)
• Remove entry
• Reattach right child
157. Deletion
Deletion from a red-black tree is similar to deletion from a binary search tree, with a few exceptions:
• Always set the parent of a deleted node to be the parent of one of the deleted node’s children.
• The red-black fix-up method is called if the removed node is black.
After a deletion of a red node (no violations occur):
• No black-heights have been affected.
• No red nodes have been made adjacent (parent and child both red).
• The deleted node is not the root, since the root is black.
158. • After deletion of a black node a restore function must be called to fix red-black properties that might be violated. There are 3 possible initial violations.
– If the deleted node was the root, a red child might now be the root, a violation of property 2.
– If both the parent of the removed node and a child of the removed node are red, we have a violation of property 4.
– The removal of a black node can cause the black-height of one path to be shorter (by 1), violating property 5.
– We correct the problem of rule 5 by adding an extra “black” to the node passed into the fix-up procedure. This leads to a violation of rule 1, since this node is now neither red nor black.
159. Delete Possibilities
1: Delete red node
• No problem
2: Delete black node with red child
• Color red child black
3: Delete black node with black child
• Color child “Double Black”
• 3 possibilities depending on neighboring nodes
– X’s sibling is black with at least one red child
– X’s sibling is black with no red children
– X’s sibling is red
160. Deletion – Fixup
• Idea: Move the extra black up the tree until
• x points to a red & black node: turn it into a black node, or
• x points to the root: just remove the extra black, or
• we can do certain rotations and recolorings and finish.
• Within the while loop:
– x always points to a nonroot doubly black node.
– w is x’s sibling.
– w cannot be nil[T], since that would violate property 5 at p[x].
161. Case 1 – w is red
(figure) a left rotation at p[x] gives x a black sibling.
• w must have black children.
• Make w black and p[x] red.
• Then left rotate on p[x].
• The new sibling of x was a child of w before the rotation, so it must be black.
• Go immediately to case 2, 3, or 4.
162. Case 2 – w is black, both w’s children are black
(figure) recoloring pushes the extra black up to p[x].
• Take 1 black off x (now singly black) and off w (now red).
• Move that black to p[x].
• Do the next iteration with p[x] as the new x.
• If we entered this case from case 1, then p[x] was red, so the new x is red & black, the color attribute of the new x is RED, and the loop terminates. The new x is then made black in the last line.
163. Case 3 – w is black, w’s left child is red, w’s right child is black
(figure) a right rotation at w makes x’s new sibling black with a red right child.
• Make w red and w’s left child black.
• Then right rotate on w.
• The new sibling w of x is black with a red right child: case 4.
164. Case 4 – w is black, w’s right child is red
(figure) recolor and left rotate at p[x] to absorb the extra black.
• Make w be p[x]’s color (c).
• Make p[x] black and w’s right child black.
• Then left rotate on p[x].
• Remove the extra black on x (x is now singly black) without violating any red-black properties.
• All done. Setting x to the root causes the loop to terminate.
165. RB-Delete(T, z)
1.  if left[z] = nil[T] or right[z] = nil[T]
2.     then y ← z
3.     else y ← TREE-SUCCESSOR(z)
4.  if left[y] ≠ nil[T]
5.     then x ← left[y]
6.     else x ← right[y]
7.  p[x] ← p[y]                        // Do this, even if x is nil[T]
8.  if p[y] = nil[T]
9.     then root[T] ← x
10.    else if y = left[p[y]]
11.            then left[p[y]] ← x
12.            else right[p[y]] ← x
13. if y ≠ z
14.    then key[z] ← key[y]
15.         copy y's satellite data into z
16. if color[y] = BLACK
17.    then RB-Delete-Fixup(T, x)
18. return y
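Lines 1–15 above are ordinary BST deletion (splicing out y and moving its key into z when y is the successor). A minimal standalone Python sketch of that splice, with the recoloring call of lines 16–17 left as a comment; the `Node` fields, `make_node` helper, and `NIL` sentinel are illustrative names, not from the slides:

```python
BLACK, RED = "BLACK", "RED"

class Node:
    def __init__(self, key, color=BLACK):
        self.key, self.color = key, color
        self.parent = self.left = self.right = None

NIL = Node(None)                      # sentinel nil[T], always black
NIL.parent = NIL.left = NIL.right = NIL

def make_node(key, color=BLACK):
    n = Node(key, color)
    n.parent = n.left = n.right = NIL
    return n

def tree_minimum(x):
    while x.left is not NIL:
        x = x.left
    return x

def tree_successor(z):
    if z.right is not NIL:
        return tree_minimum(z.right)
    y = z.parent
    while y is not NIL and z is y.right:
        z, y = y, y.parent
    return y

def rb_delete(T, z):                  # T is a dict holding the root
    # Lines 1-3: y is the node actually spliced out.
    if z.left is NIL or z.right is NIL:
        y = z
    else:
        y = tree_successor(z)
    # Lines 4-6: x is y's only child (possibly NIL).
    x = y.left if y.left is not NIL else y.right
    # Line 7: set x's parent even if x is NIL (the fixup needs it).
    x.parent = y.parent
    # Lines 8-12: splice y out of the tree.
    if y.parent is NIL:
        T["root"] = x
    elif y is y.parent.left:
        y.parent.left = x
    else:
        y.parent.right = x
    # Lines 13-15: if the successor was removed, move its key into z.
    if y is not z:
        z.key = y.key
    # Lines 16-17: if color[y] = BLACK, RB-Delete-Fixup(T, x) would run here.
    return y
```

Deleting a node with two children thus never moves nodes around; only the successor's key is copied into z, and the successor itself is unlinked.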
166. RB-Delete-Fixup(T, x)
1.  while x ≠ root[T] and color[x] = BLACK
2.     do if x = left[p[x]]
3.           then w ← right[p[x]]
4.                if color[w] = RED
5.                   then color[w] ← BLACK              // Case 1
6.                        color[p[x]] ← RED             // Case 1
7.                        LEFT-ROTATE(T, p[x])          // Case 1
8.                        w ← right[p[x]]               // Case 1
167. RB-Delete-Fixup(T, x) (continued)
                /* x is still left[p[x]] */
9.                if color[left[w]] = BLACK and color[right[w]] = BLACK
10.                  then color[w] ← RED                // Case 2
11.                       x ← p[x]                      // Case 2
12.                  else if color[right[w]] = BLACK
13.                          then color[left[w]] ← BLACK // Case 3
14.                               color[w] ← RED         // Case 3
15.                               RIGHT-ROTATE(T, w)     // Case 3
16.                               w ← right[p[x]]        // Case 3
17.                       color[w] ← color[p[x]]        // Case 4
18.                       color[p[x]] ← BLACK           // Case 4
19.                       color[right[w]] ← BLACK       // Case 4
20.                       LEFT-ROTATE(T, p[x])          // Case 4
21.                       x ← root[T]                   // Case 4
22.          else (same as then clause with "right" and "left" exchanged)
23. color[x] ← BLACK
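The pseudocode above translates almost line for line into Python. A minimal sketch, assuming nodes carry color/parent/left/right fields and the tree keeps a shared black nil sentinel (class and method names are illustrative, not from the slides):

```python
RED, BLACK = "RED", "BLACK"

class Node:
    def __init__(self, key, color=RED):
        self.key, self.color = key, color
        self.parent = self.left = self.right = None

class RBTree:
    def __init__(self):
        self.nil = Node(None, BLACK)   # sentinel nil[T], always black
        self.root = self.nil

    def left_rotate(self, x):
        y = x.right
        x.right = y.left
        if y.left is not self.nil:
            y.left.parent = x
        y.parent = x.parent
        if x.parent is self.nil:
            self.root = y
        elif x is x.parent.left:
            x.parent.left = y
        else:
            x.parent.right = y
        y.left = x
        x.parent = y

    def right_rotate(self, x):         # mirror image of left_rotate
        y = x.left
        x.left = y.right
        if y.right is not self.nil:
            y.right.parent = x
        y.parent = x.parent
        if x.parent is self.nil:
            self.root = y
        elif x is x.parent.right:
            x.parent.right = y
        else:
            x.parent.left = y
        y.right = x
        x.parent = y

    def delete_fixup(self, x):
        while x is not self.root and x.color == BLACK:
            if x is x.parent.left:
                w = x.parent.right              # x's sibling
                if w.color == RED:              # Case 1
                    w.color = BLACK
                    x.parent.color = RED
                    self.left_rotate(x.parent)
                    w = x.parent.right
                if w.left.color == BLACK and w.right.color == BLACK:
                    w.color = RED               # Case 2: push black up
                    x = x.parent
                else:
                    if w.right.color == BLACK:  # Case 3: convert to case 4
                        w.left.color = BLACK
                        w.color = RED
                        self.right_rotate(w)
                        w = x.parent.right
                    w.color = x.parent.color    # Case 4: finish
                    x.parent.color = BLACK
                    w.right.color = BLACK
                    self.left_rotate(x.parent)
                    x = self.root
            else:                               # symmetric: left/right swapped
                w = x.parent.left
                if w.color == RED:
                    w.color = BLACK
                    x.parent.color = RED
                    self.right_rotate(x.parent)
                    w = x.parent.left
                if w.right.color == BLACK and w.left.color == BLACK:
                    w.color = RED
                    x = x.parent
                else:
                    if w.left.color == BLACK:
                        w.right.color = BLACK
                        w.color = RED
                        self.left_rotate(w)
                        w = x.parent.left
                    w.color = x.parent.color
                    x.parent.color = BLACK
                    w.left.color = BLACK
                    self.right_rotate(x.parent)
                    x = self.root
        x.color = BLACK
```

For example, splicing a black leaf out of the tree 20(B) with children 10(B) and 30(B) leaves the sentinel as x; the fixup then hits case 2, recoloring 30 red and terminating at the root.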
168. Delete Analysis
O(lg n) time to get through RB-Delete up to the call of RB-Delete-Fixup.
Within RB-Delete-Fixup:
•Case 2 is the only case in which more iterations occur; x moves up 1 level each time, hence O(lg n) iterations.
•Each of cases 1, 3, and 4 performs 1 rotation, so at most 3 rotations in all.
Hence, O(lg n) time.