This presentation covers the fundamentals of algorithms: their definition, design, complexity analysis, running-time calculation, and common sorting and searching algorithms with their running times and examples.
The document discusses hashing techniques for mapping keys to indices in a hash table. It covers:
1) Hash functions which map keys to hash codes using techniques like polynomial accumulation and then compress the codes to indices using modulo.
2) Open addressing techniques like linear probing and double hashing which store elements directly in the hash table by probing for empty slots when collisions occur.
3) Analysis showing that with open addressing, an unsuccessful search makes on average at most 1/(1-α) probes and a successful search at most (1/α) ln(1/(1-α)) probes, where α is the load factor. With chaining, the expected cost is O(1 + α).
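To make the hash-and-probe mechanics above concrete, here is a minimal Python sketch of an open-addressing table with linear probing. The class name, fixed capacity, and use of Python's built-in `hash` for the hash code are illustrative assumptions, not the document's code.

```python
class LinearProbingTable:
    """Minimal open-addressing hash table using linear probing (illustrative sketch)."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.slots = [None] * capacity  # each slot holds a (key, value) pair or None

    def _index(self, key):
        # Compress the hash code to a table index using modulo.
        return hash(key) % self.capacity

    def put(self, key, value):
        i = self._index(key)
        for _ in range(self.capacity):
            # Store here if the slot is empty or already holds this key.
            if self.slots[i] is None or self.slots[i][0] == key:
                self.slots[i] = (key, value)
                return
            i = (i + 1) % self.capacity  # collision: probe the next slot
        raise RuntimeError("table full")

    def get(self, key):
        i = self._index(key)
        for _ in range(self.capacity):
            if self.slots[i] is None:
                return None  # an unsuccessful search stops at an empty slot
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) % self.capacity
        return None
```

As the load factor α approaches 1, the probe sequences above grow longer, which is exactly what the 1/(1-α) bound captures.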
Data structures and algorithms involve organizing data to solve problems efficiently. An algorithm describes computational steps, while a program implements an algorithm. Key aspects of algorithms include efficiency as input size increases. Experimental studies measure running time but have limitations. Pseudocode describes algorithms at a high level. Analysis counts primitive operations to determine asymptotic running time, ignoring constant factors. The best, worst, and average cases analyze efficiency. Asymptotic notation like Big-O simplifies analysis by focusing on how time increases with input size.
The document describes the merge sort algorithm. It begins by explaining the divide and conquer approach to algorithm design, which merge sort uses. It then provides pseudocode that divides the input array into two halves, recursively sorts each half using merge sort, and then merges the two sorted halves back together. The document includes examples walking through applying merge sort to an array. It analyzes the running time of merge sort using recurrence relations and the substitution method, determining its runtime is O(n log n). Finally, it provides Java code implementing merge sort.
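The divide, recurse, and merge steps described above can be sketched as follows. The original document provides Java code, which is not reproduced here; this is an equivalent Python sketch.

```python
def merge_sort(a):
    """Top-down merge sort; returns a new sorted list. O(n log n)."""
    if len(a) <= 1:
        return a[:]                  # base case: a list of 0 or 1 elements is sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # recursively sort each half
    right = merge_sort(a[mid:])
    # Merge the two sorted halves back together.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])          # append whatever remains of either half
    merged.extend(right[j:])
    return merged
```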
Quicksort is a divide-and-conquer algorithm that partitions an array into two subarrays such that every element of one subarray is less than every element of the other, then recursively sorts the subarrays. The average runtime is O(n log n), but the worst case is O(n^2), which occurs on an already-sorted array when a fixed pivot (such as the first or last element) is used. Randomizing the pivot selection yields an expected runtime of O(n log n) on all inputs.
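A minimal sketch of randomized quicksort, illustrating how a random pivot avoids the sorted-input worst case; this is a generic Lomuto-style partition, not code from the summarized document.

```python
import random


def quicksort(a):
    """In-place randomized quicksort; expected O(n log n) on any input."""

    def partition(lo, hi):
        p = random.randint(lo, hi)       # random pivot avoids the sorted-input worst case
        a[p], a[hi] = a[hi], a[p]        # move the pivot to the end
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] < pivot:             # smaller elements go to the left side
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]        # place the pivot between the two sides
        return i

    def sort(lo, hi):
        if lo < hi:
            m = partition(lo, hi)
            sort(lo, m - 1)              # recursively sort each side of the pivot
            sort(m + 1, hi)

    sort(0, len(a) - 1)
    return a
```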
The document discusses divide and conquer algorithms and merge sort. It provides details on how merge sort works, including: (1) divide the input array into halves recursively until single-element subarrays remain, (2) sort the subarrays recursively with merge sort, (3) merge the sorted subarrays back together. The overall running time of merge sort is analyzed to be Θ(n log n), as each level of recursion contributes Θ(n) work and there are log n levels of recursion.
The document discusses time and space complexity analysis of algorithms. Time complexity measures the number of steps to solve a problem based on input size, with common orders being O(log n), O(n), O(n log n), O(n^2). Space complexity measures memory usage, which can be reused unlike time. Big O notation describes asymptotic growth rates to compare algorithm efficiencies, with constant O(1) being best and exponential O(c^n) being worst.
This document discusses algorithm analysis and asymptotic notation. It introduces algorithms for computing prefix averages in arrays. One algorithm runs in quadratic time O(n^2) by applying the definition directly. A more efficient linear time O(n) algorithm is also presented that maintains a running sum. Asymptotic analysis determines the worst-case running time of an algorithm as a function of the input size using big-O notation. This provides an analysis of algorithms that is independent of implementation details and hardware.
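The two prefix-averages algorithms mentioned above can be sketched side by side; the function names are illustrative, not taken from the document.

```python
def prefix_averages_quadratic(x):
    """A[i] = average of x[0..i], computed directly from the definition: O(n^2)."""
    return [sum(x[:i + 1]) / (i + 1) for i in range(len(x))]


def prefix_averages_linear(x):
    """Same result in O(n) by maintaining a running sum instead of re-summing."""
    out, running = [], 0
    for i, v in enumerate(x):
        running += v
        out.append(running / (i + 1))
    return out
```

Both produce identical output; only the number of primitive operations differs, which is exactly the distinction asymptotic analysis captures.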
The document discusses algorithms and their analysis. It defines an algorithm as a well-defined computational procedure that takes inputs and produces outputs. It discusses analyzing algorithms based on their time complexity, space complexity, and correctness. It provides examples of analyzing simple algorithms and calculating their complexity based on the number of elementary operations.
Introducción al Análisis y diseño de algoritmos (Introduction to the Analysis and Design of Algorithms), by luzenith_g
The document discusses algorithms and their analysis. It defines an algorithm as a well-defined computational procedure that takes inputs and produces outputs. It discusses analyzing algorithms to determine their time and space complexity, and how this involves determining how the resources required grow with the size of the problem. It provides examples of analyzing simple algorithms and determining whether they have linear, quadratic, or other complexity.
This experiment aims to write a Matlab program to perform convolution of two signals. Convolution relates the input, output, and impulse response of a linear time-invariant system. The Matlab code takes the length and values of two signals as input, performs convolution by multiplying and summing aligned values, and plots the resulting convolution signal. The experiment helps learn about convolution and how to implement it in Matlab code.
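The multiply-and-sum procedure described above can be sketched in Python (the experiment itself uses MATLAB); this is a direct implementation of the convolution sum, assuming finite discrete sequences as plain lists.

```python
def convolve(x, h):
    """Linear convolution y[n] = sum over k of x[k] * h[n - k].

    The output has length len(x) + len(h) - 1, matching MATLAB's conv.
    """
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):      # only sum over aligned, in-range samples
                y[n] += x[k] * h[n - k]
    return y
```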
Radix sort considers the structure of keys and sorts based on comparing bits in the same position. Bucket sort is used to stably sort based on individual bits, with time complexity O(bn) for b-bit keys. While comparison-based sorting requires Ω(n log n) time, radix sort circumvents this lower bound by exploiting key structure rather than just comparisons.
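The bit-at-a-time strategy above can be sketched as an LSD radix sort on non-negative integers: one stable two-bucket pass per bit position, for O(bn) total work. The function name and the default of 8 bits are illustrative assumptions.

```python
def radix_sort_bits(keys, bits=8):
    """LSD radix sort: one stable bucket pass per bit position, O(b * n) total."""
    for b in range(bits):
        # Stable partition on bit b: list comprehensions preserve relative order
        # within each bucket, which is what makes the pass "stable".
        zeros = [k for k in keys if not (k >> b) & 1]
        ones = [k for k in keys if (k >> b) & 1]
        keys = zeros + ones
    return keys
```

Because each pass compares only one bit position rather than whole keys, this sidesteps the Ω(n log n) lower bound that applies to comparison-based sorting.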
Time complexity (linear search vs binary search), by Kumar
The document discusses asymptotic analysis and big O notation. It begins with an outline of the topics to be covered: asymptotic analysis and a comparison of linear and binary search algorithms. It then provides examples of using big O notation to classify the runtime of different algorithms, such as linear search being O(n) and binary search being O(log n). It introduces the formal definitions of big O, Omega, and Theta notation. The document aims to build intuition for analyzing algorithms and classifying them according to asymptotic runtime.
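The O(n) versus O(log n) contrast above can be sketched directly; binary search assumes the input list is already sorted.

```python
def linear_search(a, target):
    """O(n): scan every element until the target is found."""
    for i, v in enumerate(a):
        if v == target:
            return i
    return -1


def binary_search(a, target):
    """O(log n): halve the sorted search range at every step."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1      # target can only be in the upper half
        else:
            hi = mid - 1      # target can only be in the lower half
    return -1
```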
Asymptotic notations (Big O, Omega, Theta), by swapnac12
The document discusses the asymptotic notations used to characterize the complexity of algorithms: Big-O (O) provides an upper bound, Big-Omega (Ω) a lower bound, and Big-Theta (Θ) a tight bound indicating the same order of growth. It defines each notation: Big-O means f(n) grows no faster than g(n), Big-Omega means f(n) grows at least as fast as g(n), and Big-Theta means f(n) and g(n) grow at the same rate. The document then discusses basics of probability theory, defining a sample space as the set of all possible outcomes of an experiment, with events being subsets of the sample space.
This document discusses time and space complexity analysis of algorithms. It analyzes the time complexity of bubble sort, which is O(n^2), since each of the n-1 passes through the array makes up to n-1 comparisons. Space complexity is typically a secondary concern to time complexity. Time complexity analysis allows comparison of algorithms to determine efficiency and whether an algorithm will complete in a reasonable time for a given input size. NP-complete problems are not known to be solvable in polynomial time, but candidate solutions can be verified in polynomial time.
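The pass-and-comparison count above can be made observable with a sketch that returns the number of comparisons alongside the sorted result (the counter is an addition for illustration, not part of the summarized document):

```python
def bubble_sort(a):
    """Bubble sort; returns (sorted copy, comparison count) to show O(n^2) growth."""
    a = a[:]
    n = len(a)
    comparisons = 0
    for i in range(n - 1):             # n - 1 passes in total
        for j in range(n - 1 - i):     # pass i makes n - 1 - i comparisons
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons
```

For n = 4 the counter comes out to 3 + 2 + 1 = 6 comparisons, and in general (n-1) + (n-2) + ... + 1 = n(n-1)/2, which is O(n^2).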
The document describes an experiment to create MATLAB functions for linear and circular convolution that match the functionality of the built-in conv and cconv commands. It outlines the steps to create a linear convolution function, including taking input signals x and h, computing output length, using a for loop to calculate output samples y based on the convolution expression, plotting the output vector y, and verifying that it matches the output of conv.
Selection Sort, Insertion Sort, Bubble Sort. Main idea of selection sort: find the smallest element and put it in the first position, then find the next smallest element and put it in the second position, and so on.
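The find-the-smallest idea above is selection sort, which can be sketched as:

```python
def selection_sort(a):
    """Repeatedly find the smallest remaining element and place it next. O(n^2)."""
    a = a[:]
    for i in range(len(a) - 1):
        # Index of the smallest element in the unsorted suffix a[i:].
        smallest = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[smallest] = a[smallest], a[i]   # put it in position i
    return a
```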
This document provides an introduction to asymptotic analysis of algorithms. It discusses analyzing algorithms based on how their running time increases with the size of the input problem. The key points are:
- Algorithms are compared based on their asymptotic running time as the input size increases, which is more useful than actual running times on a specific computer.
- The main types of analysis are worst-case, best-case, and average-case running times.
- Asymptotic notations like Big-O, Omega, and Theta are used to classify algorithms based on their rate of growth as the input increases.
- Common orders of growth include constant, logarithmic, linear, quadratic, and exponential time.
Lecture 3: insertion sort and complexity analysis, by jayavignesh86
This document discusses algorithms and insertion sort. It begins by defining time complexity as the amount of computer time required by an algorithm to complete. Time complexity is measured by the number of basic operations like comparisons, not in physical time units. The document then discusses how to calculate time complexity by counting the number of times loops and statements are executed. It provides examples of calculating time complexities of O(n) for a simple for loop and O(n^2) for a nested for loop. Finally, it introduces insertion sort and divide-and-conquer algorithms.
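The insertion sort that the lecture introduces can be sketched as follows; counting the executions of the inner while loop is what yields the O(n^2) worst case mentioned above.

```python
def insertion_sort(a):
    """Insertion sort: O(n^2) comparisons worst case, O(n) when nearly sorted."""
    a = a[:]
    for i in range(1, len(a)):
        key = a[i]                       # next element to insert
        j = i - 1
        while j >= 0 and a[j] > key:     # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                   # drop the key into its correct position
    return a
```

On an already-sorted input the while loop never iterates, so only the outer loop's n-1 comparisons are executed, which is why insertion sort is O(n) in the best case.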
i. The linear convolution of two sequences was calculated using the conv command in MATLAB. The input sequences, individual sequences, and convolved output were plotted.
ii. Linear convolution was also calculated using the DFT and IDFT. The sequences were padded with zeros and transformed to the frequency domain using FFT. The transformed sequences were multiplied and inverse transformed using IFFT to obtain the circular convolution result.
iii. The circular convolution result using DFT/IDFT was the same as the linear convolution using the conv command, demonstrating the equivalence between linear and circular convolution in the frequency domain.
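The DFT route in steps ii and iii can be sketched in Python with a naive DFT/IDFT pair (the experiment uses MATLAB's FFT/IFFT; the function names here are illustrative). Zero-padding both sequences to length len(x)+len(h)-1 is what makes the circular convolution equal the linear one.

```python
import cmath


def dft(x):
    """Naive O(N^2) discrete Fourier transform (stand-in for FFT)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]


def idft(X):
    """Inverse DFT (stand-in for IFFT)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]


def conv_via_dft(x, h):
    """Linear convolution through the frequency domain.

    Pad both inputs with zeros to length len(x) + len(h) - 1 so that the
    circular convolution implied by pointwise multiplication equals the
    linear convolution.
    """
    L = len(x) + len(h) - 1
    X = dft(x + [0] * (L - len(x)))
    H = dft(h + [0] * (L - len(h)))
    y = idft([a * b for a, b in zip(X, H)])
    return [round(v.real) for v in y]   # integer inputs: round off float error
```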
How to calculate the time complexity of an algorithm, by Sajid Marwat
This document discusses algorithm analysis and complexity. It defines key terms like asymptotic complexity, Big-O notation, and time complexity. It provides examples of analyzing simple algorithms like a sum function to determine their time complexity. Common analyses include looking at loops, nested loops, and sequences of statements. The goal is to classify algorithms according to their complexity, which is important for large inputs and machine-independent. Algorithms are classified based on worst, average, and best case analyses.
Big O notation describes how an algorithm's running time grows as the input size increases. It focuses on the worst-case scenario and ignores constant factors. Common time complexities include O(1) for constant time, O(n) for linear time, and O(n^2) for quadratic time. To determine an algorithm's complexity, its operations are analyzed, such as the number of statements, loops, and function calls.
This document discusses the complexity of algorithms and the tradeoff between algorithm cost and time. It defines algorithm complexity as a function of input size that measures the time and space used by an algorithm. Different complexity classes are described such as polynomial, sub-linear, and exponential time. Examples are given to find the complexity of bubble sort and linear search algorithms. The concept of space-time tradeoffs is introduced, where using more space can reduce computation time. Genetic algorithms are proposed to efficiently solve large-scale construction time-cost tradeoff problems.
The document discusses multithreaded and distributed algorithms. It describes multithreaded algorithms as having concurrent execution of parts of a program to maximize CPU utilization. Key aspects include communication models, types of threading, and performance measures. Distributed algorithms do not assume a central coordinator and are run across distributed systems without shared memory. Examples of distributed algorithms provided are breadth-first search, minimum spanning tree, naive string matching, and Rabin-Karp string matching.
This document discusses analyzing algorithms and asymptotic notation. It defines running time as the number of primitive operations executed before termination. Examples illustrate calculating running-time functions and classifying them by order of growth, such as constant, logarithmic, linear, quadratic, and exponential time. Asymptotic notations such as Big-O, Big-Omega, and Big-Theta are introduced to classify functions by their asymptotic growth rates, with examples of determining tight asymptotic bounds between functions. Recurrences are defined as equations describing functions in terms of smaller inputs and base cases, which are useful for analyzing recursive algorithms.
The document discusses fundamentals of analyzing algorithm efficiency, including:
- Measuring an algorithm's time efficiency based on input size and number of basic operations.
- Using asymptotic notations like O, Ω, Θ to classify algorithms by order of growth.
- Analyzing worst-case, best-case, and average-case efficiencies.
- Setting up recurrence relations to analyze recursive algorithms like merge sort.
Queues and linked lists are common data structures. Queues follow FIFO ordering and allow insertion at the rear and removal from the front. Linked lists provide efficient insertion and removal by using nodes connected by pointers. Doubly linked lists allow efficient insertion and removal from both ends, enabling implementations of double-ended queues. Positions abstract the concept of location in a data structure and allow node-based operations on linked lists. Iterators encapsulate traversal of a data structure.
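The FIFO queue on a singly linked list described above can be sketched as follows; the class names are illustrative, and both enqueue and dequeue run in O(1) because the queue keeps pointers to both ends.

```python
class Node:
    """One link in a singly linked list."""

    def __init__(self, value):
        self.value = value
        self.next = None


class LinkedQueue:
    """FIFO queue: O(1) enqueue at the rear, O(1) dequeue at the front."""

    def __init__(self):
        self.front = self.rear = None

    def enqueue(self, value):
        node = Node(value)
        if self.rear is None:          # empty queue: node is both ends
            self.front = self.rear = node
        else:
            self.rear.next = node      # link the new node after the old rear
            self.rear = node

    def dequeue(self):
        if self.front is None:
            raise IndexError("queue is empty")
        value = self.front.value
        self.front = self.front.next   # advance past the removed node
        if self.front is None:
            self.rear = None           # queue became empty
        return value
```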
The document provides information on problem solving and office automation. It discusses key concepts like algorithms, program development cycles, and control structures. For algorithms, it covers characteristics, representations using flowcharts and pseudocode, and examples. The main program development methodologies covered are the program planning method and waterfall method. Control structures discussed include sequence, selection, and looping. Examples provided include finding largest of three numbers, quadratic equation, swapping variables, and checking leap year.
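Two of the worked examples mentioned above (largest of three numbers and leap-year checking) can be sketched directly; these are generic implementations, not the document's pseudocode.

```python
def largest_of_three(a, b, c):
    """Largest of three numbers by sequential comparison."""
    largest = a
    if b > largest:
        largest = b
    if c > largest:
        largest = c
    return largest


def is_leap_year(year):
    """Leap year: divisible by 4, except century years not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```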
This document provides an overview of class diagrams in UML. It describes the key components of a class including the name, attributes, and operations. It explains how classes can be connected through relationships like generalizations, associations, and dependencies. The document uses examples like Person, Student, and CourseSchedule classes to illustrate attributes, operations, and relationships between classes.
The document discusses use case modeling and provides several examples. It describes key concepts like actors, use cases, relationships between use cases, and multiplicity. It then summarizes 4 examples - an airport check-in system, bank ATM, online library catalog, and credit card processing. The examples illustrate how use cases model systems and interactions between actors and the system.
The document discusses use case diagrams and their components. It provides examples of use cases including withdrawing money from an ATM. Key points covered include: use cases specify desired system behavior through interactions between actors and the system; actors can be human or automated systems; relationships between use cases include generalization, inclusion, and extension. Common use case elements like pre-conditions, post-conditions, flows, and alternatives are also defined.
The document describes activity diagrams and their components. It provides examples of activity diagrams for an order management system, online shopping process, a ticket vending machine, resolving software issues, and single sign-on for Google apps. Activity diagrams can show sequential, parallel, and conditional flows between activities of a system through various components like activities, decisions, forks, joins, and swimlanes.
The document discusses sequence diagrams, which show the interaction between objects and classes through a sequence of messages. Sequence diagrams are useful during the design phase to help understand system design and object interactions. They can also be used to document how existing systems work by showing the sequence of messages exchanged between objects.
The document discusses collaboration diagrams, which capture the dynamic behavior of objects collaborating to perform tasks. Collaboration diagrams illustrate object interactions through messages in a graph format. They show objects, links between objects, and messages to model control flow and coordination. Notations are used to represent classes, instances, links, messages, return values, self-messages, conditional messages, iteration, and collections of objects. Examples of converting sequence diagrams to collaboration diagrams for making a phone call, changing flight itineraries, and making a hotel reservation are provided.
UML Diagrams- Unified Modeling Language IntroductionRamakant Soni
The document provides an overview of a 3 hour lecture on object oriented modeling using UML, including definitions of key concepts like models, modeling, objects, and the Unified Modeling Language. It discusses why modeling is used, how it is done in UML, and examples of object oriented concepts and how UML can be applied, with the goal of teaching students how to design object-oriented programs and software development methodology using UML.
This document discusses using activity diagrams for business and systems modeling. It explains the basic and advanced elements of activity diagrams like activity states, transitions, decisions, synchronization bars, concurrent threads, alternative threads, conditional threads, nested activity diagrams and partitions. The objectives are to explain UML modeling, demonstrate activity diagram usage for business and systems modeling, apply activity diagram notations, and highlight common student mistakes.
The document describes the syllabus for a course on design analysis and algorithms. It covers topics like asymptotic notations, time and space complexities, sorting algorithms, greedy methods, dynamic programming, backtracking, and NP-complete problems. It also provides examples of algorithms like computing greatest common divisor, Sieve of Eratosthenes for primes, and discusses pseudocode conventions. Recursive algorithms and examples like Towers of Hanoi and permutation generation are explained. Finally, it outlines the steps for designing algorithms like understanding the problem, choosing appropriate data structures and computational devices.
This document discusses algorithms and their analysis. It begins by defining an algorithm and analyzing its time and space complexity. It then discusses different asymptotic notations used to describe an algorithm's runtime such as Big-O, Omega, and Theta notations. Examples are provided to illustrate how to determine the tight asymptotic bound of functions. The document also covers algorithm design techniques like divide-and-conquer and analyzes merge sort as an example. It concludes by defining recurrences used to describe algorithms and provides an example recurrence for merge sort.
This document discusses algorithms and analysis of algorithms. It covers key concepts like time complexity, space complexity, asymptotic notations, best case, worst case and average case time complexities. Examples are provided to illustrate linear, quadratic and logarithmic time complexities. Common sorting algorithms like quicksort, mergesort, heapsort, bubblesort and insertionsort are summarized along with their time and space complexities.
dynamic programming complete by Mumtaz Ali (03154103173)Mumtaz Ali
The document discusses dynamic programming, including its meaning, definition, uses, techniques, and examples. Dynamic programming refers to breaking large problems down into smaller subproblems, solving each subproblem only once, and storing the results for future use. This avoids recomputing the same subproblems repeatedly. Examples covered include matrix chain multiplication, the Fibonacci sequence, and optimal substructure. The document provides details on formulating and solving dynamic programming problems through recursive definitions and storing results in tables.
The document discusses algorithms and data structures. It begins with an introduction to merge sort, solving recurrences, and the master theorem for analyzing divide-and-conquer algorithms. It then covers quicksort and heaps. The last part discusses heaps in more detail and provides an example heap representation as a complete binary tree.
What is an Algorithm
Time Complexity
Space Complexity
Asymptotic Notations
Recursive Analysis
Selection Sort
Insertion Sort
Recurrences
Substitution Method
Master Tree Method
Recursion Tree Method
This document discusses algorithms and their properties. It begins by defining an algorithm as a finite set of precise instructions to perform a computation or solve a problem. It provides examples of algorithms like directions to a location or a recipe. The document then discusses how some algorithms are easier than others and provides examples. It also outlines key properties of algorithms like inputs, outputs, definiteness, and effectiveness. Later sections summarize various algorithms for tasks like searching, sorting, and finding maximum/minimum elements with analysis of their running times.
This document discusses algorithm analysis and complexity. It introduces algorithm analysis as a way to predict and compare algorithm performance. Different algorithms for computing factorials and finding the maximum subsequence sum are presented, along with their time complexities. The importance of efficient algorithms for problems involving large datasets is discussed.
In this playlist
https://youtube.com/playlist?list=PLT...
I'll illustrate algorithms and data structures course, and implement the data structures using java programming language.
the playlist language is arabic.
The Topics:
--------------------
1- Arrays
2- Linear and Binary search
3- Linked List
4- Recursion
5- Algorithm analysis
6- Stack
7- Queue
8- Binary search tree
9- Selection sort
10- Insertion sort
11- Bubble sort
12- merge sort
13- Quick sort
14- Graphs
15- Hash table
16- Binary Heaps
Reference : Object-Oriented Data Structures Using Java - Third Edition by NELL DALE, DANEIEL T.JOYCE and CHIP WEIMS
Slides is owned by College of Computing & Information Technology
King Abdulaziz University, So thanks alot for these great materials
The document discusses algorithms and their analysis. It begins by defining an algorithm and key aspects like correctness, input, and output. It then discusses two aspects of algorithm performance - time and space. Examples are provided to illustrate how to analyze the time complexity of different structures like if/else statements, simple loops, and nested loops. Big O notation is introduced to describe an algorithm's growth rate. Common time complexities like constant, linear, quadratic, and cubic functions are defined. Specific sorting algorithms like insertion sort, selection sort, bubble sort, merge sort, and quicksort are then covered in detail with examples of how they work and their time complexities.
The document discusses various topics related to algorithms including introduction to algorithms, algorithm design, complexity analysis, asymptotic notations, and data structures. It provides definitions and examples of algorithms, their properties and categories. It also covers algorithm design methods and approaches. Complexity analysis covers time and space complexity. Asymptotic notations like Big-O, Omega, and Theta notations are introduced to analyze algorithms. Examples are provided to find the upper and lower bounds of algorithms.
This document discusses algorithms for finding minimum and maximum elements in an array, including simultaneous minimum and maximum algorithms. It introduces dynamic programming as a technique for improving inefficient divide-and-conquer algorithms by storing results of subproblems to avoid recomputing them. Examples of dynamic programming include calculating the Fibonacci sequence and solving an assembly line scheduling problem to minimize total time.
This document discusses asymptotic analysis and recurrence relations. It begins by introducing asymptotic notations like Big O, Omega, and Theta notation that are used to analyze algorithms. It then discusses recurrence relations, which express the running time of algorithms in terms of input size. The document provides examples of using recurrence relations to find the time complexity of algorithms like merge sort. It also discusses how to calculate time complexity functions like f(n) asymptotically rather than calculating exact running times. The goal of this analysis is to understand how algorithm running times scale with input size.
This document provides an overview of advanced data structures and algorithm analysis taught by Dr. Sukhamay Kundu at Louisiana State University. It discusses the role of data structures in making computations faster by supporting efficient data access and storage. The document distinguishes between algorithms, which determine the computational steps and data access order, and data structures, which enable efficient reading and writing of data. It also describes different methods for measuring algorithm performance, such as theoretical time complexity analysis and empirical measurements. Examples are provided for instrumenting code to count operations. Overall, the document introduces fundamental concepts about algorithms and data structures.
Discuss seven functions, Analysis of algorithms- Experimental Studies/Primitive operations/Asymptotic notation- Big Oh/Big-Omega/Big-Theta
(Download is recommended to make the animations work)
Gentle Introduction to Functional ProgrammingSaurabh Singh
This slide is basically aimed at professionals and students to introduce them with functional programming.
I haven't used much functional programming terminologies because I personally feel they could be overwhelming to people getting introduced to FP for the first time. For similar reasons I have deliberately avoided using any functional programming language and kept the discussions programming language agnostic as far as possible.
This document discusses asymptotic analysis and big-O notation for analyzing the time complexity of algorithms. It begins by defining key concepts like growth rate, asymptotic notations such as O(n), Ω(n) and Θ(n). It then provides examples of analyzing the time efficiency of different algorithms like finding the maximum element in an array and computing prefix averages. The document explains how to determine the asymptotic complexity by counting the total number of operations and expressing it using big-O notation. It also discusses properties of big-O notation like rules for dropping constant factors and lower order terms.
This document provides a quick tour of the Python programming language. It introduces basic Python concepts like data types, variables, operators, conditional statements, loops, and functions. It explains how to get user input, perform type conversions, and work with common data types like integers, floats, strings, and booleans. It also demonstrates how to define functions, use default arguments and keyword arguments, and handle global variables. The document uses examples to illustrate concepts like arithmetic operations, string slicing, indexing, concatenation, and repetition.
In this chapter we are going to get familiar with recursion and its applications. Recursion represents a powerful programming technique in which a method makes a call to itself from within its own method body. By means of recursion we can solve complicated combinatorial problems, in which we can easily exhaust different combinatorial configurations, e.g. generating permutations and variations and simulating nested loops. We are going to demonstrate many examples of correct and incorrect usage of recursion and convince you how useful it can be.
Data Structure & Algorithms - Mathematicalbabuk110
This document discusses various mathematical notations and asymptotic analysis used for analyzing algorithms. It covers floor and ceiling functions, remainder function, summation symbol, factorial function, permutations, exponents, logarithms, Big-O, Big-Omega and Theta notations. It provides examples of calculating time complexity of insertion sort and bubble sort using asymptotic notations. It also discusses space complexity analysis and how to calculate the space required by an algorithm.
The Graduate Aptitude Test in Engineering (GATE) is a national exam conducted jointly by IISc Bangalore and 7 IITs on behalf of the National Coordination Board. Qualifying in GATE is mandatory for seeking admission and financial assistance for postgraduate programs in engineering. The GATE score is also used for recruitment by public sector companies. GATE 2021 will be conducted over 6 days in February in online mode consisting of 65 questions testing general aptitude and the selected subject. Qualifying in GATE and subsequent tests/interviews is required for admission to postgraduate programs with financial assistance from the government.
Role of Data Cleaning in Data WarehouseRamakant Soni
Data cleaning is an essential part of building a data warehouse as it improves data quality by detecting and removing errors and inconsistencies. Data warehouses integrate large amounts of data from various sources, so the probability of dirty data is high. Clean data is vital for decision making based on the data warehouse. The data cleaning process involves data analysis, defining transformation rules, verification of cleaning, applying transformations, and incorporating cleaned data. Tools can help support the different phases of data cleaning from data profiling to specialized cleaning of particular domains.
This document provides an overview of the Internet of Things (IoT). It defines IoT as a self-configuring wireless network between objects that goes beyond machine-to-machine communication to connect a variety of devices, systems, and services. The document outlines key enabling technologies for IoT like sensors, wireless networking, smart technologies, and nanotechnology. It also discusses how IoT will affect daily life through applications in various sectors like media, transportation, manufacturing, healthcare and more. Finally, the document covers challenges for IoT development like standardization, security, and data management.
This Presentation is about NoSQL which means Not Only SQL. This presentation covers the aspects of using NoSQL for Big Data and the differences from RDBMS.
Huffman and Arithmetic coding - Performance analysisRamakant Soni
Huffman coding and arithmetic coding are analyzed for complexity.
Huffman coding assigns variable length codes to symbols based on probability and has O(N2) complexity. Arithmetic coding encodes the entire message as a fraction between 0 and 1 by dividing intervals based on symbol probability and has better O(N log n) complexity. Arithmetic coding compresses data more efficiently with fewer bits per symbol and has lower complexity than Huffman coding asymptotically.
This document provides an overview of 5 UML diagrams for an ATM system: a use case diagram, an activity diagram for withdrawals, a swimlane diagram, a class diagram, and an entity relationship diagram. The diagrams model different aspects of how an ATM system would function and the relationships between entities in the system.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
Physiology and chemistry of skin and pigmentation, hairs, scalp, lips and nail, Cleansing cream, Lotions, Face powders, Face packs, Lipsticks, Bath products, soaps and baby product,
Preparation and standardization of the following : Tonic, Bleaches, Dentifrices and Mouth washes & Tooth Pastes, Cosmetics for Nails.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
The simplified electron and muon model, Oscillating Spacetime: The Foundation...RitikBhardwaj56
Discover the Simplified Electron and Muon Model: A New Wave-Based Approach to Understanding Particles delves into a groundbreaking theory that presents electrons and muons as rotating soliton waves within oscillating spacetime. Geared towards students, researchers, and science buffs, this book breaks down complex ideas into simple explanations. It covers topics such as electron waves, temporal dynamics, and the implications of this model on particle physics. With clear illustrations and easy-to-follow explanations, readers will gain a new outlook on the universe's fundamental nature.
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organisation by the Excellence Foundation for South Sudan on 08th and 09th June 2024 from 1 PM to 3 PM on each day.
Executive Directors Chat Leveraging AI for Diversity, Equity, and InclusionTechSoup
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
1. Ramakant Soni
Assistant Professor
CS Dept., BKBIET Pilani
ramakant.soni1988@gmail.com
2. What is an Algorithm?
It is a step-by-step procedure, or a set of steps, to accomplish a task.
“An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.”
Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein
Source: www.akashiclabs.com
3. Where do we use them?
To find the best path to travel (Google Maps).
For weather forecasting.
To find structural patterns and cure diseases.
For making games that can defeat us (e.g., chess).
For the processing done at the server each time we check our mail.
When we take selfies, edit them, post them to social media and get likes.
To buy products online and pay for them sitting at home.
For the synchronization of the traffic lights of a whole city.
For the software and techniques used to make animated movies.
Even a simple program for adding, subtracting, or multiplying is an algorithm, and calculating the speed, path, and fuel of a space shuttle is also done using algorithms.
And many more… (almost everywhere!)
5. Algorithm Example: Algorithm to add two numbers entered by the user.
Step 1: Start
Step 2: Declare variables num1, num2 and sum.
Step 3: Read values num1 and num2.
Step 4: Add num1 and num2 and assign the result to sum.
sum←num1+num2
Step 5: Display sum
Step 6: Stop
#include <stdio.h>
int main()
{
    int firstNo, secondNo, sum;
    printf("Enter two integers: ");
    // Read the two integers entered by the user
    scanf("%d %d", &firstNo, &secondNo);
    // Store the sum of the two numbers in the variable sum
    sum = firstNo + secondNo;
    // Display the sum
    printf("%d + %d = %d", firstNo, secondNo, sum);
    return 0;
}
Output: Enter two integers: 12 11
12 + 11 = 23
6. Algorithm to find the largest among three numbers.
Step 1: Start
Step 2: Declare variables a, b and c.
Step 3: Read variables a, b and c.
Step 4: If a>b
If a>c
Display a is the largest number.
Else
Display c is the largest number.
Else
If b>c
Display b is the largest number.
Else
Display c is the largest number.
Step 5: Stop
#include <stdio.h>
int main()
{
double n1, n2, n3;
printf("Enter three numbers: ");
scanf("%lf %lf %lf", &n1, &n2, &n3);
if (n1>=n2)
{ if(n1>=n3)
printf ("%.2lf is the largest number.", n1);
else
printf ("%.2lf is the largest number.", n3);
}
else
{ if(n2>=n3)
printf ("%.2lf is the largest number.", n2);
else
printf ("%.2lf is the largest number.",n3);
}
return 0;
}
7. Step 1: Start
Step 2: Declare variables n, factorial and i.
Step 3: Initialize variables: factorial←1
i←1
Step 4: Read value of n
Step 5: Repeat the following steps while i ≤ n
factorial ← factorial*i
i←i+1
Step 6: Display factorial
Step 7: Stop
#include <stdio.h>
int main()
{
int n, i;
unsigned long long factorial = 1;
printf("Enter an integer: ");
scanf("%d",&n);
// show error if the user enters a negative integer
if (n < 0)
printf("Error! Factorial of a negative number doesn't exist.");
else
{
for(i=1; i<=n; ++i)
{
factorial *= i; // factorial = factorial*i;
}
printf("Factorial of %d = %llu", n, factorial);
}
return 0;
}
Algorithm to find the factorial of a number entered by the user.
Output: Enter an integer: 10
Factorial of 10 = 3628800
8. Algorithm to find the Fibonacci series for all terms ≤ n.
Step 1: Start
Step 2: Declare variables first_term, second_term and temp.
Step 3: Initialize variables first_term←0 second_term←1
Step 4: Display first_term and second_term
Step 5: Repeat the following steps while second_term ≤ n
temp ← second_term
second_term ← second_term + first_term
first_term ← temp
Display second_term
Step 6: Stop
#include <stdio.h>
int main()
{
    int n, t1 = 0, t2 = 1, nextTerm;
    printf("Enter a positive integer: ");
    scanf("%d", &n);
    // The first two terms are always 0 and 1
    printf("Fibonacci Series: %d, %d, ", t1, t2);
    nextTerm = t1 + t2;
    // Keep printing terms while they do not exceed n,
    // matching the algorithm and sample output below
    while (nextTerm <= n)
    {
        printf("%d, ", nextTerm);
        t1 = t2;
        t2 = nextTerm;
        nextTerm = t1 + t2;
    }
    return 0;
}
Enter a positive integer: 100 Fibonacci Series: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89,
9. Algorithm Analysis
In computer science, the analysis of algorithms is the determination of the amount of resources (such as time and storage) necessary to execute them.
The efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).
1) Worst Case Analysis (usually done): we calculate an upper bound on the running time of an algorithm by considering the worst case (a situation where the algorithm takes maximum time).
2) Average Case Analysis (sometimes done): we take all possible inputs, calculate the computing time for each, and average the results.
3) Best Case Analysis (rarely useful): we calculate a lower bound on the running time of an algorithm.
Source: http://quiz.geeksforgeeks.org/lmns-algorithms/
10. Asymptotic Notations
• Θ Notation: Theta notation bounds a function from above and below, so it defines exact asymptotic behavior.
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }
• Big O Notation: Big O notation defines an upper bound of an algorithm; it bounds a function only from above.
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0 }
• Ω Notation: Omega notation provides an asymptotic lower bound.
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0 }
Source: http://quiz.geeksforgeeks.org/lmns-algorithms/
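As a worked example (not on the original slide) of producing the constants these definitions require, consider f(n) = 3n² + 2n + 5:

```latex
3n^2 \;\le\; 3n^2 + 2n + 5 \;\le\; 3n^2 + 2n^2 + 5n^2 \;=\; 10n^2
\qquad \text{for all } n \ge 1
```

so the witnesses c1 = 3, c2 = 10, n0 = 1 satisfy the Θ definition, and 3n² + 2n + 5 = Θ(n²).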
11. Big O Complexity Chart
Source: http://bigocheatsheet.com/
12. Insertion Sort
Statement Running Time of Each Step
InsertionSort (A, n) {
for i = 2 to n { c1n
key = A[i] c2(n-1)
j = i - 1; c3(n-1)
while (j > 0) and (A[j] > key) { c4T
A[j+1] = A[j] c5(T-(n-1))
j = j - 1 c6(T-(n-1))
} 0
A[j+1] = key c7(n-1)
} 0
}
T = t2 + t3 + … + tn, where ti is the number of while-condition evaluations in the ith iteration of the for loop
14. Insertion Sort Analysis
• T(n) = c1n + c2(n-1) + c3(n-1) + c4T + c5(T − (n-1)) + c6(T − (n-1)) + c7(n-1)
• What can T be?
• Best case: the inner loop body is never executed (elements already sorted), so each ti = 1 and T(n) is a linear function of n.
• Worst case: the inner loop body is executed for all previous elements, so ti = i and T(n) is a quadratic function of n, giving T(n) = O(n²).
15. Merge Sort
MergeSort(A, left, right)
{
if (left < right)
{
mid = floor((left + right) / 2);
MergeSort (A, left, mid);
MergeSort (A, mid+1, right);
Merge(A, left, mid, right);
// Merge() takes two sorted subarrays of A and merges them into a single sorted subarray of A
}
}
When n ≥ 2, time for merge sort steps:
Divide : Just compute mid as the average of left and right, which takes constant time i.e. Θ(1).
Conquer : Recursively solve 2 sub-problems, each of size n/2, which is 2T(n/2).
Combine : MERGE on an n-element subarray takes Θ(n) time
16. MERGE (A, left, mid, right )
1. n1 ← mid − left + 1
2. n2 ← right − mid
3. Create arrays L[1 . . n1 + 1] and R[1 . . n2 + 1]
4. FOR i ← 1 TO n1
5. DO L[i] ← A[left + i − 1]
6. FOR j ← 1 TO n2
7. DO R[j] ← A[mid + j ]
8. L[n1 + 1] ← ∞
9. R[n2 + 1] ← ∞
10. i ← 1
11. j ← 1
12. FOR k ← left TO right
13. DO IF L[i ] ≤ R[ j]
14. THEN A[k] ← L[i]
15. i ← i + 1
16. ELSE A[k] ← R[j]
17. j ← j + 1
Merge Sort
Source: http://www.personal.kent.edu
17. Analysis of Merge Sort
Statement                                    Running Time of Each Step
MergeSort (A, left, right)                   T(n)
{ if (left < right)                          Θ(1)
  { mid = floor((left + right) / 2);         Θ(1)
    MergeSort (A, left, mid);                T(n/2)
    MergeSort (A, mid+1, right);             T(n/2)
    Merge(A, left, mid, right);              Θ(n)
    // Merge() takes two sorted subarrays of A and merges them into a single sorted subarray of A
}}
• So T(n) = Θ(1) when n = 1, and
  T(n) = 2T(n/2) + Θ(n) when n > 1
• Complexity = O(n log n)
18. The Master Theorem
• Given: a divide-and-conquer algorithm
• An algorithm that divides a problem of size n into a sub-problems, each of size n/b
• Let the cost of each stage (i.e., the work to divide the problem + combine solved sub-problems) be described by the function f(n)
• Then T(n) = a*T(n/b) + f(n)
19. The Master Theorem
If T(n) = a*T(n/b) + f(n), then:

1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, AND a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
20. Quick Sort
Quicksort(A, p, r)
{ if (p < r)
{ q = Partition(A, p, r);
Quicksort(A, p, q);
Quicksort(A, q+1, r);
}
}
Another divide-and-conquer algorithm
• The array A[p .. r] is partitioned into two non-empty subarrays A[p .. q] and A[q+1 ..r]
Invariant: All elements in A[p .. q] are less than all elements in A[q+1..r]
• The subarrays are recursively sorted by calls to quicksort
Unlike merge sort, there is no combining step: the two sorted subarrays already form a sorted array
Actions that take place in the Partition() function:
• Rearranges the subarray in place
• End result:
• Two subarrays
• All values in the first subarray ≤ all values in the second
• Returns the index of the “pivot” element separating the two subarrays
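Because the pseudocode recurses on (p, q) and (q+1, r) rather than (p, q−1), it matches Hoare's original partition scheme. A Python sketch follows; the slide does not show Partition(), so the first-element pivot choice here is an assumption:

```python
def partition(A, p, r):
    """Hoare partition: rearrange A[p..r] so every value in A[p..q]
    is <= every value in A[q+1..r], and return q."""
    pivot = A[p]          # assumed pivot choice; the slide leaves it unspecified
    i, j = p - 1, r + 1
    while True:
        i += 1
        while A[i] < pivot:
            i += 1
        j -= 1
        while A[j] > pivot:
            j -= 1
        if i >= j:
            return j
        A[i], A[j] = A[j], A[i]

def quicksort(A, p, r):
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q)       # note: q, not q-1, with Hoare partition
        quicksort(A, q + 1, r)

data = [5, 3, 8, 1, 9, 2]
quicksort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 5, 8, 9]
```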
22. Quick Sort Example & Analysis
In the worst case (unbalanced partition):
T(1) = Θ(1)
T(n) = T(n − 1) + Θ(n)
Works out to
T(n) = Θ(n²)
In the best case (balanced partition):
T(n) = 2T(n/2) + Θ(n)
Works out to
T(n) = Θ(n log n)
23. Linear Search
function find_Index (array, target)
{
for(var i = 0; i < array.length; ++i)
{
if (array[i] == target)
{
return i;
}
}
return -1;
}
• A linear search searches for an element or value in an array, in
sequence order, until the desired element or value is found or the
array is exhausted.
• It compares the target with the elements of the list one by one; if
a match is found it returns the matching index, otherwise it
returns −1.
• Running Time: T(n) = a(n − 1) + b = O(n)
Example: search for the value 5 in the given data.
24. Binary Search
binary_search(A, target, low, high)
    if low > high
        return “target was not found”
    mid = floor(low + (high − low) / 2)
    if target == A[mid]
        return mid
    else if target < A[mid]
        return binary_search(A, target, low, mid − 1)
    else
        return binary_search(A, target, mid + 1, high)
// initial call: binary_search(A, target, 1, size(A))
Binary Search is an instance of the divide-and-conquer paradigm.
Given an ordered array of n elements, the basic idea of binary search
is that for a given element we "probe" the middle element of the
array. We continue in either the lower or the upper segment of the
array, depending on the outcome of the probe, until we reach the
required (given) element or the segment becomes empty.
Complexity Analysis:
Binary Search can be accomplished in logarithmic time in the worst
case, i.e., T(n) = Θ(log n).
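The probing idea can also be written as a runnable iterative version (a sketch in Python, 0-based rather than the slide's 1-based recursion):

```python
def binary_search(A, target):
    """Return the index of target in sorted list A, or -1 if absent."""
    low, high = 0, len(A) - 1
    while low <= high:
        mid = low + (high - low) // 2   # avoids overflow in fixed-width languages
        if A[mid] == target:
            return mid
        elif target < A[mid]:
            high = mid - 1               # continue in the lower segment
        else:
            low = mid + 1                # continue in the upper segment
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```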
26. Algorithm to check Palindrome
A palindrome is a word, phrase, number, or other sequence of characters which reads the same backward or
forward.
function isPalindrome (text)
if text is null
return false
left ← 0
right ← text.length - 1
while (left < right)
if text[left] is not text[right]
return false
left ← left + 1
right ← right - 1
return true
Complexity:
The function isPalindrome has a time
complexity of O(n/2), which is equal to O(n)
Palindrome Example:
CIVIC
LEVEL
RADAR
RACECAR
MOM
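The pseudocode above translates directly to Python (an illustrative sketch):

```python
def is_palindrome(text):
    """Two-pointer palindrome check: O(n/2) comparisons, i.e. O(n)."""
    if text is None:
        return False
    left, right = 0, len(text) - 1
    while left < right:
        if text[left] != text[right]:
            return False
        left += 1
        right -= 1
    return True

for word in ["CIVIC", "LEVEL", "RADAR", "RACECAR", "MOM"]:
    print(word, is_palindrome(word))   # each prints True
print(is_palindrome("ALGORITHM"))      # False
```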
27. Algorithm to check Prime/ Not Prime
A prime number is a natural number greater than 1 that has no positive divisors other than 1 and
itself.
isPrime (int n) {
    int i;
    if (n < 2)
        return 0;
    if (n == 2)
        return 1;
    if (n % 2 == 0)
        return 0;
    for (i = 3; i <= sqrt(n); i += 2)
        if (n % i == 0)
            return 0;
    return 1;
}
Complexity:
O(sqrt(n))
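The same trial-division idea as a runnable Python sketch (using math.isqrt for the √n bound):

```python
import math

def is_prime(n):
    """Trial division by odd numbers up to sqrt(n): O(sqrt(n)) divisions."""
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    for i in range(3, math.isqrt(n) + 1, 2):  # odd candidates only
        if n % i == 0:
            return False
    return True

print([p for p in range(20) if is_prime(p)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```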
28. Common Data Structure Operations
Source: http://bigocheatsheet.com/