The document discusses algorithms and data structures. It defines an algorithm as a well-defined procedure that takes input and produces output. Algorithms are used for calculation, data processing, and automated reasoning. The document discusses different ways of describing algorithms, including natural language, flowcharts, and pseudocode. It also discusses analyzing the time complexity of algorithms using asymptotic notation such as Big-O, Omega, and Theta notation. Recursive algorithms and solving recurrences are also covered.
Design Analysis of Alogorithm 1 ppt 2024.pptx (rajesshs31r)
This document discusses algorithms and their analysis. It begins by defining an algorithm as a sequence of unambiguous instructions to solve a problem in a finite amount of time. It then provides examples of Euclid's algorithm for computing the greatest common divisor. The document goes on to discuss the fundamentals of algorithmic problem solving, including understanding the problem, choosing exact or approximate solutions, and algorithm design techniques. It also covers analyzing algorithms by measuring time and space complexity using asymptotic notations.
Analysis of Algorithm full version 2024.pptx (rajesshs31r)
This document discusses algorithms and their analysis. It begins by defining an algorithm as a sequence of unambiguous instructions to solve a problem in a finite amount of time. Euclid's algorithm for computing the greatest common divisor is provided as an example. The document then covers fundamentals of algorithmic problem solving, including understanding the problem, choosing exact or approximate solutions, and algorithm design techniques. It also discusses analyzing algorithms based on time and space complexity, as well as worst-case, best-case, and average-case efficiencies. Common problem types like sorting, searching, and graph problems are briefly outlined.
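Euclid's algorithm, cited in these summaries as the canonical example, can be sketched as follows (a minimal Python version for illustration; the original slides may present it differently):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until the remainder is zero; the surviving value is the GCD."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(60, 24))  # 12
```

Each iteration strictly shrinks the second argument, so the loop terminates in a finite number of steps, matching the "finite amount of time" requirement in the definition above.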
The document discusses the analysis of algorithms, including time and space complexity analysis. It covers key aspects of analyzing algorithms such as determining the basic operation, input size, and analyzing best-case, worst-case, and average-case time complexities. Specific examples are provided, such as analyzing the space needed to store real numbers and analyzing the time complexity of sequential search. Order of growth and asymptotic analysis techniques like Big-O, Big-Omega, and Big-Theta notation are also explained.
The document discusses algorithms and their analysis. It defines an algorithm as a sequence of unambiguous steps to solve a problem within a finite time. Characteristics of algorithms include being unambiguous, having inputs/outputs, and terminating in finite time. Algorithm analysis involves determining theoretical and empirical time and space complexity as input size increases. Time complexity is analyzed by counting basic operations, while space complexity considers fixed and variable memory usage. Worst, best, and average cases analyze how efficiency varies with different inputs. Asymptotic analysis focuses on long-term growth rates to compare algorithms.
This document discusses algorithms and their analysis. It begins with definitions of algorithms and their characteristics. Different methods for specifying algorithms are described, including pseudocode. The goal of algorithm analysis is introduced as comparing algorithms based on running time and other factors. Common algorithm analysis methods like worst-case, best-case, and average-case are defined. Big-O notation and time/space complexity are explained. Common algorithm design strategies like divide-and-conquer and randomized algorithms are summarized. Specific algorithms like repeated element detection and primality testing are mentioned.
Introduction to Data Structures Sorting and searching (Mvenkatarao)
This document provides an overview of data structures and algorithms. It begins by defining a data structure as a way of storing and organizing data in a computer so that it can be used efficiently by algorithms. Data structures can be primitive, directly operated on by machine instructions, or non-primitive, developed from primitive structures. Linear structures maintain adjacency between elements while non-linear structures do not. Common operations on data structures include adding, deleting, traversing, sorting, searching, and updating elements. The document also defines algorithms and their properties, including finiteness, definiteness, inputs, outputs, and effectiveness. It discusses analyzing algorithms based on time and space complexity and provides examples of different complexities, including constant, logarithmic, linear, quadratic, and exponential.
This document discusses analysis of algorithms. It covers computation models like Turing machine and RAM models. It then discusses measuring the time complexity, space complexity, and order of growth of algorithms. Time complexity is measured based on the number of basic operations like comparisons. Space complexity depends on memory required. Order of growth classifies algorithms based on how their running time grows with input size (n), such as O(n), O(log n) etc. Asymptotic notations like Big O, Omega and Theta are used to represent the asymptotic time complexity of algorithms.
This document provides an overview of algorithms including definitions, characteristics, design, and analysis. It defines an algorithm as a finite step-by-step procedure to solve a problem and discusses their key characteristics like input, definiteness, effectiveness, finiteness, and output. The document outlines the design of algorithms using pseudo-code and their analysis in terms of time and space complexity using asymptotic notations like Big O, Big Omega, and Big Theta. Examples are provided to illustrate linear search time complexity and the use of different notations to determine algorithm efficiency.
The document defines algorithms and describes their characteristics and design techniques. It states that an algorithm is a step-by-step procedure to solve a problem and get the desired output. It discusses algorithm development using pseudocode and flowcharts. Various algorithm design techniques like top-down, bottom-up, incremental, divide and conquer are explained. The document also covers algorithm analysis in terms of time and space complexity and asymptotic notations like Big-O, Omega and Theta to analyze best, average and worst case running times. Common time complexities like constant, linear, quadratic, and exponential are provided with examples.
Chapter1.1 Introduction to design and analysis of algorithm.ppt (Tekle12)
This document discusses the design and analysis of algorithms. It begins with defining what an algorithm is - a well-defined computational procedure that takes inputs and produces outputs. It describes analyzing algorithms to determine their efficiency and comparing different algorithms that solve the same problem. The document outlines steps for designing algorithms, including understanding the problem, deciding a solution approach, designing the algorithm, proving correctness, and analyzing and coding it. It discusses using mathematical techniques like asymptotic analysis and Big O notation to analyze algorithms independently of implementations or inputs. The importance of analysis is also covered.
This document discusses the design and analysis of algorithms. It begins with defining what an algorithm is - a well-defined computational procedure that takes inputs and produces outputs. It describes analyzing algorithms to determine their efficiency and comparing different algorithms that solve the same problem. The document outlines steps for designing algorithms, including understanding the problem, deciding a solution approach, designing the algorithm, proving correctness, and analyzing and coding it. It discusses using mathematical techniques like asymptotic analysis and Big O notation to analyze algorithms independently of implementations or data. The importance of analyzing algorithms and techniques like divide-and-conquer are also covered.
This document provides an overview of algorithm analysis and asymptotic complexity. It discusses learning outcomes related to analyzing algorithm efficiency using Big O, Omega, and Theta notation. Key points covered include:
- Defining the problem size n and relating algorithm running time to n
- Distinguishing between best-case, worst-case, and average-case complexity
- Using asymptotic notation like Big O to give upper bounds on complexity rather than precise calculations
- Common asymptotic categories like O(n), O(n^2), O(n log n) that classify algorithm growth rates
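As a rough illustration of the growth categories listed above, the following sketch (the function name and structure are my own, not from the summarized slides) evaluates each function at one input size and orders them from slowest- to fastest-growing:

```python
import math

def compare_growth(n: int) -> list:
    """Evaluate common growth functions at input size n and return
    their names ordered by value, smallest first."""
    funcs = {
        "O(log n)": math.log2(n),
        "O(n)": n,
        "O(n log n)": n * math.log2(n),
        "O(n^2)": n ** 2,
    }
    return sorted(funcs, key=funcs.get)

print(compare_growth(1024))  # ['O(log n)', 'O(n)', 'O(n log n)', 'O(n^2)']
```

At n = 1024 the values are 10, 1024, 10240, and 1048576, which makes the gap between categories concrete.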
This document outlines a course on data structures and algorithms. It includes the following topics: asymptotic and algorithm analysis, complexity analysis, abstract lists and implementations, arrays, linked lists, stacks, queues, trees, graphs, sorting algorithms, minimum spanning trees, hashing, and more. The course objectives are to enable students to understand various ways to organize data, understand algorithms to manipulate data, use analyses to compare data structures and algorithms, and select relevant structures and algorithms for problems. The document also lists reference books and provides outlines on defining algorithms, analyzing time/space complexity, and asymptotic notations.
This document provides an overview of data structures and algorithms analysis. It discusses big-O notation and how it is used to analyze computational complexity and asymptotic complexity of algorithms. Various growth functions like O(n), O(n^2), O(log n) are explained. Experimental and theoretical analysis methods are described and limitations of experimental analysis are highlighted. Key aspects like analyzing loop executions and nested loops are covered. The document also provides examples of analyzing algorithms and comparing their efficiency using big-O notation.
The document presents a framework for analyzing the efficiency of algorithms by measuring how running time and space requirements grow as the input size increases. It focuses on determining the order of growth of the number of basic operations, using asymptotic notation such as O(), Ω(), and Θ() to classify algorithms by their worst-case, best-case, and average-case time complexities.
The document discusses algorithm analysis and determining the efficiency of algorithms. It introduces key concepts such as:
- Algorithms must be correct and efficient to solve problems.
- The time and space complexity of algorithms is analyzed to compare efficiencies. Common growth rates include constant, logarithmic, linear, quadratic, and exponential time.
- The asymptotic worst-case time complexity of an algorithm (its "order") is expressed using Big O notation, such as O(n) for linear time. Higher-order terms indicate faster growth.
This document provides an overview and introduction to the concepts taught in a data structures and algorithms course. It discusses the goals of reinforcing that every data structure has costs and benefits, learning commonly used data structures, and understanding how to analyze the efficiency of algorithms. Key topics covered include abstract data types, common data structures, algorithm analysis techniques like best/worst/average cases and asymptotic notation, and examples of analyzing the time complexity of various algorithms. The document emphasizes that problems can have multiple potential algorithms and that problems should be carefully defined in terms of inputs, outputs, and resource constraints.
Linear search examines each element of a list sequentially, one by one, and checks if it is the target value. It has a time complexity of O(n) as it requires searching through each element in the worst case. While simple to implement, linear search is inefficient for large lists as other algorithms like binary search require fewer comparisons.
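The linear search described above can be sketched as follows (a minimal illustration, not code taken from the summarized document):

```python
def linear_search(items, target):
    """Scan the list left to right; return the index of target, or -1
    if absent. Worst case examines every element: O(n) comparisons."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

print(linear_search([4, 2, 7, 1], 7))  # 2
```

Unlike binary search, this makes no assumption that the list is sorted, which is why it remains useful despite its O(n) cost.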
This document provides an overview of a lecture on designing and analyzing computer algorithms. It discusses key concepts like what an algorithm and program are, common algorithm design techniques like divide-and-conquer and greedy methods, and how to analyze algorithms' time and space complexity. The goals of analyzing algorithms are to understand their behavior, improve efficiency, and determine whether problems can be solved within a reasonable time frame.
The document provides an introduction and overview of the Design and Analysis of Algorithms course. It covers key topics like asymptotic notations and their properties, analyzing recursive and non-recursive algorithms, divide-and-conquer algorithms like quicksort and mergesort, and sorting algorithms like heap sort. Examples of insertion sort and analysis of its worst-case running time of O(n^2) are provided. Asymptotic notations like Big-O, Ω, and Θ are introduced to analyze algorithms' time complexities as the problem size n approaches infinity.
This document discusses algorithm analysis and determining the time complexity of algorithms. It begins by defining an algorithm and noting that the efficiency of algorithms should be analyzed independently of specific implementations or hardware. The document then discusses analyzing the time complexity of various algorithms by counting the number of operations and expressing efficiency using growth functions. Common growth functions like constant, linear, quadratic, and exponential are introduced. The concept of asymptotic notation (Big O) for describing an algorithm's time complexity is also covered. Examples are provided to demonstrate how to determine the time complexity of iterative and recursive algorithms.
This document provides an introduction to algorithms and algorithm analysis. It defines what an algorithm is, provides examples, and discusses analyzing algorithms to determine their efficiency. Insertion sort and merge sort are presented as examples and their time complexities are analyzed. Asymptotic notation is introduced to describe an algorithm's order of growth and provide bounds on its running time. Key points covered include analyzing best-case and worst-case time complexities, using recurrences to model algorithms, and the properties of asymptotic notation. Homework problems are assigned from the textbook chapters.
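The insertion sort mentioned in these summaries, with its O(n^2) worst case, might be sketched as follows (an illustrative version, not the documents' own code):

```python
def insertion_sort(a: list) -> list:
    """Sort in place by inserting each element into the sorted prefix
    to its left. Reverse-sorted input triggers the worst case,
    roughly n^2/2 shifts: O(n^2)."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift larger elements one slot right
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

On already-sorted input the inner while loop never runs, giving the O(n) best case that the best/worst-case analyses in these documents contrast.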
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte... (University of Maribor)
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
ACEP Magazine 4th edition launched on 05.06.2024 (Rahul)
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... (IJECEIAES)
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of the proposed model. These findings underscore the model's competence in precise brain tumor localization and its potential to advance medical image analysis and healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, with emphasis on addressing false positives and resource efficiency.
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
2. Data Structures
• What is a "Data Structure"?
– A way to organize and represent data
• Why data structures?
– To design and implement large-scale computer
systems
– To build on algorithms that are proven correct
– The art of programming
• How to master data structures?
– Practice, discuss, and think
3. Algorithms
• Algorithms are the building blocks of computer programs. They
are as important to programming as recipes are to cooking.
• An algorithm is a well-defined procedure that takes input and
produces output. The main difference from a program is that an
algorithm is mathematical or textual in nature.
• A programming algorithm is a computer procedure, much like a
recipe (called a procedure), that tells your computer precisely
what steps to take to solve a problem or reach a goal.
• The ingredients are called inputs; the results are called the
outputs.
4. Applications/Use of algorithms:
• In mathematics and computer science, an algorithm is a step-by-
step procedure for calculations.
• Algorithms are used for calculation, data processing, and
automated reasoning.
5. Algorithm Specification
• Definition
– An algorithm is a finite set of instructions that, if
followed, accomplishes a particular task. In
addition, all algorithms must satisfy the following
criteria:
(1)Input. There are zero or more quantities that are
externally supplied.
(2)Output. At least one quantity is produced.
(3)Definiteness. Each instruction is clear and
unambiguous.
6. Algorithm Specification
(4)Finiteness. If we trace out the instructions of an
algorithm, then for all cases, the algorithm
terminates after a finite number of steps.
(5)Effectiveness. Every instruction must be basic
enough to be carried out, in principle, by a person
using only pencil and paper. It is not enough that
each operation be definite as in (3); it also must
be feasible.
7. Describing Algorithms
• Natural language
– English
• Instructions must be definite and effective
• Graphic representation
– Flowchart
• Works well only if the algorithm is small and simple
• Pseudo-language
– Readable
– Instructions must be definite and effective
• Combining English and C++
– The approach used in this text
8. Translating a Problem into an
Algorithm
• Problem
– Devise a program that sorts a set of n >= 1 integers
• Step I - Concept
– From the integers that are currently unsorted, find the
smallest and place it next in the sorted list
• Step II - Algorithm
– for (i = 0; i < n; i++) {
Examine list[i] to list[n-1] and suppose that the smallest
integer is at list[min];
Interchange list[i] and list[min];
}
9. Recursive Algorithms
• Direct recursion
– Functions call themselves
• Indirect recursion
– Functions call other functions that invoke the calling
function again
• When is recursion an appropriate mechanism?
– The problem itself is defined recursively
– Statements such as if-else and while can be rewritten
recursively
– The art of programming
• Why recursive algorithms?
– Powerful; they express a complex process very clearly
10. Analysis of algorithms
• Issues:
– correctness
– time efficiency
– space efficiency
– optimality
• Approaches:
– theoretical analysis
– empirical analysis
11. Theoretical analysis of time efficiency
Time efficiency is analyzed by determining the
number of repetitions of the basic operation
as a function of input size
• Basic operation: the operation that contributes
most towards the running time of the
algorithm
12. Theoretical analysis of time efficiency
T(n) ≈ c_op · C(n)
where T(n) is the running time, c_op is the execution time of the
basic operation, C(n) is the number of times the basic operation
is executed, and n is the input size.
13. Input size and basic operation examples
• Searching for a key in a list of n items
– Input size measure: number of the list's items, i.e. n
– Basic operation: key comparison
• Multiplication of two matrices
– Input size measure: matrix dimensions or total number of elements
– Basic operation: multiplication of two numbers
• Checking primality of a given integer n
– Input size measure: size of n = number of digits (in binary representation)
– Basic operation: division
• Typical graph problem
– Input size measure: #vertices and/or #edges
– Basic operation: visiting a vertex or traversing an edge
14. Empirical analysis of time efficiency
• Select a specific (typical) sample of inputs
• Use a physical unit of time (e.g., milliseconds),
or count the actual number of basic operation
executions
• Analyze the empirical data
• We mostly do theoretical analysis (and may do
empirical analysis in an assignment)
16. Best-case, average-case, worst-case
• Average case:
– NOT the average of the worst and best cases
– Expected number of basic operations, treated as a
random variable under some assumption about the
probability distribution of all possible inputs of size n
– Consider all possible input sets of size n and average
C(n) over all of them
• Some algorithms behave the same in all three cases
("all-case" performance)
17. Example: Find maximum
• Worst case
• Best case
• Average case: depends on assumptions
about the input (e.g., proportion of found vs. not-
found keys)
• All case
18. Order of growth
• Most important: order of growth within a
constant multiple as n→∞
• Examples:
– How much faster will the algorithm run on a computer
that is twice as fast? What say you?
• Time = …
– How much longer does it take to solve a problem of
double the input size? What say you?
• Time = …
19. Performance Analysis
• Performance evaluation
– Performance analysis
– Performance measurement
• Performance analysis - a priori
– an important branch of CS: complexity theory
– estimates time and space
– machine independent
• Performance measurement - a posteriori
– the actual time and space requirements
– machine dependent
20. Time Complexity
Definition
The time complexity, T(P), taken by a program P is
the sum of the compile time and the run time
Total time
T(P) = compile time + run (or execution) time
= c + tP(instance characteristics)
Compile time does not depend on the instance
characteristics
How to evaluate?
• Use the system clock
• Count the number of steps performed
(machine-independent)
21. Cont..
Definition of a program step
A program step is a syntactically or semantically
meaningful program segment whose execution
time is independent of the instance characteristics
(10 additions can be one step, 100 multiplications
can also be one step)
22. Comp 122
Asymptotic Complexity
• Running time of an algorithm as a function of
input size n for large n.
• Expressed using only the highest-order term
in the expression for the exact running time.
– Instead of the exact running time, say Θ(n^2).
• Describes behavior of function in the limit.
• Written using Asymptotic Notation.
23. Asymptotic Notation (O, Ω, Θ)
• Motivation
– Target: compare the time complexity of two programs that
compute the same function, and predict the growth in run time
as instance characteristics change
– Determining the exact step count is a difficult task
– Not very useful for comparative purposes
ex: C1n^2 + C2n <= C3n for n <= 98 (C1=1, C2=2, C3=100)
C1n^2 + C2n > C3n for n > 98
– Determining the exact step count is usually not worthwhile
(it cannot give the exact run time)
• Asymptotic notation
– Big "oh": O
• upper bound (on the growth trend)
– Omega: Ω
• lower bound
– Theta: Θ
• both upper and lower bound
24. Asymptotic Notation
• Θ, O, Ω
• Defined for functions over the natural numbers.
– Ex: f(n) = Θ(n^2).
– Describes how f(n) grows in comparison to n^2.
• Each defines a set of functions; in practice used to
compare the growth of two functions.
• The notations describe different rate-of-growth
relations between the defining function and the
defined set of functions.
25. Θ-notation
For a function g(n), we define Θ(g(n)), big-Theta of g(n), as the set:
Θ(g(n)) = { f(n) : there exist positive constants c1, c2,
and n0 such that for all n ≥ n0,
we have 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) }
g(n) is an asymptotically tight bound for f(n).
Intuitively: the set of all functions that
have the same rate of growth as g(n).
26. O-notation
For a function g(n), we define O(g(n)), big-O of g(n), as the set:
O(g(n)) = { f(n) : there exist positive constants c and n0
such that for all n ≥ n0,
we have 0 ≤ f(n) ≤ c·g(n) }
g(n) is an asymptotic upper bound for f(n).
Intuitively: the set of all functions
whose rate of growth is the
same as or lower than that of g(n).
f(n) = Θ(g(n)) ⇒ f(n) = O(g(n)).
Θ(g(n)) ⊆ O(g(n)).
27. Ω-notation
For a function g(n), we define Ω(g(n)), big-Omega of g(n), as the set:
Ω(g(n)) = { f(n) : there exist positive constants c and n0
such that for all n ≥ n0,
we have 0 ≤ c·g(n) ≤ f(n) }
g(n) is an asymptotic lower bound for f(n).
Intuitively: the set of all functions
whose rate of growth is the
same as or higher than that of g(n).
f(n) = Θ(g(n)) ⇒ f(n) = Ω(g(n)).
Θ(g(n)) ⊆ Ω(g(n)).
29. Relations Between Θ, Ω, O
• I.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n))
• In practice, asymptotically tight bounds are
obtained from asymptotic upper and lower
bounds.
Theorem: For any two functions g(n) and f(n),
f(n) = Θ(g(n)) iff
f(n) = O(g(n)) and f(n) = Ω(g(n)).
30. Standard Notation and
Common Functions
• Monotonicity
A function f(n) is monotonically increasing if m ≤ n
implies f(m) ≤ f(n).
A function f(n) is monotonically decreasing if m ≤ n
implies f(m) ≥ f(n).
A function f(n) is strictly increasing
if m < n implies f(m) < f(n).
A function f(n) is strictly decreasing
if m < n implies f(m) > f(n).
31. Cont..
• Floors and ceilings
For any real number x, the greatest integer less than
or equal to x is denoted by ⌊x⌋ (floor).
For any real number x, the least integer greater than
or equal to x is denoted by ⌈x⌉ (ceiling).
For all real numbers x,
x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1.
Both functions are monotonically increasing.
32. Cont..
• Exponentials
For all n and a ≥ 1, the function a^n is the exponential function
with base a and is monotonically increasing.
• Logarithms
The textbook adopts the following conventions:
lg n = log2 n (binary logarithm),
ln n = log_e n (natural logarithm),
lg^k n = (lg n)^k (exponentiation),
lg lg n = lg(lg n) (composition),
lg n + k = (lg n) + k (precedence of lg).
33. Cont..
• Important relationships
For all real constants a and b such that a > 1,
n^b = o(a^n)
that is, any exponential function with a base
strictly greater than unity grows faster than any
polynomial function.
For all real constants a and b such that a > 0,
lg^b n = o(n^a)
that is, any positive polynomial function grows
faster than any polylogarithmic function.
34. Cont..
• Factorials
For all n, the function n! or "n factorial" is given by
n! = n · (n−1) · (n−2) · (n−3) · … · 2 · 1
It can be established that
n! = o(n^n)
n! = ω(2^n)
lg(n!) = Θ(n lg n)
35. Cont..
• Functional iteration
The notation f^(i)(n) represents the function f(n) iteratively applied
i times to an initial value of n; recursively,
f^(i)(n) = n if i = 0
f^(i)(n) = f(f^(i−1)(n)) if i > 0
Example:
If f(n) = 2n,
then f^(2)(n) = f(2n) = 2(2n) = 2^2 n,
f^(3)(n) = f(f^(2)(n)) = 2(2^2 n) = 2^3 n,
and in general f^(i)(n) = 2^i n.
36. Cont..
• Iterated logarithm function
The notation lg* n, read "log star of n", is defined as
lg* n = min { i ≥ 0 : lg^(i) n ≤ 1 }
Example:
lg* 2 = 1
lg* 4 = 2
lg* 16 = 3
lg* 65536 = 4
lg* 2^65536 = 5
39. Substitution Method
• There are mainly three ways of solving recurrences.
• 1) Substitution Method: We make a guess for the
solution and then use mathematical induction to
prove the guess correct or incorrect.
• For example, consider the recurrence T(n) = 2T(n/2) + n.
We guess the solution T(n) = O(n log n).
• Now we use induction to prove our guess: we need to
show that T(n) <= c·n·log n for some constant c.
• We may assume the bound holds for all values smaller than n.
43. L2.43
Recursion-tree method
• A recursion tree models the costs (time) of a
recursive execution of an algorithm.
• The recursion tree method is good for generating
guesses for the substitution method.
• The recursion-tree method can be unreliable, just
like any method that uses ellipses (…).
• The recursion-tree method promotes intuition,
however.
47. Example of recursion tree
Solve T(n) = T(n/4) + T(n/2) + n^2:
Level 0: n^2
Level 1: (n/4)^2 + (n/2)^2 = (5/16) n^2
Level 2: (n/16)^2 + (n/8)^2 + (n/8)^2 + (n/4)^2 = (25/256) n^2 = (5/16)^2 n^2
…
Leaves: Θ(1)
Total = n^2 (1 + 5/16 + (5/16)^2 + (5/16)^3 + …) (geometric series)
= Θ(n^2)
53. The master method
Master Theorem-
The Master Theorem is a popular method for solving
recurrence relations.
It solves recurrence relations of the form
T(n) = a·T(n/b) + θ(n^k · log^p n)
Here, a >= 1, b > 1, k >= 0 and p is a real number.
54. Cont..
Master Theorem Cases-
To solve recurrence relations using the Master Theorem, we
compare a with b^k.
Then we follow these cases-
Case-01:
If a > b^k, then T(n) = θ(n^(log_b a))
Case-02:
If a = b^k and
If p < -1, then T(n) = θ(n^(log_b a))
55. Cont..
If p = -1, then T(n) = θ(n^(log_b a) · log log n)
If p > -1, then T(n) = θ(n^(log_b a) · log^(p+1) n)
Case-03:
If a < b^k and
If p < 0, then T(n) = O(n^k)
If p >= 0, then T(n) = θ(n^k · log^p n)
56. Cont..
PRACTICE PROBLEMS BASED ON MASTER
THEOREM-
Problem-01:
Solve the following recurrence relation using the Master
Theorem-
T(n) = 3T(n/2) + n^2
Solution-
We compare the given recurrence relation with T(n) =
a·T(n/b) + θ(n^k · log^p n).
Then we have-
a = 3
57. Cont..
b = 2
k = 2
p = 0
Now, a = 3 and b^k = 2^2 = 4.
Clearly, a < b^k.
So we follow Case-03.
Since p = 0, we have-
T(n) = θ(n^k · log^p n)
T(n) = θ(n^2 · log^0 n)
Thus,
T(n) = θ(n^2)
58. Cont..
Problem-02:
Solve the following recurrence relation using the Master
Theorem-
T(n) = 2T(n/2) + n·log n
Solution-
We compare the given recurrence relation with T(n) =
a·T(n/b) + θ(n^k · log^p n).
Then we have-
a = 2
b = 2
k = 1
p = 1
59. Cont..
Now, a = 2 and b^k = 2^1 = 2.
Clearly, a = b^k.
So we follow Case-02.
Since p = 1 (> -1), we have-
T(n) = θ(n^(log_b a) · log^(p+1) n)
T(n) = θ(n^(log_2 2) · log^(1+1) n)
Thus,
T(n) = θ(n·log^2 n)
60. Amortized Analysis
• Amortized Analysis is used for algorithms where an
occasional operation is very slow, but most of the other
operations are faster.
• In Amortized Analysis, we analyze a sequence of
operations and guarantee a worst case average time
which is lower than the worst case time of a particular
expensive operation.
• The example data structures whose operations are
analyzed using Amortized Analysis are Hash Tables,
Disjoint Sets and Splay Trees.
61. Cont..
• Consider not just one operation, but a sequence of operations
on a given data structure.
• Average cost over a sequence of operations.
• Probabilistic analysis:
– Average-case running time: average over all possible inputs
for one algorithm (operation).
– When probability is used, this is called expected running time.
• Amortized analysis:
– No involvement of probability.
– Average performance over a sequence of operations, even if
some operations are expensive.
– Guarantees the average performance of each operation in
the sequence in the worst case.
62. Three Methods of Amortized Analysis
• Aggregate analysis:
– Amortized cost = (total cost of n operations) / n.
• Accounting method:
– Assign each type of operation a (possibly different) amortized cost,
– overcharge some operations,
– store the overcharge as credit on specific objects,
– then use the credit to compensate for some later operations.
• Potential method:
– Same idea as the accounting method,
– but store the credit as "potential energy" of the data structure as a whole.
63. Example for amortized analysis
• Stack operations:
– PUSH(S,x): cost 1
– POP(S): cost 1
– MULTIPOP(S,k): cost min(s,k), where s is the stack size
• while not STACK-EMPTY(S) and k > 0
• do POP(S)
• k = k - 1
• Consider a sequence of n PUSH, POP, and MULTIPOP operations.
– The worst-case cost of a MULTIPOP in the sequence is O(n),
since the stack size is at most n.
– Thus the cost of the sequence is O(n^2). Correct, but not tight.
64. Aggregate Analysis
• In fact, a sequence of n operations on an initially
empty stack costs at most O(n). Why?
Each object can be popped at most once (including via MULTIPOP) for each
time it is pushed. #POPs is at most #PUSHes, which is at most n.
Thus the average cost of an operation is O(n)/n = O(1).
In aggregate analysis, the amortized cost is defined to be this average cost.
65. Another example: incrementing a binary counter
• Binary counter of length k, stored in a bit array A[0..k-1].
• INCREMENT(A)
1. i ← 0
2. while i < k and A[i] = 1
3. do A[i] ← 0 (flip: reset)
4. i ← i + 1
5. if i < k
6. then A[i] ← 1 (flip: set)
66. Analysis of INCREMENT(A)
• Cursory analysis:
– A single execution of INCREMENT takes
O(k) in the worst case (when A contains all
1s)
– So a sequence of n executions takes O(nk)
in worst case (suppose initial counter is 0).
– This bound is correct, but not tight.
• The tight bound is O(n) for n executions.
67. Amortized (Aggregate) Analysis of INCREMENT(A)
Observation: the running time is determined by the number of flips,
but not all bits flip each time INCREMENT is called.
A[0] flips every time: n times in total.
A[1] flips every other time: ⌊n/2⌋ times.
A[2] flips every fourth time: ⌊n/4⌋ times.
…
For i = 0, 1, …, k-1, A[i] flips ⌊n/2^i⌋ times.
Thus the total number of flips is Σ_{i=0}^{k-1} ⌊n/2^i⌋
< n · Σ_{i=0}^{∞} (1/2)^i
= 2n.
68. Amortized Analysis: Accounting Method
• Idea:
– Assign differing charges to different operations.
– The amount charged is called the amortized cost.
– The amortized cost may be more or less than the actual cost.
– When amortized cost > actual cost, the difference is saved
on specific objects as credit.
– The credit can be used by later operations whose
amortized cost < actual cost.
• By comparison, in aggregate analysis all operations
have the same amortized cost.
69. Accounting Method (cont.)
• Conditions:
– Suppose the actual cost of the i-th operation in the sequence is ci,
and its amortized cost is ci'. Then
Σ_{i=1}^{n} ci' ≥ Σ_{i=1}^{n} ci should hold.
• Since we want to show that the average cost per operation is
small using amortized costs, the total amortized cost must be
an upper bound on the total actual cost.
• This must hold for all sequences of operations.
– The total credit is Σ_{i=1}^{n} ci' − Σ_{i=1}^{n} ci, which should be
nonnegative.
• Moreover, Σ_{i=1}^{t} ci' − Σ_{i=1}^{t} ci ≥ 0 for any t > 0.
70. Accounting Method: Stack Operations
• Actual costs:
– PUSH: 1, POP: 1, MULTIPOP: min(s,k).
• Assign the following amortized costs:
– PUSH: 2, POP: 0, MULTIPOP: 0.
• Similar to a stack of plates in a cafeteria:
– Suppose $1 represents a unit of cost.
– When pushing a plate, use one dollar to pay the actual
cost of the push and leave one dollar on the plate as
credit.
– Whenever popping a plate, the one dollar on the plate is
used to pay the actual cost of the POP (same for
MULTIPOP).
71. Cont..
– By charging PUSH a little more, we need not charge POP or
MULTIPOP at all.
• The total amortized cost of n PUSH, POP, and MULTIPOP operations is
O(n), thus O(1) average amortized cost per operation.
• The conditions hold: total amortized cost ≥ total actual cost, and the
amount of credit never becomes negative.
72. Accounting method: binary counter
• Let $1 represent each unit of cost (i.e., the flip of one bit).
• Charge an amortized cost of $2 to set a bit to 1.
• Whenever a bit is set, use $1 to pay the actual cost, and store
another $1 on the bit as credit.
• When a bit is reset, the stored $1 pays the cost.
• At any point, every 1 in the counter stores $1; the number of 1's is
never negative, so the total credit is never negative.
• At most one bit is set in each operation, so the amortized cost of
an operation is at most $2.
• Thus, the total amortized cost of n operations is O(n), and the
average is O(1).
73. The Potential Method
• Same as accounting method: something prepaid is used later.
• Different from accounting method
– The prepaid work not as credit, but as “potential energy”,
or “potential”.
– The potential is associated with the data structure as a
whole rather than with specific objects within the data
structure.
74. The Potential Method (cont.)
• Initial data structure D0;
• n operations, resulting in D1, …, Dn with actual costs c1, c2, …, cn.
• A potential function Φ: {Di} → R (real numbers);
• Φ(Di) is called the potential of Di.
• The amortized cost ci' of the i-th operation is:
ci' = ci + Φ(Di) − Φ(Di-1) (actual cost + potential change)
• Σ_{i=1}^{n} ci' = Σ_{i=1}^{n} (ci + Φ(Di) − Φ(Di-1))
= Σ_{i=1}^{n} ci + Φ(Dn) − Φ(D0) (telescoping sum)
75. The Potential Method (cont.)
• If Φ(Dn) ≥ Φ(D0), then the total amortized cost is an upper bound
on the total actual cost.
• But we do not know in advance how many operations there will be,
so Φ(Di) ≥ Φ(D0) is required for every i.
• It is convenient to define Φ(D0) = 0, and thus require Φ(Di) ≥ 0 for all i.
• If the potential change is positive (i.e., Φ(Di) − Φ(Di-1) > 0), then ci'
is an overcharge (store the increase as potential);
• otherwise it is an undercharge (discharge the potential to pay the
actual cost).
76. Potential method: stack operations
• The potential of a stack is the number of objects in the stack.
• So Φ(D0) = 0, and Φ(Di) ≥ 0.
• Amortized cost of the stack operations (s = current stack size):
– PUSH:
• Potential change: Φ(Di) − Φ(Di-1) = (s+1) − s = 1.
• Amortized cost: ci' = ci + Φ(Di) − Φ(Di-1) = 1 + 1 = 2.
– POP:
• Potential change: Φ(Di) − Φ(Di-1) = (s−1) − s = −1.
• Amortized cost: ci' = ci + Φ(Di) − Φ(Di-1) = 1 + (−1) = 0.
– MULTIPOP(S,k): k' = min(s,k)
• Potential change: Φ(Di) − Φ(Di-1) = −k'.
• Amortized cost: ci' = ci + Φ(Di) − Φ(Di-1) = k' + (−k') = 0.
77. Cont..
• So the amortized cost of each operation is O(1), and the total
amortized cost of n operations is O(n).
• Since the total amortized cost is an upper bound on the actual cost,
the worst-case cost of n operations is O(n).
78. Potential method: binary counter
• Define the potential of the counter after the i-th INCREMENT as
Φ(Di) = bi, the number of 1's. Clearly, Φ(Di) ≥ 0.
• Let us compute the amortized cost of an operation:
– Suppose the i-th operation resets ti bits.
– The actual cost ci of the operation is at most ti + 1.
– If bi = 0, then the i-th operation resets all k bits, so bi-1 = ti = k.
– If bi > 0, then bi = bi-1 − ti + 1.
– In either case, bi ≤ bi-1 − ti + 1.
– So the potential change is Φ(Di) − Φ(Di-1) ≤ (bi-1 − ti + 1) − bi-1 = 1 − ti.
– So the amortized cost is: ci' = ci + Φ(Di) − Φ(Di-1) ≤ (ti + 1) + (1 − ti) = 2.
• The total amortized cost of n operations is O(n).
• Thus the worst-case cost is O(n).