Algorithmic Analysis & Design
Introduction
Introduction
   Abu Ja'far Muhammad ibn Musa Al-Khwarizmi
    [Born: about 780 in Baghdad (now in Iraq). Died: about 850]
   [al-jabr means "restoring", referring to the process of moving a
    subtracted quantity to the other side of an equation; al-muqabala
    is "comparing" and refers to subtracting equal quantities from both
    sides of an equation.]
Introduction
    An algorithm, named after the ninth-century scholar Abu Ja'far
    Muhammad ibn Musa Al-Khwarizmi, can be defined, roughly speaking,
    in several equivalent ways:
   An algorithm is a set of rules for carrying out calculations either by
    hand or on a machine.
   An algorithm is a finite step-by-step procedure to achieve a
    required result.
   An algorithm is a sequence of computational steps that transform
    the input into the output.
   An algorithm is a sequence of operations performed on data that
    have to be organized in data structures.
   An algorithm is an abstraction of a program to be executed on a
    physical machine (model of Computation).
What is an Algorithm?
   An algorithm is a sequence of
    unambiguous instructions for solving a
    problem, i.e., for obtaining a required
    output for any legitimate input in a finite
    amount of time.
   The most famous algorithm in history dates back to the ancient
    Greeks: Euclid's algorithm for calculating the greatest common
    divisor of two integers.
Example-1: The Classic
Multiplication Algorithm
1. Multiplication, the American way:
  Multiply the multiplicand one after another by
  each digit of the multiplier taken from right to
  left.
2. Multiplication, the English way:
  Multiply the multiplicand one after another by
  each digit of the multiplier taken from left to
  right.
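To make the hand method concrete, here is a minimal C sketch of the American (right-to-left) way: it peels off the multiplier's digits from the right, forms each partial product, shifts it by the digit's position, and accumulates the total. The function name long_multiply and the sample operands are illustrative choices, not part of the original slides; the English way simply visits the same digits from left to right.

    #include <stdio.h>

    /* Multiply a by b digit by digit, taking the multiplier's digits
     * from right to left and adding the shifted partial products.     */
    unsigned long long long_multiply(unsigned int a, unsigned int b)
    {
        unsigned long long total = 0;
        unsigned long long shift = 1;     /* 10^(position of current digit) */

        while (b > 0) {
            unsigned int digit = b % 10;  /* next multiplier digit, rightmost first */
            total += (unsigned long long)a * digit * shift;
            shift *= 10;
            b /= 10;
        }
        return total;
    }

    int main(void)
    {
        printf("%u * %u = %llu\n", 981u, 1234u, long_multiply(981u, 1234u));
        return 0;
    }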
Example-2: Chain Matrix
Multiplication
   Want: ABCD =?
   Method 1: (AB)(CD)
   Method 2: A((BC)D)
   Method 1 is much more efficient than Method
    2.
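The matrix dimensions that make Method 1 cheaper are on the slide's figure, which is not reproduced here, so the sketch below uses assumed dimensions (A: 50x20, B: 20x1, C: 1x10, D: 10x100) purely to show how the parenthesization changes the number of scalar multiplications.

    #include <stdio.h>

    /* Cost of multiplying a (p x q) matrix by a (q x r) matrix,
     * counted as scalar multiplications.                          */
    static long cost(long p, long q, long r) { return p * q * r; }

    int main(void)
    {
        /* Assumed dimensions: A is a x b, B is b x c, C is c x d, D is d x e. */
        long a = 50, b = 20, c = 1, d = 10, e = 100;

        /* Method 1: (AB)(CD) */
        long m1 = cost(a, b, c) + cost(c, d, e) + cost(a, c, e);

        /* Method 2: A((BC)D) */
        long m2 = cost(b, c, d) + cost(b, d, e) + cost(a, b, e);

        printf("Method 1 (AB)(CD):  %ld scalar multiplications\n", m1);
        printf("Method 2 A((BC)D):  %ld scalar multiplications\n", m2);
        return 0;
    }

With these dimensions, Method 1 costs 7,000 scalar multiplications while Method 2 costs 120,200, which is the kind of gap the slide alludes to.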
What is this course about?
   There is usually more than one algorithm
    for solving a problem.
   Some algorithms are more efficient than
    others.
   We want the most efficient algorithm.
What is this course about?
   If we have two algorithms, we need to find
    out which one is more efficient.
   To do so, we need to analyze each of them to
    determine their efficiency.
   Of course, we must also make sure the
    algorithms are correct.
What is this course about?
In this course, we will discuss fundamental techniques for:
1. Proving the correctness of algorithms,
2. Analyzing the running times of algorithms,
3. Designing efficient algorithms,
4. Showing that efficient algorithms do not exist for some problems.
Notes:
 Analysis and design go hand in hand:
   By analyzing the running times of algorithms, we will know how
     to design fast algorithms.
 In CS207, you have learned techniques for 1, 2, and 3 w.r.t. sorting
  and searching. In CS401, we will discuss them in the contexts of
  other problems.
Differences between CS-207 &
CS-401
Analysis and Design
  Algorithmics is a branch of computer science
   that consists of designing and analyzing
   computer algorithms.
1. The “design” pertains to
        The description of the algorithm at an abstract level by
         means of a pseudo-language, and
        A proof of correctness, that is, showing that the algorithm
         solves the given problem in all cases.
2. The “analysis” deals with performance
   evaluation (complexity analysis).
Analysis and Design
   We start by defining the model of
    computation, which is usually the Random
    Access Machine (RAM) model, though other
    models of computation, such as the PRAM,
    can be used. Once the model of computation
    has been defined, an algorithm can be
    described using a simple language (or
    pseudo-language) whose syntax is close to a
    programming language such as C# or Java.
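For instance, here is a minimal sketch (not from the slides) of how such a description and its correctness argument might look for the trivial problem of finding the maximum of n numbers, written in C-like syntax:

    /* find_max(A, n): return the largest of A[0..n-1], assuming n >= 1.
     * Correctness (loop invariant): at the start of each iteration,
     * best holds the maximum of A[0..i-1]; when the loop ends, i == n,
     * so best is the maximum of the whole array.                       */
    int find_max(const int A[], int n)
    {
        int best = A[0];
        for (int i = 1; i < n; i++)
            if (A[i] > best)
                best = A[i];
        return best;
    }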
Analysis of algorithms
  The theoretical study of computer-program
  performance and resource usage.
  What’s more important than performance?
   • modularity       • user-friendliness
   • correctness      • programmer time
   • maintainability  • simplicity
   • functionality    • extensibility
   • robustness       • reliability
Why study algorithms and
performance?
   Algorithms help us to understand scalability.
   Performance often draws the line between
    what is feasible and what is impossible.
   Algorithmic mathematics provides a
    language for talking about program behavior.
   Performance is the currency of computing.
   The lessons of program performance
    generalize to other computing resources.
   Speed is fun!
Algorithm's Performance
   Two important ways to characterize the
    effectiveness of an algorithm are its:

       Time Complexity: how much time will the program
        take?

       Space Complexity: how much storage will the
        program need?
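As a small illustration (not from the slides) of the space side of this distinction: both functions below reverse an array in O(n) time, but the first uses O(1) extra storage while the second allocates O(n).

    #include <stdlib.h>

    /* In-place reversal: O(n) time, O(1) extra space. */
    void reverse_in_place(int *a, int n)
    {
        for (int i = 0, j = n - 1; i < j; i++, j--) {
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    /* Reversal into a fresh array: O(n) time, but O(n) extra space.
     * The caller is responsible for freeing the returned array.      */
    int *reversed_copy(const int *a, int n)
    {
        int *b = malloc((size_t)n * sizeof *b);
        if (b == NULL)
            return NULL;
        for (int i = 0; i < n; i++)
            b[i] = a[n - 1 - i];
        return b;
    }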
Complexity
   Complexity refers to the rate at which the storage or time
    grows as a function of the problem size. The absolute
    growth depends on the machine used to execute the
    program, the compiler used to construct the program,
    and many other factors. We would like to have a way of
    describing the inherent complexity of a program (or piece
    of a program), independent of machine/compiler
    considerations. This means that we must not try to
    describe the absolute time or storage needed. We must
    instead concentrate on a "proportionality" approach,
    expressing the complexity in terms of its relationship to
    some known function. This type of analysis is known as
    asymptotic analysis.
Complexity
   Time complexity of an algorithm concerns
    determining an expression of the number of steps
    needed as a function of the problem size. Since the
    step count measure is somewhat coarse, one does
    not aim at obtaining an exact step count.
   Instead, one attempts only to get asymptotic bounds
    on the step count. Asymptotic analysis makes use of
    the O (Big Oh) notation. Two other notational
    constructs used by computer scientists in the
    analysis of algorithms are Θ (Big Theta) notation
    and Ω (Big Omega) notation.
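As a small illustration of why exact step counts are not worth chasing (the loop and the per-line costs below are illustrative, not from the slides), consider summing an n-element array: one reasonable way of counting gives roughly 2n + 3 steps, another gives a different constant, but every way of counting gives O(n).

    /* Sum an n-element array.  A coarse step count:
     *   1 step      initialize total
     *   1 step      initialize i
     *   n steps     loop test + increment (lumped together)
     *   n steps     additions into total
     *   1 step      return
     * Total: about 2n + 3 steps.  The constant depends entirely on how
     * we choose to count, which is why we settle for the bound O(n).   */
    long sum(const int *a, int n)
    {
        long total = 0;
        for (int i = 0; i < n; i++)
            total += a[i];
        return total;
    }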
O-Notation (Upper Bound)
   This notation gives an upper bound for a
    function to within a constant factor. We write
    f(n) = O(g(n)) if there are positive constants
    n0 and c such that to the right of n0, the
    value of f(n) always lies on or below cg(n).
Θ-Notation (Same order)
   This notation bounds a function to within
    constant factors. We say f(n) = Θ(g(n)) if
    there exist positive constants n0, c1 and c2
    such that to the right of n0 the value of f(n)
    always lies between c1g(n) and c2g(n)
    inclusive.
Ω-Notation (Lower Bound)
   This notation gives a lower bound for a
    function to within a constant factor. We write
    f(n) = Ω(g(n)) if there are positive constants
    n0 and c such that to the right of n0, the
    value of f(n) always lies on or above cg(n).
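Stated compactly (in LaTeX notation), the three definitions above are:

    f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 > 0:\ f(n) \le c\,g(n)\ \text{for all } n \ge n_0
    f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 > 0:\ f(n) \ge c\,g(n)\ \text{for all } n \ge n_0
    f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 > 0:\ c_1 g(n) \le f(n) \le c_2 g(n)\ \text{for all } n \ge n_0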
Asymptotic Analysis
   Asymptotic analysis is based on the idea that as the
    problem size grows, the complexity can be described as
    a simple proportionality to some known function. This
    idea is incorporated in the "Big Oh" notation for
    asymptotic performance.
   Definition: T(n) = O(f(n))
    if and only if there are constants c0 and n0 such that
    T(n) <= c0 f(n) for all n >= n0.
    The expression "T(n) = O(f(n))" is read as "T of n is in
    Big Oh of f of n." Big Oh is sometimes said to describe
    an "upper-bound" on the complexity. Other forms of
    asymptotic analysis ("Big Omega", "Little Oh", "Theta")
    are similar in spirit to Big Oh.
Example 1
   Suppose we have a program that takes some constant amount
    of time to set up, then grows linearly with the problem size n. The
    constant time might be used to prompt the user for a filename
     and open the file. Neither of these operations is dependent on
    the amount of data in the file. After these setup operations, we
    read the data from the file and do something with it (say print it).
    The amount of time required to read the file is certainly
    proportional to the amount of data in the file. We let n be the
    amount of data. This program has time complexity of O(n). To
    see this, let's assume that the setup time is really long, say 500
    time units. Let's also assume that the time taken to read the data
    is 10n, 10 time units for each data point read. The following
    graph shows the function 500 + 10n plotted against n, the
     problem size. Also shown are the functions n and 20n.
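In place of the plot, a short C program (illustrative only) can tabulate the three functions; at n = 50 the bound 20n catches up with 500 + 10n, and beyond that it stays above.

    #include <stdio.h>

    int main(void)
    {
        /* Tabulate the running time 500 + 10n against n and the
         * candidate bound 20n at a few problem sizes.             */
        int sizes[] = { 10, 25, 50, 100, 200 };
        int count = (int)(sizeof sizes / sizeof sizes[0]);

        printf("%8s %12s %8s\n", "n", "500 + 10n", "20n");
        for (int i = 0; i < count; i++) {
            int n = sizes[i];
            printf("%8d %12d %8d\n", n, 500 + 10 * n, 20 * n);
        }
        return 0;
    }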
   Note that the function n will never be larger
    than the function 500 + 10 n, no matter how
    large n gets. However, there are constants c0
    and n0 such that 500 + 10n <= c0 n when n
    >= n0. One choice for these constants is c0 =
    20 and n0 = 50. Therefore, 500 + 10n = O(n).
     There are, of course, other choices for c0 and
     n0. For example, any c0 >= 20 works with n0 = 50,
     and any c0 > 10 works if n0 is at least 500/(c0 - 10).
Big Oh Does Not Tell the
Whole Story
   Suppose you have a choice of two approaches to writing
    a program. Both approaches have the same asymptotic
     performance (for example, both are O(n lg n)). Why
     select one over the other? They're both the same, right?
    They may not be the same. There is this small matter of
    the constant of proportionality. Suppose algorithms A
    and B have the same asymptotic performance, TA(n) =
    TB(n) = O(g(n)). Now suppose that A does ten
    operations for each data item, but algorithm B only does
    three. It is reasonable to expect B to be faster than A
    even though both have the same asymptotic
    performance. The reason is that asymptotic analysis
    ignores constants of proportionality.
   As a specific example, let's say that algorithm A is:
     {
         set up the algorithm, taking 50 time units;
         read in n elements into array A;    /* 3 units per element */
         for (i = 0; i < n; i++)
         {
             do operation1 on A[i];          /* takes 10 units */
             do operation2 on A[i];          /* takes  5 units */
             do operation3 on A[i];          /* takes 15 units */
         }
     }
    Let's now say that algorithm B is
     {
         set up the algorithm, taking 200 time units;
         read in n elements into array A;    /* 3 units per element */
         for (i = 0; i < n; i++)
         {
             do operation1 on A[i];          /* takes 10 units */
             do operation2 on A[i];          /* takes  5 units */
         }
     }
   Algorithm A sets up faster than B, but does
    more operations on the data. The execution
    time of A and B will be
       TA(n) = 50 + 3*n + (10 + 5 + 15)*n = 50 + 33*n

    and
        TB(n) = 200 + 3*n + (10 + 5)*n = 200 + 18*n
    respectively.
   The following graph shows the execution
    time for the two algorithms as a function of n.
    Algorithm A is the better choice for small
    values of n. For values of n > 10, algorithm B
    is the better choice. Remember that both
    algorithms have time complexity O(n).
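In place of the graph, here is a minimal C sketch (the two formulas come from the slides; everything else is illustrative) that prints both running times and confirms the crossover: solving 50 + 33n = 200 + 18n gives 15n = 150, i.e. n = 10, so B wins for every n > 10.

    #include <stdio.h>

    static long ta(long n) { return 50 + 33 * n; }    /* algorithm A */
    static long tb(long n) { return 200 + 18 * n; }   /* algorithm B */

    int main(void)
    {
        /* Print both running times around the crossover point. */
        for (long n = 1; n <= 20; n++)
            printf("n = %2ld   TA = %4ld   TB = %4ld   %s\n",
                   n, ta(n), tb(n),
                   ta(n) < tb(n) ? "A faster" :
                   ta(n) > tb(n) ? "B faster" : "tie");
        return 0;
    }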
Analyzing Algorithms
Predict resource utilization
   1- Running time (time complexity) — focus of this course
   2- Memory (space complexity)
Running time: the number of primitive operations (e.g., addition,
multiplication, comparisons) used to solve the problem
Depends on problem instance: often we find an upper bound:
T(input size) and use asymptotic notations (big-Oh)
    To make your life easier!
    Growth rate much more important than constants in big-
      Oh.
Input size n: rigorous definition given later
    sorting: number of items to be sorted
    graphs: number of vertices and edges
   The running time depends on the input: an
    already sorted sequence is easier to sort.
   Parameterize the running time by the size of
    the input, since short sequences are easier to
    sort than long ones.
   Generally, we seek upper bounds on the
    running time, because everybody likes a
    guarantee.
Three Cases of Analysis: I
   Best Case: An instance for a given size n that
    results in the fastest possible running time.
Three Cases of Analysis: II
   Worst Case: An instance for a given size n
    that results in the slowest possible running
    time.
Three Cases of Analysis: III
   Average Case: The average running time over
    all possible instances of the given size,
    assuming some probability distribution over
    the instances.
Three Cases of Analysis
   Best case: Clearly the worst basis for comparison (it
    guarantees nothing about other inputs)
   Worst case: Commonly used, will also be used in this
    course
     Gives a running time guarantee no matter what the input
      is
     Fair comparison among different algorithms
   Average case: Used sometimes
      Need to assume some distribution: real-world inputs are
       seldom uniformly random!
     Analysis is complicated
     Will not use in this course
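A standard way to make the three cases concrete is linear search (the example below is illustrative, not taken from the slides): scan an n-element array for a key and count the comparisons.

    /* Linear search: return the index of key in a[0..n-1], or -1 if absent.
     *
     * Best case:    key is a[0]                  -> 1 comparison.
     * Worst case:   key is absent (or is a[n-1]) -> n comparisons.
     * Average case: if the key is present and equally likely to be at any
     *               position, about (n + 1) / 2 comparisons -- note that
     *               this already needs an assumption about the input.      */
    int linear_search(const int *a, int n, int key)
    {
        for (int i = 0; i < n; i++)
            if (a[i] == key)
                return i;
        return -1;
    }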
   Because it is quite difficult to estimate the
    statistical behavior of the input, most of the
    time we content ourselves with the worst-case
    behavior. Most of the time, the complexity of
    g(n) is approximated by its family O(f(n)),
    where f(n) is one of the following functions:
    n (linear complexity), log n (logarithmic
    complexity), n^a where a ≥ 2 (polynomial
    complexity), a^n (exponential complexity).
Optimality
   Once the complexity of an algorithm has been
    estimated, the question arises whether this
    algorithm is optimal. An algorithm for a given
    problem is optimal if its complexity reaches the
    lower bound over all the algorithms solving this
    problem. For example, any algorithm solving “the
    intersection of n segments” problem will execute at
    least n^2 operations in the worst case, even if it does
    nothing but print the output. This is abbreviated by
    saying that the problem has Ω(n^2) complexity. If one
    finds an O(n^2) algorithm that solves this problem, it
    will be optimal and of complexity Θ(n^2).
Reduction
   Another technique for estimating the complexity of a
    problem is the transformation of problems, also
    called problem reduction. As an example, suppose
    we know a lower bound for a problem A, and that
    we would like to establish a lower bound for a
    problem B. If we can transform A into B by a
    transformation step whose cost is less than that for
    solving A, then B has at least the same lower bound as A.
   The convex hull problem nicely illustrates the
    "reduction" technique: a lower bound for the convex
    hull problem is established by reducing the sorting
    problem (complexity: Θ(n log n)) to the convex hull
    problem.
Algorithm
   An algorithm is a well-defined computational
    procedure that transforms inputs into outputs,
    achieving the desired input-output relationship.
   A correct algorithm halts with the correct
    output for every input instance. We can then
    say that the algorithm solves the problem.
     Definition: An algorithm is a finite set of instructions which, if
      followed, will accomplish a particular task. In addition, every
      algorithm must satisfy the following criteria:
      i)   input: there are zero or more quantities which are externally
           supplied;
      ii)  output: at least one quantity is produced;
      iii) definiteness: each instruction must be clear and unambiguous;
      iv)  finiteness: if we trace out the instructions of the algorithm,
           then for all valid cases the algorithm will terminate after a
           finite number of steps;
      v)   effectiveness: every instruction must be sufficiently basic that
           it can in principle be carried out by a person using only a
           pencil and paper. It is not enough that each operation be
           definite as in (iii); it must also be feasible.  [Hor76]
Algorithmic Effectiveness
   Note the above requirement that an algorithm
    “in principle be carried out by a person using
    only a pencil and paper”.
   This limits the operations that are allowable
    in a “pure algorithm”.
    The basic list is:
       addition and subtraction;
       multiplication and division;
       and the operations immediately derived from
        these.
   One might argue for expansion of this list to include
    a few other operations that can be done by pencil
    and paper. I know an easy algorithm for taking the
    square root of integers, and can do that with a pencil
    on paper.
   However, this algorithm will produce an exact
    answer for only those integers that are the square of
    other integers; their square root must be an integer.
   Operations such as the trigonometric functions are
    not basic operations. These can appear in
    algorithms only to the extent that they are
    understood to represent a sequence of basic
    operations that can be well defined.
More on Effectiveness
   Valid Examples:
     Sort an array in non–increasing order.
     Note: The step does not need to correspond to a single
       instruction.
     Find the square root of a given real number to a specific
       precision.
   Invalid Examples:
      Find four positive integers A, B, C, and N ≥ 3, such that A^N +
        B^N = C^N. By Fermat's Last Theorem, no such integers exist.
     Find the exact square root of an arbitrary real number.
     The square root of 4.0 is exactly 2.0.
     The square root of 7.0 is some number between 2.645 751 311
       064 590 and 2.645 751 311 064 591.
     Later we shall consider methods to terminate an algorithm based
       on a known bound on the precision of the answer.
Heuristics vs. Algorithms
   Computer Science Definition
    A heuristic is a procedure, similar to an algorithm, that often produces
    a correct answer, but is not guaranteed to produce anything.

   Operations Research Definition
     Problems are usually optimization problems:
          either maximize profit or minimize cost.
    An algorithm is defined in the sense of computer science
    with the additional requirement that it always produces an optimal
    answer.
   A heuristic is a procedure that often produces an optimal answer,
    but cannot be proven to do so.
   NOTE: Both algorithms and heuristics can contain high–level
    statements that cannot be implemented directly in common
    programming languages; e.g. “sort the array low–to–high”.
Euclid’s Algorithm
   Compute the greatest common divisor of two non–negative
    integers.
   For two integers K ≥ 0 and N ≥ 0, we say that “K divides N”
    (equivalently, “N is a multiple of K”) if N is an integral multiple
    of K: N = K•L for some integer L.
   This property is denoted “K | N”, which is read as “K divides N”.
   The greatest common divisor of two integers M ≥ 0 and N ≥ 0,
    denoted gcd(M, N), is defined as follows:
    1) gcd(M, 0) = gcd(0, M) = M
    2) gcd(M, N) = K, such that
         a)      K|M
         b)      K|N
         c)      K is the largest integer with this property.
   Euclid’s Algorithm: gcd(M, N) = gcd (N, M mod N)
Example of Euclid’s Algorithm
gcd(M, N) = gcd(N, M mod N)
gcd(225, 63) = gcd(63, 36)  as 225 = 63 • 3 + 36, so 225 mod 63 = 36
             = gcd(36, 27)  as  63 = 36 • 1 + 27, so  63 mod 36 = 27
             = gcd(27,  9)  as  36 = 27 • 1 +  9, so  36 mod 27 =  9
             = gcd( 9,  0)  as  27 =  9 • 3,      so  27 mod  9 =  0
             = 9            as gcd(N, 0) = N for any N ≥ 0.

  gcd(63, 255) = gcd(255, 63) as 63 = 0 • 255 + 63, so 63 mod 255 = 63
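A direct C rendering of the recurrence gcd(M, N) = gcd(N, M mod N), with gcd(M, 0) = M as the base case (a minimal sketch; the iterative form and the driver are illustrative choices):

    #include <stdio.h>

    /* Euclid's algorithm: gcd(m, 0) = m and gcd(m, n) = gcd(n, m mod n). */
    unsigned int gcd(unsigned int m, unsigned int n)
    {
        while (n != 0) {
            unsigned int r = m % n;   /* m mod n */
            m = n;
            n = r;
        }
        return m;
    }

    int main(void)
    {
        printf("gcd(225, 63) = %u\n", gcd(225, 63));   /* prints 9 */
        printf("gcd(63, 255) = %u\n", gcd(63, 255));   /* prints 3 */
        return 0;
    }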