This document discusses algorithms. It begins by defining an algorithm and its key properties: algorithms must have inputs, outputs, defined steps, and be finite and correct. It then discusses measuring algorithm performance through time and space complexity. Common algorithm design approaches are also introduced: greedy algorithms make locally optimal choices; divide-and-conquer breaks problems into subproblems; and dynamic programming optimizes recursion through storing subproblem solutions. Examples are provided for each approach.
Development of the notion of "algorithm"
The word algorithm comes from the name of the 9th-century Persian mathematician Abu Abdullah Muhammad ibn Musa Al-Khwarizmi, whose work built upon that of the 7th-century Indian mathematician Brahmagupta.
The word algorism originally referred only to the rules of performing arithmetic using Hindu-Arabic numerals but evolved, via the European Latin translation of Al-Khwarizmi's name, into algorithm by the 18th century.
The use of the word evolved to include all definite procedures for solving problems or performing tasks.
What is an Algorithm?
• An algorithm is a finite set of instructions or logic, written in order, to accomplish a certain predefined task. An algorithm is not the complete code or program; it is just the core logic (solution) of a problem, which can be expressed as an informal high-level description, as pseudocode, or as a flowchart.
• An algorithm is said to be efficient and fast if it takes less time to execute and consumes less memory space.
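For instance, the core logic of looking up a value in a list can be written in a few lines, independent of any surrounding program. The Python sketch below is purely illustrative; the function name and sample data are assumptions, not material from the slides:

def linear_search(items, target):
    # Return the index of target in items, or -1 if it is absent
    for index, value in enumerate(items):    # examine each element in order
        if value == target:
            return index                     # found: report its position
    return -1                                # exhausted the list without a match

print(linear_search([4, 8, 15, 16, 23, 42], 16))   # prints 3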
Every Algorithm must satisfy the following properties:
• Input - There should be zero or more inputs supplied externally to the algorithm.
• Output - There should be at least one output obtained.
• Definiteness - Every step of the algorithm should be clear and well defined.
• Finiteness - The algorithm should have a finite number of steps.
• Correctness - Every step of the algorithm must generate a correct output.
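To make these properties concrete, Euclid's algorithm for the greatest common divisor is a small example that satisfies all five. The Python sketch below is an illustration added here (the function name is an assumption), with each property noted in a comment:

def gcd(a, b):
    # Input: two non-negative integers a and b, not both zero
    while b != 0:                # Definiteness: every step is precisely specified
        a, b = b, a % b          # Finiteness: b strictly decreases, so the loop terminates
    return a                     # Output: exactly one value, the greatest common divisor

print(gcd(48, 18))               # Correctness: prints 6, the GCD of 48 and 18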
The performance of an algorithm is measured on the basis of the following properties:
• Time Complexity
- The amount of time required by the program to run to completion. It is generally good practice to keep the required time to a minimum, so that the algorithm completes its execution as quickly as possible.
• Space Complexity
- The amount of memory space required by the algorithm during the course of its execution. Space complexity must be taken seriously on multi-user systems and in situations where limited memory is available.
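As an illustration of these two measures, consider summing a list of n numbers. The Python sketch below (the function name is assumed for illustration) runs in O(n) time while using only O(1) extra space:

def total(numbers):
    # Time complexity:  O(n) -- one pass over the n input numbers
    # Space complexity: O(1) -- only a single running total is kept,
    #                   no matter how long the input list is
    running = 0
    for x in numbers:
        running += x
    return running

print(total([3, 1, 4, 1, 5, 9]))   # prints 23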
An algorithm generally requires space for the following components:
1. Instruction Space: the space required to store the executable version of the program. This space is fixed for a given program, but varies with the number of lines of code in the program.
2. Data Space: the space required to store the values of all constants and variables (including temporary variables).
3. Environment Space: the space required to store the environment information needed to resume a suspended (partially executed) function.
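A recursive function makes the environment-space component visible: every call that is suspended while waiting for its recursive call keeps its own stack frame until the recursion unwinds. The Python sketch below is only an illustration of that point:

def factorial(n):
    # Each suspended call keeps a stack frame (environment space) until the
    # recursion bottoms out, so roughly n frames are live at the deepest
    # point of factorial(n).
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))   # prints 120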
Three approaches commonly used to develop algorithms:
1. Greedy Approach − finding a solution by choosing the next best option at each step
2. Divide and Conquer − dividing the problem into as few sub-problems as possible and solving them independently
3. Dynamic Programming − dividing the problem into overlapping sub-problems, storing their solutions, and combining them
Greedy Approach
The greedy approach solves a problem by selecting the best option available at the moment. It does not worry whether the current best result will bring the overall optimal result. The algorithm never reverses an earlier decision, even if the choice turns out to be wrong.
Example
Kruskal's algorithm and Prim's algorithm for finding minimum spanning trees, and the algorithm for building optimal Huffman trees, are classic greedy algorithms. Greedy algorithms appear in network routing as well.
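As a sketch of the greedy idea behind one of the examples named above, here is one way Kruskal's algorithm could be written in Python; the union-find helper, function names, and sample graph are illustrative assumptions rather than code taken from the slides:

def kruskal(num_vertices, edges):
    # Build a minimum spanning tree greedily: always take the lightest
    # remaining edge that does not create a cycle.
    # edges is a list of (weight, u, v) tuples; vertices are 0..num_vertices-1.
    parent = list(range(num_vertices))

    def find(x):
        # Union-find lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):        # consider edges lightest first
        root_u, root_v = find(u), find(v)
        if root_u != root_v:                  # edge connects two different trees
            parent[root_u] = root_v           # merge the trees
            mst.append((weight, u, v))
    return mst

print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]))
# prints [(1, 0, 1), (2, 1, 3), (3, 1, 2)] -- a spanning tree of total weight 6

Once an edge is accepted it is never reconsidered, which is exactly the "never reverses an earlier decision" behaviour described above.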
Divide and Conquer
Divide and conquer is a recursive problem-solving approach that breaks a problem into smaller subproblems, recursively solves the subproblems, and finally combines the subproblem solutions to solve the original problem. This method usually allows us to reduce the time complexity to a large extent.
Example
A classic example of Divide and Conquer is Merge Sort, demonstrated below. In Merge Sort, we divide the array into two halves, sort the two halves recursively, and then merge the sorted halves.
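A minimal Python sketch of Merge Sort along those lines follows; the function names and the sample list are illustrative assumptions:

def merge_sort(arr):
    # Divide the array, conquer each half recursively, then combine
    if len(arr) <= 1:                 # base case: 0 or 1 elements are already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # recursively sort the left half
    right = merge_sort(arr[mid:])     # recursively sort the right half
    return merge(left, right)         # combine the two sorted halves

def merge(left, right):
    # Merge two sorted lists into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # append whatever remains in either half
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # prints [3, 9, 10, 27, 38, 43, 82]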
Dynamic Programming
Dynamic programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using dynamic programming. The idea is simply to store the results of subproblems so that we do not have to re-compute them when they are needed later. This simple optimization often reduces time complexities from exponential to polynomial.
Example
A classic example is the Fibonacci series: the sequence of numbers in which each number is the sum of the two preceding ones.
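A minimal sketch of the dynamic-programming idea applied to Fibonacci, assuming Python's functools.lru_cache as the memoization mechanism: the naive recursion recomputes the same subproblems exponentially often, while caching each result means every value is computed only once.

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each fib(k) is computed once and then looked up from the cache,
    # turning the exponential-time recursion into a linear-time computation.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(10)])   # prints [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]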