2. SYLLABUS
Pattern matching
Matrix multiplication
Graph algorithm
Algebraic problem
NP-Hard and NP-Complete problems
3. Pattern matching
Pattern matching in computer science is the checking and locating of a specific
sequence of data, matching some pattern, within raw data or a sequence of tokens.
Unlike pattern recognition, the match has to be exact in the case of pattern
matching.
Pattern matching is one of the most fundamental and important paradigms in
several programming languages.
Many applications make use of pattern matching as a major part of their
tasks.
4. Cont….
Pattern matching, in its classical form, involves the use of one-dimensional
string matching.
Patterns are either tree structures or sequences. There are different classes
of programming languages and machines which make use of pattern matching.
In the case of machines, the major classifications include deterministic finite
state automata, deterministic pushdown automata, nondeterministic
pushdown automata and Turing machines.
Most programming languages make use of regular expressions for pattern
matching; a regular expression describes a regular language.
Tree patterns are also used in certain programming languages like Haskell as a
tool to process data based on the structure. Compared to regular expressions,
tree patterns lack simplicity and efficiency.
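As a minimal sketch (not part of the original slides), Python's `re` module illustrates exact pattern matching with a regular expression; the log line and pattern here are hypothetical examples:

```python
import re

# Hypothetical log line used only for illustration.
line = "2024-01-15 ERROR disk quota exceeded"

# The regular expression describes the exact token sequence to match:
# a date, a log level, then the rest of the message.
pattern = re.compile(r"(\d{4}-\d{2}-\d{2}) (ERROR|WARN|INFO) (.+)")

match = pattern.match(line)
if match:
    date, level, message = match.groups()
    print(level)   # ERROR
```

Unlike pattern recognition, the match either succeeds exactly or fails; there is no notion of an approximate match.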
5. Cont…
There are many applications for pattern matching in computer science. High-
level language compilers make use of pattern matching in order to parse
source files to determine if they are syntactically correct.
In programming languages and applications, pattern matching is used to
identify a matching pattern or to substitute the matched pattern with
another token sequence.
6. Matrix multiplication
Given a sequence of matrices, find the most efficient way to multiply these
matrices together.
The problem is not actually to perform the multiplications, but merely to
decide in which order to perform the multiplications.
We have many options to multiply a chain of matrices because matrix
multiplication is associative.
In other words, no matter how we parenthesize the product, the result will be
the same. For example, if we had four matrices A, B, C, and D, we would
have:
(ABC)D = (AB)(CD) = A(BCD) = ....
7. Cont….
However, the order in which we parenthesize the product affects the number
of simple arithmetic operations needed to compute the product, or the
efficiency.
For example, suppose A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 ×
60 matrix. Then,
(AB)C = (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations
A(BC) = (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations.
Clearly the first parenthesization requires fewer operations.
Given an array p[] that represents the chain of matrices such that the ith
matrix Ai is of dimension p[i-1] x p[i], we need to write a function
MatrixChainOrder() that returns the minimum number of multiplications
needed to multiply the chain.
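A bottom-up dynamic-programming sketch of MatrixChainOrder() in Python (not part of the original slides), assuming the p[] convention above where Ai has dimension p[i-1] x p[i]:

```python
def MatrixChainOrder(p):
    """Return the minimum number of scalar multiplications needed to
    multiply the chain A1..An, where Ai has dimension p[i-1] x p[i]."""
    n = len(p) - 1  # number of matrices in the chain
    # m[i][j] = minimum cost of computing the product Ai..Aj (1-indexed)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # length of the subchain
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Try every split point k between Ai..Ak and A(k+1)..Aj.
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)
            )
    return m[1][n]

# The slide's example: A is 10x30, B is 30x5, C is 5x60.
print(MatrixChainOrder([10, 30, 5, 60]))   # 4500
```

The value 4500 matches the (AB)C parenthesization computed above.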
8. Graph algorithm
A graph is an abstract notation used to represent the connection between
pairs of objects. A graph consists of −
Vertices − Interconnected objects in a graph are called vertices. Vertices are
also known as nodes.
Edges − Edges are the links that connect the vertices.
There are two types of graphs −
Directed graph − In a directed graph, edges have direction, i.e., edges go
from one vertex to another.
Undirected graph − In an undirected graph, edges have no direction.
https://www.tutorialspoint.com/parallel_algorithm/graph_algorithm.htm
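A minimal Python sketch (not from the original slides) of an undirected graph stored as adjacency lists, together with a breadth-first traversal; the vertex names are illustrative:

```python
from collections import deque

# An undirected graph: each edge is recorded in both directions.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A"],
    "D": ["B"],
}

def bfs(graph, start):
    """Visit vertices in breadth-first order from start."""
    visited = [start]
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in visited:
                visited.append(w)
                queue.append(w)
    return visited

print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D']
```

For a directed graph, each edge would appear in only one vertex's adjacency list.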
9. Algebraic problem
The following is a list of several popular design approaches:
1. Divide and Conquer Approach: It is a top-down approach. The algorithms
which follow the divide & conquer techniques involve three steps:
Divide the original problem into a set of subproblems.
Solve every subproblem individually, recursively.
Combine the solutions of the subproblems (at the top level) into a solution of
the whole original problem.
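The three steps above can be sketched with merge sort in Python (an illustrative example, not from the original slides):

```python
def merge_sort(a):
    """Divide-and-conquer sort following the three steps above."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])     # divide, then solve each half recursively
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5]))   # [1, 2, 5, 5, 9]
```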
10. Cont…
2. Greedy Technique: The greedy method is used to solve optimization
problems. An optimization problem is one in which we are given a set of input
values and an objective that is to be either maximized or minimized, subject
to some constraints or conditions.
A greedy algorithm always makes the choice (the greedy criterion) that looks
best at the moment, in order to optimize the given objective.
A greedy algorithm doesn't always guarantee the optimal solution; however,
it generally produces a solution that is very close in value to the optimal.
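A sketch of the greedy criterion using activity selection (for this particular problem, the earliest-finish-time greedy choice happens to be provably optimal); the (start, finish) times are hypothetical:

```python
def select_activities(activities):
    """Greedy activity selection: repeatedly pick the activity that
    finishes earliest (the greedy criterion) among those that do not
    overlap the activities already chosen."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Hypothetical (start, finish) times used only for illustration.
print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9)]))
# [(1, 4), (5, 7)]
```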
3. Dynamic Programming: Dynamic Programming is a bottom-up approach: we
solve all possible small subproblems and then combine their solutions to
obtain solutions for bigger problems.
11. Cont…
This is particularly helpful when the same subproblems recur exponentially
many times: each subproblem is solved only once and its solution reused.
Dynamic Programming is frequently applied to optimization problems.
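A minimal bottom-up sketch (an illustrative example, not from the original slides): Fibonacci numbers computed by solving each small subproblem exactly once, instead of the exponentially many repeated calls of the naive recursion:

```python
def fib(n):
    """Bottom-up dynamic programming: build fib(0), fib(1), ..., fib(n)
    from the two previously solved subproblems."""
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print(fib(10))   # 55
```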
4. Branch and Bound: In a Branch & Bound algorithm, a given subproblem that
cannot be bounded (pruned) has to be divided into at least two new, more
restricted subproblems. Branch and Bound algorithms are methods for global
optimization in non-convex problems. Branch and Bound algorithms can be
slow: in the worst case they require effort that grows exponentially with
problem size, but in some cases we are lucky and the method converges with
much less effort.
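A sketch of branch and bound on the 0/1 knapsack problem (an illustrative example, not from the original slides), pruning with an optimistic fractional bound; the item values and weights are hypothetical:

```python
def knapsack_bb(values, weights, capacity):
    """Branch and bound for 0/1 knapsack: branch on taking or skipping
    each item; prune (bound) a subproblem when an optimistic estimate
    cannot beat the best solution found so far."""
    # Sort items by value density so the fractional bound is tight.
    items = sorted(zip(values, weights),
                   key=lambda it: it[0] / it[1], reverse=True)
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room fractionally.
        for v, w in items[i:]:
            if w <= room:
                value += v
                room -= w
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(items) or bound(i, value, room) <= best:
            return  # bounded: this subproblem cannot improve on best
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)  # take item i
        branch(i + 1, value, room)              # skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))   # 220
```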
5. Randomized Algorithms: A randomized algorithm is defined as an
algorithm that is allowed to access a source of independent, unbiased random
bits, and is then allowed to use these random bits to influence its
computation.
12. Cont…
6. Backtracking Algorithm: A backtracking algorithm tries each possibility until
it finds the right one. It is a depth-first search of the set of possible
solutions. During the search, if an alternative doesn't work, the algorithm
backtracks to the choice point, the place which presented different
alternatives, and tries the next alternative.
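The depth-first search with backtracking can be sketched on the classic N-queens problem (an illustrative example, not from the original slides):

```python
def n_queens(n):
    """Backtracking: place one queen per row, depth-first; when a
    placement leads to a dead end, backtrack to the previous row and
    try the next column."""
    solutions = []

    def place(queens):
        row = len(queens)
        if row == n:
            solutions.append(list(queens))
            return
        for col in range(n):
            # A column is safe if no earlier queen shares it or a diagonal.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(queens)):
                queens.append(col)
                place(queens)       # explore this alternative
                queens.pop()        # backtrack and try the next column

    place([])
    return solutions

print(len(n_queens(4)))   # 2
```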
7. Randomized Algorithm: A randomized algorithm uses a random number at
least once during the computation to make a decision.
Example 1: In Quick Sort, using a random number to choose a pivot.
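A sketch of Example 1 in Python (not part of the original slides): quicksort with a randomly chosen pivot, which makes worst-case inputs unlikely regardless of the input order:

```python
import random

def quicksort(a):
    """Quicksort using a random number to choose the pivot."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)        # the randomized decision
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]
```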
13. NP-Hard and NP-Complete problems
NP Problem:
NP is the set of problems whose solutions may be hard to find but are easy to
verify; they can be solved by a non-deterministic Turing machine in polynomial
time.
NP-Hard Problem:
A problem X is NP-Hard if there is an NP-Complete problem Y such that Y is
reducible to X in polynomial time. NP-Hard problems are at least as hard as
NP-Complete problems. An NP-Hard problem need not be in the class NP.
NP-Complete Problem:
A problem X is NP-Complete if X is in NP and every problem Y in NP is
reducible to X in polynomial time. NP-Complete problems are the hardest
problems in NP. Equivalently, a problem is NP-Complete if it is a part of both
the NP and NP-Hard classes. A non-deterministic Turing machine can solve an
NP-Complete problem in polynomial time.