Shellsort is a sorting algorithm invented by Donald Shell in 1959 that was the first to break the quadratic time barrier of simpler sorting algorithms like insertion sort. It works by sorting elements with increasing proximity over multiple passes rather than just adjacent elements. The algorithm uses an increment sequence to determine the spacing between elements to compare and sort in each pass until the final pass sorts adjacent elements like an insertion sort. While faster than older quadratic algorithms, shellsort is still outperformed by more efficient algorithms like merge, heap, and quicksort for larger data sets.
- Quicksort is a simple and fast sorting algorithm that can sort arrays "in place" without using extra space.
- It works by recursively partitioning the array around a pivot value, sorting the left and right subarrays, and then combining them.
- On average, it has a runtime of O(n log n) but in the worst case of an already sorted array it can have a quadratic runtime of O(n^2) like bubble sort. However, its randomized choice of pivots means it rarely encounters worst-case inputs in practice.
The document discusses hash tables and how they can be used to implement dictionaries. Hash tables map keys to table slots using a hash function in order to store and retrieve items efficiently. Collisions may occur when multiple keys hash to the same slot. Chaining is described as a method to handle collisions by storing colliding items in linked lists attached to table slots. Analysis shows that with simple uniform hashing, dictionary operations like search, insert and delete take expected O(1) time on average.
The document discusses various sorting algorithms. It begins by defining a sorting algorithm as arranging elements of a list in a certain order, such as numerical or alphabetical order. It then discusses popular sorting algorithms like insertion sort, bubble sort, merge sort, quicksort, selection sort, and heap sort. For each algorithm, it provides examples to illustrate how the algorithm works step-by-step to sort a list of numbers. Code snippets are also included for insertion sort and bubble sort.
Binary search provides an efficient O(log n) solution for searching a sorted list. It works by repeatedly dividing the search space in half and focusing on only one subdivision, based on comparing the search key to the middle element. This recursively narrows down possible locations until the key is found or the entire list has been searched. Binary search mimics traversing a binary search tree built from the sorted list, with divide-and-conquer reducing the search space at each step.
This document summarizes a lecture on hashing techniques. It discusses using a hash function to map keys to table slots, addressing collisions through chaining or open addressing. Chaining stores colliding keys in linked lists, while open addressing resolves collisions by probing for empty slots using techniques like linear or quadratic probing. The performance of search, insertion and deletion is analyzed in terms of load factor and different hashing methods. Universal hashing and perfect hashing are also introduced to improve performance.
This is a presentation on Arrays, one of the most important topics on Data Structures and algorithms. Anyone who is new to DSA or wants to have a theoretical understanding of the same can refer to it :D
An array is a data structure that stores a fixed number of items of the same type. It allows fast access to elements using indices. Basic array operations include traversing elements, inserting/deleting elements, searching for elements, and updating elements. Arrays are zero-indexed, and elements are accessed via their index.
The document discusses queue data structures. A queue is a linear data structure where additions are made at the end/tail and removals are made from the front/head, following a First-In First-Out (FIFO) approach. Common queue operations include add, remove, check if empty, check if full. A queue can be stored using either a static array or dynamic linked nodes. The key aspects are maintaining references to the head and tail of the queue.
This document provides an overview of different data structures and sorting algorithms. It begins with an introduction to data structures and describes linear data structures like arrays, stacks, queues, and linked lists as well as non-linear data structures like trees and graphs. It then provides more detailed descriptions of stacks, queues, linked lists, and common sorting algorithms like selection sort and bubble sort.
The document discusses disjoint sets and operations on disjoint sets such as union and find. Disjoint sets are sets that do not have any common elements. The union of two disjoint sets combines all the elements of both sets. The find operation takes an element as input and returns the set that contains that element. Disjoint sets can be represented using a tree structure. Algorithms for union and find operations are presented, including weighted union and collapsing find techniques that improve the efficiency.
This document discusses radix sorting and provides details on its implementation. Radix sorting is a non-comparative sorting algorithm that uses bucket sorting to sort integers by their individual digits in multiple passes. Pseudocode and C code examples are provided to demonstrate how radix sorting works by distributing numbers into buckets based on their digits and recombining the buckets in sorted order. The time complexity of radix sorting is linear for a constant number of digits.
The document describes the bucket sorting algorithm. It works by dividing the input range into buckets and distributing elements into the buckets based on their values. Each bucket is then sorted individually, usually with insertion sort, and concatenated together in order. The time complexity is O(n) when the number of buckets k is Θ(n), as distributing elements takes O(n) time, sorting each bucket takes O(n log(n/k)) time, and concatenating takes O(k) time. Bucket sort assumes uniform distribution of input elements across the range.
This document discusses arrays, records, and pointers. It begins by defining linear and non-linear data structures. Linear data structures have elements stored in sequential memory locations or linked by pointers. Arrays and linked lists are two ways to represent linear structures. Common operations on linear structures are traversal, search, insertion, deletion, sorting, and merging. Arrays are preferred over linked lists when the data needs to be traversed, searched, or sorted frequently. The document then discusses linear arrays in more detail, including representation in memory, traversing arrays, inserting and deleting elements, and multidimensional arrays.
This document provides summaries of different data structures: arrays, stacks, queues, and linked lists. Arrays allow storing multiple values in a single variable using indexes. Stacks follow LIFO order using push and pop operations. Queues follow FIFO order using enqueue and dequeue operations. Linked lists contain nodes with a data and pointer to the next node, allowing dynamic size lists.
Hash table in data structure and algorithm (Aamir Sohail)
The document discusses hash tables and their use for efficient data retrieval. It begins by comparing the time complexity of different data structures for searching, noting that hash tables provide constant time O(1) search. It then provides examples of using hash tables to store student records and complaints by number. Key aspects covered include hash functions mapping data to table indices, minimizing collisions, open and closed addressing for collisions, and linked lists or probing as solutions. Types of hash functions and their parameters are defined. The document aims to explain the core concepts of hashing, hash functions, hash tables and approaches for handling collisions.
This document discusses linear search and binary search algorithms. Linear search sequentially checks each element of an unsorted array to find a target value, resulting in O(n) time complexity. Binary search works on a sorted array, comparing the target to the middle element and recursively searching half the array, requiring O(log n) time. The document provides pseudocode for both algorithms and compares their performance on different sized inputs. It also discusses properties of greedy algorithms and provides an example of when a greedy solution fails to find the optimal result.
The document discusses different data structures for representing queues and linked lists, including their implementations and operations. Queues follow FIFO ordering and can be implemented using arrays or linked lists. Linked lists allow efficient insertion/removal at both ends and can be used to implement double-ended queues (deques). Deques support efficient insertion/removal from both ends and can implement stacks and queues. Sequences generalize vectors and linked lists, introducing the concept of positions to provide implementation independence.
1. Linear search sequentially checks each element of an array to find a target item. It adds the item to the end of the array and uses a counter to check each element until it finds a match.
2. Binary search works on a sorted array. It checks the middle element first, then searches either the left or right half depending on if the target is smaller or larger than the middle element.
3. The example demonstrates linear search finding the letter 'G' in an array and binary search locating the number 44 through a series of steps that narrow the search space.
The document discusses data structures and lists in Python. It begins by defining data structures as a way to organize and store data for efficient access and modification. It then covers the different types of data structures, including primitive structures like integers and strings, and non-primitive structures like lists, tuples, and dictionaries. A large portion of the document focuses on lists in Python, describing how to perform common list manipulations like adding and removing elements using various methods. These methods include append(), insert(), remove(), pop(), and clear(). The document also discusses accessing list elements and other list operations such as sorting, counting, and reversing.
The document discusses various sorting algorithms that use the divide-and-conquer approach, including quicksort, mergesort, and heapsort. It provides examples of how each algorithm works by recursively dividing problems into subproblems until a base case is reached. Code implementations and pseudocode are presented for key steps like partitioning arrays in quicksort, merging sorted subarrays in mergesort, and adding and removing elements from a heap data structure in heapsort. The algorithms are compared in terms of their time and space complexity and best uses.
April 7, 2016 marked the 70th anniversary of the birth of Alexander Timofeevich Nadeev, founder of the Department of Mathematics and Systems Analysis at the Nizhny Novgorod Institute of Management.
1. RUSSIAN PRESIDENTIAL ACADEMY OF NATIONAL ECONOMY AND PUBLIC ADMINISTRATION
Nizhny Novgorod Institute of Management
Department of Informatics and Information Technologies
Introduction to Algorithms and Data Structures
Ivina Natalya Lvovna
Associate Professor, Department of Informatics and IT
2. Topic 4. Search algorithms
Searching for a given element in an unsorted array.
Searching for a given element in a sorted array.
Dichotomous search.
Searching for a given subsequence in a text (array).
3. Introduction
Assume that a set of N elements is given as an array of integers (int a[N]). The task is to find an element a[i] equal to a given "search argument" x. Search algorithms:
•Linear search
•Linear search with a sentinel
•Binary search (bisection search, dichotomous search)
•Interpolation search
4. Searching for a given element in an unsorted array.
Linear search
If no additional information about the data being sought is available, the obvious approach is a simple sequential scan of the array. This method is called linear search. The search terminates when:
1) the element is found, i.e. a[i] = x;
2) the entire array has been scanned and no match has been found.
8. Searching for a given element in an unsorted array.
Linear search.
Complexity analysis.
Array length: N elements
Number of comparisons:
best case: 1
worst case: N
average case: N/2
Time complexity: O(N).
If the data are not sorted, sequential search is the only possible search method!
9. Searching for a given element in an unsorted array.
Linear search.
Advantages:
• Does not require the set of values to be sorted.
• Does not require any additional analysis of the function.
• Does not require additional memory.
It can therefore operate in streaming mode, consuming data directly from any source.
Disadvantages:
• Inefficient compared with other search algorithms.
It is therefore used when the set contains a small number of elements.
10. Searching for a given element in an unsorted array.
Linear search with a sentinel.
Place an extra element with the value x at the end of the array. We call this auxiliary element a "sentinel": it guards against running past the end of the array. The array size then grows by one, and the array is declared as int a[N+1].
13. Searching for a given element in a sorted array.
Dichotomous search.
Dichotomous search is a fast search method in which an ordered data set is split into two parts and the comparison is always performed on the middle element of the list: after the comparison, one half of the list is discarded and the operation is repeated on the remaining half, and so on.
Time complexity: O(log₂ n).
14. Searching for a given element in a sorted array.
Binary search (bisection search)
The basic idea is to pick some element a[m] and compare it with the search argument x.
If a[m] = x, the search terminates;
if a[m] > x, we continue searching for x in the part of the array to the left of a[m];
if a[m] < x, we continue searching for x in the part of the array to the right of a[m].
15. Searching for a given element in a sorted array. Binary search (bisection search). Algorithm.
1. Define L and R as the left and right boundaries of the search interval, respectively.
2. Choose an arbitrary m between L and R, i.e. L ≤ m ≤ R.
3. Compare x with the array element a[m]; if they are equal, the algorithm terminates; otherwise go to step 4.
4. If x > a[m], move the left boundary of the interval: L = m+1; otherwise move the right boundary: R = m–1.
5. If the interval is not empty, i.e. L ≤ R, go to step 2.
16. Searching for a given element in a sorted array. Binary search (bisection search). Efficiency.
The choice of m is arbitrary in the sense that the correctness of the algorithm does not depend on it. The choice of m does, however, affect the algorithm's efficiency. The optimal choice is the middle element, since then half of the interval is eliminated in either case.
Number of comparisons:
best case: 1
worst case: log n.
17. Searching for a given element in a sorted array. Binary search. Example 1.
Array (indices 0–9): 01 05 09 11 16 17 20 24 34 48. Search key: 34.
Step 1: L = 0, R = 9, m = (0 + 9) / 2 = 4; a[4] = 16 < 34, so L = 5.
Step 2: L = 5, R = 9, m = (5 + 9) / 2 = 7; a[7] = 24 < 34, so L = 8.
Step 3: L = 8, R = 9, m = (8 + 9) / 2 = 8; a[8] = 34 = x.
The search succeeds: the key is found at the ninth position (index 8).
18. Searching for a given element in a sorted array. Binary search. Example 2.
Array (indices 0–9): 01 05 09 11 16 17 20 24 34 48. Search key: 02.
Step 1: L = 0, R = 9, m = (0 + 9) / 2 = 4; a[4] = 16 > 02, so R = 3.
Step 2: L = 0, R = 3, m = (0 + 3) / 2 = 1; a[1] = 05 > 02, so R = 0.
Step 3: L = 0, R = 0, m = (0 + 0) / 2 = 0; a[0] = 01 < 02, so L = 1.
Here L has become 1 while R remains 0, i.e. L > R, so the key is not present in the array.
21. Searching for a given element in a sorted array. Binary search. Example for flowchart 2.
Array (indices 0–9): 01 05 09 11 16 17 20 24 34 48. Search key: 34.
Step 1: L = 0, R = 9, m = (0 + 9) / 2 = 4.
Step 2: L = 5, R = 9, m = (5 + 9) / 2 = 7.
Step 3: L = 8, R = 9, m = (8 + 9) / 2 = 8.
Step 4: L = 8, R = 8, m = (8 + 8) / 2 = 8; the key is found at index 8.
22. Searching for a given element in a sorted array.
Interpolation search.
It differs from binary search only in the choice of m. If the array elements grow linearly (a[m] ≈ km + b), the index m is determined from the corresponding interpolation relation.
23. Searching for a given element in a sorted array.
Interpolation search.
In the general case, if the elements grow according to a law of the form a[m] ≈ f[m], the index m is determined from the corresponding relation.
In all other respects interpolation search works just like binary search, i.e. the algorithm and the flowcharts remain unchanged everywhere except in the choice of m.
24. Searching for a given subsequence in a text (array). Substring search.
Let a string S of N elements and a string P of M elements be given, declared as:
string S[N], P[M];
The task of finding the substring P in the string S is to find the leftmost occurrence of P in S, i.e. to find the index i starting from which
S[i] = P[0], S[i + 1] = P[1], …, S[i + M – 1] = P[M – 1].
25. Searching for a given subsequence in a text (array). Substring search. Brute-force substring search. Algorithm.
1. Set i to the start of the string S, i.e. i = 0.
2. Check whether i + M has run past the bound N of the string S. If so, the algorithm terminates (no occurrence).
3. Starting from the i-th character of S, compare the strings S and P character by character, i.e. S[i] with P[0], S[i+1] with P[1], …, S[i + M – 1] with P[M – 1].
4. If at least one pair of characters does not match, increment i and repeat step 2; otherwise the algorithm terminates (an occurrence has been found).
26. Example: the pattern "траве" is slid along the text "на дворе трава, на траве дрова" one position at a time until a match is found.
27. Substring search.
Flowchart 1
Comment: an additional variable flag is used, which explicitly changes its value from 0 to 1 when an occurrence of the pattern P in the text S is detected.
28. Substring search.
Flowchart 2
Comment: this version uses the fact that when j = M we have reached the end of the pattern P, and have thereby detected its occurrence in the string S.
29. Searching for a given subsequence in a text (array). Substring search.
The algorithm is quite efficient if, when the pattern P is compared with a fragment of the text S, a mismatch shows up quickly (after a few comparisons in the inner loop). This happens fairly often, but in the worst case (when the text contains many fragments that agree with the pattern in many characters) the algorithm's performance degrades significantly.
Example:
S: учить, учиться, учитель
P: учитель