This document provides an introduction to data structures using C programming language. It defines key terminology related to data organization such as data, data types, variables, records, programs and entities. It also describes common data structures like arrays, linked lists, stacks, queues, trees and graphs. Operations on data structures like traversing, searching, inserting and deleting are discussed. Asymptotic notations for analyzing algorithms like Big-O, Big-Omega and Big-Theta are introduced. The document provides a foundation for understanding different ways to organize and store data in computer programs.
This document provides information on various searching and sorting algorithms, including linear search, binary search, insertion sort, selection sort, bubble sort, quicksort, merge sort, and heap sort. It explains the basic concepts, algorithms, and complexity analyses for each method. Key points covered include how each algorithm works through examples, time complexities ranging from O(n) to O(n log n), and comparisons of their relative efficiencies.
This document discusses different data structures used in C programming including stacks, queues, deques, and priority queues. It provides definitions and explanations of each data structure as well as their common operations. Stacks follow LIFO order while queues follow FIFO order. Deques can add or remove elements from both ends, combining the behavior of a queue and a stack. Priority queues insert and remove elements based on associated priorities. Common operations for each include push, pop, enqueue, dequeue, insert_with_priority, and pull_highest_priority_element. The document is intended to teach these fundamental data structures.
This slide deck explains the merge sort algorithm; reading it attentively should make merge sort easy to understand.
Quicksort is a divide-and-conquer algorithm that works by partitioning an array around a pivot value and recursively sorting the sub-partitions. It first chooses a pivot element and partitions the array by placing all elements less than the pivot before it and all elements greater than it after it. It then recursively quicksorts the two partitions. This continues until the individual partitions contain only single elements, at which point they are sorted. Quicksort has average-case performance of O(n log n) time, making it very efficient for large data sets.
It is a presentation on some Searching and Sorting Techniques for Computer Science.
It consists of the following techniques:
Sequential Search
Binary Search
Selection Sort
Bubble Sort
Insertion Sort
The document discusses different sorting algorithms:
- Insertion sort is good for small datasets but has poor performance for large datasets.
- Merge sort has predictable, stable performance but uses more memory than quicksort.
- Quicksort uses little extra memory and has average case performance comparable to merge sort, but can have worst case quadratic performance in some situations.
- Benchmarking shows built-in Ruby .sort outperforms implementations of merge sort, quicksort, and insertion sort in Ruby due to optimizations in the C implementation. The document then provides pseudocode to implement quicksort in Ruby.
The document discusses several sorting algorithms: selection sort, bubble sort, quicksort, and merge sort. Selection sort needs only a linear number of swaps but a quadratic number of comparisons. Bubble sort is quadratic in both swaps and comparisons, making it the least efficient. Quicksort and merge sort are the fastest of the four, both running in O(n log n) time on average. Quicksort risks quadratic behavior in the worst case if pivots are chosen poorly, while merge sort requires more data copying between temporary and full lists.
The document discusses various sorting algorithms and their time complexities, including:
1) Quicksort, which has an average case time complexity of O(n log n) but a worst case of O(n^2). It works by recursively partitioning an array around a pivot element.
2) Heapsort, which also has a time complexity of O(n log n). It uses a binary heap to extract elements in sorted order.
3) Counting sort and radix sort, which can sort in linear time O(n) when the input has certain properties like a limited range of values or being represented by a small number of digits.
The document describes the merge sort algorithm. It works by dividing an input array into two halves, recursively sorting the halves, and then merging the sorted halves back together. The algorithm has a runtime of Θ(n log n) in all cases. Pseudocode and implementations in C++, Java, and Python are provided to illustrate how merge sort divides, sorts, and merges the array halves.
The document discusses sorting algorithms and randomized quicksort. It explains that quicksort is an efficient sorting algorithm that was developed by Tony Hoare in 1960. The quicksort algorithm works by picking a pivot element and reordering the array so that all smaller elements come before the pivot and larger elements come after. It then recursively applies this process to the subarrays. Randomized quicksort improves upon quicksort by choosing the pivot element randomly, making the expected performance of the algorithm good for any input.
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion sort provides several advantages.
Sorting algorithms in C++
An introduction to sorting algorithm, with details on bubble sort and merge sort algorithms
Computer science principles course
There are two broad categories of sorting methods based on merging: internal merge sort and external merge sort. Internal merge sort handles small lists that fit into primary memory, including simple merge sort and two-way merge sort. External merge sort is for very large lists that exceed primary memory, including balanced two-way merge sort and multi-way merge sort. The simple merge sort uses a divide-and-conquer approach to recursively split lists in half, sort each sublist, and then merge the sorted sublists.
Here are the answers:
1. b. Merge sort is generally more efficient than bubble sort.
2. c. Both quick sort and merge sort use a divide and conquer strategy.
3. b. Pivot element is used in quick sort.
4. b. Quick sort is generally considered the fastest sorting algorithm in practice.
5. c. The quick sort is faster than merge sort.
This is a seminar presentation on "SORTING" for the Semester 2 exam at St. Xavier's College. The PowerPoint presentation deals with the need for sorting in our lives, types of sorting techniques, code for implementing them, the time and space complexity of different sorting algorithms, the applications of sorting, its use in industry, and its future scope. The slide show contains .gif files which can't be seen here. For more details or any queries, send a mail to agmajumder@gmail.com.
The document discusses the quicksort algorithm. It begins by stating the learning goals which are to explain how quicksort works, compare it to other sorting algorithms, and discuss its advantages and disadvantages. It then provides an introduction and overview of quicksort, describing how it uses a divide and conquer approach. The document goes on to explain the details of how quicksort partitions arrays and provides examples. It analyzes the best, average, and worst case complexities of quicksort and discusses its strengths and limitations.
The document presents information on insertion sort, including:
- Insertion sort works by partitioning an array into sorted and unsorted portions, iteratively finding the correct insertion point for elements in the unsorted portion and shifting other elements over to make space.
- The insertion sort algorithm uses a nested loop structure to iterate through the array, comparing elements and shifting them if needed to insert the current element in the proper sorted position.
- The time complexity of insertion sort is O(n^2) in the worst case when the array is reverse sorted, requiring up to n(n-1)/2 comparisons and shifts, but it is O(n) in the best case of a presorted array. On average, it also requires O(n^2) time.
The document discusses various searching and sorting algorithms. It describes linear search, binary search, and interpolation search for searching unsorted and sorted lists. It also explains different sorting algorithms like bubble sort, selection sort, insertion sort, quicksort, shellsort, heap sort, and merge sort. Linear search searches sequentially while binary search uses divide and conquer. Sorting algorithms like bubble sort, selection sort, and insertion sort are in-place and have quadratic time complexity in the worst case. Quicksort, mergesort, and heapsort generally have better performance.
This document provides an overview and comparison of insertion sort and shellsort sorting algorithms. It describes how insertion sort works by repeatedly inserting elements into a sorted left portion of the array. Shellsort improves on insertion sort by making passes with larger increments to shift values into approximate positions before final sorting. The document discusses the time complexities of both algorithms and provides examples to illustrate how they work.
The document discusses various sorting algorithms including selection sort, insertion sort, merge sort, quick sort, heap sort, and external sort. It provides descriptions of each algorithm, examples of how they work, and discusses implementation in languages like C++. Key steps and properties of each algorithm are outlined. Implementation details like pseudocode and functions are also described.
Shell sort is a generalization of insertion sort that improves efficiency by allowing exchanges of elements far apart. It works by sorting arrays with increasingly smaller increments or gaps between elements, starting with the largest possible gap and reducing it until a gap of 1 is reached, at which point the list will be fully sorted. The algorithm avoids large shifts compared to insertion sort by first sorting sublists with far-apart elements to put items in nearly sorted order before switching to adjacent elements.
The document discusses several sorting algorithms and their time complexities:
- Bubble sort, insertion sort, and selection sort have O(n^2) time complexity.
- Quicksort uses a divide-and-conquer approach and has O(n log n) time complexity on average but can be O(n^2) in the worst case.
- Heapsort uses a heap data structure and has O(n log n) time complexity.
Stacks are linear data structures that follow the LIFO (last in, first out) principle. Elements can only be inserted or removed from one end, called the top. Common stack operations include push to add an element and pop to remove an element. Stacks have many applications, such as converting infix notation to postfix notation and evaluating postfix expressions.
The document summarizes sorting and searching algorithms. It describes linear and binary search algorithms and analyzes their performance. It also describes selection sort, bubble sort, and heapsort algorithms. Selection sort has O(n^2) performance, and bubble sort performs comparably to selection sort. Heapsort improves on both with O(n log n) performance by using a heap data structure implemented as an array.
This document provides a 90-minute discussion on algorithms including quicksort, order statistics, searching, and substring searching. It begins with an overview of the topics and then provides details on quicksort, including the divide and conquer approach and partitioning elements around a pivot. It also describes algorithms for order statistics to find the kth smallest element, binary search, and a basic substring searching approach. Special cases and better solutions like Boyer-Moore are also mentioned.
This document discusses various sorting algorithms including merge sort. It begins with an introduction to sorting and searching. It then provides pseudocode for the merge sort algorithm which works by dividing the array into halves, recursively sorting the halves, and then merging the sorted halves back together. An example is provided to illustrate the merge sort process. Key steps include dividing, conquering by recursively sorting subarrays, and combining through merging. The overall time complexity of merge sort is O(n log n).
Data structures is one of the important subjects of computer science engineering and plays an important role in competitive programming. This PPT is an introduction to data structures in easy language.
This document discusses data structures and algorithms. It defines data structures as organized collections of data elements that allow for efficient use of data in a computer. It then covers the need for data structures, their advantages, and classifications including linear structures like arrays, linked lists, stacks and queues. The document also discusses operations on data structures like traversing, insertion, deletion, searching and sorting. It defines algorithms and provides examples of common categories like sorting, searching, deletion and insertion. Finally, it discusses analyzing algorithms based on time and space complexity.
This document provides an overview of data structures and algorithms. It discusses topics like arrays, stacks, queues, sparse matrices, and analysis of algorithms. Key points include:
- Arrays allow storing elements in contiguous memory locations and accessing via indexes. Representations include one-dimensional, two-dimensional, and sparse arrays.
- Stacks follow LIFO while queues follow FIFO using operations like push, pop for stacks and enqueue, dequeue for queues.
- Sparse matrices store only non-zero elements to save space using representations like triplet format and linked lists.
- Algorithm analysis includes asymptotic analysis of time and space complexity using notations like Big O. Performance of common operations on data structures is also analyzed.
A data structure is a way of organizing data in a computer's memory so that it can be used efficiently by algorithms. The choice of data structure depends on the abstract data type and the operations that will be performed on the data. Some key characteristics of data structures include whether they are linear, static, homogeneous, or dynamic. Common operations on data structures include traversing, searching, inserting, deleting, sorting, and merging. The efficiency of sorting algorithms is analyzed based on best case, worst case, and average case time complexities, which typically range from O(n log n) to O(n2).
An algorithm is a finite set of instructions to accomplish a predefined task. Performance of an algorithm is measured by its time and space complexity, with common metrics being big O, big Omega, and big Theta notation. Common data structures include arrays, linked lists, stacks, queues, trees and graphs. Key concepts are asymptotic analysis of algorithms, recursion, and analyzing complexity classes like constant, linear, quadratic and logarithmic time.
C++ is an object-oriented programming language and an extension of C.
Bjarne Stroustrup, a master of Simula67 and C, combined the features of both and developed a powerful language that supports object-oriented programming with the features of C.
Stroustrup called the new language "C with classes"; in 1983 it was renamed C++.
C++ is a superset of C.
ANSI stands for the American National Standards Institute, founded in 1918. Its goal was to suggest, reform, recommend, and publish standards for data processing in the USA.
A recognized council produced the international standard for C++, so C++ is also referred to as an ISO standard. The first draft of the standard was made on 25 January 1994.
The ANSI standard attempts to ensure that C++ is portable.
Unlike procedural languages, OOP does not allow data to flow freely around the system; it ties data to the functions that operate on it and protects it from accidental change.
OOP lets a problem be analyzed into a number of items called objects, each assembling data and functions together.
OOP gives more importance to data than to functions.
Programs are divided into classes and their member functions.
OOP follows a bottom-up approach.
New data items and functions can be added whenever needed.
Data is private and protected from access by external functions.
Objects communicate with each other through functions.
C++ is an object-oriented programming language.
Everything in C++ is associated with classes and objects, along with their attributes and methods. For example, in real life, a car is an object. The car has attributes, such as weight and color, and methods, such as drive and brake.
Attributes and methods are basically variables and functions that belong to the class; these are often referred to as "class members".
A class is a user-defined data type that we can use in our program, and it works as an object constructor, or a "blueprint" for creating objects.
A class is a grouping of objects that have identical properties, common behavior, and shared relationships.
A class binds data and its related functions together.
A class identifies the nature of the data and the methods that act on it, forming an abstract data type.
A group of data and the code that operates on it can be built into a user-defined data type by using a class.
Objects are simply variables of type class.
Once a class is created, any number of objects associated with that class can be created.
The syntax used to create an object is similar to that used to create an integer variable.
A class defines the characteristics and actions of its objects.
An operation required for an object or entity, when coded in a class, is called a method.
Operations required for an object are defined in its class.
All objects carry out certain actions or operations.
Each action on an object becomes a function in the class that defines it, referred to as a method.
3. BASIC TERMINOLOGY: ELEMENTARY
DATA ORGANIZATION
• DATA AND DATA ITEM
• DATA ARE SIMPLY A COLLECTION OF FACTS AND FIGURES: VALUES OR SETS
OF VALUES. A DATA ITEM REFERS TO A SINGLE UNIT OF VALUES.
• DATA ITEMS THAT ARE DIVIDED INTO SUB ITEMS ARE GROUP ITEMS; THOSE THAT
ARE NOT ARE CALLED ELEMENTARY ITEMS.
• A STUDENT’S NAME MAY BE DIVIDED INTO THREE SUB ITEMS – [FIRST NAME,
MIDDLE NAME AND LAST NAME] BUT THE ID OF A STUDENT WOULD NORMALLY BE
TREATED AS A SINGLE ITEM.
• DATA TYPE
• DATA TYPE IS A CLASSIFICATION IDENTIFYING ONE OF VARIOUS TYPES OF DATA,
SUCH AS FLOATING-POINT, INTEGER, OR BOOLEAN, THAT DETERMINES THE
POSSIBLE VALUES FOR THAT TYPE; THE OPERATIONS THAT CAN BE DONE ON
VALUES OF THAT TYPE; AND THE WAY VALUES OF THAT TYPE CAN BE STORED. IT IS
OF TWO TYPES: PRIMITIVE AND NON-PRIMITIVE DATA TYPE.
4. BASIC TERMINOLOGY: ELEMENTARY
DATA ORGANIZATION
• VARIABLE
IT IS A SYMBOLIC NAME GIVEN TO SOME KNOWN OR UNKNOWN QUANTITY OR INFORMATION,
FOR THE PURPOSE OF ALLOWING THE NAME TO BE USED INDEPENDENTLY OF THE
INFORMATION IT REPRESENTS.
• RECORD
COLLECTION OF RELATED DATA ITEMS IS KNOWN AS RECORD. THE ELEMENTS OF RECORDS
ARE USUALLY CALLED FIELDS OR MEMBERS. RECORDS ARE DISTINGUISHED FROM ARRAYS BY
THE FACT THAT THEIR NUMBER OF FIELDS IS TYPICALLY FIXED, EACH FIELD HAS A NAME, AND
THAT EACH FIELD MAY HAVE A DIFFERENT TYPE.
• PROGRAM
A SEQUENCE OF INSTRUCTIONS THAT A COMPUTER CAN INTERPRET AND EXECUTE IS TERMED
AS PROGRAM.
• ENTITY
AN ENTITY IS SOMETHING THAT HAS CERTAIN ATTRIBUTES OR PROPERTIES WHICH MAY BE
ASSIGNED VALUES.
5. WHAT IS ALGORITHM ?
• ALGORITHM A WELL-DEFINED COMPUTATIONAL PROCEDURE THAT TAKES SOME VALUE, OR A SET OF VALUES, AS INPUT
AND PRODUCES SOME VALUE, OR A SET OF VALUES, AS OUTPUT.
• IT CAN ALSO BE DEFINED AS SEQUENCE OF COMPUTATIONAL STEPS THAT TRANSFORM THE INPUT INTO THE OUTPUT.
• AN ALGORITHM CAN BE EXPRESSED IN THREE WAYS:-
(I) IN A NATURAL LANGUAGE SUCH AS ENGLISH, OR IN AN ENGLISH-LIKE NOTATION CALLED PSEUDO CODE,
(II) IN A PROGRAMMING LANGUAGE OR
(III) IN THE FORM OF A FLOWCHART.
EFFICIENCY OF AN ALGORITHM
ALGORITHMIC EFFICIENCY REFERS TO THE PROPERTIES OF AN ALGORITHM WHICH RELATE TO THE AMOUNT OF RESOURCES
USED BY THE ALGORITHM. AN ALGORITHM MUST BE ANALYZED TO DETERMINE ITS RESOURCE USAGE.
• WORST CASE EFFICIENCY: IT IS THE MAXIMUM NUMBER OF STEPS THAT AN ALGORITHM CAN TAKE FOR ANY COLLECTION
OF DATA VALUES.
• BEST CASE EFFICIENCY: IT IS THE MINIMUM NUMBER OF STEPS THAT AN ALGORITHM CAN TAKE FOR ANY COLLECTION OF DATA
VALUES.
• AVERAGE CASE EFFICIENCY: THE EFFICIENCY AVERAGED OVER ALL POSSIBLE INPUTS; ONE MUST
ASSUME A DISTRIBUTION OF THE INPUTS.
6. TIME AND SPACE COMPLEXITY
• THE COMPLEXITY OF AN ALGORITHM IS A FUNCTION OF THE SIZE OF THE INPUT OF A GIVEN
PROBLEM INSTANCE; IT DETERMINES HOW MUCH RUNNING TIME/MEMORY
SPACE IS NEEDED BY THE ALGORITHM IN ORDER TO RUN TO COMPLETION.
• TIME COMPLEXITY: TIME COMPLEXITY OF AN ALGORITHM IS THE AMOUNT OF
TIME IT NEEDS IN ORDER TO RUN TO COMPLETION.
• SPACE COMPLEXITY: SPACE COMPLEXITY OF AN ALGORITHM IS THE AMOUNT
OF SPACE IT NEEDS IN ORDER TO RUN TO COMPLETION. THERE ARE TWO
POINTS WHICH WE SHOULD CONSIDER ABOUT COMPUTER PROGRAMMING:-
• (I) AN APPROPRIATE DATA STRUCTURE AND
• (II) AN APPROPRIATE ALGORITHM.
7. ASYMPTOTIC NOTATIONS
• IT MEANS A LINE THAT CONTINUALLY APPROACHES A GIVEN CURVE BUT DOES NOT MEET IT
AT ANY FINITE DISTANCE. EXAMPLE:
• THE CURVE Y = X + 1/X IS ASYMPTOTIC TO THE LINE Y = X.
• ASYMPTOTIC NOTATION MAY ALSO BE DEFINED AS A WAY TO DESCRIBE THE BEHAVIOR OF
FUNCTIONS IN THE LIMIT, AS THE INPUT GROWS WITHOUT BOUND.
• LET F(X) AND G(X) BE FUNCTIONS FROM THE SET OF REAL NUMBERS TO THE SET OF REAL
NUMBERS.
• BIG-OH NOTATION (O)
• IT PROVIDES A POSSIBLY ASYMPTOTICALLY TIGHT UPPER BOUND FOR F(N); IT DOES NOT
GIVE THE BEST CASE COMPLEXITY BUT CAN GIVE THE WORST CASE COMPLEXITY.
• F(N) IS BIG-O OF G(N), WRITTEN AS F(N) = O(G(N)), IF THERE ARE POSITIVE CONSTANTS C
AND N0 SUCH THAT 0 ≤ F(N) ≤ C G(N) FOR ALL N ≥ N0.
8. ASYMPTOTIC NOTATIONS
• BIG-OMEGA NOTATION (Ω)
IT PROVIDES A POSSIBLY ASYMPTOTICALLY TIGHT LOWER BOUND FOR F(N); IT DOES
NOT GIVE THE WORST CASE COMPLEXITY BUT CAN GIVE THE BEST CASE COMPLEXITY. F(N) IS
SAID TO BE BIG-OMEGA OF G(N), WRITTEN AS F(N) = Ω(G(N)), IFF THERE ARE POSITIVE
CONSTANTS C AND N0 SUCH THAT 0 ≤ C G(N) ≤ F(N) FOR ALL N ≥ N0.
9. ASYMPTOTIC NOTATIONS
IF F(N) = Ω(G(N)), WE SAY THAT G(N) IS A
LOWER BOUND ON F(N).
• BIG-THETA NOTATION (Θ)
F(N) IS BIG-THETA OF G(N), WRITTEN AS F(N) = Θ(G(N)), IFF THERE ARE POSITIVE CONSTANTS
C1, C2 AND N0 SUCH THAT 0 ≤ C1 G(N) ≤ F(N) ≤ C2 G(N) FOR ALL N ≥ N0.
F(N) = Θ(G(N)) IF AND ONLY IF F(N) = O(G(N)) AND F(N) = Ω(G(N)).
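The three definitions can be checked against a concrete function. A short worked example (my own, not from the slides):

```latex
% Worked example: f(n) = 3n + 2, g(n) = n.
% For all n \ge 2:
\[
  3n \;\le\; 3n + 2 \;\le\; 4n ,
\]
% so the Theta definition holds with c_1 = 3, c_2 = 4, n_0 = 2:
\[
  3n + 2 = \Theta(n),
\]
% and therefore also 3n + 2 = O(n)      (upper bound, c = 4, n_0 = 2)
% and            3n + 2 = \Omega(n)     (lower bound, c = 3, n_0 = 1).
```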
10. DATA STRUCTURE
• A DATA STRUCTURE IS A PARTICULAR WAY OF STORING AND ORGANIZING DATA IN A
COMPUTER’S MEMORY SO THAT IT CAN BE USED EFFICIENTLY. DATA MAY BE ORGANIZED IN
MANY DIFFERENT WAYS; THE LOGICAL OR MATHEMATICAL MODEL OF A PARTICULAR
ORGANIZATION OF DATA IS CALLED A DATA STRUCTURE.
• THE CHOICE OF A PARTICULAR DATA MODEL DEPENDS ON TWO CONSIDERATIONS. FIRST,
IT MUST BE RICH ENOUGH IN STRUCTURE TO MIRROR THE ACTUAL RELATIONSHIPS OF THE
DATA IN THE REAL WORLD. SECOND, THE STRUCTURE SHOULD BE SIMPLE ENOUGH THAT ONE
CAN EFFECTIVELY PROCESS THE DATA WHEN NECESSARY.
NEED OF DATA STRUCTURE
• IT GIVES DIFFERENT LEVELS OF DATA ORGANIZATION.
• IT TELLS HOW DATA CAN BE STORED AND ACCESSED IN ITS ELEMENTARY LEVEL.
• IT PROVIDES OPERATIONS ON GROUPS OF DATA, SUCH AS ADDING AN ITEM OR LOOKING UP THE
HIGHEST-PRIORITY ITEM.
• IT PROVIDES A MEANS TO MANAGE HUGE AMOUNTS OF DATA EFFICIENTLY.
11. TYPE OF DATA STRUCTURE
• STATIC DATA STRUCTURE
A DATA STRUCTURE WHOSE ORGANIZATIONAL CHARACTERISTICS ARE INVARIANT
THROUGHOUT ITS LIFETIME. SUCH STRUCTURES ARE WELL SUPPORTED BY HIGH-LEVEL
LANGUAGES AND FAMILIAR EXAMPLES ARE ARRAYS AND RECORDS. THE PRIME FEATURES
OF STATIC STRUCTURES ARE
(a) NONE OF THE STRUCTURAL INFORMATION NEED BE STORED EXPLICITLY WITHIN THE
ELEMENTS – IT IS OFTEN HELD IN A DISTINCT LOGICAL/PHYSICAL HEADER;
(b) THE ELEMENTS OF AN ALLOCATED STRUCTURE ARE PHYSICALLY CONTIGUOUS, HELD IN
A SINGLE SEGMENT OF MEMORY.
• DYNAMIC DATA STRUCTURE
A DATA STRUCTURE WHOSE ORGANIZATIONAL CHARACTERISTICS MAY CHANGE DURING ITS
LIFETIME. THE ADAPTABILITY AFFORDED BY SUCH STRUCTURES, e.g. LINKED LISTS, IS
OFTEN AT THE EXPENSE OF DECREASED EFFICIENCY IN ACCESSING ELEMENTS OF THE
STRUCTURE. TWO MAIN FEATURES DISTINGUISH DYNAMIC STRUCTURES FROM STATIC DATA STRUCTURES.
12. TYPE OF DATA STRUCTURE
• LINEAR DATA STRUCTURE
A DATA STRUCTURE IS SAID TO BE LINEAR IF ITS ELEMENTS FORM A SEQUENCE.
THERE ARE BASICALLY TWO WAYS OF REPRESENTING SUCH A LINEAR STRUCTURE IN
MEMORY.
A) ONE WAY IS TO HAVE THE LINEAR RELATIONSHIPS BETWEEN THE ELEMENTS
REPRESENTED BY MEANS OF SEQUENTIAL MEMORY LOCATION. THESE LINEAR
STRUCTURES ARE CALLED ARRAYS.
B) THE OTHER WAY IS TO HAVE THE LINEAR RELATIONSHIP BETWEEN THE ELEMENTS
REPRESENTED BY MEANS OF POINTERS OR LINKS. THESE LINEAR STRUCTURES ARE
CALLED LINKED LISTS. E.G. ARRAYS, QUEUES, STACKS AND LINKED LISTS.
• NON-LINEAR DATA STRUCTURE
THIS STRUCTURE IS MAINLY USED TO REPRESENT DATA CONTAINING A HIERARCHICAL
RELATIONSHIP BETWEEN ELEMENTS, E.G. TREES AND GRAPHS.
13. A BRIEF DESCRIPTION OF DATA
STRUCTURES
• ARRAY
THE SIMPLEST TYPE OF DATA STRUCTURE IS A LINEAR (OR ONE DIMENSIONAL) ARRAY. A LIST OF A FINITE NUMBER N
OF SIMILAR DATA REFERENCED RESPECTIVELY BY A SET OF N CONSECUTIVE NUMBERS, USUALLY
1, 2, 3, . . ., N. IF WE CHOOSE THE NAME A FOR THE ARRAY, THEN THE ELEMENTS OF A ARE DENOTED BY
SUBSCRIPT NOTATION A1, A2, A3, . . ., AN OR BY BRACKET NOTATION
A[1], A[2], A[3], . . ., A[N]
• LINKED LIST
A LINKED LIST OR ONE WAY LIST IS A LINEAR COLLECTION OF DATA ELEMENTS, CALLED NODES, WHERE THE LINEAR
ORDER IS GIVEN BY MEANS OF POINTERS. EACH NODE IS DIVIDED INTO TWO PARTS:
•THE FIRST PART CONTAINS THE INFORMATION OF THE ELEMENT/NODE
• THE SECOND PART CONTAINS THE ADDRESS OF THE NEXT NODE (LINK / NEXT POINTER FIELD) IN THE LIST. THERE
IS A SPECIAL POINTER, START/LIST, WHICH CONTAINS THE ADDRESS OF THE FIRST NODE IN THE LIST.
14. A BRIEF DESCRIPTION OF DATA
STRUCTURES
• TREE
DATA FREQUENTLY CONTAIN A HIERARCHICAL RELATIONSHIP BETWEEN VARIOUS ELEMENTS
THE DATA STRUCTURE WHICH REFLECTS THIS RELATIONSHIP IS CALLED A ROOTED TREE
GRAPH OR, SIMPLY, A TREE.
• GRAPH
DATA SOMETIMES CONTAIN A RELATIONSHIP BETWEEN PAIRS OF ELEMENTS WHICH IS
NOT NECESSARILY HIERARCHICAL IN NATURE, E.G. AIRLINE FLIGHTS BETWEEN CITIES,
WHERE ONLY CERTAIN PAIRS OF CITIES ARE CONNECTED BY LINES. THIS DATA STRUCTURE IS CALLED A GRAPH.
15. A BRIEF DESCRIPTION OF DATA
STRUCTURES
• QUEUE
A QUEUE, ALSO CALLED A FIFO SYSTEM, IS A LINEAR LIST IN WHICH DELETIONS CAN TAKE
PLACE ONLY AT ONE END OF THE LIST, THE FRONT, AND INSERTIONS CAN TAKE
PLACE ONLY AT THE OTHER END, THE REAR.
• STACK
IT IS AN ORDERED GROUP OF HOMOGENEOUS ITEMS OF ELEMENTS. ELEMENTS ARE ADDED
TO AND REMOVED FROM THE TOP OF THE STACK (THE MOST RECENTLY ADDED ITEMS ARE
AT THE TOP OF THE STACK). THE LAST ELEMENT TO BE ADDED IS THE FIRST TO BE REMOVED
(LIFO: LAST IN, FIRST OUT).
16. DATA STRUCTURES OPERATIONS
• THE DATA APPEARING IN OUR DATA STRUCTURES ARE PROCESSED BY MEANS OF CERTAIN
OPERATIONS. IN FACT, THE PARTICULAR DATA STRUCTURE THAT ONE CHOOSES FOR A
GIVEN SITUATION DEPENDS LARGELY IN THE FREQUENCY WITH WHICH SPECIFIC
OPERATIONS ARE PERFORMED. THE FOLLOWING FOUR OPERATIONS PLAY A MAJOR ROLE
IN THIS TEXT:
• TRAVERSING: ACCESSING EACH RECORD/NODE EXACTLY ONCE SO THAT CERTAIN ITEMS
IN THE RECORD MAY BE PROCESSED. (THIS ACCESSING AND PROCESSING IS SOMETIMES
CALLED “VISITING” THE RECORD.)
• SEARCHING: FINDING THE LOCATION OF THE DESIRED NODE WITH A GIVEN KEY VALUE, OR
FINDING THE LOCATIONS OF ALL SUCH NODES WHICH SATISFY ONE OR MORE CONDITIONS.
• INSERTING: ADDING A NEW NODE/RECORD TO THE STRUCTURE.
• DELETING: REMOVING A NODE/RECORD FROM THE STRUCTURE.
17. ARRAYS: DEFINITION
• C PROGRAMMING LANGUAGE PROVIDES A DATA STRUCTURE CALLED THE ARRAY, WHICH CAN STORE A
FIXED-SIZE SEQUENTIAL COLLECTION OF ELEMENTS OF THE SAME TYPE.
• AN ARRAY IS USED TO STORE A COLLECTION OF DATA, BUT IT IS OFTEN MORE USEFUL TO THINK OF AN
ARRAY AS A COLLECTION OF VARIABLES OF THE SAME TYPE.
• INSTEAD OF DECLARING INDIVIDUAL VARIABLES, SUCH AS NUMBER0, NUMBER1, ..., AND NUMBER99, YOU
DECLARE ONE ARRAY VARIABLE SUCH AS NUMBERS AND USE NUMBERS[0], NUMBERS[1], AND
...,NUMBERS[99] TO REPRESENT INDIVIDUAL VARIABLES. THE ARRAY MAY BE CATEGORIZED INTO –
• ONE DIMENSIONAL ARRAY
• TWO DIMENSIONAL ARRAY
• MULTIDIMENSIONAL ARRAY
18. SPARSE MATRIX
A MATRIX IN WHICH MOST OF THE ENTRIES ARE ZERO IS TERMED A SPARSE MATRIX.
• IT CAN BE REPRESENTED AS:
LOWER TRIANGULAR MATRIX: ITS NON-ZERO ENTRIES LIE ON OR BELOW THE DIAGONAL.
UPPER TRIANGULAR MATRIX: ITS NON-ZERO ENTRIES LIE ON OR ABOVE THE DIAGONAL.
TRI-DIAGONAL MATRIX: ITS NON-ZERO ENTRIES LIE ON THE DIAGONAL AND AT THE PLACES
IMMEDIATELY ABOVE OR BELOW THE DIAGONAL.
19. LINKED LIST
• A LINKED LIST OR ONE WAY LIST IS A LINEAR COLLECTION OF DATA ELEMENTS, CALLED
NODES, WHERE THE LINEAR ORDER IS GIVEN BY MEANS OF “POINTERS”. EACH NODE IS
DIVIDED INTO TWO PARTS. THE FIRST PART CONTAINS THE INFORMATION OF THE
ELEMENT. THE SECOND PART CALLED THE LINK FIELD CONTAINS THE ADDRESS OF THE
NEXT NODE IN THE LIST.
20. TYPES OF LINKED LISTS
SINGLY LINKED LIST
• BEGINS WITH A POINTER TO THE FIRST NODE
• TERMINATES WITH A NULL POINTER
• ONLY TRAVERSED IN ONE DIRECTION
CIRCULAR, SINGLY LINKED LIST
• POINTER IN THE LAST NODE POINTS BACK TO THE FIRST NODE
DOUBLY LINKED LIST
• TWO “START POINTERS” – FIRST ELEMENT AND LAST ELEMENT
• EACH NODE HAS A FORWARD POINTER AND A BACKWARD POINTER
• ALLOWS TRAVERSALS BOTH FORWARDS AND BACKWARDS
CIRCULAR, DOUBLY LINKED LIST
• FORWARD POINTER OF THE LAST NODE POINTS TO THE FIRST NODE AND BACKWARD POINTER OF THE FIRST
NODE POINTS TO THE LAST NODE
HEADER LINKED LIST
• LINKED LIST CONTAINS A HEADER NODE THAT CONTAINS INFORMATION REGARDING COMPLETE LINKED LIST.
21. RECURSION
• RECURSION IS A PROGRAMMING TECHNIQUE THAT ALLOWS THE PROGRAMMER TO EXPRESS
OPERATIONS IN TERMS OF THEMSELVES.
• A FUNCTION THAT CALLS ITSELF. A USEFUL WAY TO THINK OF RECURSIVE FUNCTIONS IS TO IMAGINE
THEM AS A PROCESS BEING PERFORMED WHERE ONE OF THE INSTRUCTIONS IS TO "REPEAT THE
PROCESS".
void recurse()
{
    recurse();    /* function calls itself */
}

int main()
{
    recurse();    /* sets off the recursion */
    return 0;
}
NOTE: AS WRITTEN, THIS FUNCTION HAS NO BASE CASE, SO THE RECURSION NEVER TERMINATES.
22. TAIL RECURSION
• TAIL RECURSION OCCURS WHEN THE LAST-EXECUTED STATEMENT OF A FUNCTION IS A RECURSIVE
CALL TO ITSELF. SUCH A CALL CAN BE ELIMINATED BY REASSIGNING THE CALLING PARAMETERS
TO THE VALUES SPECIFIED IN THE RECURSIVE CALL AND THEN REPEATING THE WHOLE FUNCTION.